1,515,439,233,000
What is the difference between find . and /home/user/* as input to the for command? For example: for var in $(find .); do echo "$var"; done or for var in /home/user/*; do echo "$var"; done In the first case, for breaks up file names containing spaces, while in the second case it does not. Why?
This is standard practice for shells. The order of operations is command substitution ($(find .)), then word splitting, then glob expansion (/home/user/*). From the POSIX standard (word splitting = field splitting; glob expansion = pathname expansion): The order of word expansion shall be as follows: Tilde expansion (see Tilde Expansion), parameter expansion (see Parameter Expansion), command substitution (see Command Substitution), and arithmetic expansion (see Arithmetic Expansion) shall be performed, beginning to end. See item 5 in Token Recognition. Field splitting (see Field Splitting) shall be performed on the portions of the fields generated by step 1, unless IFS is null. Pathname expansion (see Pathname Expansion) shall be performed, unless set -f is in effect. Quote removal (see Quote Removal) shall always be performed last. For this reason, it is always recommended to use globs where possible, so that word splitting does not interfere with your file names. The $(find) construct is actually an example of Bash Pitfall #1.
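The splitting described above can be reproduced in a throwaway directory (the file name here is made up):

```shell
#!/bin/sh
# Sketch: show that $(find .) undergoes word splitting but a glob does not.
dir=$(mktemp -d)
touch "$dir/a b.txt"            # one file whose name contains a space
cd "$dir"

count_find=0
for var in $(find . -type f); do    # command substitution: split on whitespace
  count_find=$((count_find + 1))
done

count_glob=0
for var in ./*; do                  # glob: each match stays one word
  count_glob=$((count_glob + 1))
done

echo "find: $count_find words, glob: $count_glob words"
# The single file is split into two words by $(find .) but not by the glob.
```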
what is the difference between `find .` and /home/user/* as an input to for command
I sometimes want to use my script on many files. I used this for the purpose: for etminput in $1 do #process done But this only gets the first input. How can I run the process on every wildcard match?
If you want to loop over all the arguments to your script, in any Bourne like shell, it's: for i do something with "$i" done You could also do: for i in "$@"; do something with "$i" done but it's longer and not as portable (though is for modern shells). Note that: for i; do something with "$i" done is neither Bourne nor POSIX so should be avoided (though it works in many shells) For completeness, in non-Bourne shells: csh/tcsh @ i = 1 while ($i <= $#argv) something with $argv[$i]:q @ i++ end You cannot use: foreach i ($argv:q) something with $i:q end because that skips empty arguments rc/akanga for (i) something with $i (rc generally is what shells should be like). es for (i=$*) something with $i (es is rc on steroids). fish for i in $argv something with $i end zsh Though it will accept the Bourne syntax, it also supports shorter ones like: for i ("$@") something with "$i"
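A quick self-contained check of the portable Bourne form above (the arguments here are made up):

```shell
#!/bin/sh
# Sketch: 'for i do' iterates over the positional parameters, and each
# quoted argument stays a single word even when it contains spaces.
count_args() {
  n=0
  for i do              # same as: for i in "$@"; do
    n=$((n + 1))
  done
  echo "$n"
}
count_args "one arg" "another arg" third
# Counts 3, one per argument, not one per space-separated word.
```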
Loop in WildCard as Input of Script
I have several servers which have several files with deploy date info I need to parse, and get a local copy of the files which are 2 months old or older. #!/bin/bash # split on new line not space # don't want to quote everything # globbing desired IFS=$'\n' servers=( blue red ) parseDate() { grep 'deploy_date:' $file | \ sed ... | \ # much more in here... } getOldFiles() { local #all vars currentDateInSecs=$(date +%s) twoMonthsInSecs=526000 for server in ${servers[@]}; do oldfile=$( ssh $server " $(declare -f parseDate) for file in /dir/$server/file*; do dateInSecs=$(date -jf '%b/%e/%Y' $(parseDate) +%s) timeDiff=$((\$currentDateInSecs - \$dateInSecs)) ((\$timeDiff >= \$twoMonthsInSecs)) && cat \$file done " ) [ -n $oldFile ] && cat $oldFile ~/oldFiles/${server}-${file}.txt done Problem: Current issue is with dateInSecs=$(date -jf '%b/%e/%Y' $(parseDate \$file) +%s). When parseDate \$file is in $() the $file variable is not expanded, it works fine without command substitution, but I need it. How do I fix this? Info: This isn't a script, they're in my ~/.bash_profile This is a script (among other scripts) in a git repo which is sourced from ~/bash_profile (I have an install script which sets sources using $PWD) so people can use these commands directly instead of cd'ing to the git repo (which has many other things not applicable to the them). Runs from Macos to CentOS servers.
Everything that can be expanded inside your oldfile=$(...) command substitution is being expanded before you ssh. You need to escape the various internal $() if you want them to run on the remote instead of the local machine. Anything you want evaluated on the remote needs escaping. Try: oldfile=$( ssh $server " $(declare -f parseDate) for file in /dir/$server/file*; do dateInSecs=\$(date -jf '%b/%e/%Y' \$(parseDate \$file) +%s) timeDiff=\$(($currentDateInSecs - \$dateInSecs)) ((\$timeDiff >= \$twoMonthsInSecs)) && cat \$file done " ) Also, while BSD date has the -j option, GNU date does not. Since you are connecting to a CentOS machine, you probably want to change dateInSecs=\$(date -jf '%b/%e/%Y' \$(parseDate \$file) +%s) to dateInSecs=\$(date -d "\$(parseDate \$file)" +%s) But we would need to see the exact format you get from parseDate to be sure.
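The local-versus-remote expansion can be illustrated without a server; in this sketch sh -c stands in for ssh, since both receive a single string to evaluate:

```shell
#!/bin/sh
# Sketch: which expansions happen before the command string is handed over.
msg="set-on-the-local-side"
out1=$(sh -c "echo $msg")               # $msg expands BEFORE the inner shell runs
out2=$(sh -c "msg=inner; echo \$msg")   # \$msg survives; the inner shell expands it
echo "$out1"   # set-on-the-local-side
echo "$out2"   # inner
```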
SSH for loop: parameter passed to function captured in variable is not expanded
The code below is a minimal working example based on a script I've written, where I specify flags in what I think is the usual way. But I'm seeing really weird behavior. If I type mwe -e or mwe -n it thinks there are no arguments and returns no arg. If I type mwe -k or mwe -i it thinks argType is not "-" and returns breaking. If I comment out the four lines that end with a # the code works as expected. That suggests the problem is being caused by the while loop. Could somebody please explain what's happening? #!/bin/bash foo=0 argType=`echo "$1" | cut -c 1` while [ 1 -gt 0 ] ; # do # if [ $# -eq 0 ] ; then echo no arg exit elif [ "$argType" != "-" ] ; then #No more flags echo breaking break # elif [ "$1" = "-n" ] ; then foo=1 shift elif [ "$1" = "-e" ] ; then foo=2 shift elif [ "$1" = "-i" ] ; then foo=3 shift elif [ "$1" = "-k" ] ; then foo=4 shift fi done # echo This is foo: $foo
From your question it is not clear what you want! Anyhow, it seems that you want the number corresponding to the last argument: #!/bin/bash foo=0; while [[ $# -gt 0 ]]; do case "${1}" in '-n') foo=1; shift ;; '-e') foo=2; shift ;; '-i') foo=3; shift ;; '-k') foo=4; shift ;; *) echo "Invalid flag"; exit 1; ;; esac done echo "This is foo: $foo" If instead you want a mechanism that parses and validates arguments before they are processed, you can use something like #!/bin/bash inputf=''; outputf=''; text=''; format=''; while [[ $# -gt 0 ]];do case "${1}" in '-i') inputf="${2}"; shift 2 ;; '-o') outputf="${2}"; shift 2 ;; '-t') text="${2}"; shift 2 ;; '-f') format="${2}"; shift 2 ;; esac done
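As an aside (not part of the answer above), the standard getopts builtin does the same dispatch and also accepts clustered flags like -ne; a minimal sketch, with OPTIND reset so the function can be called repeatedly:

```shell
#!/bin/sh
# Sketch: the same flag handling using the POSIX getopts builtin.
parse() {
  foo=0
  OPTIND=1                 # reset so repeated calls start from the first arg
  while getopts 'neik' opt "$@"; do
    case $opt in
      n) foo=1 ;;
      e) foo=2 ;;
      i) foo=3 ;;
      k) foo=4 ;;
      *) echo "Invalid flag" >&2; return 1 ;;
    esac
  done
}
parse -e
echo "This is foo: $foo"   # 2, as with the case loop above
```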
bash scripts with option flags inconsistent behavior
I wanted to make a list of similar clear commands clearer to read, so I've made a little terminal loop for what in \ cache \ thumbs \ ; do my template $what:clear; done It works great; however, I want to achieve an equivalent of my template cache:clear && my template thumbs:clear Is there a way to easily and clearly put && to work in such a loop, without the complexity of adding an if/else statement to the loop body checking the last command's exit code and calling break if it is not 0? If an if condition is enough for you: for what in \ cache \ thumbs \ ; do my template $what:clear; if [ $? -ne 0 ]; then break; fi \ ; done
You could use set -e: (set -e; for what in \ cache \ thumbs; do my template $what:clear; done) set -e causes the shell to exit when a command exits with a non-zero exit code, and the failure isn’t handled in some other way. In the above, the whole subshell exits if my ... fails, and the subshell’s exit code is the failed command’s exit code. This works in shell scripts and on the command line. Another approach is to use a disjunction: for what in \ cache \ thumbs; do my template $what:clear || break done This has the disadvantage of requiring || break after every command, and of swallowing the exit code indicating failure; the for loop will exit with a zero exit code every time. You can avoid this by making break fail, which will cause the for loop to exit with an exit code of 1 (which is still not as good as the subshell approach above): for what in \ cache \ thumbs; do my template $what:clear || break -1 2>/dev/null done (break -1 causes break to exit with a non-zero exit code, and 2>/dev/null throws away its error message). As far as I know you can’t use && to exit a loop, as in && done or something similar.
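The difference in exit status described above can be checked with false standing in for a failing my template command:

```shell
#!/bin/sh
# Sketch: '|| break' stops the loop but the loop itself still exits 0,
# while 'set -e' in a subshell propagates the failing exit code.
for w in cache thumbs; do
  false || break            # false stands in for: my template $w:clear
done
echo "loop status: $?"      # 0 -- break swallows the failure

( set -e
  for w in cache thumbs; do
    false
  done )
echo "subshell status: $?"  # non-zero -- set -e aborted the subshell
```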
Is it possible to use && operator in terminal command template loop?
I would like to perform the same aggregation operations on each of several bunches of files where each bunch is matched by one glob pattern. What I would not like to do is pipe each file name into the aggregation function separately. My first attempt failed because the file names got globbed in the outer loop, flattening the whole collection into a flat list and not treating them as separate batches: for fileglob in /path/to/bunch*1 /path/to/bunch*2 ... ; do stat "$fileglob" | awk [aggregation] done So I hid the * from the loop by escaping it, then unescaped it for the function: for fileglob in /path/to/bunch*1 /path/to/bunch*2 ... ; do realglob=`echo "$fileglob" | sed 's/\\//g'` stat "$realglob" | awk [aggregation] done There has to be a better way. What is it? GNU bash, version 3.2.51
This requires a careful use of quotes: for fileglob in '/path/to/bunch*1' '/path/to/bunch*2' ... ; do stat $fileglob | awk [aggregation] done But that may fail on filenames with spaces (or newlines). Better to use this: fileglobs=("/path/to/bunch*1" "/path/to/bunch*2") for aglob in "${fileglobs[@]}" ; do set -- $aglob stat "$@" | awk [aggregation] done The glob gets correctly expanded and placed in the positional parameters with: set -- $aglob Then, each parameter is placed as an argument to stat in: stat "$@" And the output of stat goes (as one output) to awk.
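A runnable sketch of the set -- technique, using made-up file names in a temp directory:

```shell
#!/bin/sh
# Sketch: a glob pattern stored unexpanded in a variable, then expanded
# into the positional parameters with an unquoted 'set --'.
dir=$(mktemp -d)
touch "$dir/bunch_a1" "$dir/bunch_b1" "$dir/other"
pattern="$dir/bunch*1"      # stored literally; quotes keep the * unexpanded
set -- $pattern             # unquoted: the glob expands here
echo "matched $# files"     # 2 -- 'other' is left out
```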
Iterating over glob patterns, not the files in them
I'm trying to copy a bunch of files with the same name, but in different subdirectories, to a single directory, changing the names to ones based on the paths to the original files. I use a for loop that does what I intend in bash, but behaves very oddly in zsh. Command (linebreaks added for legibility): for f in */Flavors/*/stuff1/filename.txt; do l=$(echo $f | cut -d'/' --output-delimiter '' -f1,3); dest=/stuff2/${l}.utf8.txt; echo $f $dest; cp -v -- $f $dest; done Output in zsh. I intend the output file names to be e.g. "EnglishAU.utf8.txt" but instead they are just "English", "French" and "Spanish". Note that the echo in the loop shows $dest containing the correct path, and then the cp uses the wrong one! English/Flavors/AU/stuff1/filename.txt /stuff2/EnglishAU.utf8.txt `English/Flavors/AU/stuff1/filename.txt' -> `/stuff2/English' English/Flavors/UK/stuff1/filename.txt /stuff2/EnglishUK.utf8.txt `English/Flavors/UK/stuff1/filename.txt' -> `/stuff2/English' English/Flavors/US/stuff1/filename.txt /stuff2/EnglishUS.utf8.txt `English/Flavors/US/stuff1/filename.txt' -> `/stuff2/English' French/Flavors/CA/stuff1/filename.txt /stuff2/FrenchCA.utf8.txt `French/Flavors/CA/stuff1/filename.txt' -> `/stuff2/French' French/Flavors/FR/stuff1/filename.txt /stuff2/FrenchFR.utf8.txt `French/Flavors/FR/stuff1/filename.txt' -> `/stuff2/French' Spanish/Flavors/ES/stuff1/filename.txt /stuff2/SpanishES.utf8.txt `Spanish/Flavors/ES/stuff1/filename.txt' -> `/stuff2/Spanish' Spanish/Flavors/OT/stuff1/filename.txt /stuff2/SpanishOT.utf8.txt `Spanish/Flavors/OT/stuff1/filename.txt' -> `/stuff2/Spanish' As mentioned above, this works as intended in bash. What's zsh doing?
This is happening because cut is outputting NULL characters in the output. You can't pass a program arguments which contain a null character (see this). In bash this works because bash can't handle NULL characters in strings, and it strips them out. Zsh is a bit more powerful, and it can handle NULL characters. However when it comes time to pass the string to the program, it still contains the null, which signals the end of the argument. Let's look at this in detail. $ echo 'English/Flavors/AU/stuff1/filename.txt' | cut -d'/' --output-delimiter '' -f1,3 | xxd 0000000: 456e 676c 6973 6800 4155 0a English.AU. Here we simulated one of your files, passing the path through cut. Notice the xxd output which has a NULL character between English and AU. Now lets run through and simulate the rest of the script. $ l=$(echo 'English/Flavors/AU/stuff1/filename.txt' | cut -d'/' --output-delimiter '' -f1,3) $ dest=/stuff2/${l}.utf8.txt $ echo "$dest" | xxd 0000000: 2f73 7475 6666 322f 456e 676c 6973 6800 /stuff2/English. 0000010: 4155 2e75 7466 382e 7478 740a AU.utf8.txt. Notice the NULL after the English. The echo displays it properly because echo is a shell built-in. If we use an external echo, it also exhibits the issue. $ /bin/echo "$dest" | xxd 0000000: 2f73 7475 6666 322f 456e 676c 6973 680a /stuff2/English. P.S. You really should be quoting too :-) The solution is to not use cut, use awk instead. $ echo 'English/Flavors/AU/stuff1/filename.txt' | awk -F/ '{ print $1$3 }' | xxd 0000000: 456e 676c 6973 6841 550a EnglishAU.
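A quick check of the awk replacement (the path is one of the examples from the question):

```shell
#!/bin/sh
# Sketch: joining path components with awk instead of cut's empty
# --output-delimiter, which emits a NUL byte.
path='English/Flavors/AU/stuff1/filename.txt'
l=$(echo "$path" | awk -F/ '{ print $1$3 }')
echo "$l"          # EnglishAU -- no stray NUL byte in the result
```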
Why does this command to copy files in a for loop work in bash but not in zsh?
I am trying to read user and server details from file tempo.txt and then check the disk space usage of the file system on that unix account using another script server_disk_space.sh. But I am not able to figure out why the while loop only works for the first line while the for loop works fine. Please help me understand this. Using while loop #!/usr/bin/ksh while read line do r1=`echo $line | cut -d"#" -f1`; r2=`echo $line | cut -d"#" -f2`; apx_server_disk_space.sh $r2 $r1 done<tempo.txt Output 8 Using for loop #!/usr/bin/ksh for line in $(cat tempo.txt) do r1=`echo $line | cut -d"#" -f1`; r2=`echo $line | cut -d"#" -f2`; apx_server_disk_space.sh $r2 $r1 done Output 8 23 54 89 12 Contents of server_disk_space.sh #!/usr/bin/ksh user=$1 server=$2 count=`ssh ${user}@${server} "df -h ."` echo ${count} | awk '{print $12}' | tr -d % The above script outputs the Use percentage of disk usage on any server. Contents of tempo.txt abclin542#abcwrk47# abclin540#abcwrk1# abclin541#abcwrk2# abclin543#abcwrk3# abclin544#abcwrk33#
Unless you add the -n option to ssh, ssh will read from its standard input, which in the case of the while loop is the tempo.txt file. Alternatively, you can use a different file descriptor to read the tempo.txt file: #! /usr/bin/ksh - while IFS='#' read <&3 -r r1 r2 rest; do apx_server_disk_space.sh "$r2" "$r1" done 3< tempo.txt If those servers are GNU/Linux servers, your ssh script could be: #! /bin/sh - ssh -n "$1@$2" 'stat -fc "scale=2;100*(1-%a/%b)" .' | bc Which would probably be more robust and future-proof.
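The stdin-stealing effect is easy to reproduce without ssh; in this sketch cat stands in for ssh, since both read their standard input:

```shell
#!/bin/sh
# Sketch: a command that reads stdin inside the loop swallows the rest of
# the input file unless the loop reads from a separate file descriptor.
tmp=$(mktemp)
printf 'one\ntwo\nthree\n' > "$tmp"

n=0
while read -r line; do
  n=$((n + 1))
  cat > /dev/null          # consumes the remaining lines of "$tmp"
done < "$tmp"
echo "shared stdin: $n iteration(s)"    # 1

n=0
while read -r line <&3; do
  n=$((n + 1))
  cat > /dev/null          # reads the loop's stdin (/dev/null), not fd 3
done 3< "$tmp" < /dev/null
echo "separate fd: $n iteration(s)"     # 3

rm -f "$tmp"
```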
Why the behavior of while loop and for loop is different?
In the following script, the first for loop executes as expected, but not the second. I do not get any error, it seems that the script just hangs. HOME=/root/mydir DIR=$HOME/var DIRWORK=$HOME/Local for f in $(find $DIR -type f); do lsof -n $f | grep [a-z] > /dev/null if [ $? != 0 ]; then echo "hi" fi done for i in $(find $DIRWORK -type d -name work); do echo "2" done
Your script is coded in a dangerous way. First, I assume you are using the Bash shell since you tagged it '/bash' and '/for'. In my answer I will quote this great Bash guide, which is probably the best source to learn Bash from out there. 1) Never use a Command Substitution, of either kind, without quotes. There is a major issue here: using an unquoted expansion to split output into arguments. Specifically speaking, this $(find $DIRWORK -type d -name work) and $(find $DIR -type f) will undergo Word Splitting, thus if find finds a file with spaces in its name, i.e. "file name", the word splitting result of Bash will pass 2 argument for the for command to iterate over, i.e. one for "file" and one for "name". In this case you want to hope that you'll get a "file: No such file or directory" and "name: No such file or directory", instead of potentially causing damage to them if they truly exist. 2) By convention, environment variables (PATH, EDITOR, SHELL, ...) and internal shell variables (BASH_VERSION, RANDOM, ...) are fully capitalized. All other variable names should be lowercase. Since variable names are case-sensitive, this convention avoids accidentally overriding environmental and internal variables. Your $DIRWORK directory breaks that convention, and it also unquoted, thus if we let DIRWORK='/path/to/dir1 /path/to/dir2', find will look into two different directories when $DIRWORK is unquoted. The subject of using quotes is very important in Bash, so you should "Double quote" every expansion, as well as anything that could possibly contain a special character, e.g. "$var", "$@", "${array[@]}", "$(command)". Bash treats everything within 'single quotes' as literal. Learn the difference between ' and " and `. 
See Quotes, Arguments and you might also want to take a look at this link: http://wiki.bash-hackers.org/syntax/words This is a safer version of your script, which I recommend you use instead: my_home="/root/mydir" my_dir="$my_home/var" dir_work="$my_home/Local" while IFS= read -r -d '' f; do # I'm guessing that you also want to ignore stderr; # this is where the 2>&1 came from. if lsof -n "$f" | grep '[a-z]' > /dev/null 2>&1; then echo "hey, I'm safer now!" fi done < <(find "$my_dir" -type f -print0) while IFS= read -r -d '' f; do echo "2" done < <(find "$dir_work" -type d -name 'work' -print0) As you can see, the IFS variable is set to be empty, thus preventing read from trimming leading and trailing spaces from a line. The read command uses an empty string ( -d '' ) as a delimiter, to read until it reaches a \0. find needs to be modified accordingly, therefore it uses the -print0 option to delimit its data with a \0 instead of a new line - which, amazingly and maliciously, can be a part of a file name. Splitting such a file name by \n into two pieces will break our code. You might want to read about Process Substitution if you don't understand my script completely. The previous answer which stated that find ... | while read name; do ...; done should be used for reading find's output can also be bad. Such a piped while loop is executed in a new subshell with its own copy of variables copied from the parent. This copy is then used for whatever you like. When the while loop is finished, the subshell copy is discarded, and the original variables of the parent do not change. If you aim to modify some variables inside this while loop and use them afterwards in the parent, consider using the safer script above, which will prevent data loss.
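A compact sketch of the same safe pattern on throwaway files; one file name deliberately contains a space:

```shell
#!/bin/bash
# Sketch: NUL-delimited find output paired with 'read -d ""' survives
# file names containing spaces (and even newlines).
dir=$(mktemp -d)
touch "$dir/plain" "$dir/with space"

n=0
while IFS= read -r -d '' f; do
  n=$((n + 1))             # "$f" is a complete file name, spaces and all
done < <(find "$dir" -type f -print0)
echo "found $n files"      # 2
```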
Why won't the for loop execute on directory
I am trying search for all .mkv files in my current folder, then using mediainfo I want to put the height from its meta data to a variable, but it seems it is failing. This snippet: height=$(mediainfo "input.mkv" | grep -E 'Height'); echo $height; height=${height//[!0-9]/}; echo $height; is working great, it outputs the result I want if I manually insert the file name: # height=$(mediainfo 'input.mkv' | grep -E 'Height'); echo $height; height=${height//[!0-9]/}; echo $height; Height : 720 pixels 720 But, when I try to put it in my for file in so that I won't need to manually insert the file name, it keeps failing: for file in *.{mkv}; do height=$(mediainfo "$file" | grep -E 'Height'); echo $height; height=${height//[!0-9]/}; echo $height; done The output of the variable $height is empty. # for file in *.{mkv}; do height=$(mediainfo "$file" | grep -E 'Height'); echo $height; height=${height//[!0-9]/}; echo $height; done (null) (null) I already tried, changing "$file" to '$file' and $file, but none of them works, am I missing something?
You wrote *.{mkv} instead of *.mkv. Therefore the loop only loops over the single "file" *.{mkv}, which does not exist. In this case the output of mediainfo is simply empty. Add something like echo "$file" in your loop to verify.
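This is easy to confirm in an empty throwaway directory (with a single word inside the braces, *.{mkv} is not brace expansion; it is a glob looking for names that literally end in ".{mkv}"):

```shell
#!/bin/sh
# Sketch: the plain glob matches the file; the braced pattern matches
# nothing and is kept literally.
dir=$(mktemp -d)
touch "$dir/input.mkv"
cd "$dir"

for file in *.mkv;   do echo "plain glob: $file"; done
for file in *.{mkv}; do echo "braced:     $file"; done
# plain glob: input.mkv
# braced:     *.{mkv}    (no match, so the pattern stays literal)
```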
using the variable "file" obtained from `for "file" in` and pass to another script fails [closed]
I have a bash script that classifies and retrieves files from a remote server. I have trouble with the classification step, which is based on the file names. I am correctly able to identify the different families of files that are defined by the start of the filenames. Filenames can be: ala-olo_ulu-1602915797.txt ala-olo_ulu-1602915799.txt ili-olo-1602915897.txt ili-olo-1602915997.txt ili-olo-pip-1602925797.txt ili-olo-pip-1602935797.txt In this example, I have 3 families: ala-olo_ulu ili-olo ili-olo-pip (pure examples :)) Each family is treated in one iteration of a loop. In such an iteration, I have the family name available in an variable BASE_NAME (for instance ili-olo). My trouble is the taring step, before rsync'ing the files to local. I am managing it with the following ssh command. ssh root@"${RMT_IP}" ' for FILE in "'${BASE_NAME}'*'${FILE_EXTENSION}'"; do tar -rf "'${BASE_NAME}'.tar" ${FILE} --remove-files done' < /dev/null With this script, unfortunately, if ili-olo is managed before ili-olo-pip, then, the archive will contain both families (they both share the same start). And when ili-olo-pip will be then managed, they won't be any file anymore, and the tar command ends in error. (which is how I could spot the trouble). I guess, I should rather use regex to specify that the variable part of the file name is the digit part. Please, how can I change the for loop definition so that the families starting with the same string do not get into the same tar? for FILE in "'${BASE_NAME}'*'${FILE_EXTENSION}'"; do ? The digit part always has the same number of digits (it is a timestamp, with second precision), for instance 1602915797 I thank you for your help. Have a good day, Bests, Pierre
It's easier if you can use zsh as both the local and remote shell: ssh root@$RMT_IP zsh << EOF set -o extendedglob # for (#c10) for file in ${(qq)BASE_NAME}-[0-9](#c10).${(qq)FILE_EXTENSION}(N); do tar -rf ${(qq)BASE_NAME}.tar \$file --remove-files done EOF [0-9](#c10) matches a sequence of 10 decimal digits. See also [0-9]## same as [0-9](#c1,) for one or more digits or <100000-9999999999> (which doesn't require extendedglob) for sequences of decimal digits making up numbers in that range. sshd on the server runs the login shell of the user to interpret the code passed as argument. Since we don't know what it is (often for root, that's just sh), we just make that code zsh, to start a zsh shell and pass the zsh code on stdin. Using a here-document like that makes it easier to construct the shell code to be interpreted by the remote shell there. As the EOF is not quoted, the local shell will perform expansions locally. It's important to keep track of which expansions are meant to be done locally and which are meant to be done by the remote shell. Above ${(qq)BASE_NAME} is expanded by the local shell, we use the (qq) parameter expansion flag to quote the result with single quotes, so that the remote shell takes it as a literal string. $file has to be expanded by the remote shell, so we prefix it with \ so that a literal $file be passed to the remote shell. If zsh is not available on the remote machine, but bash is, you could do (still using zsh locally): ssh root@$RMT_IP bash --norc << EOF shopt -s extglob nullglob # for +(...) export LC_ALL=C for file in ${(qq)BASE_NAME}-+([0-9]).${(qq)FILE_EXTENSION}; do tar -rf ${(qq)BASE_NAME}.tar "\$file" --remove-files done EOF bash doesn't have the equivalent of zsh's x(#c10) glob operator, but with extglob, it supports a subset of the ksh ones (not {10}(x) though unfortunately here), including +(x) which matches one or more x. So that +([0-9]) will match one or more digits instead of just 10. 
To match 10 digits, you could still do [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].
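The bash variant can be checked locally without a remote machine; keeping the extglob pattern in a variable avoids parse-time surprises:

```shell
#!/bin/bash
# Sketch: extglob +([0-9]) matches the digit-only timestamp run, which is
# what keeps the ili-olo-pip family out of the ili-olo archive.
shopt -s extglob
pat='ili-olo-+([0-9]).txt'

[[ 'ili-olo-1602915897.txt' == $pat ]] && echo "digits: match"
[[ 'ili-olo-pip-1602925797.txt' == $pat ]] || echo "pip family: no match"
```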
What appropriate regex for recognizing a variable length file name?
I created this loop that runs my script for one month (20170301 - 20170331): for ((i = 20170301; i<=20170331; i++)) ; do /home/jul/exp/prod/client/apps/scripts/runCer client-layer-name $i; done but I want it to run over 5 months (20170301 - 20170831). How can I do that?
Assuming GNU date is available, you can use this bash/ksh93/zsh script: start=$(date -ud 20170301 "+%s") # start time in epoch time (seconds since Jan. 1st, 1970) end=$(date -ud 20170831 "+%s") # end time for ((i=start; i <= end; i+=86400)); do # 86400 is 24 hours runCer client-layer-name "$(date -ud "@$i" +%Y%m%d)" done
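The day-stepping logic itself can be checked with precomputed epoch values (2017-03-01 and 2017-08-31 00:00 UTC), so the sketch runs even without GNU date:

```shell
#!/bin/sh
# Sketch: step through the date range one day (86400 s) at a time and
# count the iterations.
start=1488326400   # 2017-03-01 00:00:00 UTC
end=1504137600     # 2017-08-31 00:00:00 UTC
days=0
i=$start
while [ "$i" -le "$end" ]; do
  days=$((days + 1))       # a real script would call runCer here
  i=$((i + 86400))         # advance one day
done
echo "$days days in the range"   # 184 (March through August 2017)
```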
How to run a for loop over 5 months?
I want to find all Java versions on my system. for i in 'find / -name java 2>/dev/null' do echo $i checking $i -version done I receive an error: find: paths must precede expression: 2>/dev/null Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression] What is the problem?
You're receiving that error from the for loop because your for loop is actually looping over one element -- the string, not the command: "find / -name java 2>/dev/null", so it is running: echo find / -name java 2>/dev/null checking find / -name java 2>/dev/null -version ... which is where find's error arises. You might be trying to do: for i in `find / -name java 2>/dev/null` do echo $i checking $i -version done ... (with backticks instead of single quotes), in which case I would suggest something more along the lines of: find / -name java -exec sh -c '"$1" -version' sh {} \; 2>/dev/null Thanks to don_crissti for pointing out Stéphane's better version of find ... exec and for indirectly reminding me of a bash method that is one better way to find and execute results than looping over find: shopt -s globstar dotglob for match in /**/java do echo match is "$match" "$match" -version done
Using for loop with find command
I am teaching myself bash scripting with the book 'Learning the Bash Shell' by Newham. I'm getting on OK with the book, but am a bit confused by the following script. It is a script which uses a for loop to go through the directories in $PATH, and print out information about them. The script is IFS=: for dir in $PATH; do if [ -z "$dir" ]; then dir=.; fi if ! [ -e "$dir" ]; then echo "$dir doesn't exist" elif ! [ -d "$dir" ]; then echo "$dir isn't a directory" else ls -ld $dir fi done What I'm confused about is why, if dir is zero, or doesn't exist, are we then setting this null value to be the current directory? Or are we changing dir to our current directory (whatever directory we happen to be in)? The script then seems to test if dir exists, and then tests if it is a directory. Can anybody shed light on this aspect of the script for me? I would have thought that if [ -z "$dir" ] were true, this would indicate that the directory doesn't exist, and isn't a directory to begin with?
The -z test is testing for a zero-length value in $dir, not for anything related to an actual directory. That condition is triggered if you have a :: sequence in your PATH variable, at which point we set the value of the $dir variable to ., meaning the current directory, and we then conduct the standard two tests on it.
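The empty-field case can be reproduced with a made-up PATH-style value containing a :: sequence:

```shell
#!/bin/sh
# Sketch: "::" produces an empty field under IFS=:, and the dir=. fallback
# replaces it with the current directory.
IFS=:
path_value="/usr/bin::/bin"    # empty field between the two colons
out=""
for dir in $path_value; do
  if [ -z "$dir" ]; then dir=.; fi
  out="$out[$dir]"
done
echo "$out"    # [/usr/bin][.][/bin]
```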
Help with bash script
I want to recursively convert files from .docx to .doc in a folder. The problem is that all the output files are created in the folder where I run the following command, not in the location of the source files: find -type f -name "*.docx" -exec libreoffice --convert-to doc {} \; I understand that find gives source files to the libreoffice command and the output obviously has to be in the current location, so how can I use a command to loop recursively into a folder and execute the command from the location where each file is found, not the initial one?
That's what -execdir is for, if your find supports it. The default find on Linux systems, GNU find, does have it. From man find: -execdir command ; -execdir command {} + Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. [ . . . ] So you want: find . -type f -name "*.docx" -execdir libreoffice --convert-to doc {} \; The file name will be inserted at each instance of {}. If -exec is used, {} will include the relative path starting with ./. If -execdir is used, {} will only be the file name.
execute a command recursively on the files of a folder in the matching location, not in the original one
I wanted to extract specific lines from multiple bam (a binary file format) files. I can select the lines from a single bam file using this command: samtools view -c TCGA-BH-A0BW-11A.sorted.bam "5:13744354-13744380" 550 I have a directory with 100 bam files like below: TCGA-AC-A2FB-11A.sorted.bam TCGA-AC-A2FF-11A.sorted.bam TCGA-AC-A2FM-11B.sorted.bam TCGA-AC-A2QH-01A.sorted.bam TCGA-AC-A2QJ-01A.sorted.bam TCGA-BH-A0BW-11A.sorted.bam TCGA-BH-A0BW-01A.sorted.bam TCGA-CH-A0BW-11A.sorted.bam How do I apply the command to multiple bam files and save the output in a single file with the first column as the file name without the extension and the second column being the result of the samtools command on this file? For example: It should somehow look like this TCGA-BH-A0BW-11A 550 TCGA-BH-A0BW-01A 220 TCGA-CH-A0BW-11A 100 I am working on a Linux system.
If you are using bash, you can iterate over all files with the appropriate extension, and process them as follows: for file in *.sorted.bam do key="${file%.sorted.bam}" value="$(samtools view -c "$file" "5:13744354-13744380")" echo "$key $value" done > output.txt In the loop, we generate the file key by removing .sorted.bam from the end of the filename (and store it in a shell variable key) perform the processing as shown in your single-file example and store the output in a shell variable value print both key and value and redirect the overall output of the loop to a file output.txt.
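The shape of output.txt can be checked without any bam files around by putting a stub in place of samtools:

```shell
#!/bin/sh
# Sketch of the key/value loop with a stub value standing in for the
# samtools call.
dir=$(mktemp -d)
touch "$dir/TCGA-BH-A0BW-01A.sorted.bam" "$dir/TCGA-BH-A0BW-11A.sorted.bam"
cd "$dir"
for file in *.sorted.bam; do
  key="${file%.sorted.bam}"   # strip the extension
  value="stub"                # samtools view -c "$file" ... would go here
  echo "$key $value"
done > output.txt
cat output.txt
# TCGA-BH-A0BW-01A stub
# TCGA-BH-A0BW-11A stub
```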
How to create a new file based on results from multiple files and keep the filenames as first column?
I started a for loop in an interactive bash session. For this question we can assume the loop was something like for i in dir/*; do program "$i" done > log The command takes a lot longer than expected and still runs. How can I see the current progress of the running for loop. Non-Solutions: Look at log. Doesn't work because: program is expected to run silently. You can think of program as a validator. If the loop completes without any output then I'm happy. Stop the loop. Add some kind of progress indication (for instance echo "$i"). Start the loop again. Doesn't work because: The loop already ran for hours. I don't want to throw away all the time and energy invested in the current run. I assume everything works fine. I'm just curious and want to know the current progress. Ctrl+Z then set -x; fg Doesn't work because: bash doesn't continue the loop when using fg. After fg only the current/next command inside the loop will run, then the loop exits. You can try it yourself using for i in {1..200}; do printf '%s ' $i; /usr/bin/sleep 0.5; done.
Wildcards as dir/* always expand in the same order. This feature together with the current value of $i can be used to show a progress information. The current value of $i can be retrieved by looking at the processes running on your system. The following command prints the currently running instance of program and its arguments. If there are multiple program processes you may want to use the --parent ... option for pgrep to only match processes started by the shell running the for loop you want to inspect. ps -o args= -fp $(pgrep program) Extract the value of $i manually, then get your progress using a=(dir/*) n=$(printf %s\\0 "${a[@]}" | grep -zFxnm1 'the exact value of $i' | cut -f1 -d:) echo "Progress: $n / ${#a[@]}" This works under two assumptions The content of dir/ does not change while running the loop. If program creates, renames, or deletes files then the numbers are likely wrong. Calling program starts a new process. If program is a bash function then there won't be a sub-process for program. If the function function itself calls another program that starts a new process then you can look for that sub-program. However, if program is a bash built-in (commands that are listed by help) or a pure bash function (one that uses only bash built-ins) then you are out of luck. If you have problem 2 or your for loop has a different structure than the one in the question (for example program < "$1" or a very long loop body with many different programs) then you might be able to get some progress information from lsof by looking at the files opened by your current shell session or its child processes.
Show progress of a for loop after it was started
1,515,439,233,000
I want to define the function cpfromserver in bash so that when I run $ cpfromserver xxx yyy zzz the result is the same as if I had typed $ scp [email protected]:"/some/location/xxx/xxx.txt /some/location/xxx/xxx.pdf /some/location/yyy/yyy.txt /some/location/yyy/yyy.pdf /some/location/zzz/zzz.txt /some/location/zzz/zzz.pdf" /somewhere/else/ where it works for any number of arguments. (That is, the function should copy filename.txt and filename.pdf from the directory /some/location/filename/ on the remote.server to the local directory /somewhere/else/ for every filename I specify as an argument to the function. And do it all in a single ssh connection.) Currently, I have written a function that works for a single argument, and I just loop over it, but this establishes separate ssh connections for each argument, which is undesirable. My difficulty is that I only know how to use function arguments individually by their position ($1, $2, etc.) — not how to manipulate the whole list. [Note that I am writing this function as a convenience tool for my own use only, and so I would prioritize my own ease of understanding over handling pathological cases like filenames with quotation marks or linebreaks in them and whatnot. I know that the filenames I will be using this with are well-behaved.]
Try this way: cpfromserver () { files='' for x in "$@" do files="$files /some/location/$x/$x.txt /some/location/$x/$x.pdf" done scp [email protected]:"$files" /somewhere/else/ } Important caveat from comments: "It's worth noting for posterity that this solution definitely won't work for complicated filenames. If a filename contains a space, or a newline, or quotes, this approach will definitely fail."
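As a hedged sketch of the caveat's workaround: building the list as a bash array keeps whitespace-containing names intact on the local side (scp still joins the remote list into one string, so truly hostile names remain a problem). The function name buildlist is invented for this example:

```shell
# Hypothetical helper: collect remote paths into an array instead of a string.
buildlist() {
  files=()
  for x in "$@"; do
    files+=( "/some/location/$x/$x.txt" "/some/location/$x/$x.pdf" )
  done
}

buildlist xxx yyy
printf '%s\n' "${files[@]}"
```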
Write bash function which operates on list of filenames
1,515,439,233,000
Within my parent_directory, I have subdirectories labeled E-11_G, and E-10_G. Within each of those subdirectories, I have more subdirectories labeled E-2_U, E-1_U, and E0_U. In each of those folders, I'm performing commands on these files: ander, ander.band, ander.data, ander.in, and ander.log. Here's a better picture: parent_directory ↓ E-11_G/ E-10_G/ ↓ E-2_U/ E-1_U/ E0_U/ ↓ ander ander.band ander.data ander.in ander.log I want to write a more efficient version of this: #!/bin/bash cd parent_directory/E-11_G/E-2_U/; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; cd ../E-1_U; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; cd ../E0_U; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; cd ../../E-10_G/E-2_U/; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; cd ../E-1_U; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; cd ../E0_U; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; I've already been able to successfully run this script as is, however, I really need a much more simpler and efficient way of writing it because I'm actually working with a lot more directories than I presented. My reason for doing this is because I need to keep certain records of each of those 'ander' files for each subdirectory. I'm basically planning on running this script and exporting everything that shows up onto the terminal window into a text file using shell > export text as. This is why I want the records for each directory in numerical order. How can I make my script more efficient? This is the kind of thing I'm aiming for: #!/bin/bash cd parent_directory/; do i= -11, -10 do j= -2, 0 cd E-i_G/E-j_U/; ls -l ander ander.band ander.data; cat ander.in; cat ander.log; pwd; end end I know that’s pretty much written in Fortran, but is there anything equivalent or similar to this in bash scripting? 
I've been trying to use 'for loops', but I just can't seem to write them in a way that would give me the same results as my first script.
cd parent_directory/ for i in {-11..-10} do for j in -2 0 do ( cd "E${i}_G/E${j}_U/" ls -l ander ander.band ander.data cat ander.in cat ander.log pwd ) done done Notes: You can loop over a range of numbers by using the braces notation: for i in {-11..-10} You can also loop over an explicit list of items: for j in -2 0 You can change the directory so some place that depends on variables: cd "E${i}_G/E${j}_U/" The argument to the cd command is a directory specified relative to the current directory. After we have done our work in that directory, we want to change back to the parent_directory. There are many ways to deal with this but one simple way is to do that is to put the cd command and commands to be performed in that directory into a subshell, denoted by parens. After we exit the parens, the directory is automatically restored to what it was before.
How do I perform the same set of commands within multiple subdirectories, in a numerical order?
1,515,439,233,000
I'm currently refactoring a script which has slowly grown beyond control. I'm trying to spin off repetition into functions. However, I have a repeated test being called from a loop, and want it to return continue. Shellcheck says SC2104: In functions, use `return` instead of `continue`. And the shellcheck wiki says don't do it. But is there a way? Below is an example: #!/bin/sh AFunction () { if [[ "${RED}" -lt 3 ]]; then echo "cont" continue else echo "nope" fi } for i in 1 2 3 4 5 do RED=${i} AFunction echo ${i} done This is the output: cont 1 cont 2 nope 3 nope 4 nope 5 But I would expect cont cont nope 3 nope 4 nope 5 Thanks everyone for the answers so far. I'm close but now have a spin off question. Hopefully that's ok? If I use a combination of @sudodus answer and @alecxs tips. Do I need to then always "return" a 0 at the end of the function? Seems like good practice now, but is it implied if I don't explicitly do it? #!/bin/sh AFunction () { ##Script doing loads of other stuff if [[ "${RED}" -lt 3 ]]; then echo "cont" ## This only happening because something has gone wrong return 1 else echo "nope" fi ##Script doing loads of more stuff } for i in 1 2 3 4 5 do RED=${i} AFunction || continue echo ${i} done
You can use 'return` with a parameter for example like the following, #!/bin/bash AFunction () { if [[ "${RED}" -lt 3 ]]; then echo "cont" return 1 else echo "nope" return 0 fi } for i in 1 2 3 4 5 do RED=${i} if AFunction then echo ${i} fi done
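Regarding the follow-up: a function without an explicit return exits with the status of its last command, so a trailing return 0 is implied when the last command succeeds; writing it out is just documentation. The AFunction || continue form from the edited question then looks like this (a minimal sketch; the names are mine, not the asker's):

```shell
afunc() {
  if [ "$1" -lt 3 ]; then
    return 1   # tell the caller to skip this iteration
  fi
  # no explicit return: the status of the last command run is used
}

kept=""
for i in 1 2 3 4 5; do
  afunc "$i" || continue
  kept="$kept $i"
done
echo "kept:$kept"   # kept: 3 4 5
```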
Return "continue" from function called from loop
1,515,439,233,000
I have found this error alongside couple more. I believe that the nesting is not quite all right or the for is not correctly indented or without some semicolon. Either way I have tried for quite some time to figure it out, but to no avail. Here is the code: if [ "${runcmd}" != "echo" ]; then statusmsg "Kernels built from ${kernelconf}:" kernlist=$(awk '$1 == "config" { print $2 }' ${kernelconfpath}) for kern in ${kernlist:-netbsd}; do [ -f "${kernelbuildpath}/${kern}" ] && \ echo " ${kernelbuildpath}/${kern}" done | tee -a "${results}" fi It is part of a build.sh file.
The var=value cmd arg1 syntax doesn't quite seem to work if cmd is a for loop (in neither bash nor sh). Both sh and bash give syntax errors for: foo=bar for f in ${foo:-BAR}; do echo $f; done which is what you're essentially doing. ( sh: 1: for: not found bash: syntax error near unexpected token `do' ) And that error seems to result in a couldn't find fi error in your case; setting the variable on a separate line fixes the syntax error: kernlist=$(awk '$1 == "config" { print $2 }' ${kernelconfpath}) for kern in ${kernlist:-netbsd}; do #... Note: I would just write "$kernelbuildpath/$kern" instead of "${kernelbuildpath}/${kern}" if I were you. There's no technical reason for the curlies.
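A minimal before/after sketch of the fix:

```shell
# Broken (assignment prefix on a compound command is a syntax error, so kept commented):
#   foo=bar for f in ${foo:-BAR}; do echo "$f"; done

# Fixed: assign on its own line, then loop.
foo=bar
result=$(for f in ${foo:-BAR}; do echo "$f"; done)
echo "$result"   # bar
```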
Shell script - Couldn't find 'fi' for this 'if'
1,515,439,233,000
My aim is to find $files on some $devices and save them to one or more folders which are namend corresponding to a timestamp of each file. To get the timestamp I wanted to use stat. In contrast to echo "$files", stat doesn't process each file separately. Instead it seems to process all files at once when I use "". Do you have any suggestions on how to improve quoting or any other hints on how to make stat able to do what i want it to do? Thank you very much for your help. #!/bin/bash for devices in "$(ls /media/*/)"; do for files in "$(find /media/*/$devices -iname *.jpg)"; do echo "$files" # prints all files in separate lines stat "$files" # seems to process all files at once stat $files # splits file paths at spaces done done
The problem is not with stat but with the source data in your for loop; because you have enclosed it all in quotes it becomes one long single entity. To fix this, remove the quotes around it, thus: for files in $(find /media/*/$devices -iname "*.jpg"); It works provided that none of the files or paths have spaces. But there is a more elegant solution which works with spaces and even with weird filenames (for instance ones that include quotes or newlines): while IFS= read -r -d '' file; do # operations on each "$file" here done < <(find /media/*/$devices -type f -iname '*.jpg' -print0)
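The NUL-delimited loop can be tried on throwaway files; the paths here are created just for the demonstration (bash is assumed for the < <(...) process substitution):

```shell
tmp=$(mktemp -d)
touch "$tmp/a b.jpg" "$tmp/c.jpg"   # note the space in the first name

count=0
while IFS= read -r -d '' file; do
  count=$((count + 1))              # real operations on "$file" would go here
done < <(find "$tmp" -type f -iname '*.jpg' -print0)

echo "found $count files"
```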
get timestamp of files presented by FOR loop
1,515,439,233,000
How do I iterate for nth file in a for loop in unix? below some code I have tried but not succeeded #!/bin/bash # n=2 array=( "CTL_MLPOSDTLP1_1.ctl" "CTL_MLPOSDTLP1_2.ctl" "CTL_MLPOSDTLP1_3.ctl" ) for x in "${array[@]}" for ((x=${array[@]}; x<=n; x++)); do echo "array[x]" done
Here are two ways to loop over an array: #!/bin/bash array=( "CTL_MLPOSDTLP1_1.ctl" "CTL_MLPOSDTLP1_2.ctl" "CTL_MLPOSDTLP1_3.ctl" ) echo Loop 1 for x in "${array[@]}" do echo "$x" done echo Loop 2 for ((x=0; x<${#array[@]}; x++)); do echo "${array[x]}" done Looping over selected items This script allows you to specify on the command line which elements from the array are to be processed: #!/bin/bash array=( "CTL_MLPOSDTLP1_1.ctl" "CTL_MLPOSDTLP1_2.ctl" "CTL_MLPOSDTLP1_3.ctl" ) for x in "$@" do echo "${array[x]}" done Suppose we want to loop over the first two and skip the third. (Since bash uses zero-based indexing, the first two are number 0 and number 1.) Use: $ bash script.sh 0 1 CTL_MLPOSDTLP1_1.ctl CTL_MLPOSDTLP1_2.ctl To run just the third: $ bash script.sh 2 CTL_MLPOSDTLP1_3.ctl To iterate from 0 to n-1 for ((x=0; x<n; x++)); do echo "${array[x]}" done
for loop to iterate through some file nth position
1,515,439,233,000
I have a 3 column text file (XYZ coordinates) and I need to iteratively add constants to the first column while appending to the end of the original file. I have tried several options but this is the one that gives the clearest idea of what I'm trying to do: awk ' { for ( i=-4; i<=4; i+=2 ) $1+=i }' coords.txt >> new_coords.txt This command results in a blank new_coords.txt file. How do I go about getting the output of the for loop into the new text file? Minimal Input 4 5 6 7 8 9 Minimal Output 0 5 6 3 8 9 2 5 6 5 8 9 4 5 6 7 8 9 6 5 6 9 8 9 8 5 6 11 8 9
I think this may be one of the (rare) cases when it actually makes sense to put the loop outside of Awk: $ for ((i=-4;i<=4;i+=2)); do awk -v i="$i" '{$1+=i} 1' Input; done 0 5 6 3 8 9 2 5 6 5 8 9 4 5 6 7 8 9 6 5 6 9 8 9 8 5 6 11 8 9 Otherwise: $ awk '{a[NR] = $1; b[NR] = $2 FS $3} END{for(i=-4;i<=4;i+=2){for(j=1;j<=NR;j++) print a[j]+i, b[j]}}' Input 0 5 6 3 8 9 2 5 6 5 8 9 4 5 6 7 8 9 6 5 6 9 8 9 8 5 6 11 8 9
Iteratively adding value to column of text file
1,515,439,233,000
Simple question: if I have a for loop (zsh) over an unreliable list, by which I mean the list contains entries that can't be predicted beforehand, then can I reset the for loop counter? This demonstrates what I'm asking for: # e.g. list=(1 5 2 9) for i in $list ; do [[ $i = 2 ]] && i=${list[1]} done (This example will obviously loop forever if it worked.) I can only think of doing it like this: for ((j=1; j<=${#list}; j++)); do [[ ${list[$j]} = 2 ]] && j=1 done Is there a simpler way of doing it? ("Simpler" meaning easier to follow with the eye what you're doing.)
You could do something like: alias forever='while ((1))' \ try-again='continue 2' \ ok-done='break' forever { for i ("$list[@]") { (( i == 2 )) && try-again } ok-done } Note that you need "$list[@]" instead of $list if you don't want to omit the empty elements. Not a lot more legible than: for ((i = 1; i <= $#list; i++)) { (( list[i] == 2 )) && i=1 } though.
reset for loop counter
1,515,439,233,000
I want to create a large number of folders and do some operations in them. The folder names are based on permutations of several chemical elements which I define as variables in a for loop: for Element in Cr Hf Mo Nb Ta Ti V W Zr I want a folder for all permutations of 4 of the elements in alphabetical order, so that I get subfolders containing the letters CrHfMoNb, CrHfMoTa, ... and so on. I tried to do this with 4 nestled for loops, but for simplicity I will demonstrate it here with just 2. The code I have come up with is: for Element in Cr Hf Mo Nb Ta Ti V W Zr; do for Elemen in Hf Mo Nb Ta Ti V W Zr; do mkdir "$Element""$Elemen"N # the N at the end is intended done done This yields the folders I want but a lot of unnecessary ones too, because I also get combinations like TiNbN or ZrVN which are not alphabetic and also duplicates like HfHfN. I can get rid of the duplicates by adding an if statement to the third line do [ "$Element" != "$Elemen" ] && mkdir "$Element""$Elemen"N although these duplicate folders do not disappear completely but become "phantom" files in my directory, meaning that they are called HfHfN etc. but have no file extension. The real problem however is the rest of the folders. I tried adding more if statements like do [ "$Element" != "$Elemen" ] && [ "$Element" > "$Elemen" ] && mkdir "$Element""$Elemen"N to decrease the allowed number of permutations but this does not get rid of anything. I also tried separating the if statements into their own respective for loops but that does not change anything aswell: for Element in Cr Hf Mo Nb Ta Ti V W Zr; do [ "$Element" != "$Elemen" ] && [ "$Element" > "$Elemen" ] && for Elemen in Hf Mo Nb Ta Ti V W Zr; do... I'm not entirely sure if > is the right if command, but from this list http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.html it seems the most reasonable one. Using commands like -ne, -lt, -le, -gt does not work as well, because they demand an integer, so the letters are not accepted. 
In the end I want to combine 4 loops together so it becomes a bit difficult to see through. What am I missing?
Spend a couple steps on skipping redundancies. It'll speed up the process overall. declare -a lst=( Cr Hf Mo Nb Ta Ti V W Zr ) # make an array for a in ${lst[@]} # for each element do for b in ${lst[@]:1} # for each but the 1st do [[ "$b" > "$a" ]] || continue # keep them alphabetical and skip wasted work for c in ${lst[@]:2} # for each but the first 2 do [[ "$c" > "$b" ]] || continue # keep them alphabetical and skip wasted work for d in ${lst[@]:3} # for each but the first 3 do [[ "$d" > "$c" ]] || continue # keep them alphabetical and skip wasted work mkdir "$a$b$c$d" && echo "Made: $a$b$c$d" || echo "Fail: $a$b$c$d" done done done done The redundancy skips are for when later loops are starting, such as when the outer loop is on element 4 but the second loop is still on 3 or 4. These skip those, because they wouldn't be alphabetic combinations. Doing that also guarantees no repeats. This generated 126 distinct dirs with no errors in 0m8.126s in git bash on my laptop with no subshells other than mkdir.
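As a sanity check of the skip logic, counting instead of creating directories should give exactly C(9,4) = 126 ordered picks — a sketch:

```shell
lst=( Cr Hf Mo Nb Ta Ti V W Zr )   # already in alphabetical order
count=0
for a in "${lst[@]}"; do
  for b in "${lst[@]}"; do
    [[ "$b" > "$a" ]] || continue
    for c in "${lst[@]}"; do
      [[ "$c" > "$b" ]] || continue
      for d in "${lst[@]}"; do
        [[ "$d" > "$c" ]] || continue
        count=$((count + 1))       # here the answer would mkdir "$a$b$c$d"
      done
    done
  done
done
echo "$count combinations"   # 126 combinations
```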
Force alphabetical order in for loop with if conditions
1,634,614,391,000
This is syntactically correct: for f in *bw;do echo $f;done But How would I add an extension to loop through one or the other? The following doesn't work: for f in *bw|*txt;do echo $f;done And this doesn't work either: for f in *bw or *txt;do echo $f;done
In Bash, with shopt -s extglob, and also zsh with KSH_GLOB enabled: for f in *.@(bw|txt) Note that in bash, if there are no matches, the loop will run with $f set to the literal string *.@(bw|txt). To avoid this, in bash: shopt -s nullglob for f in *.@(bw|txt) In zsh, by default, you'll get an error if there are no matches. To avoid this, add the N glob qualifier. for f in *.@(bw|txt)(N) In zsh, there's a simpler solution that works with default options, again with (N) to do nothing if there are no matches: for f in *.(bw|txt)(N) All of those will order all the entries alphabetically, with files with both extensions intermingled (that is, duplicate names that differ only in the extensions will (likely¹) be consecutive). You can list as many pipe-separated pattern entries within the parentheses as required, and they can include further globs (e.g. (zip|tar.?z)). ¹ foo.bw foot.bw foot.txt foo.txt however would sort in that order in locales where . is ignored in first instance in the collation algorithm as is common these days (as footb or foott come before footx and after foobw).
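A throwaway demonstration (the temp files are invented for the example; note that extglob must be enabled before bash parses the pattern, which is why the shopt line comes first in a script):

```shell
shopt -s extglob nullglob
tmp=$(mktemp -d)
touch "$tmp/a.bw" "$tmp/b.txt" "$tmp/c.log"

matched=()
for f in "$tmp"/*.@(bw|txt); do
  matched+=( "${f##*/}" )   # keep just the file name
done
printf '%s\n' "${matched[@]}"
```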
loop through one file extension or the other in bash or zsh
1,634,614,391,000
I want to read a number of strings from command line and read all strings in a loo. How to process (like Print) them in a loop, further. Sorry I am a newbie, please excuse my novice question. #!/usr/bin/bash echo "Enter the number of names of students" read classcount for ((i=1;i<=classcount;i++)); do read names[${i}] done for ((i=1;i<=classcount;i++)); do echo $names[${i}] done
Apart from various ways to improve the data input as shown in the other answer, a small change will make your script work as intended. In the line echo $names[${i}] the shell will expand $names and ${i}, probably resulting in [1] [2] etc, provided that names is not defined. Otherwise it will prepend the value of names before the opening brackets. (As mentioned in Stéphane Chazelas' comment, the resulting [1] etc could be replaced with matching file names 1 etc because of missing quotes, if such a files exist.) Instead, use echo "${names[${i}]}" Edit: As stated in ilkkachu's comment, in the script from the question it is sufficient to use echo "${names[i]}" since the index is an arithmetic expansion, and you don't need the $ there when referencing variables.
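A minimal sketch of the difference (run from an empty temp directory so the unquoted form can't match a stray file name):

```shell
tmp=$(mktemp -d) && cd "$tmp"
names=()
names[1]=Alice

unquoted=$(echo $names[1])     # $names is ${names[0]} (empty here) plus literal "[1]"
quoted=$(echo "${names[1]}")   # the actual element

echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```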
How to read variable number of variables in bash
1,634,614,391,000
I have a bunch of folders which are labelled in this way: conf1 conf2 ... But the order in the home directory is like conf1 conf10 conf100 conf101 ... conf2 conf20 conf200 conf201 ... Because each folder contains a file named "distance.txt", I would like to be able to print the content of the distance.txt file, from each single folder, but in order, going from folder 1-->2-->3... to the final folder 272. I tried several attempts, but every time the final file contains the all set of values in the wrong order; this is the piece of code I set: ls -v | for d in ./*/; do (cd "$d" && cat distance.txt >> /path/to/folder/d.txt ); done As you can see I tried to "order" the folders with the command ls -v and then to couple the cycle to iteratively save each file. Can you kindly help me?
For such a relatively small set of folders you could use a numerical loop: for n in {1..272} do d="conf$n" test -d "$d" && cat "$d/distance.txt" >> /path/to/folder/d.txt done
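Tried on a throwaway tree — the point of the numeric loop is that 10 comes after 2, which a plain conf* glob would not give you:

```shell
tmp=$(mktemp -d)
for n in 1 2 10; do
  mkdir "$tmp/conf$n"
  echo "distance $n" > "$tmp/conf$n/distance.txt"
done

for n in 1 2 10; do
  d="$tmp/conf$n"
  test -d "$d" && cat "$d/distance.txt" >> "$tmp/d.txt"
done

cat "$tmp/d.txt"
```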
Iterating through folders in numerical order
1,634,614,391,000
Please review these codes that are aimed to do basically the same task (finding the main .htaccess of the site and changing it): for dir in "$HOME"/public_html/*.{com,co.il}/; do if pushd "$dir"; then chmod 644 .htaccess popd fi done 2>/dev/null and: find "$HOME"/public_html/*.{com,co.il} -name ".htaccess" -exec chmod 644 {} \; Should these also affect .htaccess files in subdirectories of each directory under public_html?
Indeed, the first loop only considers files in the immediate directory structure you've listed (with the wildcards filled in, of course). The find command that you've listed does not restrict itself to any level of directory structure, and would run chmod on any and all files named .htaccess in the directory tree. To restrict the find command to only the directory structure it starts in (noting that you've given it, via globs, multiple starting directories), add the -maxdepth 1 predicate: find "$HOME"/public_html/*.{com,co.il} -maxdepth 1 -name ".htaccess" -exec chmod 644 {} \;
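The effect of -maxdepth 1 can be checked on a throwaway tree (names invented for the example):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/site.com/sub"
touch "$tmp/site.com/.htaccess" "$tmp/site.com/sub/.htaccess"

shallow=$(find "$tmp"/*.com -maxdepth 1 -name .htaccess | wc -l)
deep=$(find "$tmp"/*.com -name .htaccess | wc -l)
echo "with -maxdepth 1: $shallow  without: $deep"
```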
Ensuring that subdirectories won't be affected by a loop (or by find)
1,634,614,391,000
I have to make a script on Ubuntu, which will rename all of files in specified by path directory to be uppercase. I've already found a loop which will rename files, but I'm not sure how I can pass a path to this loop. Can you help me? Here's the loop: for f in *; do mv "$f" "$f.tmp"; mv "$f.tmp" "`echo $f | tr "[:lower:]" "[:upper:]"`"; done I've tried to pass the path like for f in path_to_dir* ... but what it does, it's just somehow making a directory .tmp
I suspect you did something like for f in /path/to/dir* instead of /path/to/dir/*. The former will look for files and directories in /path/to whose name starts with dir while the latter will iterate over the contents of the directory. In any case, that wouldn't help you because the tr command you are using would change everything, including /path/to/dir to upper case, leaving you with /PATH/TO/DIR which doesn't exist. The good news is that you can do it in a much simpler way. If you're using bash, you can use ${var^^} to make a variable's contents upper case. So you can just do: for f in /path/to/dir/*; do ## get the file's name name=$(basename -- "$f") ## make the new name mv -- "$f" /path/to/dir/"${name^^}" done Or, to avoid typing out the directory name twice, you could save it in a variable: targetPath="/path/to/dir" for f in "$targetPath"/*; do ## get the file's name name=$(basename -- "$f") ## make the new name mv -- "$f" "$targetPath"/"${name^^}" done However, this is one of the cases where it is simpler and cleaner to just cd into the target directory and work there: targetPath="/path/to/dir" cd -- "$targetPath" && for f in *; do mv -- "$f" "${f^^}" done Or, to avoid ending up in the new directory, run the loop in a subshell: targetPath="/path/to/dir" ( cd -- "$targetPath" && for f in *; do mv -- "$f" "${f^^}" done ) Finally, you can also do all this using perl-rename (known as rename on Ubuntu and installable with sudo apt install rename): rename -n 's|([^/]+)$|uc($1)|e' /path/to/dir/* The -n makes rename just print out what it would do without doing it. If you're satisfied it works, run it again without the -n to actually rename the files.
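The subshell variant can be tried safely on a temp directory (files invented for the example):

```shell
tmp=$(mktemp -d)
touch "$tmp/photo.jpg" "$tmp/notes.txt"

(
  cd -- "$tmp" && for f in *; do
    mv -- "$f" "${f^^}"
  done
)

ls "$tmp"
```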
Script which will rename all files in specified by path directory
1,634,614,391,000
The default variable for loops in Perl is $_. Is there any equivalent of this in Bash?
There's no such thing in Bash. Perl is specific in the way that it's been created by a linguist, Larry Wall, and it has the natural language's smoothness built in on purpose. Bash in this respect is dumb. But on a higher level, the pipelines are a sort of loops that operate on default objects. These are not represented by any symbol, and so they're implicitly default. (The pipelines are present in both Perl and Bash, in case you don't know.)
Default variables in Bash
1,634,614,391,000
I have the following shell code: for value in 10 5 27 33 14 25 do echo $value done But what if I want to manipulate the output later? I want to store the entire output in one variable. Is this possible?
It's no different with for loops than with any other commands, you'd use command substitution: variable=$( for value in 10 5 27 33 14 25 do echo "$value" done ) Would store the whole output minus the last newline character added by the last echo as a string into the scalar $variable variable. You'd then do for instance: printf '%s\n' "$variable" | further-processing Or: further-processing << EOF $variable EOF In the bash shell, you can also store each line of the output into an element of an array with: readarray -t array < <( for value in 10 5 27 33 14 25 do echo "$value" done ) To store each space/tab/newline delimited word (assuming the default value of $IFS) of the output into an array, you can use the split+glob operator with the glob part disabled set -o noglob # not needed in zsh which does only split upon unquoted # command substitution. zsh also splits on NULs by default # whilst other shells either remove them or choke on them. array=( $( for value in 10 5 27 33 14 25 do echo "$value" done ) ) With zsh/bash/ksh93, you can also do things like: array=() for value in 10 5 27 33 14 25 do array+=( "$(cmd "$value")" ) done To build the array. Then in all those, you'd do: further-processing "${array[@]}" To pass all the elements of the array as arguments to further-processing or: printf '%s\n' "${array[@]}" | further-processing To print each element on one line, piped to further-processing Beware however that if the array is empty, that still prints an empty line. You can avoid that by using print -rC1 -- instead of printf '%s\n' in zsh or in any Bourne-like shell, define a function such as: println() { [ "$#" -eq 0 ] || printf '%s\n' "$@" }
How to Store the Output of a for Loop to a Variable
1,634,614,391,000
I want to run a bash numeric 'for' loop but I want to skip some numbers in between. Example: for num in {1..4, 7..11, 23..34}; do (echo num $num); done or for num in {17..24, 41..48}; do (echo num $num); done Is this possible? How?
for num in {17..24} {41..48}; do (echo num $num); done , and see the documentation for Brace Expansion in bash.
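A quick check that the two brace expansions produce the expected 16 numbers:

```shell
count=0
for num in {17..24} {41..48}; do
  count=$((count + 1))
  last=$num
done
echo "$count numbers, last one $last"   # 16 numbers, last one 48
```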
bash for loop multiple number ranges
1,634,614,391,000
Suppose I have a folder which contains a lot of audio files. How can I write a for loop so that for each file audioname.mp3 in the folder, these commads are run: convert -size 300x200 xc:lightblue -font Bookman-DemiItalic -pointsize 40 -fill blue -gravity center -draw "text 0,0 'audioname'" audioname.png ffmpeg -i audioname.png -i audioname.mp3 audioname.flv ?
for file in ~/Main_dir/*.mp3; do convert -background lightblue -size 300x200 -fill blue -pointsize 40 -gravity center label:"$(basename "$file" .mp3)" "${file%.*}.png"; avconv -i "${file%.*}.png" -i "${file%.*}.mp3" "${file%.*}.flv"; done For the description of the first convert command see my answer on AskUbuntu. Explanation $(basename "$file" .mp3): With the $(basename "$file") command I get only the filename with its extension, and with $(basename "$file" .mp3) I remove the extension too. $ for file in ~/Main_dir/*.mp3; do echo $(basename "$file" .mp3);done 039 - Del Nevesht - noraei Eluveitie - Meet The Enemy $ for file in ~/Main_dir/*.mp3; do echo $(basename "$file");done 039 - Del Nevesht - noraei.mp3 Eluveitie - Meet The Enemy.mp3 Explanation ${file%.*}: I use this for getting the full filepath without its extension. $ for file in ~/Main_dir/*.mp3; do echo "${file%.*}" ;done /home/username/Main_dir/039 - Del Nevesht - noraei /home/username/Main_dir/Eluveitie - Meet The Enemy And with the next line in the script you combine the created .png label with your .mp3 file into an .flv. Note: I used avconv instead of ffmpeg. You can use it if you don't have the ffmpeg package installed. See the convert demo.
for loop for running a command for all files in a folder
1,634,614,391,000
How can I mimic the tree command and iterate through all the files and subdirectories of a directory and echo all the file names? I thought that a subdirectory within a directory is still counted as a directory, so I did: for FILE in *; do if [ -d $FILE ] then echo "$FILE is a directory"; else echo "$FILE is a file" fi done How can I make this recursive and loop inside the subdirectories and iterate through all the files and print their file name + path? This is a shell script using bash. Thanks!
Here's an equivalent to terdon's shell function implemented as a find command: find . -exec sh -c 'for f; do [ -d "$f" ] && is_dir="" || is_dir="not " printf "\"%s\" is %sa directory\n" "$f" "$is_dir" done' sh {} + Or with find & perl (indenting two spaces per directory level and using NUL as the filename separator): find . -print0 | \ perl -0ne '$is_dir = -d $_ ? "" : "not "; $f = $_; $indent = ($f =~ s=/==g); printf " " x $indent . "%s is %sa directory\n", $_, $is_dir' Most of the perl one-liner is straight-forward and obvious, but one line might require some explanation: $f = $_; $indent = ($f =~ s=/==g); This copies the current input record (i.e. the filename, $_) into variable $f. $f is then modified to remove all / characters while the count of changes is stored in variable $indent. $f isn't used again, it's just a throwaway variable to avoid changing $_. The $indent variable is used with the repetition operator (x - similar to multiplication but for strings, see man perlop) to construct the printf format string. To count the files and directories, with 3-digit wide numbering of each line: $ find . -print0 | perl -0ne ' if ( -d $_ ) { $is_dir = ""; $dirs++ } else { $is_dir = "not "; $files++ }; $f = $_; $indent = ($f =~ s=/==g); #printf " " x $indent . "%s is %sa directory\n", $_, $is_dir; printf "%3i: " . " " x $indent . "%s is %sa directory\n", $., $_, $is_dir; END { # simple method to determine singular or plural word forms # without using the Lingua::EN::Inflect module # (see https://metacpan.org/release/Lingua-EN-Inflect) my $d_word = ($dirs != 1) ? "directories" : "directory"; my $f_word = ($files != 1) ? "files" : "file"; print "\n$dirs $d_word, $files $f_word\n"; }' The perl part of this one-liner has got to the point where it is no longer reasonable to edit on the command line, because dealing with absurdly long lines is a complete PITA - it's much easier to use a text editor. It should be saved to a script file somewhere in your $PATH (e.g. 
~/bin/ or /usr/local/bin - add to your PATH if it's not already there) with #!/usr/bin/perl -0ne as the first line and made executable with chmod +x.
How to loop through all the files in a directory and print all the file names, like the command tree
1,634,614,391,000
folks- I'm a bit stumped, on this one. I'm trying to write a bash script that will use csplit to take multiple input files and split them according to the same pattern. (For context: I have multiple TeX files with questions in them, separated by the \question command. I want to extract each question into their own file.) The code I have so far: #!/bin/bash # This script uses csplit to run through an input TeX file (or list of TeX files) to separate out all the questions into their own files. # This line is for the user to input the name of the file they need questions split from. read -ep "Type the directory and/or name of the file needed to split. If there is more than one file, enter the files separated by a space. " files read -ep "Type the directory where you would like to save the split files: " save read -ep "What unit do these questions belong to?" unit # This is a check for the user to confirm the file list, and proceed if true: echo "The file(s) being split is/are $files. Please confirm that you wish to split this file, or cancel." select ynf in "Yes" "No"; do case $ynf in No ) exit;; Yes ) echo "The split files will be saved to $save. Please confirm that you wish to save the files here." select ynd in "Yes" "No"; do case $ynd in Yes ) # This line will create a loop to conduct the script over all the files in the list. for i in ${files[@]} do # Mass re-naming is formatted to give "guestion###.tex' to enable processing a large number of questions quickly. # csplit is the utility used here; run "man csplit" to learn more of its functionality. # the structure is "csplit [name of file] [output options] [search filter] [separator(s)]. # this script calls csplit, will accept the name of the file in the argument, searches the files for calls of "question", splits the file everywhere it finds a line with "question", and renames it according to the scheme [prefix]#[suffix] (the %03d in the suffix-format is what increments the numbering automatically). 
# the '\\question' allows searching for \question, which eliminates the split for \end{questions}; eliminating the \begin{questions} split has not yet been understood. csplit $i --prefix=$save'/'$unit'q' --suffix-format='%03d.tex' /'\\question'/ '{*}' done; exit;; No ) exit;; esac done esac done return I can confirm it does do the loop as I intended for the input files I have. However, the behavior I'm noticing is that it'll split the first file into "q1.tex q2.tex q3.tex" as expected, and when it moves on to the next file in the list, it'll split the questions and overwrite the old files, and the third file it will overwrite the second file's splits, etc. What I would like to happen is that, say, if File1 has 3 questions, it will output: q1.tex q2.tex q3.tex And then if File2 has 4 questions, it will then continue incrementing to: q4.tex q5.tex q6.tex q7.tex Is there a way for csplit to detect the numbering that has already been done in this loop, and increment appropriately? Thanks for any help you folks can offer!
The csplit command has no saved context (and nor should it), so it always restarts its counting for every invocation. There's no way to fix this, but you could maintain your own counted value that you interpolate into the prefix string. Alternatively, try replacing

read -ep "Type the directory and/or name of the file needed to split. If there is more than one file, enter the files separated by a space. " files
...
for i in ${files[@]}
do
    csplit $i --prefix=$save'/'$unit'q' --suffix-format='%03d.tex' /'\\question'/ '{*}'
done

with

read -a files -ep 'Type the directory and/or name of the file needed to split. If there is more than one file, enter the files separated by a space. '
...
cat "${files[@]}" | csplit - --prefix="$save/${unit}q" --suffix-format='%03d.tex' '/\\question/' '{*}'

This is one of the relatively rare instances where one really does need to use cat {file} | ... as csplit takes only a single file argument (or - for stdin). I've changed your read action to use an array variable since that's what you are (correctly) trying to use in your for ... do csplit ... loop. Regardless of what you finally decide to do, I'd strongly recommend you double-quote all your variables where you use them, particularly any further use of an array list such as "${files[@]}".
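If you do want one csplit run per input file but continuous numbering, here is a sketch of the "maintain your own counted value" idea (GNU csplit assumed; the sample files, directory names and the `demo` unit are invented for the demo): split each file to a throwaway prefix, then rename the pieces with a counter that survives across iterations.

```shell
#!/bin/sh
# Keep one running counter across several csplit invocations.
mkdir -p split_src split_out
printf '%s\n' '\begin{questions}' '\question A1' '\question A2' '\end{questions}' > split_src/a.tex
printf '%s\n' '\begin{questions}' '\question B1' '\end{questions}' > split_src/b.tex

unit=demo save=split_out n=0
for f in split_src/*.tex; do
    tmp=$(mktemp -d)
    # csplit numbers each run from 00, so split into a temporary prefix...
    csplit -s --prefix="$tmp/part" -- "$f" '/\\question/' '{*}'
    # ...then rename with our own counter, which keeps incrementing.
    for part in "$tmp"/part*; do
        mv -- "$part" "$save/${unit}q$(printf '%03d' "$n").tex"
        n=$((n + 1))
    done
    rmdir "$tmp"
done
ls "$save"
```

As with your original, the text before the first \question in each file still becomes its own piece; you would filter those preamble pieces out separately.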
csplit multiple files into multiple files
1,634,614,391,000
I try to generate a command line for a backup script on bash shell. Simple example: EXCLUDES="/home/*/.cache/* /var/cache/* /var/tmp/* /var/lib/lxcfs/cgroup/*"; for FOLDER in $EXCLUDES; do printf -- '--exclude %b\n' "$FOLDER" ; done Should result in: --exclude '/home/*/.cache/*' --exclude '/var/cache/*' --exclude '/var/tmp/*' --exclude '/var/lib/lxcfs/cgroup/*' But the problem is, that the folders get expanded from shell. I did try many examples with echo / printf / quoting / IFS... but without the right result. Any way to fix this?
Whenever you have to specify a list of pathnames or pathnames with filename globs, or just generally a list that you are intending to loop over and/or use as a list of arguments to some command, use an array. If you don't use an array but a string, you will not be able to process things with spaces in them (because you use spaces as the delimiter in the string). It also makes it hard to loop over the contents of the string as you will have to invoke word splitting (by not quoting the variable expansion). But this will also cause filename globbing to happen, unless you explicitly turn this off with set -f. In most shells, even in plain /bin/sh, use an array instead. In sh, use the array of positional parameters ($@). For bash, use an array of quoted strings, like excludes=( '/home/*/.cache/*' '/var/cache/*' '/var/tmp/*' '/var/lib/lxcfs/cgroup/*' ) Then, rsync_opts=( --verbose --archive ) for excl in "${excludes[@]}"; do rsync_opts+=( --exclude="$excl" ) done Later, rsync "${rsync_opts[@]}" source/ target Note that the quoting is important in all of the above variable expansions. The expansion "${array[@]}" (as well as "$@" below) results in a list of individually quoted elements of the array in question (but only if double quoted!). For any /bin/sh shell: set -- '/home/*/.cache/*' \ '/var/cache/*' \ '/var/tmp/*' \ '/var/lib/lxcfs/cgroup/*' for excl do set -- "$@" --exclude="$excl" shift done set -- --verbose --archive "$@" rsync "$@" source/ target
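A runnable miniature of the /bin/sh variant, with printf standing in for rsync so you can see that each pattern survives as exactly one argument (the two patterns are just examples):

```shell
#!/bin/sh
# Each quoted pattern stays a single argument all the way through;
# printf prints one <...> per argument it receives.
set -- '/home/*/.cache/*' '/var/cache/*'
for excl do
    set -- "$@" --exclude="$excl"
    shift
done
printf '<%s>\n' "$@"
```

This prints `<--exclude=/home/*/.cache/*>` and `<--exclude=/var/cache/*>`, each pattern intact and unexpanded, because every expansion above is double-quoted.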
for loop folder list without expansion
1,634,614,391,000
Background I'm running a larger one-line command. It is unexpectedly outputting (twice per iteration) the following: __bp_preexec_invoke_exec "$_" Here is the pared down command (removed other activity in loop): for i in `seq 1 3`; do sleep .1 ; done note: after i have played with this a few times it inexplicably stops printing the unexpected output What I've tried If I remove sleep .5 I do not get the unexpected output If I simply run sleep .5 the prompt returns but there is no output I have googled around for __bp_preexec_invoke_exec, but I am unable to determine how it applies to what I'm doing Question What is __bp_preexec_invoke_exec "$_"? How can I run this without the unwanted output? More info on solution thanks to @gina2x: Here is the output of declare -f | grep preexec preexec_functions+=(preexec); __bp_preexec_interactive_mode="on" __bp_preexec_invoke_exec () if [[ -z "$__bp_preexec_interactive_mode" ]]; then __bp_preexec_interactive_mode=""; __bp_preexec_interactive_mode=""; local preexec_function; local preexec_ret_value=0; for preexec_function in "${preexec_functions[@]}"; if type -t "$preexec_function" > /dev/null; then $preexec_function "$this_command"; preexec_ret_value="$?"; __bp_set_ret_value "$preexec_ret_value" "$__bp_last_argument_prev_command" if [[ -z "${iterm2_ran_preexec:-}" ]]; then __iterm2_preexec ""; iterm2_ran_preexec=""; __iterm2_preexec () iterm2_ran_preexec="yes"; I see in there a lot of "iterm2" information (I'm on a Mac and using iTerm2.app). In fact, when I try to reproduce using Terminal.app, I am unable to reproduce the unexpected output. Excellent sleuthing with declare -f - thank you!
Seems like __bp_preexec_invoke_exec is part of https://github.com/rcaloras/bash-preexec/blob/master/bash-preexec.sh, and it seems like there is a bug in that script. That project adds 'preexec' functionality to bash by adding a DEBUG trap. I did not test it, but I can imagine that it might not work properly in the way you are seeing. Check whether it is installed in your environment - you can do so with declare -f. With newer bash you can use PS0 instead of that project, which would probably do the same thing without the problems you see.
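For reference, the mechanism bash-preexec hooks into can be shown in a few lines: a DEBUG trap runs before each command, and $BASH_COMMAND names the command about to execute. This is a sketch of the idea, not the project's actual code:

```shell
#!/bin/sh
# A DEBUG trap fires before every simple command; bash-preexec builds its
# preexec hook on top of exactly this. Run in a child bash so the trap
# stays contained.
out=$(bash -c '
    preexec() { printf "about to run: %s\n" "$1"; }
    trap '\''preexec "$BASH_COMMAND"'\'' DEBUG
    sleep 0.1
')
printf '%s\n' "$out"    # -> about to run: sleep 0.1
```

With bash >= 4.4 you can get similar behaviour with no trap at all by setting PS0 (e.g. `PS0='running at \t\n'`), which an interactive shell expands and prints after a command is read and before it is executed.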
Bash `sleep` outputs __bp_preexec_invoke_exec
1,634,614,391,000
I'm trying to put together what I guess could be called a bash script (not that my bash-scripting abilities are anything to brag about). What I'm trying to do is feed a line from a text file--a text file whose content changes on a regular basis--to a program I'm invoking (imagemagick). It's so far working, but only if there is a single word or number sequence in the text file--I think because I'm trying to use a for loop, and the loop somehow results in white space being treated as line breaks. So, when the file's content exceeds one word or numeral sequence, instead of the content from the file being fed to the program as a line of text, it looks like it gets fed one line at a time, and only the last line--pretty much a single word--gets incorporated in the end into the result. Not what I want. I'll give an example using echo, since I think this will be simpler to demonstrate and understand that way. So I've got a text file, and let's say it's called myfile.txt. It contains a single line with several words and numeric seqeunces. It might look as follows: 'Sep 09, 2016 - 01:00 PM EDT\nconditions: mostly cloudy\n34 F\nHumidity: 39%' The single quotes are supposed to get the program I'm feeding this material to to treat it as a whole and ignore white space. The \n bits are required by the program I'm feeding it to, and they function in that program as line break indicators. So, using this example with a for loop and the echo command, a line like the following for i in `cat myfile.txt`; do echo $i; done produces output not 'Sep 09, 2016 - 01:00 PM EDT\nconditions: mostly cloudy\n34 F\nHumidity: 39%' but 'Sep 09, 2016 - 01:00 PM EDT\nconditions: mostly cloudy\n34 F\nHumidity: 39%' From what I'm reading, it seems like using a loop that invokes cat is not a very good way to accomplish what I'm after, and that the loop may well be the cause of the content ending up spread across several lines rather than all together. 
So, can anyone suggest a way of doing this that will cause all those words and integer groups to end up on the same line? PS : The script I'm trying to create starts off with #!/bin/bash and consists in a series of commands. It downloads a text file from which most of the content gets deleted and is renamed to myfile.txt. After that it downloads a weather map and performs some operations on it (mainly cropping). The idea is to use imagemagick to juxtapose some text onto that map/image. In fact, it's already working, so long as I do not try to include multiple words or integer sequences divided by white space. If I exceed a single word or integer sequence in the text file, only the last word or integer sequence gets juxtaposed onto the image.
I guess you are trying to render some text using Imagemagick. If so, why not say convert -background lightblue -fill blue -font Helvetica -size 160x label:"$(<input.txt)" output.gif where input.txt is the file you want to render.
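On the shell side, the mangling comes from word splitting in the `for i in $(cat ...)` loop; reading the whole line into one variable avoids it entirely. A small sketch (file name and content are illustrative):

```shell
#!/bin/sh
# IFS= read -r keeps spaces and the literal \n sequences intact,
# so the whole line can then be passed as a single quoted argument.
printf '%s\n' 'Sep 09, 2016 - 01:00 PM EDT\nconditions: mostly cloudy' > myfile.txt
IFS= read -r text < myfile.txt
printf '%s\n' "$text"
```

With imagemagick, that single argument would go where the answer puts "$(<input.txt)", i.e. label:"$text".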
bash script that incorporates content from a file as part of a command
1,634,614,391,000
I am trying to remove large amount of mails (mostly mail delivery failed) from my server using rm -rf /home/*/mail/new/* And I am getting -bash: /usr/bin/rm: Argument list too long I tried using find find /home/*/mail/new/ -mindepth 1 -delete But after 20 minutes it looks like it's not doing anything. How do I use for loop to delete everything (directories, files, dotfiles) within /home/*/mail/new/ Something like this for f in /home/*/mail/new/*.*~; do # if it is a file, delete it if [ -f $f ] then rm "$f" fi done Please help me rewrite this command to delete files AND folders and everything within /home/*/mail/new/ EDIT: My question is unique because it's about doing that in FOR loop.
The problem is that /home/*/mail/new/* expands to too many file names. The simplest solution is to delete the directory instead:

rm -rf /home/*/mail/new/

Alternatively, use your find command. It should work, it will just be slower. Or, if you need the new directories, use a loop to find them, delete and recreate them:

for d in /home/*/mail/new/; do
    rm -rf "$d" && mkdir "$d"
done

The loop you were trying to write (but don't use this, it is very slow and inefficient) is something like:

for f in /home/*/mail/new/* /home/*/mail/new/.*; do
    rm -rf "$f"
done

No need to test for files if you want to delete everything, just use rm -rf and both directories and files can be deleted by the same command. It will complain about not being able to delete . and .. but you can ignore that. Or, if you want to be super clean and avoid the errors, you can do this in bash:

shopt -s dotglob
for f in /home/*/mail/new/*; do
    rm -rf "$f"
done
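A miniature you can run to watch the delete-and-recreate loop behave (the demo_home/ tree is fabricated for the demo):

```shell
#!/bin/sh
# Build a tiny fake tree, then empty each new/ directory by removing and
# recreating it. The shell never expands the directory contents, so there
# is no "argument list too long" risk no matter how many mails are inside.
mkdir -p demo_home/u1/mail/new demo_home/u2/mail/new
touch demo_home/u1/mail/new/a demo_home/u1/mail/new/.dotfile demo_home/u2/mail/new/b
for d in demo_home/*/mail/new/; do
    rm -rf "$d" && mkdir "$d"
done
ls -A demo_home/u1/mail/new    # prints nothing: the directory is empty again
```

Note that dotfiles go too, since the whole directory is removed rather than its glob-matched contents.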
Remove everything within directory using for loop
1,634,614,391,000
I'm trying to loop a file using a delimiter with a ',' and the print out those values in a "list" but I'm not sure how to get all the values of delimiter. I have a file with emails like this (all in one line): [email protected],[email protected],[email protected] and my script is like this: EmailsFile="/dev/fs/C/Users/myuser/Desktop/EMAILSTOREAD.txt" for email in $(cat ${EmailsFile} | cut -d "," -f 1-100) do echo "${email}\n" done I did 1-100 due I'm not sure how many values could have the file. the output that I'm getting is: [email protected],[email protected],[email protected] Expected output: [email protected] [email protected] [email protected] Any idea?
A solution is:

awk '{ gsub(",","\n"); print $0 }' "$EmailsFile"

(the variable is double-quoted so the path survives any whitespace).
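Two other routes to the same output, closer to the loop the question was attempting (the sample addresses below are invented, since the ones in the question are redacted):

```shell
#!/bin/sh
printf '%s\n' 'alice@example.com,bob@example.com,carol@example.com' > emails.txt

# 1) simplest: turn the delimiter into newlines
tr ',' '\n' < emails.txt

# 2) or keep a loop, reading the already-split stream line by line
tr ',' '\n' < emails.txt | while IFS= read -r email; do
    printf '%s\n' "$email"
done
```

Both work with any number of addresses, which the `cut -f 1-100` guess was trying to paper over.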
How to loop a line of values using ',' and the print it as a list
1,634,614,391,000
I just wanted to brute-force my old router but the for-loop was really amateur style. How to write a nice for-loop, if I only know the charaters included? Found already that page but it does not include my case. I though of something like the following, but obviously it does not work: for word in $(cat charList)$(cat charlist); do echo ${word}; done
Brace expansion alone won't do it: only consecutive characters (ranges) are allowed. And hierarchical for-loops are a waste of command lines. I think I've got a nice way: use eval and brace expansion.

$ cat charList
a,b,_,X,5,1,' ',-,')',3
$ eval echo "{$(cat charList)}{$(cat charList)}{$(cat charList)}"

Unfortunately I have no bash at hand right now, but this should do it:

$ eval "for word in {$(cat charList)}{$(cat charList)}; do echo '${word}'; done"
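The same enumeration can be done without eval, which keeps characters like ) and space from ever being re-parsed by the shell. A sketch using the positional parameters as the character list (the characters shown are a subset of the question's charList):

```shell
#!/bin/sh
# eval-free variant: nested loops over one fixed list cover every
# 2-character combination, including eval-hostile characters.
set -- a b 1 ')' ' '
for c1 in "$@"; do
    for c2 in "$@"; do
        printf '%s\n' "$c1$c2"
    done
done    # 5 characters -> 25 candidate "words"
```

Add one more nesting level per extra position; reading the characters from charList into the list first (e.g. after translating the commas to newlines) works the same way.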
Using for loop to brute-force a password
1,634,614,391,000
I have many directories and I want to zip them all. $ mkdir -p one two three $ touch one/one.txt two/two.txt three/three.txt $ ls -F one/ three/ two/ I use zip and it works as intended: $ zip -r one.zip one adding: one/ (stored 0%) adding: one/one.txt (stored 0%) $ ls -F one/ one.zip three/ two/ But when I used this in a loop using zsh, zip files are created elsewhere. $ for dir in */; do for> echo "$dir"; for> zip -r "$dir.zip" "$dir"; for> done one/ adding: one/ (stored 0%) adding: one/one.txt (stored 0%) three/ adding: three/ (stored 0%) adding: three/three.txt (stored 0%) two/ adding: two/ (stored 0%) adding: two/two.txt (stored 0%) $ find . -name "*.zip" ./three/.zip ./two/.zip ./one/.zip $ ls -F one/ three/ two/ I expected an output like this: $ ls -F one/ one.zip three/ three.zip two/ two.zip What's going on?
You can see it in your output: for dir in */; do for> echo "$dir"; for> zip -r "$dir.zip" "$dir"; for> done one/ [ . . . ] Since you are doing for dir in */, the variable includes the trailing slash. So your $dir isn't one, it is one/. Therefore, when you run zip -r "$dir.zip" "$dir";, you are running this: zip -r "one/.zip" "one"; So zip is doing exactly what you tell it to do. What I think you want is something like this instead: $ for dir in */; do dir=${dir%/}; echo zip -r "$dir.zip" "$dir"; done zip -r one.zip one zip -r three.zip three zip -r two.zip two
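You can check the trim in isolation (directory names below are invented). In zsh specifically there is also the option of never picking up the slash at all: `for dir in *(/)` uses the (/) glob qualifier to match directories without a trailing separator.

```shell
#!/bin/sh
# ${dir%/} strips one trailing slash, so the archive name lands beside
# the directory instead of inside it. printf stands in for zip here.
cd "$(mktemp -d)" || exit 1
mkdir one two
for dir in */; do
    dir=${dir%/}
    printf 'zip -r %s.zip %s\n' "$dir" "$dir"
done
```

This prints `zip -r one.zip one` and `zip -r two.zip two` — the commands the loop in the answer would actually run.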
zip outputs in the wrong place when used in a loop
1,634,614,391,000
So I'm creating a function that does a for loop in all the files in a directory as a given argument and prints out all the files and directories: #!/bin/bash List () { for item in $1 do echo "$item" done } List ~/* However when I run the script it only prints out the first file in the directory. Any ideas?
If you're trying to iterate over files in a directory you need to glob the directory like so: #!/bin/bash List () { for item in "${1}/"* do echo "$item" done } Then call it like: $ list ~ Alternatively, if you want to pass multiple files as arguments you can write your for loop like this: List () { for item do echo "$item" done } Which can then be called as: $ list ~/* What's wrong with your current function: When you call it with a glob, it passes each file in the directory as a separate argument. Let's say your home directory contains file1, file2, and file3. When you call list ~/*, you are essentially calling: list ~/file1 ~/file2 ~/file3 Then your for loop is only being passed positional parameter 1 so for item in ~/file1 and the other positional parameters are unused. Also thanks Ilkkachu for pointing out that you also forgot a / in your hashbang, which I completely missed.
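You can watch the glob-versus-arguments behaviour directly — the function receives one positional parameter per file, which is why only $1 ever printed (throwaway file names below):

```shell
#!/bin/sh
# The glob expands *before* the call, so the function sees three separate
# arguments, not one string containing three names.
cd "$(mktemp -d)" || exit 1
touch f1 f2 f3
count_args() { printf '%s arguments, first is %s\n' "$#" "$1"; }
count_args *    # -> 3 arguments, first is f1
```

That is exactly why `for item do ...` (which loops over all the arguments) fixes the original function.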
For loop not working in a function with arguments
1,634,614,391,000
I have a set of files in a structure like so; regions ├── ap-northeast-1 │   └── sg-66497903 │   ├── sg-66497903-2017-10-03-Tue-12.39.json │   ├── sg-66497903-2017-10-03-Tue-12.42.json │   ├── sg-66497903-2017-10-03-Tue-12.49.json │   ├── sg-66497903-2017-10-03-Tue-12.53.json │   └── sg-66497903-2017-10-03-Tue-13.12.json ├── ap-northeast-2 │   └── sg-824282eb │   ├── sg-824282eb-2017-10-03-Tue-12.39.json │   ├── sg-824282eb-2017-10-03-Tue-12.42.json │   ├── sg-824282eb-2017-10-03-Tue-12.49.json │   ├── sg-824282eb-2017-10-03-Tue-12.53.json │   └── sg-824282eb-2017-10-03-Tue-13.12.json ├── ap-south-1 │   └── sg-4fec0526 │   ├── sg-4fec0526-2017-10-03-Tue-12.39.json │   ├── sg-4fec0526-2017-10-03-Tue-12.42.json │   ├── sg-4fec0526-2017-10-03-Tue-12.49.json │   ├── sg-4fec0526-2017-10-03-Tue-12.53.json │   └── sg-4fec0526-2017-10-03-Tue-13.12.json The list is longer but you get the idea. I am trying to find the oldest json file in each directory to use as at the standard to set in an array and diff the other files in the individual directories. I have this so far, but it's not right. #!/bin/bash mapfile -t awsReg < <(ls ~/regions) for awsrg in "${awsReg[@]}" do mapfile -t awsSG < <(ls regions/"$awsrg") for sg in "${awsSG[@]}" do find "$sg" -mindepth 2 -type f -printf '%T+ %p\n' | sort | head -n 1 diff -q ##oldest file## ##all other files### done done For example I would like in regions -> ap-northeast-1 -> sg-66497903 to find the file sg-66497903-2017-10-03-Tue-12.39.json, set it as the file to diff the other files in the directory with a diff -q. Move on to next directory, find the oldest in that directory....etc
It's a lot easier in zsh:

for sg in ~/region/ap-*/sg-*(/); do
  files=($sg/sg-*.json(N.Om)) # list of json regular files sorted
                              # from oldest to newest
  if (($#files >= 2)); then
    oldest=$files[1]
    files[1]=()
    for file in $files; do
      cmp -s $oldest $file || printf '"%s" differs from "%s"\n' $file $oldest
    done
  fi
done

With ksh93 or bash and GNU ls, you could translate that to (remember those shells index arrays from 0 where zsh starts at 1):

for sg in ~/region/ap-*/sg-*/; do
  eval "files=($(ls --quoting-style=shell-always -rtd -- "$sg"sg-*.json))"
  if ((${#files[@]} >= 2)); then
    oldest=${files[0]}
    files=("${files[@]:1}")
    for file in "${files[@]}"; do
      cmp -s "$oldest" "$file" || printf '"%s" differs from "%s"\n' "$file" "$oldest"
    done
  fi
done
list oldest file in directories in a loop
1,634,614,391,000
I am working on an undergraduate research project that is heavy in bioinformatics, and I am going down a pipeline of file processing. Some background: I am working with shotgun metagenomic data which is very large swatches of A,T,G,C (nucleotides in a DNA sample), and from what I gather, some qualifiers. I have gone through a few steps of the pipeline already which trimmed and cleaned up the files some, along with adding some qualifiers. The important thing is that these reads are mostly paired end reads, meaning two files reading the nucleotides right to left and left to right. Prior to this, I had crammed my head with basically only biology and ecology so I don't really have any context for coding or how/why to do things or common practices/functions, etc. You get the point. That said, I taught myself very basic for loops and string manipulation in UNIX, making some bash files that ran through different folders using different modules and functions. Here is example code: cd ~/ncbi/public/sra/indian for forward_read_file in *_1.fastq do rev=_2 reverse_read_file=${forward_read_file/_1/$rev} perl /home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/AmbiguityFiltering.pl -i ${forward_read_file} -irev ${reverse_read_file} -c 1 -t5 -t3 rm ${forward_read_file} ${reverse_read_file} done #CAMEROON cd ~/ncbi/public/sra/cameroon for forward_read_file in *_1.fastq do rev=_2 reverse_read_file=${forward_read_file/_1/$rev} perl /home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/AmbiguityFiltering.pl -i ${forward_read_file} -irev ${reverse_read_file} -c 1 -t5 -t3 rm ${forward_read_file} ${reverse_read_file} done and so on for many folders. I used string manipulation to get each iteration of the for loop to call the paired end files, and then some arguments and parameters for the module I'm using. 
The big issue I am running into now is that I can't think of a way to pair the paired end files for my next step in the pipeline as they have four random characters right before the extension, and I can't predict them. They don't contain meaningful data, so my plan is to delete them from the filenames and proceed as I have been. Here are examples of the problem files; the issue is the four characters at the end of the string. If I get rid of those I can do the string manipulation as usual. SRR5898908_1_prinseq_good_ZsSX.fastq SRR5898928_2_prinseq_good_VygO.fastq SRR5898979_1_prinseq_good_CRzI.fastq SRR6166642_2_prinseq_good_nqVP.fastq SRR6166693_2_prinseq_good_y_OD.fastq SRR5898908_2_prinseq_good_HPTU.fastq SRR5898929_1_prinseq_good_p2mS.fastq SRR5898979_2_prinseq_good_vYcE.fastq SRR6166643_1_prinseq_good_fc8y.fastq SRR6166694_1_prinseq_good_Ka1C.fastq SRR5898909_1_prinseq_good_X41r.fastq SRR5898929_2_prinseq_good_uO8g.fastq SRR5898980_1_prinseq_good_WuPS.fastq SRR6166643_2_prinseq_good_QUUK.fastq SRR6166694_2_prinseq_good_ZlNk.fastq SRR5898909_2_prinseq_good_GbmA.fastq SRR5898930_1_prinseq_good_3qyA.fastq Where the beginning SRRxxxxx is the sample, and the 1 or 2 are forward and reverse reads respectively, hence my string manipulation. The issue is the four characters at the end of the string. If I get rid of those I can do the string manipulation as usual. My mentor recommended I use the FIND or CUT functions somehow, and talked about using the return of the find as a variable to manipulate, but I feel like that would still run into the same issue. How can I remove these characters safely using a for loop? Or whatever you think would work best. Thank you!
Try something like this:

for forward_read_file in *_1*.fastq; do
    srr=$(echo "$forward_read_file" | cut -d_ -f1)
    rrf_array=( $(find . -name "${srr}_2_*.fastq") )
    case "${#rrf_array[@]}" in
        0) echo "Warning: No reverse read file found for $forward_read_file" > /dev/stderr
           ;;
        1) reverse_read_file="${rrf_array[0]}"
           perl /home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/AmbiguityFiltering.pl -i "$forward_read_file" -irev "$reverse_read_file" -c 1 -t5 -t3
           ;;
        *) echo "Error: multiple reverse read files found for $forward_read_file" > /dev/stderr
           ;;
    esac
done

This iterates over all the _1 files. It uses cut to extract the SRR sample id, and then uses that with the find command to find any matching _2 files. find's output is stored in an array because we don't know how many results might be returned (note that bash indexes arrays from zero, so a single result lands in "${rrf_array[0]}"). It handles three possible outcomes - no matches (not good), exactly 1 match (good, that's what we want), and more than 1 match (again, not good). If there's only one result, extract the matching file from the array and process it with your perl script. If there is zero or more than one result, print a warning message to stderr and continue on to the next _1 filename. You could add ; exit 1 (or other code to handle the error) before the ;; for those cases if you wanted to. This ignores all parts of the filenames except for the SRR sample id at the beginning and the _1 or _2 that identifies it as a forward or reverse pairing file. BTW, this could have been done with an if; then; else instead of a case statement but I thought it was useful to handle zero and more-than-one cases differently. e.g.

if [ "${#rrf_array[@]}" == 1 ]; then
    reverse_read_file="${rrf_array[0]}"
    perl /home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/AmbiguityFiltering.pl -i "$forward_read_file" -irev "$reverse_read_file" -c 1 -t5 -t3
else
    echo "Warning: unknown problem with reverse read file for $forward_read_file" > /dev/stderr
fi

If you just want to ignore the "problem" files, delete the else block. BTW, to make your script more readable, I suggest doing something like this near the top of your script:

AFilter='/home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/AmbiguityFiltering.pl'

and, later:

perl "$AFilter" -i "$forward_read_file" -irev "$reverse_read_file" -c 1 -t5 -t3

Alternatively, if the perl script(s) are executable (i.e. with a #!/usr/bin/perl or similar shebang line, and with the executable flag set with chmod +x), you can just add /home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming/ to your $PATH:

PATH="$PATH:/home/gomeza/shared/sharm646-2021-02-24-09_22/Softwares/NGSQCToolkit_v2.3.3/Trimming"

and run the script as:

AmbiguityFiltering.pl -i "$forward_read_file" -irev "$reverse_read_file" -c 1 -t5 -t3
How to remove four random characters before the .extension from various files using a for loop?
1,634,614,391,000
I am trying to write a script that takes two files as arguments and changes an .svg file with values from a .csv file. Csv file consists of lines with two values; id,colour. I need to find the id in the svg file and add the colour to the line where id matches I don't know if my problem is the sed part, since it gets complicated when variables are introduced within the change, or the script is fundamentally flawed. id=($(cut -f1 -d, $2)) colour=($(cut -f2 -d, $2)) file=$1 name=$(basename -s .svg $1) name1=$(echo "$name""1") cat $2 | while IFS=, read id colour; do sed -i "s/id=\"'"$id"'\"/id=\"'"$id"'\" style=\"fill:\"'"$colour"'\";\"/" "$1" done When I use this sed -i 's/id="ca"/id="ca" style="fill:red;"/' data.svg this changes the file, but when I change the "ca" with "$id" or '"$id"', it doesn't work. I've also tried this : cat $2 | while IFS=, read id colour; do sed -i 's/id='"$id"'/id='"$id"' style="fill:red;"/' "$1" done the result I am getting is this: <g id= style="fill:red;""hi"> the expected result was this : <g id="hi" style="fill:red;"> And when I put $colour to sed, sed -i 's/id='"$id"'/id='"$id"' style="fill:'"$colour"';"/' "$1" like this, I get this: <g id= style="fill:;""hi">
You shouldn't need to read the CSV into variables, you can just loop on the CSV directly:

cat data.csv | while IFS=, read id colour; do
    # something with $id and $colour
done

Doing var=$(echo text) is kind of redundant - you should just use var="text" directly. I'm not sure what you mean by the construct [ "grep -E..." ], but that is basically testing if the string in the test is not empty, which it isn't because you just put some text in it. When writing bash scripts it is a good idea to test each expression in turn and see that it behaves as you expect, from the command line (*) - this way a lot of surprises will be eliminated. Writing a complete script and then running it, without a lot of experience on how to write scripts well, is probably a good way to shoot yourself in the foot. I have many years of experience writing bash scripts, and I still try and fail a lot on the command line before putting things down to script. When you do run your script, running it with -x to enable tracing of the commands will prove very helpful.

I would guess that the main reason you can't get the replacement to work is that the SVG file uses double quotes (") for the ID attribute and not single quotes as in your sed expression. When constructing sed expressions in a shell script, it's often annoying to deal with all the escaping. If possible I usually like to have the sed expression not quoted at all, which is one of the reasons why I often use regexp delimiters that aren't slashes (as we will see next). You need to put the values of the id and colour into the sed expression while also correctly identifying the quoting of the SVG attributes. One thing that might work would be this:

sed -i "s,id=\"$id\",id=\"$id\" style=\"fill:$colour;\"," "$1"

Another option would be to compose the sed expression from multiple strings that have different quoting rules:

sed -i 's,id="'"$id"'",id="'"$id"'" style="fill:'"$colour"';",' "$1"

Here we put all the static text in single quotes, and then just terminate the single quotes and open a double quoted string to have the variables parsed. This works because there are no spaces between the single quoted strings and the double quoted strings, and as long as there are no spaces, Bash passes all that as a single argument to sed. But obviously that's a bit confusing, so I prefer the first style.

Another thing we can do is use sed capture groups to not type id="$id" more than once:

sed -i "s,\(id=\"$id\"\),\1 style=\"fill:$colour;\"," "$1"

The \( and \) capture the thing that was matched, and \1 drops it back into the substituted string.

*) If you have done scripting in MS-Windows you are probably aware of the sad situation there where the command line (whether CMD or PowerShell) behaves differently than running the same commands in scripts, in often non-obvious ways. This is not the situation in Unix scripting shells such as Bash.
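The capture-group variant can be checked on a one-line sample straight from the question (the hi/red values are the question's own examples):

```shell
#!/bin/sh
# \( \) captures the matched id attribute; \1 puts it back, followed by
# the style attribute built from $colour. Comma is the regex delimiter,
# so no slash escaping is needed.
id=hi
colour=red
printf '%s\n' '<g id="hi">' |
    sed "s,\(id=\"$id\"\),\1 style=\"fill:$colour;\","
# -> <g id="hi" style="fill:red;">
```

Dropping -i and piping a sample line through like this is a handy way to test each sed expression before letting it rewrite the real SVG in place.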
Changing a file with values from another file - Bash script
1,634,614,391,000
I have a file, 4.txt that contains full paths to *.cfg files as well as additional data I need to strip for the final report (5.csv). For example /source/EDDG/env1/dom1/proj/config/test.cfg <ListVariable name="selected_lookups"> <CompoundVariableValue> <StringVariableValue name="lookup_name" value="CUSTOMER_1"/> <StringVariableValue name="business_name" value="DEVCUSTOMER"/> <StringVariableValue name="sample_data_path"value="/dev/.dat"/> </CompoundVariableValue> <CompoundVariableValue> <StringVariableValue name="lookup_name" value="CODE_1"/> <StringVariableValue name="business_name"value="CONCUSTOMER"/> </CompoundVariableValue> </ListVariable> (AND THIS SEQUENCE REPEATS FOR ~238 TIMES WITH DIFFERENT DATA BETWEEN <ListVariable * > and </ListVariable>. Now I need to get 4 values from this file piped into a csv file... e.g. DOM, PROJ, CFG, LOOKUP NAME VALUE(s) (thr can be many per cfgfile) source, EDGE, test.cfg, CUSTOMER_1 , CONCUSTOMER (second lookup name value) ... repeat for all cfg files in 4.txt In order to acquire this data, I have the following loop and it works great for the first 3 columns but not for the fourth column. 
for COL_VAL in `cat 4.txt | grep '/source/EDDG*'` ; do DOM=`echo "${COL_VAL}" | awk -F'/' '{ print $7 }'` PROJ=`echo "${COL_VAL}" | awk -F'/' '{ print $8 }'` CGF=`echo "${COL_VAL}" | awk -F'/' '{ print $10 }'` LKP=`echo "${COL_VAL}" | grep 'name="lookup_name" value="' | awk -F'value="' '{ print $2 }' | awk -F'_1' '{ print $1 }'` echo "${DOM},${PROJ},${CFG},${LKP}" done < ${TMPDIR}/4.txt > ${TMPDIR}/5.csv So, I tried something like this nested loop: for COL_VAL in `cat 4.txt | grep '/source/EDDG*'` ; do DOMN=`echo ${COL_VAL} | awk -F'/' '{ print $7 }'` PROJ=`echo ${COL_VAL} | awk -F'/' '{ print $8 }'` APFG=`echo ${COL_VAL} | awk -F'/' '{ print $10 }'` for LOOK_UP in `cat 4.txt | grep 'name="lookup_name" value="'` ; do ULKP=`echo "${LOOK_UP}" | awk -F'value="' '{ print $2 }' | awk -F'_1' '{ print $1 }'` done echo "${DOMN},${PROJ},${APFG},${ULKP}" done < ${TMPDIR}/4.txt > ${TMPDIR}/5.csv This populates the 4th column but with the same data. And, strangely to me, the data that does go in the 4th col is the value of the absolute last lookup name in 4.txt which is "'name="lookup_name" value=XYZ'" e.g. DOM, PROJ, CFG, LOOKUP NAME VALUE(s) source, EDGE, test.cfg, XYZ , , , XYZ ... repeat for all cfg files in 4.txt
How about this. Single run of awk, so likely quite fast in comparison to the original script. $ awk -F/ 'BEGIN{print "DOM, PROJ, CFG, LOOKUP NAME VALUE(s)"}/source\/EDDG/{a=$2", "$3", "substr($8,0,length($8)-2)", "}/lookup_name/{gsub(/^.*value="/,"");gsub(/".*/,"");print a$0}' 4.txt DOM, PROJ, CFG, LOOKUP NAME VALUE(s) source, EDDG, test.cfg, CUSTOMER_1 source, EDDG, test.cfg, CODE_1 $ Or, formatted more nicely: $ awk -F/ 'BEGIN { print "DOM, PROJ, CFG, LOOKUP NAME VALUE(s)" } /source\/EDDG/ { a=$2", "$3", "substr($8,0,length($8)-2)", "} /lookup_name/ { gsub(/^.*value="/,"") gsub(/".*/,"") print a$0 }' 4.txt DOM, PROJ, CFG, LOOKUP NAME VALUE(s) source, EDDG, test.cfg, CUSTOMER_1 source, EDDG, test.cfg, CODE_1 $
Populate a CSV file from data file with nested loops in bash
1,634,614,391,000
I am running Ubuntu 18.04. I have a directory full of storyboard images in .jpg format as below. image-0000.jpg image-0001.jpg image-0002.jpg image-0003.jpg image-0004.jpg . . image-3898.jpg image-3899.jpg Merging 13 images vertically gives me a Single page. So I think I need to use below command, using a range of 13 numbers at a time in a loop and save to a directory "./Merged". convert -append image-{range of 13}.jpg ./Merged/page_001.jpg My experiment and thought process is as below. I am trying to use a nested for loop and seq -w as below. But I am unable understand, how to loop the scrip in such a way that it takes first 13 files (from image-0000 to image-0012), merges them and saves in the ./Merged/ folder. Then come out of the loop and again take the next 13 files (from image-0013 to image-0025) and so on. Till all .jpg files in the current folder are finished or till 300 pages are generated. My Script #!/bin/bash # As 3899 image slices will be converted to 300 pages # I thought to run for loop 300 times for ((page=1; page<=300; page++)) do # As images are slices of pages. for slices in $(seq -w 0 3899) do # We need to merge 13 times so... # Should i use for loop with increment as below? # for ((smerge=1; smerge<=13; smerge++)) # do # convert "SOME LOGIC" ./Merged/page_001.jpg # done # **OR** # somehow take 13 numbers from sequence convert image-$slices_{RANGE}.jpg -append ./Merged/page_$page.jpg done done
With zsh: #! /bin/zsh - typeset -Z3 page files=(image-<0-3900>.jpg) for ((page = 1; $#files; page++)) { convert $files[1,13] -append ./Merged/page_$page.jpg files[1,13]=() } Note that since there are 3901 images (13 × 300 + 1), the last page will have only one image. You can do something similar with bash like: #! /bin/bash - shopt -s extglob shopt -s failglob set -- image-+([[:digit:]]).jpg for ((page = 1; $#; page++)) { printf -v padded_page %03d "$page" convert "${@:1:13}" -append "./Merged/page_$padded_page.jpg" (($# > 13)) || break shift 13 } POSIXly, assuming there are matching files and doing a yet less close check on file names: #! /bin/sh - set -- image-*.jpg # disable split+glob, only retain empty removal on unquoted expansions: set -o noglob; IFS= page=1; while [ "$#" -gt 0 ]; do padded_page=000$page padded_page=${padded_page#"${padded_page%???}"} convert $1 $2 $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13} \ -append "./Merged/page_$padded_page.jpg" [ "$#" -gt 13 ] || break shift 13 page=$((page + 1)) done Note that while here the file names are very tamed (no blanks, special characters...), special care has been taken in those codes to handle arbitrary characters. However note that convert and other imagemagick utilities could have problems with file names starting with - (even when using --) or containing :, so best is to prefix file paths with ./ to avoid those problems (for instance, use ./*.jpg instead of *.jpg).
I need to create a loop in bash to join images vertically and ouput multiple joined Pages for print
1,634,614,391,000
Quick question: I have to write a simple script and part of it is adding up every value in a column -> sum of every column and everything. So file 1 2 5 1 2 1 should return column1: 3 column2: 4 column3: 5 sum: 12 My code is almost perfect but columns are not displayed in ascending order if [[ $# -eq 0 ]]; then awk '{ for (i=1;i<=NF;i++) sum[i]+=$i }; END { for (i in sum) print "column "i" : " sum[i];}' file.txt awk '{for(i=1;i<=NF;i++) sum+=$i;}; END {print "sum: " sum}' file.txt fi And the output is: ➜ script ./sum.sh column 2 : 4 column 3 : 5 column 1 : 3 sum: 12 Why does it start with column 2?
awk makes no ordering guarantee here. From the POSIX awk specification:

for (variable in array) shall iterate, assigning each index of array to variable in an unspecified order.

Solution: iterate over the numeric indices explicitly (tracking the widest row in m):

if [[ $# -eq 0 ]]; then
   awk '(NF>m){m=NF}{for(i=1;i<=NF;i++)sum[i]+=$i}END{for(i=1;i<=m;i++)print("column "i" : "sum[i])}' file.txt
   awk '{for(i=1;i<=NF;i++)sum+=$i}END{print("sum: "sum)}' file.txt
fi

Output

column 1 : 3
column 2 : 4
column 3 : 5
sum: 12
for loop executes in a weird way
1,634,614,391,000
I have a bash script which downloads images from Google Images. This is the command I use: bash getimages.sh 3 rocky%20mountains That command will download the 3rd picture in Google Images. To download the first 10 pictures of the Rocky Mountains in Google Image Search I use the following amateurish command: bash getimages.sh 1 rocky%20mountains && \ bash getimages.sh 2 rocky%20mountains && \ bash getimages.sh 3 rocky%20mountains && \ bash getimages.sh 4 rocky%20mountains && \ bash getimages.sh 5 rocky%20mountains && \ bash getimages.sh 6 rocky%20mountains && \ bash getimages.sh 7 rocky%20mountains && \ bash getimages.sh 8 rocky%20mountains && \ bash getimages.sh 9 rocky%20mountains && \ bash getimages.sh 10 rocky%20mountains I want a For Loop incorporated into the bash script so that it will download 20 images of a designated search term. Bash Script: #! /bin/bash # function to create all dirs til file can be made function mkdirs { file="$1" dir="/" # convert to full path if [ "${file##/*}" ]; then file="${PWD}/${file}" fi # dir name of following dir next="${file#/}" # while not filename while [ "${next//[^\/]/}" ]; do # create dir if doesn't exist [ -d "${dir}" ] || mkdir "${dir}" dir="${dir}/${next%%/*}" next="${next#*/}" done # last directory to make [ -d "${dir}" ] || mkdir "${dir}" } # get optional 'o' flag, this will open the image after download getopts 'o' option [[ $option = 'o' ]] && shift # parse arguments count=${1} shift query="$@" [ -z "$query" ] && exit 1 # insufficient arguments # set user agent, customize this by visiting http://whatsmyuseragent.com/ useragent='Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0' # construct google link link="www.google.com/search?q=${query}\&tbm=isch" # fetch link for download imagelink=$(wget -e robots=off --user-agent "$useragent" -qO - "$link" | sed 's/</\n</g' | grep '<a href.*\(png\|jpg\|jpeg\)' | sed 's/.*imgurl=\([^&]*\)\&.*/\1/' | head -n $count | tail -n1) 
imagelink="${imagelink%\%*}" # get file extention (.png, .jpg, .jpeg) ext=$(echo $imagelink | sed "s/.*\(\.[^\.]*\)$/\1/") # set default save location and file name change this!! dir="$PWD" file="google image" # get optional second argument, which defines the file name or dir if [[ $# -eq 2 ]]; then if [ -d "$2" ]; then dir="$2" else file="${2}" mkdirs "${dir}" dir="" fi fi # construct image link: add 'echo "${google_image}"' # after this line for debug output google_image="${dir}/${file}" # construct name, append number if file exists if [[ -e "${google_image}${ext}" ]] ; then i=0 while [[ -e "${google_image}(${i})${ext}" ]] ; do ((i++)) done google_image="${google_image}(${i})${ext}" else google_image="${google_image}${ext}" fi # get actual picture and store in google_image.$ext wget --max-redirect 0 -qO "${google_image}" "${imagelink}" # if 'o' flag supplied: open image [[ $option = "o" ]] && gnome-open "${google_image}" # successful execution, exit code 0 exit 0
Why do you want to incorporate it in the script when you can run the command in a loop itself:

for i in `seq 1 20`; do ./getimages.sh $i rocky%20mountains ; done;

However, if you really do want to incorporate it, a quick way is to move the main script code inside another function and write a for loop to invoke that function. Your bash script would then become:

#! /bin/bash

# function to create all dirs til file can be made
function mkdirs {
    file="$1"
    dir="/"

    # convert to full path
    if [ "${file##/*}" ]; then
        file="${PWD}/${file}"
    fi

    # dir name of following dir
    next="${file#/}"

    # while not filename
    while [ "${next//[^\/]/}" ]; do
        # create dir if doesn't exist
        [ -d "${dir}" ] || mkdir "${dir}"
        dir="${dir}/${next%%/*}"
        next="${next#*/}"
    done

    # last directory to make
    [ -d "${dir}" ] || mkdir "${dir}"
}

function getMyImages {

# get optional 'o' flag, this will open the image after download
getopts 'o' option
[[ $option = 'o' ]] && shift

# parse arguments
count=${1}
shift
query="$@"
[ -z "$query" ] && exit 1  # insufficient arguments

# set user agent, customize this by visiting http://whatsmyuseragent.com/
useragent='Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0'

# construct google link
link="www.google.com/search?q=${query}\&tbm=isch"

# fetch link for download
imagelink=$(wget -e robots=off --user-agent "$useragent" -qO - "$link" | sed 's/</\n</g' | grep '<a href.*\(png\|jpg\|jpeg\)' | sed 's/.*imgurl=\([^&]*\)\&.*/\1/' | head -n $count | tail -n1)
imagelink="${imagelink%\%*}"

# get file extension (.png, .jpg, .jpeg)
ext=$(echo $imagelink | sed "s/.*\(\.[^\.]*\)$/\1/")

# set default save location and file name change this!!
dir="$PWD" file="$1_$count" # get optional second argument, which defines the file name or dir if [[ $# -eq 2 ]]; then if [ -d "$2" ]; then dir="$2" else file="${2}" mkdirs "${dir}" dir="" fi fi # construct image link: add 'echo "${google_image}"' # after this line for debug output google_image="${dir}/${file}" # construct name, append number if file exists if [[ -e "${google_image}${ext}" ]] ; then i=0 while [[ -e "${google_image}(${i})${ext}" ]] ; do ((i++)) done google_image="${google_image}(${i})${ext}" else google_image="${google_image}${ext}" fi # get actual picture and store in google_image.$ext wget --max-redirect 0 -qO "${google_image}" "${imagelink}" # if 'o' flag supplied: open image [[ $option = "o" ]] && gnome-open "${google_image}" # successful execution, exit code 0 #exit 0 } for i in `seq 1 $1`; do echo "Downloading image" $i getMyImages $i "$2"; done; To download the first 10 images, simply invoke is as previously: ./getimages 10 rocky%20mountains
For Loop for Google Image Downloading Bash Script
1,634,614,391,000
I have an array like this

"Apple Banana Clementine Date"

I have to print like this:

1. Apple
2. Banana
3. Clementine
4. Date

Script file:

for i in "${fruits[@]}"; do
echo "$lineno. $i "
lineno+=1
done

output of my script:

1. Apple Banana Clem....

I don't understand why it is not printing lineno and also why there is a long gap between 1. and Apple. Thanks.
The problem is your array. It seems that you have created an array with only one element. Try this example:

array=("$(printf 'Apple\nBanana\nClementine\nDate')")

for ((i = 0; i < ${#array[@]}; i++)); do
    printf '%d. %s\n' $((i+1)) "${array[$i]}"
done

j=0
for e in "${array[@]}"; do
    j=$((j+1))
    printf '%d. %s\n' "$j" "$e"
done

k=0
for e in ${array[@]}; do
    k=$((k+1))
    printf '%d. %s\n' "$k" "$e"
done

Then run:

$ ./test.sh
1. Apple
Banana
Clementine
Date
1. Apple
Banana
Clementine
Date
1. Apple
2. Banana
3. Clementine
4. Date

As you can see, you actually created an array containing one element. The third loop prints four elements because the shell performed field splitting on the string Apple\nBanana\nClementine\nDate, which gave you four separate words back. If you change the array to:

set -f
array=( $(printf 'Apple\nBanana\nClementine\nDate') )
set +f

(set -f disables wildcard expansion, in case the characters *?\[ appear in the output of the command) you will get the desired result, which is that the output of the command is split at whitespace:

$ ./test.sh
1. Apple
2. Banana
3. Clementine
4. Date
1. Apple
2. Banana
3. Clementine
4. Date
1. Apple
2. Banana
3. Clementine
4. Date

Note that you must use double quotes "${array[@]}" when you want to iterate through all array elements, or use the C-style for loop as in my first example.
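On bash 4+, mapfile/readarray is another common way (not used above, just an alternative sketch) to build the array: it splits on newlines only, so neither quoting nor set -f gymnastics are needed:

```shell
mapfile -t array < <(printf 'Apple\nBanana\nClementine\nDate\n')

for i in "${!array[@]}"; do
    printf '%d. %s\n' "$((i + 1))" "${array[i]}"
done
```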
for loop not working for multiple lines
1,634,614,391,000
How can I get reasonable parallelisation on multi-core nodes without saturating resources? As in many other similar questions, the question is really how to learn to tweak GNU Parallel to get reasonable performance. In the following example, I can't get to run processes in parallel without saturating resources or everything seems to run in one CPU after using some -j -N options. From inside a Bash script running in a multi-core machine, the following loop is passed to GNU Parallel for BAND in $(seq 1 "$BANDS") ;do echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y" done |parallel This saturates, however, the machine and slows down processing. In man parallel I read --jobs -N -j -N --max-procs -N -P -N Subtract N from the number of CPU threads. Run this many jobs in parallel. If the evaluated number is less than 1 then 1 will be used. See also: --number-of-threads --number-of-cores --number-of-sockets and I've tried to use |parallel -j -3 but this, for some reason, uses only one CPU out of the 40. Checking with [h]top, only one CPU is reported high-use, the rest down to 0. Should -j -3 not use 'Number of CPUs' - 3 which would be 37 CPUs for example? and I extended the previous call then -j -3 --use-cores-instead-of-threads blindly doing so, I guess. I've read https://unix.stackexchange.com/a/114678/13011, and I know from the admins of the cluster I used to run such parallel jobs, that hyperthreading is disabled. This is still running in one CPU. I am now trying to use the following: for BAND in $(seq 1 "$BANDS") ;do echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y" done |parallel -j 95% or with |parallel -j 95% --use-cores-instead-of-threads. Note For the record, this is part of a batch job, scheduled via HTCondor and each job running on a separate node with some 40 physical CPUs available. 
Above, I kept only the essentials -- the complete for loop piped to parallel is:

for BAND in $(seq 1 "$BANDS") ;do
    # Do not extract, unscale and merge if the scaled map exists already!
    SCALED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged_scaled.nc"
    MERGED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged.nc"
    if [ ! -f "${SCALED_MAP+set}" ] ;then
        echo "log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)"
        echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
    else
        echo "warning "Scaled map "$SCALED_MAP" exists already! Skipping merging.-""
    fi
done |parallel -j 95%
log "$LOG_FILE" "Action=Merge, End=$(timestamp)"

where `log` and `warning` are custom functions
To debug this, I suggest you first run it with something simpler than gdalmerge_and_clean. Try:

seq 100 | parallel 'seq {} 100000000 | gzip | wc -c'

Does this correctly run one job per CPU thread?

seq 100 | parallel -j 95% 'seq {} 100000000 | gzip | wc -c'

Does this correctly run 19 jobs for every 20 CPU threads?

My guess is that gdalmerge_and_clean is actually run in the correct number of instances, but that it depends on I/O and is waiting for this. So your disk or network is pushed to the limit while the CPU is sitting idle and waiting.

You can verify the correct number of copies is started by using ps aux | grep gdalmerge_and_clean. You can see if your disks are busy with iostat -dkx 1.
GNU Parallel with -j -N still uses one CPU
1,634,614,391,000
Basically, I want to see how much time every user spent logged in. (username) pts/0 (IP adress) Tue Dec 12 17:51 - 18:14 (00:22) - this is how one line looks in the last command. The username is between characters 1-9, the time is between 67-71. There are multiple logins from each user, so I want to sum the time based on their username. I already counted how many times one user logged in using echo `last | sort | cut -c1-9 | uniq -c | sort -n | head -$y` in a bash script. Is it possible to add up the time of each user using a format similar to this? I was thinking of a for loop, but have no ide how to set it up. Any ideas?
You could use the ac command for this. Part of the psacct package.

$ last|head
steve  pts/0  cpc79909-stkp12-  Tue Nov 20 18:40   still logged in
steve  pts/0  cpc79909-stkp12-  Mon Nov 19 22:19 - 23:09  (00:50)
steve  pts/0  cpc79909-stkp12-  Wed Nov 14 19:36 - 19:45  (00:09)
steve  pts/0  cpc79909-stkp12-  Fri Nov  9 11:43 - 11:56  (00:12)
steve  pts/0  cpc79909-stkp12-  Thu Nov  8 21:58 - 22:00  (00:02)
steve  pts/0  cpc79909-stkp12-  Mon Nov  5 17:37 - 18:30  (00:53)
steve  pts/0  cpc79909-stkp12-  Fri Nov  2 19:45 - 20:39  (00:54)
steve  pts/1  cpc79909-stkp12-  Fri Nov  2 17:34 - 18:31  (00:57)
steve  pts/0  cpc79909-stkp12-  Fri Nov  2 16:01 - 18:30  (02:29)
steve  pts/0  cpc79909-stkp12-  Thu Nov  1 21:10 - 23:14  (02:04)
$ ac -p
    steve                               33.79
    total       33.79
$

Or use awk? Strip the parentheses before doing arithmetic, since in awk a string like "(02" converts to 0, which would silently drop the hours:

$ last | awk '$NF ~ /^\([0-9]+:[0-9]+\)$/{gsub(/[()]/,"",$NF);split($NF,t,":");a[$1]+=t[1]*60+t[2]}END{for(x in a){print x,a[x]}}'
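To check the arithmetic without real wtmp data, the awk part can be fed a few made-up last-style lines (hypothetical users and hosts):

```shell
printf '%s\n' \
  'steve pts/0 host1 Mon Nov 19 22:19 - 23:09  (00:50)' \
  'alice pts/1 host2 Tue Nov 20 10:00 - 12:15  (02:15)' \
  'steve pts/0 host1 Tue Nov 20 18:40 - 19:10  (00:30)' |
awk '$NF ~ /^\([0-9]+:[0-9]+\)$/ {
         gsub(/[()]/, "", $NF)        # "(00:50)" -> "00:50"
         split($NF, t, ":")           # t[1] hours, t[2] minutes
         mins[$1] += t[1] * 60 + t[2]
     }
     END { for (u in mins) print u, mins[u] " minutes" }'
```

This prints one total per user (steve 80, alice 135 here), in an unspecified order.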
How can I sum the time based on usernames?
1,634,614,391,000
For each line of access.log with the pattern /mypattern: www.example.com:80 192.0.2.17 - - [29/Sep/2017:13:49:02 +0200] "GET /mypattern?foo=bar&iptosearch=198.51.100.5 I would like to extract the iptosearch parameter, and show all the lines of access.log that have this IP and which contains blah. Example: [29/Sep/2017:13:49:02 +0200] "GET /mypattern?foo=bar&iptosearch=198.51.100.5: www.example3.com:80 198.51.100.5 - - [27/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... www.example2.com:80 198.51.100.5 - - [25/Sep/2017:00:00:00 +0200] "GET /blah.html" ... www.example7.com:80 198.51.100.5 - - [12/Sep/2017:00:00:00 +0200] "GET /index.htm?i=blah" ... [27/Sep/2017:00:00:00 +0200] "GET /mypattern?iptosearch=203.0.113.2&foo2=bar5: www.example32.com:80 203.0.113.2 - - [15/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... www.example215.com:80 203.0.113.2 - - [14/Sep/2017:00:00:00 +0200] "GET /blah.html" ... I am trying to do it with: grep -f <(grep -o 'mypattern.*iptosearch=(.*)' access.log) access.log |grep blah but: it probably won't be sorted like in my example before: with a header, and the list below corresponding to the relevant iptosearch the header in my example ([29/Sep/2017:13:49:02 +0200] "GET /test?foo=bar&iptosearch=198.51.100.5:) won't be displayed because it doesn't contain blah How to do this, to have the display like before? Should one use a loop in such a case, how?
Extended bash + grep + awk approach: Sample access.log content: www.example3.com:80 198.51.100.5 - - [27/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... www.example2.com:80 198.51.100.5 - - [25/Sep/2017:00:00:00 +0200] "GET /blah.html" ... [29/Sep/2017:13:49:02 +0200] "GET /mypattern?foo=bar&iptosearch=198.51.100.5: www.example7.com:80 198.51.100.5 - - [12/Sep/2017:00:00:00 +0200] "GET /index.htm?i=blah" ... www.example32.com:80 203.0.113.2 - - [15/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... [27/Sep/2017:00:00:00 +0200] "GET /mypattern?iptosearch=203.0.113.2&foo2=bar5: www.example215.com:80 203.0.113.2 - - [14/Sep/2017:00:00:00 +0200] "GET /blah.html" ... The job: grep '/mypattern' access.log | while read -r l; do if [[ $l =~ iptosearch=(([0-9]+\.){3}[0-9]+) ]]; then echo "$l" awk -v ip="${BASH_REMATCH[1]}" '$0~ip && /blah/;END{ print "" }' access.log fi done The output: [29/Sep/2017:13:49:02 +0200] "GET /mypattern?foo=bar&iptosearch=198.51.100.5: www.example3.com:80 198.51.100.5 - - [27/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... www.example2.com:80 198.51.100.5 - - [25/Sep/2017:00:00:00 +0200] "GET /blah.html" ... www.example7.com:80 198.51.100.5 - - [12/Sep/2017:00:00:00 +0200] "GET /index.htm?i=blah" ... [27/Sep/2017:00:00:00 +0200] "GET /mypattern?iptosearch=203.0.113.2&foo2=bar5: www.example32.com:80 203.0.113.2 - - [15/Sep/2017:00:00:00 +0200] "GET /hello/blah" ... www.example215.com:80 203.0.113.2 - - [14/Sep/2017:00:00:00 +0200] "GET /blah.html" ... Details: while read -r l ... - iterating over lines containing /mypattern, returned by grep command [[ $l =~ iptosearch=(([0-9]+\.){3}[0-9]+) ]] - match each line $l against regular expression iptosearch=(([0-9]+\.){3}[0-9]+). BASH_REMATCH is an array variable whose members are assigned by the ‘=~’ binary operator to the [[ conditional command. The element with index 0 is the portion of the string matching the entire regular expression. 
The element with index n is the portion of the string matching the nth parenthesized subexpression (...). This variable is read-only. -v ip="${BASH_REMATCH[1]}" - passing in the variable ip into awk script $0~ip && /blah/ - output only lines containing the current ip value and keyword blah
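A self-contained illustration of the BASH_REMATCH mechanism used above, on a hypothetical request string:

```shell
line='GET /mypattern?foo=bar&iptosearch=198.51.100.5'

if [[ $line =~ iptosearch=(([0-9]+\.){3}[0-9]+) ]]; then
    echo "whole match : ${BASH_REMATCH[0]}"   # iptosearch=198.51.100.5
    echo "captured IP : ${BASH_REMATCH[1]}"   # 198.51.100.5
fi
```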
Extract all the traffic corresponding to a request with a parameter
1,634,614,391,000
I am trying to create a bash function which should kill all processes using some ports specified in port_array. The kill-port function works if I call it myself with a port e.g. with kill-port "80";. But if I call it inside the for loop I get some strange error from kill: Usage: kill [options] <pid> [...] Options: <pid> [...] send signal to every <pid> listed -<signal>, -s, --signal <signal> specify the <signal> to be sent -l, --list=[<signal>] list all signal names, or convert one to a name -L, --table list all signal names in a nice table -h, --help display this help and exit -V, --version output version information and exit For more details see kill(1). Here is the code: #!/bin/bash port_array=( 5057 5061 5056 ); function kill-ports () { ports=( "$@" ) for i in "${ports[@]}"; do kill-port $i; done }; function kill-port () { lsof -i tcp:"$1" | awk 'NR!=1 {print $2}' | xargs kill; } kill-ports "${port_array[@]}"; I am probably overseeing my mistake, but I would be thankful if someone could tell me what the problem here is. Regards, Silas
There is a solution, thanks to @StéphaneChazelas comment: "Just do: lsof -ti "tcp:$1" | xargs -r kill, that's what -t is for (and -r tells xargs not to run the command if there's no argument. That's for GNU xargs. Some other implementations like FreeBSD do that automatically)" In the end it looks like this and also works (Some cleanup by me too): #!/bin/bash port_array=( 5057 5061 5056 ); kill-ports() { for port in "$@"; do kill-port "$port"; done }; kill-port () { lsof -ti "tcp:$1" | xargs -r kill; } kill-ports ${port_array[@]}; EDIT: @PSkocik's solution was posted after @StéphaneChazelas comment, but works fine too: #!/bin/bash port_array=( 5057 5061 5056 ); kill-ports() { for port in "$@"; do fuser -n tcp "$port" -k -TERM; done } kill-ports ${port_array[@]};
Bash | port killing loop fails
1,634,614,391,000
I am trying to use awk to manipulate data.I have a data file with two columns and I would like to find the row that contains an specific value. But there is more than one row that contains this value and I only would like to find the first one. I tried it with a for-loop and break but it does not work the way I expected. {for (i=0;$2==100.0;min=$1){ i++ max=min+500 if(i==1){ print i"\t"min"\t"max break } } } After this loop there is a little bit more code which utilizes min and max. Edit: The data looks like this 59.45 96 59.50 96 59.60 97 59.75 98 59.90 98 59.95 98 60.00 99 60.05 99 60.20 99 60.25 100 60.40 100 60.45 100 60.50 101 60.55 101 60.60 101 60.65 101 60.70 102 60.90 102 60.95 103 61.00 103 61.05 103 61.15 103 61.20 104 61.35 104 61.40 104 61.45 105 61.50 105 61.60 105 61.65 106 61.70 100 61.85 100 I would like to find the first row that contains 100in the second column and save the value from column one in a variable
Now I have a working solution BEGIN { SearchMinFlag = 0; TempLimit = 195.0 DeltaTime = 550 TempMin = "LEER" FS = "\t" } # Stop MinSearch NR > 2 && $1 > tstop {SearchMinFlag = 0} # MinSearch SearchMinFlag == 1 { ywert = $2; if (ywert < TempMin) { TempMin = ywert; TimeMin = $1 } } # Start MinSearch NR > 2 && $2 > TempLimit && SearchMinFlag == 0 && TempMin == "LEER" { SearchMinFlag = 1; tstart = $1 tstop = tstart + DeltaTime ymerk = $2; TempMin = 10000 } # Result END { print "t(T_min)", "T_min", "t_start", "t_stop\n"TimeMin, TempMin, tstart, tstop }
awk: for-loop with break option
1,635,625,585,000
I have two files with two different values. I want to run a command in loop which needs input from both file. Let me give the example to make it simple. File1 contents: google yahoo File2 contents: mail messenger I need output like the below google is good in mail yahoo is good in messenger How can I use a for/while loop to achieve the same? I need a script to: $File1 needs to replace first result in File1 and $File2 needs to replace first result in File2 /usr/local/psa/bin/domain --create domain $File1 -mail_service true -service-plan 'Default Domain' -ip 1.2.3.4 -login $File2 -passwd "abcghth"
The standard procedure (in Bash) is to read from different file descriptors with the -u switch of read: while IFS= read -r -u3 l1 && IFS= read -r -u4 l2; do printf '%s is good in %s\n' "$l1" "$l2" done 3<file1 4<file2
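An alternative sketch uses paste to zip the two files into one tab-separated stream first (this assumes the lines themselves contain no tabs):

```shell
printf 'google\nyahoo\n'   > file1    # sample inputs from the question
printf 'mail\nmessenger\n' > file2

paste file1 file2 | while IFS=$'\t' read -r l1 l2; do
    printf '%s is good in %s\n' "$l1" "$l2"
done
```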
Parse two files input in for/while loop
1,635,625,585,000
Is it possible to use a for-loop with the watch command? I'm not really sure what to make of this error with what I've tried: $ for i in 1 2 3; do echo $i; done 1 2 3 $ watch -n 10 for i in 1 2 3; do echo $i; done -bash: syntax error near unexpected token `do' $ watch for i in 1 2 3; do echo $i; done -bash: syntax error near unexpected token `do' $
watch's command argument(s) are a script that is run with sh -c. If the command arguments are just a list of tokens separated by spaces (e.g. watch ls -l), it concatenates them all and runs them. But unquoted shell meta-characters are used by the shell that you run watch from and are never seen by watch. This means that meta-characters like ; & | < > etc need to be escaped or quoted to prevent the shell in which you run watch from seeing those characters as, e.g., instructions to mark the end the watch command, run the watch command in the background, or pipe the output of watch into another program (rather than run the pipe inside the watch script). The usual quoting rules apply - single-quotes to prevent variable interpolation, double-quotes otherwise. man watch has an EXAMPLES section at the end showing this. For example: watch -n 10 'for i in 1 2 3; do echo $i; done' or watch -n 10 'grep something /var/log/kern.log | tail' Note: you can use watch's -x option if you want to exec something without sh -c. e.g. watch -x awk -f script.awk.
How to watch a for-loop in bash?
1,635,625,585,000
I need to rename some files using a loop but I can't get it to work as I am still very new at Linux. the files that need to be renamed are: E9-GOWN33_multiplemap.bin.10.fa E9-GOWN33_multiplemap.bin.16.fa E9-GOWN33_multiplemap.bin.21.fa E9-GOWN33_multiplemap.bin.7.fa to a shorter name such as: E9.bin.10.fa E9.bin.16.fa E9.bin.21.fa E9.bin.7.fa I have used rename and mv and other loops I've seen in threads but still cannot get it to work. any input is much appreciated! thank you!
If you have perl rename (default on Ubuntu, Debian and many other systems), you can just do rename -n 's/-GOWN33_multiplemap//' ./*fa If that gives you the right file names, run without the -n to actually rename them: rename 's/-GOWN33_multiplemap//' ./*fa
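If perl rename is not available, a plain bash loop with pattern substitution does the same job (a sketch; put an echo in front of mv to preview first):

```shell
# Create sample files to act on (names from the question):
touch E9-GOWN33_multiplemap.bin.10.fa E9-GOWN33_multiplemap.bin.7.fa

for f in ./*-GOWN33_multiplemap*.fa; do
    mv -- "$f" "${f/-GOWN33_multiplemap/}"
done
# Leaves E9.bin.10.fa and E9.bin.7.fa
```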
Renaming multiple files using a loop
1,635,625,585,000
I use Ubuntu 16.04 and I execute a list of remote scripts that are in the same directory (a GitHub repository): curl -s https://raw.githubusercontent.com/${user}/${repo}/master/1.sh | tr -d '\r' | bash curl -s https://raw.githubusercontent.com/${user}/${repo}/master/2.sh | tr -d '\r' | bash curl -s https://raw.githubusercontent.com/${user}/${repo}/master/3.sh | tr -d '\r' | bash curl -s https://raw.githubusercontent.com/${user}/${repo}/master/4.sh | tr -d '\r' | bash curl -s https://raw.githubusercontent.com/${user}/${repo}/master/5.sh | tr -d '\r' | bash curl -s https://raw.githubusercontent.com/${user}/${repo}/master/6.sh | tr -d '\r' | bash How would you cope with the awful redundancy? I think of a for loop but I have no idea how to construct it. All for loops I've seen so far doesn't give me a clue on how to do that particular task of reusing a curl pattern (and piped output) for different files in the same remote directory. You are more than welcome to share an example. Update There might be more or less than six such curl operations. I would use any plausible way but if it requires a utility please recommend a utility available in the Debian repositories.
For two or more files you could use Unix seq: for var in $(seq 6) do curl -s https://raw.githubusercontent.com/${user}/${repo}/master/$var.sh | tr -d '\r' | bash done Explanation: Use the output of seq to attain a count up to 6 (as the question lists 6 curl operations). Read the output into the variable var and use this in your curl command.
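Since the number of scripts may vary, the count can come from a variable. A dry-run sketch that only prints the URLs it would fetch (the user/repo values are illustrative; swap echo for the curl pipeline to actually run them):

```shell
user=someuser repo=somerepo n=6   # illustrative values

for var in $(seq "$n"); do
    echo "https://raw.githubusercontent.com/${user}/${repo}/master/${var}.sh"
done
```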
Execute 2 or more remote scripts sharing the same curl pattern, without redundancy
1,635,625,585,000
This command outputs PID of process listening on port 8083: lsof -i4TCP:8083 -sTCP:LISTEN -t When there is no process, it returns empty string. No process is running on that port so I am checking if that command returns empty string if [[ -z $(lsof -i4TCP:8083 -sTCP:LISTEN -t) ]]; then echo "waiting for startup" else echo "process is listening on port 8083" fi it outputs "waiting for startup" like expected, but when I am doing: for (( ; -z $(lsof -i4TCP:8083 -sTCP:LISTEN -t) ; )); do echo "waiting for startup" sleep 1 done it does not output anything, but this condition was true when it was evaluated in if so if it's true then this loop should execute do...done block and print "processing", but it does not and exits immediately. Why does that happen?
[[ … ]] evaluates a conditional expression. In a conditional expression, -z is an operator which takes a string as argument and returns true if the string is empty and false otherwise. In for ((…; …; …)), each of the three semicolon-separated parts inside the double parentheses is an arithmetic expression. In an arithmetic expression, - is the negation or subtraction operator. z is a reference to the variable z, evaluating to 0 if the variable is not defined. So when the output of lsof -i4TCP:8083 -sTCP:LISTEN -t is empty and z is not defined, the expression evaluates to 0, so the loop exits. When the output is not empty, it's likely to result in a syntax error in the arithmetic expression. What you wrote just doesn't make sense in an arithmetic expression. This form of for is designed for iterations determined by arithmetic, typically counting from some initial value to some final value. You seem to want a while loop, so write a while loop, and use a conditional expression. while [[ -z $(lsof -i4TCP:8083 -sTCP:LISTEN -t) ]]; do echo "waiting for startup" sleep 1 done See also parentheses, brackets and braces in bash.
Expression evaluates to false in for loop whereas it's true in if
1,635,625,585,000
I have one file query_ids with several lines such as:

id1
id2
id3

I'm using grep idx to find matches of the id in my_file. I redirect these matches to a new matches file. I'm also using grep with option -v to obtain all mismatches which I redirect to a mismatches file. I'm using this small script:

#!/bin/bash

for i in $(cat query_ids)
do
    # saving matches
    grep "$i" my_file >> matches
    # saving mismatches
    grep -v "$i" my_file >> missing
done

I'm obviously doing something wrong: When manually searching some ids from the missing file in my_file I find that they exist. Even though the missing file should only contain ids from the file query_ids that were NOT found in my_file I do find matches. So when picking some random id, let's say id3, both grep id3 missing and grep id3 my_file return a match.
Why is my code assigning id3 to the mismatches file? I tried removing the quotes around $i but it did not change the result. I also tried echo "$i" to be sure that the id's are actually looped. What am I missing?
What you are doing is that you get one ID, say id1, and then you extract all lines matching that ID into matches. Then you extract all lines not matching that into missing. For the next ID, id2, you then add the lines matching that ID to matches, and the lines not matching id2 to missing. Now, missing contains all lines not containing id1, then all lines not containing id2. Note that a line containing id1 that does not contain id2 would be in missing from that second iteration of your loop. Instead, consider all IDs at once: grep -f query_ids -Fw my_file >matches grep -f query_ids -Fw -v my_file >missing Here, I'm providing grep with patterns from query_ids using -f. I'm asking grep to treat the lines in query_ids as query strings (-F, i.e. not regular expressions), and to match these in my_file as whole words (-w, so that id2 does not match e.g. id23). The first command would extract all lines that contain any of the IDs. The second command would extract all lines that contain none of the IDs. There is no need for a loop of any kind here.
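With small sample files the behaviour is easy to check; the id23 line shows why -w matters (without it, id2 would match inside id23):

```shell
printf 'id1\nid2\nid3\n' > query_ids
printf 'id1 foo\nid23 bar\nid3 baz\nother qux\n' > my_file

grep -f query_ids -Fw    my_file   # prints "id1 foo" and "id3 baz"
grep -f query_ids -Fw -v my_file   # prints "id23 bar" and "other qux"
```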
grep loop: I'm using each line of one file as query to find matches another file. Why is my output inconsistent?
1,635,625,585,000
I have a loop that checks for certain criteria for whether or not to skip to the next iteration (A). I realized that if I invoke a function (skip) that calls continue, it is as if it is called in a sub-process for it does not see the loop (B). Also the proposed workaround that relies on eval-uating a string does not work (C). # /usr/bin/bash env skip() { echo "skipping : $1" continue } skip_str="echo \"skipping : $var\"; continue" while read -r var; do if [[ $var =~ ^bar$ ]]; then # A # echo "skipping : $var" # continue # B # skip "$var" # continue: only meaningful in a `for', `while', or `until' loop # C eval "$skip_str" fi echo "processed: $var" done < <(cat << EOF foo bar qux EOF ) Method C: $ source ./job-10.sh processed: foo skipping : processed: qux Also see: Do functions run as subprocesses in Bash? PS1: could someone remind me why < < rather than < is needed after done? PS2: no tag found for while hence for
The problem is that when your function is executed, it is no longer inside a loop. It isn't in a subshell, no, but it is also not inside any loop. As far as the function is concerned, it is a self-contained piece of code and has no knowledge of where it was called from. Then, when you run eval "$skip_str" there is no value in $var because you set skip_str at a time when $var was not defined. This should actually work as you expect, it's just seriously convoluted and risky (if you don't control input 100%) for no reason: #! /usr/bin/env bash while read -r var; do skip_str="echo \"skipping : $var\"; continue" if [[ $var =~ ^bar$ ]]; then eval "$skip_str" fi echo "processed: $var" done < <(cat << EOF foo bar qux EOF ) That... really isn't very pretty. Personally, I would just use a function to do the test and then operate on the test's results. Like this: #! /usr/bin/env bash doTest(){ if [[ $1 =~ ^bar$ ]]; then return 1 else return 0 fi } while read -r var; do if doTest "$var"; then echo "processed: $var" else echo "skipped: $var" continue fi ## Rest of your code here done < <(cat << EOF foo bar qux EOF ) I could probably give you something better if you explained what your objective is. Finally, you don't need < < after done, you need < <(). The < is the normal input redirection, and the <() is called process substitution and is a trick that lets you treat the output of a command as though it were a file name. If you are using the function just to avoid repeating the extra things like echo "skipping $1", you could simply move more of the logic into the function so that you have a loop there. Something like this: link
continue: only meaningful in a `for', `while', or `until' loop
1,635,625,585,000
I have a small script which should print a couple of calls to a makefile I got. mylist='$(call list_samples,AON_9,NT_1,SC_17) $(call list_samples,AON_10,NT_2,SC_18) $(call list_samples,AON_11,NT_3,SC_19) $(call list_samples,AON_12,NT_4,SC_20) $(call list_samples,AON_13,NT_5,SC_21) $(call list_samples,AON_14,NT_6,SC_22) $(call list_samples,AON_15,NT_7,SC_23) $(call list_samples,AON_16,NT_8,SC_24)' for SAMPLES_OUT in $mylist; do echo "$SAMPLES_OUT" done Output: $(call list_samples,AON_9,NT_1,SC_17) $(call list_samples,AON_10,NT_2,SC_18) $(call list_samples,AON_11,NT_3,SC_19) $(call list_samples,AON_12,NT_4,SC_20) $(call list_samples,AON_13,NT_5,SC_21) $(call list_samples,AON_14,NT_6,SC_22) $(call list_samples,AON_15,NT_7,SC_23) $(call list_samples,AON_16,NT_8,SC_24) The problem I am experiencing is that the for loop is splitting on spaces, and therefore the $(call and list_samples parts are taken apart when they should stay together as one call, like this: $(call list_samples,AON_9,NT_1,SC_17) $(call list_samples,AON_10,NT_2,SC_18) $(call list_samples,AON_11,NT_3,SC_19) $(call list_samples,AON_12,NT_4,SC_20) $(call list_samples,AON_13,NT_5,SC_21) $(call list_samples,AON_14,NT_6,SC_22) $(call list_samples,AON_15,NT_7,SC_23) $(call list_samples,AON_16,NT_8,SC_24) I have tried putting the strings in "", but that does not work: since the list is created with '', everything between the quotes is seen as one string. Any hints are appreciated. Thanks!
You need to use an array to keep the elements together: mylist=( '$(call list_samples,AON_9,NT_1,SC_17)' '$(call list_samples,AON_10,NT_2,SC_18)' '$(call list_samples,AON_11,NT_3,SC_19)' '$(call list_samples,AON_12,NT_4,SC_20)' '$(call list_samples,AON_13,NT_5,SC_21)' '$(call list_samples,AON_14,NT_6,SC_22)' '$(call list_samples,AON_15,NT_7,SC_23)' '$(call list_samples,AON_16,NT_8,SC_24)' ) for SAMPLES_OUT in "${mylist[@]}" # crucial to use quotes here do echo "$SAMPLES_OUT" done
for loop over a list
1,635,625,585,000
When using the syntax below to iterate, why are two brackets needed? for (( expr1; expr2; expr3 )) do command1 command2 .. done And why does the code below not work, throwing the error "syntax error near unexpected token `('"? for ( expr1; expr2; expr3 ) do command1 command2 .. done
The reason for this is ( has a different meaning. From the bash manpage: (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. ((expression)) The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. This is exactly equivalent to let "expression".
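The difference is easy to observe; note that the $(( )) arithmetic expansion used here is the POSIX-portable cousin of the (( )) command:

```shell
x=1
( x=2 )                    # ( list ) runs in a subshell; the assignment is lost
subshell_result=$x         # still 1
arith_result=$(( 2 * 3 ))  # arithmetic expansion, same evaluation rules
```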
For loop brackets - C like syntax
1,635,625,585,000
#!/usr/bin/bash TARGETS=( "81.176.235.2" "81.176.70.2" "78.41.109.7" ) myIPs=( "185.164.100.1" "185.164.100.2" "185.164.100.3" "185.164.100.4" "185.164.100.5" ) for t in "${TARGETS[@]}" do for a in "${myIPs[@]}" do echo "${a} ${t} -p 80" >>log 2>&1 & echo "${a} ${t} -p 443" >>log 2>&1 & wait done done I want this code to start with echo commands for each IP in TARGETS executing them in parallel. At the same time the script is not meant to proceed with echo commands for more than one address in myIPs simulteously, hence I introduced wait in the internal loop. I want to have pairs of echo (each for the port 80 and 443) executed in parallel for each target in TARGETS. In other words I want to accomplish this (but sadly it does not work): for t in "${TARGETS[@]}" do & for a in "${myIPs[@]}" do echo "${a} ${t} -p 80" >>log 2>&1 & echo "${a} ${t} -p 443" >>log 2>&1 & wait done done wait Yet, because it would increase my load averages too much, I do not want this: : for t in "${TARGETS[@]}" do for a in "${myIPs[@]}" do echo "${a} ${t} -p 80" >>log 2>&1 & echo "${a} ${t} -p 443" >>log 2>&1 & done done wait How might I accomplish my objective? P.S. This is just a snippet of a more complex script. I wanted isolate the relevant issue, hence the use of echo instead of one of the networking commands.
I find your question hard to understand: you seem to want both parallel and sequential execution. Do you want this? for t in "${TARGETS[@]}"; do ( for a in "${myIPs[@]}"; do echo "${a} ${t} -p 80" >>log 2>&1 & echo "${a} ${t} -p 443" >>log 2>&1 & wait done ) & done each target's for loop is run in a subshell in the background.
How might I execute this nested for loop in parallel?
1,635,625,585,000
In this small for loop I created, I need the loop to print this message only once for all arguments. for arg in $@ do echo "There are $(grep "$arg" cis132Students|wc -l) classmates in this list, where $(wc -l cis132Students) is the actual number of classmates." done The arguments in $@ are a few names that do exist in the file, and a couple of names that do not. What happens is that the loop prints the message multiple times, once for each argument, where I only want it printed once.
You don't want to loop through the arguments; doing so reads them one at a time, causing your echo statement to execute once for each argument. You can do something like the following: #!/bin/sh student_file=cis132Students p=$(echo "$@" | tr ' ' '|') ln=$(wc -l < "$student_file") gn=$(grep -cE "$p" "$student_file") echo "There are $gn classmates in the list, where $ln is the actual number of classmates." p: Will be converted into a string that can be fed to grep in extended regex mode. For example if you provide arguments: jesse jay it will be converted to jesse|jay ln: Will be the total number of lines (students) in your input file; reading via < keeps the file name out of wc's output gn: Will be the number of students that have matched your argument search
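To illustrate the p step in isolation (the argument values here are made up), the arguments are joined with spaces by echo "$@" and the spaces become regex alternation:

```shell
# turn positional parameters into an extended-regex alternation
set -- jesse jay
p=$(echo "$@" | tr ' ' '|')
```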
For loop printing echo command only once
1,635,625,585,000
I have a directory with 2000+ text files. I am trying to make a script that: Reads a list of IP addresses from ip.txt Cats each file in the directory Greps each file for the IP address If keyword is found, echoes the keyword and the file name to a file. The output should be like this: $ cat results.txt 192.168.2.3 was found in 23233.txt 192.168.4.0 was found in 2323.txt At the moment I have this: while read p; do for filename in *.txt; do if cat $filename | grep "$p" then echo "$p" is "$filename" | tee result.txt fi done done<ips.txt However this also echoes all file names into the results. How can I fix this?
First, save a cat by not using one when you don't need it. Rather than: cat haystack | grep needle You can simply: grep needle haystack As for your script: > results.txt # start with a fresh file for every run while read -r ip; do grep "$ip" * | grep -Ev 'results\.txt|ips\.txt' >> results.txt done < ips.txt The grep-into-grep pipeline is to prevent adding entries from the input and output files into the output file. If you have a zillion files to check and you're getting argument list too long, we can use a tool like xargs to break our command up into chunks short enough for the shell to permit: > results.txt # start with a fresh file for every run while read -r ip; do find . -maxdepth 1 -type f -not -name ips.txt -not -name results.txt -print0 | xargs -0 grep "$ip" >> results.txt done < ips.txt (Note that -maxdepth, a global option, goes before tests like -type in GNU find.) Here we're filtering out the input and output files with logic fed into find, so we no longer need to grep into grep.
How can I grep each file in a directory for a keyword and output the keyword and the filename it was found in?
1,635,625,585,000
I have a file (ordered_names) which is in the format pub 000.html pub.19 001.html for about 300 lines, and I can't find a way to feed this to the mv command. I have read Provide strings stored in a file as a list of arguments to a command?, but I could not get what I came for. Here are some of the attempts I made: for line in "$(cat ../ordered_files.reversed)"; do mv $(echo "$line"); done for line in "$(cat ../ordered_files.reversed)"; do echo mv $(echo $line | cut -d' ' -f 1) $(echo $line | cut -d' ' -f 2) ; done
Try: while read -r file1 file2; do mv -n -- "$file1" "$file2"; done <inputfile This assumes that the file names on each line in the input file are space-separated. This, of course, only works if the file names themselves do not contain spaces. If they do, then you need a different input format. How it works while read -r file1 file2; do This starts a while loop. The loop continues as long as input is available. For each line of input, two parameters are read: file1 and file2. mv -n -- "$file1" "$file2" This moves file1 to file2. The option -n protects you from overwriting any file at the destination. Of course, if it is your intention to overwrite files, remove this option. The string -- signals the end of the options. This protects you from problems should any of the file names start with a -. done This signals the end of the while loop. <inputfile This tells the while loop to gets its input from a file called inputfile.
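For example, with a two-column input file (the names below are placeholders, and string building stands in for the mv call), each line splits into file1 and file2:

```shell
printf '%s %s\n' old1 new1 old2 new2 > /tmp/pairs.$$
pairs=
while read -r file1 file2; do
  pairs="$pairs$file1->$file2 "   # stand-in for: mv -n -- "$file1" "$file2"
done < /tmp/pairs.$$
rm -f /tmp/pairs.$$
```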
How to read multiples arguments per line for a bash command?
1,635,625,585,000
My iTunes library is on a NAS (WD MyCloud 4TB), and I have a number of TV series, organised by iTunes library in the usual: 'TV Shows' folder: TV Show 1 |------------- Series 1 |-----------01 Episode Name.m4v |-----------02 Episode Name.m4v |... |------------- Series 2 |-----------01 Episode Name.m4v |... |... TV Show 2 |------------- Series 1 |-----------01 Episode Name.m4v |-----------02 Episode Name.m4v |... |------------- Series 2 |-----------01 Episode Name.m4v |... |... I have hard linked the whole TV folder to another folder, called Infuse. This folder will be read by the Infuse app on my Apple TV [this allows me to bypass having a computer with iTunes permanently switched on], so I need to rename all the hard-linked files as such: TV Show 1 |------------- Series 1 |-----------TV Show 1 - S0101 Episode Name.m4v |-----------TV Show 1 - S0102 Episode Name.m4v |... |------------- Series 2 |-----------TV Show 1 - S0201 Episode Name.m4v |... |... TV Show 2 |------------- Series 1 |-----------TV Show 2 - S0101 Episode Name.m4v |-----------TV Show 2 - S0102 Episode Name.m4v |... |------------- Series 2 |-----------TV Show 2 - S0201 Episode Name.m4v |... |... I have so far found the solution (for example TV Show 1, Season 1 folder): cd into each season folder for each show and run for f in *; do mv $f "TV Show 1 E01S$f"; done but this is really time consuming as I then need to cd ../Season 2/ for f in *; do mv $f "TV Show 1 E02S$f"; done cd ../Sesaon 3/ ... cd ../../TV Show 2/Season 1/ for f in *; do mv $f "TV Show 2 E01S$f"; done ... and what I ideally want is to script pulling the name from the grandparent directory and the season number from the parent directory so I can write one short bash script and run it. Something like: #!/bin/bash a=[TV Show folder] b=[Season folder] c=[each episode] for c in each a/b/ mv $c "$a - S0$bE$c" (or) rename $c "$a - S0$bE$c" done Can you help me implement it with a number of for loops or specific command?
Having done something very similar recently I already had a script. for f in */*/* do destdir=${f%/*} tvshow=${f%%/*} season=${destdir#*/} episode=${f##*/} # Get season number seasonnum=${season##* } dest=$(printf "%s/%s - S%02iE%s" "$destdir" "$tvshow" $seasonnum "$episode") echo "mv -- \"$f\" \"$dest\"" # mv -- "$f" "$dest" done
Batch renaming TV Series
1,635,625,585,000
I am using an if condition inside a for loop. If the if condition returns yes, then I'd like to go to the start of the for loop again. Is this possible in bash? #!/bin/bash for i in /apps/incoming/*.sql do j=$(egrep "[A-Z]{6}[ ]{1}[@abc_id]{10}[ ]{1}[=]{1}[ ]{1}[0-9]*" $i | awk '{ print $4 }') #echo $j #Query database export ORACLE_HOME=/apps/uc/tmd/oracle/instantclient_11_2 export LD_LIBRARY_PATH=/apps/uc/tmd/oracle/instantclient_11_2 sqlplus=/apps/oracle/instantclient_11_2/sqlplus tmprnt=$($sqlplus -s abcsd/sakfdj@'(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = odsakjfldkjf)(PORT = 1111)))(CONNECT_DATA =(SERVICE_NAME = SFDDFD)(SRVR = DEDICATED)))' << EOF SELECT name from blabla where abc_id='$j'; EOF) if `echo ${tmprnt} | grep "${searchString1}" 1>/dev/null 2>&1` then GO TO the start of FOR IN loop and run the query again. So if above tmprnt variable inside the IF condition matches the search string, then I'd like to run the thing again (The SQL query return "no rows selected" sometimes but it somehow returns correct results in we run again). I know there is no GO TO in bash. Please suggest a way out.
If you want to move on to the next query, use the continue statement. If you want to repeat the same query, use a loop. You want to repeat the query, so write a script that does that, rather than disguising your intent with a low-level construct that doesn't even exist in the language you're using. for i in /apps/incoming/*.sql do while j=$(…) … ! echo "${tmprnt}" | grep "${searchString1}" 1>/dev/null 2>&1 do … # whatever you want to do when $tmprnt has the right format done done I corrected the conditional statement you wrote to what you probably meant — executing the output of grep as a shell snippet doesn't make sense. Also remember to put double quotes around variable and command substitutions. Note the logic here: while the search string is not present, repeat the query. That script would be querying the database in a tight loop, so something is missing here (hopefully code that you just omitted to keep the question simple).
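The repeat-until-it-succeeds shape looks like this, with the database query simulated by a counter (purely illustrative — in the real script the condition would be the grep on the query output):

```shell
attempts=0
result=""
while [ "$result" != "ok" ]; do
  attempts=$(( attempts + 1 ))
  # stand-in for: run the query, grep its output for the search string
  if [ "$attempts" -ge 3 ]; then
    result=ok                     # pretend the 3rd try succeeds
  fi
done
```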
Go to the beginning of the loop in bash
1,635,625,585,000
I recently came up with the following one-liner: for f in *.mp3; do sha1sum $f | sed -r 's/[a-z]//g' | cut -c1-2 | awk '{ print ($1 >= 255) ? $1 - $1 + (1/3)*$1 : $1 }' | xargs -i id3v2 -T {} $f; done I needed it because I've got an MP3 player that does not have a shuffle function. It was the only way I could find (in about an hour of trying) to give a suitably random distribution to track names. (At first I used jot, but this program does not update its random seed often enough to produce anything like random results. By the way, if anyone can think of a good way of doing this that actually works, I'm interested to hear about it.) I tried storing it as an alias and got: awk: cmd. line:1: { print ( >= 255) ? - + (1/3)* : } awk: cmd. line:1: ^ syntax error [... the same pair of error lines repeated many times, once per file ...] It just seems like there should be a suitably easy way of storing it in my .bashrc. I'm aware that I could write it to a file and chmod +x it inside my $path with a shebang, but I prefer to use my .bashrc for things wherever possible.
If you really don't want to have this in its own file, you should use a bash function, not an alias. How did you define your alias? If you wrote something like alias bla="for f in *.mp3.... your * might be empty, if you did not escape it, because it is interpreted at evaluation-time of your bash, not at the time it runs. The same will be the case for $f etc. Some other points: Why do you compare two digits (cut -c1-2) to 255? The number will always be smaller than 100. Why do you write $1 - $1 + (1/3)*$1? This is simply ($1)/3.
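As a function (the safer home for this in .bashrc), expansions happen when it runs, not when it is defined. A stripped-down sketch, with the real file handling replaced by string building to keep it self-contained:

```shell
tag_demo() {
  for f in "$@"; do   # "$@" and "$f" expand at run time, unlike in the alias
    out="$out[$f]"
  done
}
out=
tag_demo "a b" c      # a "filename" containing a space survives intact
```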
What's the right way to reuse this one-liner?
1,635,625,585,000
I often use a for loop to i.e. convert a bunch of file formats. In some cases, where text transformations or variables occur, it would be nice to check if the substitutions were performed correctly. for i in *; do convert $i ${i%jpg}png; done Is there an easy way to show the performed command? Following the example above, something along the lines of: convert image1.jpg image1.png # command output convert image2.jpg image2.png # command output # ...
set -x for file in *jpg; do convert "${file}" "${file%jpg}png" done set +x Setting the -x shell option will display each executed command as it is set to be executed after all parameter expansions are completed. +x undoes this.
Echo contents of for loop automatically
1,635,625,585,000
I'm trying to create a cron job that checks the status of certain worker machines and triggers a webhook: It works, but I'm not sure that this is the best approach: for i in $(oc get nodes | awk 'FNR>1 {print $2}');do if [[ $i != 'Ready' ]];then <TRIGGER_WEBHOOK>;fi;done Output of oc get nodes # oc get nodes NAME STATUS ROLES AGE VERSION master1 Ready master 27h v1.20.0+bafe72f-1054 .... worker4 Ready worker 10h v1.20.0+bafe72f-1054 Any advice to improve it. Thx
The one thing I can see that I might change is removing the if: for i in $(oc get nodes | awk 'FNR > 1 && $2 != "Ready" { print $2 }'); do <TRIGGER_WEBHOOK> done
Pipe for loop with awk and if
1,635,625,585,000
I have the following setup: x=0 y=1 z=2 task0(){ for file in "${files_array[$x]}" do sed -i "$var" $file x=$((x+3)) done } task1(){ for file in "${files_array[$y]}" do sed -i "$var" $file y=$((y+3)) done } task2(){ for file in "${files_array[$z]}" do sed -i "$var" $file z=$((z+3)) done } task0 & task1 & task2 & The functions task0, task1 and task2 get executed in the background but every loop only cycles once. I need it to loop until the end of $files_array[]. $files_array[] is well populated and the function reads it correctly. Where is my mistake? Thanks and regards, Jan
An ordinary for loop iterates over a static unchanging list of items. In each of your three loops, that list consists of exactly one item. For example, in task0, you loop over the single element $x from the list files_array. Instead, you may want to use while loops: task0 () { X=${#files_array[@]} while [ "$x" -lt "$X" ]; do sed -i -e "$var" "${files_array[$x]}" x=$(( x + 3 )) done } or an arithmetic for loop: task0 () { X=${#files_array[@]} for (( i=x; i < X; i+=3 )); do sed -i -e "$var" "${files_array[$i]}" done } These loops assume that the array files_array has consecutive indexes, which an array typically has unless you made an effort to remove elements or set elements out of order. You may also use the file list like this, which avoids issues with sparse indexes: task0 () { set -- "${files_array[@]}" shift "$x" while true; do sed -i -e "$var" "$1" shift 3 || break done } This sets the positional parameters to the file list. Then shifts off 0, 1, or 2 of the items depending on what $x is. We then loop until we can't shift off three items from the list, and in each iteration we use the first item in our operation. Note that you can use task0 for all three of your background jobs: x=0; task0 & x=1; task0 & x=2; task0 & wait
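The shift-based variant in miniature: take every third item starting at offset $x, with letters standing in for file names and string building standing in for the sed call:

```shell
set -- a b c d e f g
x=1
shift "$x"
picked=
while [ "$#" -gt 0 ]; do
  picked="$picked$1 "      # stand-in for: sed -i -e "$var" "$1"
  [ "$#" -ge 3 ] || break  # guard so shift never over-shifts
  shift 3
done
```

With x=1 this visits b and e, i.e. indexes 1 and 4 of the original list.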
Background for-loop only cycles once
1,635,625,585,000
I am running Mac OS. I have a directory /Users/sethparker/Documents containing several subdirectories /Users/sethparker/Documents/dir1,/Users/sethparker/Documents/dir2,/Users/sethparker/Documents/dir3. Each subdirectory is filled with identically named, tab separated files file1.txt,file2.txt,file3.txt. I would like all of the files in all of the subdirectories converted to comma separated, though the extension itself does not matter. My current approach is to run a short script in each subdirectory. cat tsv_to_csv.sh for ifile in {1..3}; do sed -i "" 's/\t/,/g' file${ifile}* done Is there an efficient way to apply this type of processing to all files in all subdirectories at once?
If you can safely run this for all subdirectories and all files in those subdirectories, all you need is: sed -i "" 's/\t/,/g' /Users/sethparker/Documents/*/*
Converting files in multiple directories from tab separated to comma separated
1,635,625,585,000
I use this code in my scripts to process files inside a folder, but it only works for subfolders. if [ -d "$1" ]; then for file in "${1%/}/"*/*(*.mkv|*.mp4|*.avi); do I know I can just remove /* to work with flat folders, but I'm looking for a more clean way to handle both flat folders ( no subfolders ) and folders with subfolders. I have big code in the for loop so I don't want solutions that rely on find
Try globstar and extglob, which are specific to bash:

#!/usr/bin/env bash

shopt -s globstar extglob

if [[ -d $1 ]]; then
    for file in "$1"/**/*.@(mkv|mp4|avi); do
        :
    done
fi
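A small runnable check that the pattern catches both flat and nested files (assumes bash 4 or later for globstar; nullglob is added here, an extra assumption, so that an empty match expands to nothing rather than the literal pattern):

```shell
#!/bin/bash
shopt -s globstar extglob nullglob   # nullglob is an addition for the demo

# Build a throwaway tree with one flat match, one nested match, one non-match
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/flat.mkv" "$d/sub/nested.mp4" "$d/ignored.txt"

matches=("$d"/**/*.@(mkv|mp4|avi))
echo "${#matches[@]}"
rm -r "$d"
```

Because ** can also match zero directory levels, "$1"/**/*.ext picks up files directly in "$1" as well as those in subfolders, which is exactly the "both at once" behavior asked for.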
For loop for subfolders and flat folders at once
1,635,625,585,000
So I am trying to run a program iRep and usually it runs as- iRep -f Bins/10000A-01-01_bin.* -s sam/10000A-01-01.sam.sorted.sam --sort -o 10000A-01-01_iRep_output in sam folder - 10000A-01-01.sam.sorted.sam 10000A-01-02.sam.sorted.sam 10000A-01-03.sam.sorted.sam in Bins folder - 10000A-01-01_bin.1.fa 10000A-01-01_bin.2.fa 10000A-01-01_bin.3.fa 10000A-01-02_bin.1.fa 10000A-01-02_bin.2.fa 10000A-01-02_bin.3.fa 10000A-01-03_bin.1.fa 10000A-01-03_bin.3.fa 10000A-01-03_bin.5.fa 10000A-01-03_bin.7.fa I want to have one loop where I can do all in one command, instead of running each command for each sample individually, like iRep -f Bins/10000A-01-01_bin.* -s sam/10000A-01-01.sam.sorted.sam --sort -o 10000A-01-01_iRep_output iRep -f Bins/10000A-01-02_bin.* -s sam/10000A-01-02.sam.sorted.sam --sort -o 10000A-01-02_iRep_output iRep -f Bins/10000A-01-03_bin.* -s sam/10000A-01-03.sam.sorted.sam --sort -o 10000A-01-03_iRep_output Any idea how I could do this?
#!/bin/sh

# Loop over the SAM files
for sam in sam/*.sam.sorted.sam; do
    # Extract the sample name by taking the basename of the SAM file
    # and removing the known filename suffix.
    sample=$(basename "$sam" .sam.sorted.sam)

    # Call iRep (as described in the question)
    iRep -f Bins/"$sample"_bin.* -s "$sam" --sort -o "$sample"_iRep_output
done

Given the files in the question, this would end up running

iRep -f Bins/10000A-01-01_bin.1.fa Bins/10000A-01-01_bin.2.fa Bins/10000A-01-01_bin.3.fa -s sam/10000A-01-01.sam.sorted.sam --sort -o 10000A-01-01_iRep_output
iRep -f Bins/10000A-01-02_bin.1.fa Bins/10000A-01-02_bin.2.fa Bins/10000A-01-02_bin.3.fa -s sam/10000A-01-02.sam.sorted.sam --sort -o 10000A-01-02_iRep_output
iRep -f Bins/10000A-01-03_bin.1.fa Bins/10000A-01-03_bin.3.fa Bins/10000A-01-03_bin.5.fa Bins/10000A-01-03_bin.7.fa -s sam/10000A-01-03.sam.sorted.sam --sort -o 10000A-01-03_iRep_output
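As a quick dry run of the suffix-stripping, here is the same loop with echo standing in for iRep, operating on throwaway files created only for the demo:

```shell
# Dry run: echo stands in for iRep; the files are created just for the demo
d=$(mktemp -d)
mkdir -p "$d/sam" "$d/Bins"
touch "$d/sam/10000A-01-01.sam.sorted.sam" "$d/Bins/10000A-01-01_bin.1.fa"
cd "$d"

for sam in sam/*.sam.sorted.sam; do
    # basename with a second argument removes the known suffix
    sample=$(basename "$sam" .sam.sorted.sam)
    cmd="iRep -f Bins/${sample}_bin.* -s $sam --sort -o ${sample}_iRep_output"
    echo "$cmd"
done
```

Printing the command before running it like this is a cheap way to verify a loop before letting it loose on real data.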
Loop to run a program using multiple files from different directories
1,635,625,585,000
I am submitting a job to a computer. It looks something like this:

mpirun -np 12 example_S57 -o S57.results -r S57.final
mpirun -np 12 example_S58 -o S58.results -r S58.final
...
mpirun -np 12 example_S74 -o S74.results -r S74.final

How can I loop through this command and run this for S57 up to S74 within my script without having to type out each command?
for example in S{57..74}; do
    mpirun -np 12 "example_$example" -o "$example.results" -r "$example.final"
done

This uses a brace expansion in bash to create the Snn values to loop over. The value $example will in each iteration be one of these values and can be used when calling the mpirun command.
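You can verify the expansion itself without touching mpirun at all; the brace expression yields exactly the eighteen names from S57 through S74:

```shell
# The brace expansion S{57..74} yields the 18 job names S57 through S74
jobs=(S{57..74})
echo "${#jobs[@]} ${jobs[0]} ${jobs[17]}"
```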
looping through command to submit several jobs
1,635,625,585,000
I have a set of files in a directory. And every file will have a line called ---PUBG-xxxxx-- or ---PUBG-xxxxx, PUBG-yyyyy ----. Below is the output of the grep command. grep "^--" FILE*.sql | grep "PUBG" FILE1.sql:---PUBG-10901-- FILE2.sql:---PUBG-11617-- FILE3.sql:---PUBG-11625-- FILE4.sql:--PUBG-11724-- FILE5.sql:---PUBG-11720, PUBG-11406--- FILE6.sql:---PUBG-11403--- FILE7.sql:---PUBG-12021-- FILE8.sql:---PUBG-12207-- FILE9.sql:---PUBG-12270-- FILE10.sql:---PUBG-12552-- FILE11.sql:--- PUBG-14284-- FILE12.sql:--- PUBG-10908-- FILE13.sql:--- PUBG-15136--- FILE14.sql:--- PUBG-15163--- FILE15.sql:--- PUBG-15166--- FILE16.sql:-- PUBG-15059 -- FILE17.sql:-- PUBG-15252 -- The PUBG and its numbers will be random. All I need is file name and its associated PUBG value without any -- before or after PUBG and its value. There can also be multiple PUBGs as like in FILE5.sql:---PUBG-11720, PUBG-11406---. I have written the below set for loop. for (i in `grep "^--" FILE*.sql | grep "PUBG"`) do FILE_NAME=`echo ${i} | awk -F ":" {'print $1'}` PUBG_NO=`echo ${i} | awk -F "PUBG-" {'print "PUBG-" $2'}` echo ${FILE_NAME} echo ${PUBG_NO} done But the sample output for PUBG_NO is PUBG-15166--- for FILE15.sql and is PUBG-11720, for FILE5.sql. I need all the PUBG values in a file for particular FILE_NAME without any --. The PUBG value of FIlE5.sql can be PUBG-11720, PUBG-11406 How can this loop be improved to fetch the exact results.
You wouldn't need to write a loop. You could just pipe your output to sed. My attempt is as follows:

grep "^--" FILE*.sql | grep "PUBG" | sed -E 's/--+\ ?//g'

Which would give

FILE1.sql:PUBG-10901
FILE2.sql:PUBG-11617
FILE3.sql:PUBG-11625
FILE4.sql:PUBG-11724
FILE5.sql:PUBG-11720, PUBG-11406
FILE6.sql:PUBG-11403
FILE7.sql:PUBG-12021
FILE8.sql:PUBG-12207
FILE9.sql:PUBG-12270
FILE10.sql:PUBG-12552
FILE11.sql:PUBG-14284
FILE12.sql:PUBG-10908
FILE13.sql:PUBG-15136
FILE14.sql:PUBG-15163
FILE15.sql:PUBG-15166
FILE16.sql:PUBG-15059
FILE17.sql:PUBG-15252

Here, I am using a sed substitute command, which takes the form of 's/regular expression/substitution/flag'. To further break down the command:

The regular expression "--+\ ?" is the pattern you want to find and select. This can be read as "find a pattern that has "-" followed by one or more consecutive "-", followed by zero or one " "". This will match "--", "---", and "--- " in your output. Note that you will need the -E flag for sed in order to recognize these quantifiers. Here's a quick reference to brush up on the regex quantifiers like ? and +

Here, the substitution space is left empty. This will replace the found patterns with nothing and is an effective method to strip your output.

The flag "g" indicates that the search will be global. Without this, the substitution will only happen for the first match on each line. Adding the g makes sure that every instance of that pattern on each line is replaced with nothing.

You could also apply those concepts to your initial grep command to perform only one search:

grep -E "^--+\ ?PUBG" FILE*.sql | sed -E 's/--+\ ?//g'
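A one-line check of the stripping regex on the trickiest sample line, the one with two PUBG values, confirms the dashes are removed while the single dashes inside the PUBG numbers survive:

```shell
# Quick check of the stripping regex on the multi-value line from the question
out=$(printf '%s\n' 'FILE5.sql:---PUBG-11720, PUBG-11406---' | sed -E 's/--+ ?//g')
echo "$out"
```

The pattern requires at least two consecutive dashes, so the lone "-" in PUBG-11720 is never touched.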
GREP for pattern and remove all the junk characters before or after the pattern
1,635,625,585,000
PROBLEM: I can't get the $@ variable to do what I want in a for loop, the loop only sends one name into the file while looping, it should loop through all the arguments and write them to the file USERS.txt each on its own line. Here is the file: something78 something79 something7 dagny oli bjarni toti stefan_hlynur jessie Here is the test code: #!/bin/bash prepare_USERS() { /usr/bin/awk -F: '$3 >= 1000 { print $1 }' /etc/passwd > USERS.txt /bin/chmod 777 USERS.txt echo "$@" for user in "$@" do echo $user echo "$user" >> USERS.txt || echo "writing to USERS.txt failed"; exit 127 done } prepare_USERS "$@" #for user in "$@" #do # echo "$user" >> USERS.txt #done for user in USERS.txt do printf "%s" $user done Here are the arguments I pass: ./somethingDELETEme.sh jessie henry allison jason CURRENT output: $./somethingDELETEme.sh jessie henry allison jason jessie henry allison jason jessie EXPECTED output: The loop loops through all names from the argument list and writes it to the file USERS.txt. QUESTION: I have used this variable ($@) before and never had this problem. Why is the loop not iterating through all names in the argument list ($@) and how is the right way coding this? HERE IS THE REAL CODE: prepare_USERS() { checkIfUser /usr/bin/awk -F: '$3 >= 1000 { print $1 }' /etc/passwd > "$CURRENTDIR"/USERS.txt /bin/chmod 777 "CURRENTDIR"/USERS.txt for user in "$@" do echo "$user" >> "CURRENTDIR"/USERS.txt || echo "writing to USERS.txt failed"; exit 127 done }
The problem is with the incorrect usage of exit 127 in your for loop, which exits after the first iteration. You need to group the echo statement and the exit as a compound block under {..} to prevent this:

echo "$user" >> USERS.txt || { echo "writing to USERS.txt failed"; exit 127; }

Without this grouping, the || applies only to the echo command, and the exit runs unconditionally, regardless of whether the redirection to the file succeeded, because ; is a command separator. With the compound grouping, the entire set of actions inside {..} is treated as one block, and both of them are executed only if the write to USERS.txt fails.
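You can see the difference directly by running both forms in throwaway subshells and comparing their exit statuses:

```shell
# Without braces the exit runs unconditionally; grouped, it is tied to the failure
bash -c 'true || echo "failed"; exit 127; echo "after"'
no_group=$?     # the subshell exits 127 even though the command succeeded

bash -c 'true || { echo "failed"; exit 127; }; echo "after"'
grouped=$?      # the block is skipped, "after" runs, exit status is 0

echo "$no_group $grouped"
```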
The $@ variable is not working in a for loop, trying to iterate through a users list
1,635,625,585,000
I have roughly 1 million images in a directory. The files were numbered from 1 to n. I am using a for loop to iterate over each image. Since each iteration is checked by individuals, only a certain number of iterations can be done in a day. When I begin the loop again the subsequent day, the loop obviously begins from the first file again. I saved the files iterated through the loop in a text file and read the last line of the text file before the loop starts every time. I am trying to use the last read file as a beginning for the for loop. The following is the code done so far:

query=/ImageFolder/*.jpg
fil=$( tail -n 1 readfiles.txt )
for f in $query
do
    python ~/runprog.py --query $f
done

I am not sure how to use the $fil as my starting point in the for loop and then start iterating subsequent files from thereon.
If your readfiles.txt contains all already processed files, you can use grep to look up whether a certain file was done or not. After running the python script, update that file with the processed file.

for f in /ImageFolder/*.jpg; do
    if ! grep -qxF -- "$f" readfiles.txt; then
        python ~/runprog.py --query "$f"
        echo "$f" >> readfiles.txt
    fi
done

The -F and -x options make grep match the file name literally and against the whole line, so a name such as 1.jpg cannot accidentally match inside 11.jpg, and -- protects against names that start with a dash.
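Here is a runnable sketch of the resume logic on hypothetical files, using grep -qxF so that an entry such as 1.jpg in the log cannot shadow 11.jpg; the "processed" array stands in for the python call:

```shell
# Resume demo on throwaway files; -F matches literally, -x matches whole lines
d=$(mktemp -d)
touch "$d/1.jpg" "$d/2.jpg" "$d/11.jpg"
log="$d/readfiles.txt"
printf '%s\n' "$d/1.jpg" > "$log"    # pretend 1.jpg was processed yesterday

processed=()
for f in "$d"/*.jpg; do
    if ! grep -qxF -- "$f" "$log"; then
        processed+=("$(basename "$f")")   # the python call would go here
        printf '%s\n' "$f" >> "$log"
    fi
done
echo "${processed[*]}"
```

Only the two files not yet in the log are picked up; the already-logged 1.jpg is skipped even though its name is a substring of 11.jpg.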
Begin for loop every time where it ended last
1,635,625,585,000
I have thousands of *csv files in a certain subdirectory. There's a simple in-line executable which I use to work with these files, which I pipe into a new file:

executable file1.csv standard.csv > output_file1.csv

I would like to create a for loop to do this not just for file1.csv, but for all files in that subdirectory. I would try something like this:

for file in *.csv
do
    # run executable on "$file" and output
    executable $file standard.csv > output
done

I think this will work, but how do I name each output output_ + $file + .csv?
Credit to Bruno9779 for the original draft of this answer. Not sure why it was self-deleted, as it was a pretty good answer:

You have pretty much done it yourself:

destinationDir="/destination/path/here/"

if cd "$destinationDir"; then
    for file in *.csv; do
        # run executable on "$file" and output
        executable "$file" standard.csv > "${destinationDir}/output_${file%.csv}.csv"
    done
else
    echo "Unable to change to working directory."
fi

${file%.csv} strips the extension before the name is reassembled, so file1.csv becomes output_file1.csv rather than output_file1.csv.csv. Just remember to quote filenames with variables.
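The name-building itself is plain parameter expansion and can be checked in isolation with a hypothetical file name:

```shell
# Building the output name from the input name (hypothetical file name)
file="file1.csv"
output="output_${file%.csv}.csv"   # ${file%.csv} strips the .csv suffix first
echo "$output"
```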
Loop through all files in a given subdirectory, name the output by the looped file
1,635,625,585,000
I've to run a command iptables --flush in 100+ servers. I don't have root credentials, but have sudo access for my account. So I come up with a below script to run the command in all servers. I added the servers names in the flushhosts file. [root@~]# cat testscript.sh > for i in `cat /flushhosts` > > do > > sshpass -p 'Mypwd' ssh -t username@$i sudo /sbin/iptables --flush > > cat << EOF || "Mypwd" > > done [root@~]# ./testscript.sh ./testscript.sh: line 6: syntax error: unexpected end of file I couldn't find what I'm missing in the script.
If you are open to a different approach I would like to propose using expect. Create a small expect script ex1.sh (and make it executable with chmod +x ex1.sh):

#!/usr/bin/expect -f
set arg1 [lindex $argv 0]
spawn ssh username@$arg1
expect "password: "
send "Mypwd\r"
expect "$ "
send "sudo /sbin/iptables --flush\r"
expect "password "
send "Mypwd\r"
expect "$ "
send "exit\r"

Then you can use it in your loop like this:

for i in $(</flushhosts); do ./ex1.sh "$i"; done

You have a lot more flexibility with expect for this kind of situation.
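Whichever method runs the per-host command, reading the host list with a while read loop is more robust than expanding $(</flushhosts) on the command line. A sketch, with a temporary file standing in for /flushhosts and a comment marking where the expect script would be called:

```shell
# Hypothetical host list standing in for /flushhosts
hosts_file=$(mktemp)
printf '%s\n' server1 server2 server3 > "$hosts_file"

count=0 last=""
while IFS= read -r host; do
    # ./ex1.sh "$host" would be called here
    count=$((count + 1))
    last=$host
done < "$hosts_file"
echo "$count $last"
rm -f "$hosts_file"
```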
SSH to servers with sudo access - script throws unexpected end of file error
1,635,625,585,000
Does GNU Parallel start a batch of as many jobs as possible (the number of jobs started being governed by GNU Parallel internals or/and the -j option along with given parameters), and once complete, then start the next batch of jobs and so on? Context I want to learn how to better handle timestamps related to jobs (start time, end time and then running time) and GNU Parallel. As an example here, I would like to understand if I can make use of the timestamps in my custom logs, recorded via a custom log function, which come just before executing the actual processing command, always inside a for loop that is passed to GNU Parallel. Can they give me the running time of the actual processing commands? Details Inside a for loop, passed then to GNU Parallel along with --joblog, I have put two commands : the first command is a custom log command including some timestamping, just before the second command which does the actual processing of interest. The timing of the custom log command is not of direct interest -- it is yet another logging command. Unfortunately, I was not aware of how the --joblog option works -- as explained here GNU Parallel --joblog logs only first line of commands inside a for loop, it only logs the first command. 
Trying to make sense of the logs I have, I use mlr to show the first three lines of a --joblog output ❯ mlr --itsv --oxtab head -n 3 parallel/parallel.job.4437.3.log Seq 1 Host : Starttime 1670106266.417 JobRuntime 0.000 Send 0 Receive 0 Exitval 0 Signal 0 Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_79_merged_scaled.nc, Hours since=946704, Longname=10 metre U wind component, Units=m s**-1 Seq 2 Host : Starttime 1670106266.419 JobRuntime 0.009 Send 0 Receive 0 Exitval 0 Signal 0 Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_39_merged_scaled.nc, Hours since=946705, Longname=10 metre U wind component, Units=m s**-1 Seq 3 Host : Starttime 1670106266.422 JobRuntime 0.012 Send 0 Receive 0 Exitval 0 Signal 0 Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_28_merged_scaled.nc, Hours since=946706, Longname=10 metre U wind component, Units=m s**-1 The above doesn't refer to the running time of the command gdalmerge_and_clean which I am interested in. Nevertheless, I thought that the logged starting time should differ in-between each logged line as the running time of all commands that are executed (in batches?) in an iteration of a for loop passed to GNU Parallel. I guess this is not the case and GNU Parallel is very precise in what it logs which is exactly the running time of the very command it reads first. 
The differences between successive Starttime records (below shown the first 10 lines) mlr --itsv --opprint step -a delta -f Starttime then rename Starttime_delta,Delta then cut -f Starttime,JobRuntime,Delta parallel/parallel.job.4437.3.log |head are Starttime JobRuntime Delta 1670106266.417 0.000 0 1670106266.419 0.009 0.0019998550415039062 1670106266.422 0.012 0.003000020980834961 1670106266.424 0.014 0.002000093460083008 1670106266.427 0.013 0.003000020980834961 1670106266.434 0.012 0.006999969482421875 1670106266.439 0.021 0.004999876022338867 1670106266.442 0.019 0.003000020980834961 1670106266.446 0.018 0.004000186920166016 .. and so on it goes. The average Delta mlr --itsv --opprint step -a delta -f Starttime then rename Starttime_delta,Delta then cut -f Starttime,JobRuntime,Delta then stats1 -a mean -f Delta parallel/parallel.job.4437.3.log is Delta_mean 0.33402504553451784 which obviously concerns to the log commands. Unlikely the gdalmerge_and_clean commands are so fast. Nonetheless, from the custom log commands, I can compute the overall duration of all Jobs ran from the overall Start and End timestamps Action=Processing, Start=2022-12-02 23:15:50 Action=Processing, End=2022-12-04 02:16:43 which is very useful. However, I want to know more about each and every single Job ran during this "Processing". This is why there is a log command to record a timestamp just before executing an actual gdalmerge_and_clean command. These log lines look like so: .. 
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_210_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_211_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
..
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_232_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_233_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
..
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_255_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_256_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
..

Many consecutive entries share the same timestamp, and then the timestamp jumps. Using mlr to compute the differences between the logged timestamps, maybe there is something useful?
The non-zero differences then correspond to the timestamps of batches of jobs started at different moments (I think this is useful because many jobs share the same start time, since they run in parallel via GNU Parallel, right?):

mlr --ocsv grep 'Action=Merge, Output' then clean-whitespace then put '$Seconds = localtime2sec($Timestamp)' then step -a delta -f Seconds then cut -f Timestamp,Seconds,Seconds_delta then cat -n then rename n,Job,Seconds_delta,Delta then filter '$Delta != 0' jobs/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log

are

Job,Timestamp,Seconds,Delta
209,2022-12-02 23:48:15,1670024895,625
232,2022-12-02 23:55:45,1670025345,450
255,2022-12-02 23:56:02,1670025362,17
278,2022-12-02 23:56:19,1670025379,17
291,2022-12-02 23:56:20,1670025380,1
301,2022-12-02 23:56:36,1670025396,16
324,2022-12-02 23:56:56,1670025416,20
347,2022-12-02 23:57:11,1670025431,15
370,2022-12-02 23:57:25,1670025445,14
393,2022-12-02 23:57:38,1670025458,13
..
8570,2022-12-03 21:18:20,1670102300,94
8593,2022-12-03 21:19:48,1670102388,88
8616,2022-12-03 21:21:56,1670102516,128
8639,2022-12-03 21:23:54,1670102634,118
8662,2022-12-03 21:25:42,1670102742,108
8685,2022-12-03 21:26:00,1670102760,18
8708,2022-12-03 21:27:12,1670102832,72
8731,2022-12-03 21:28:24,1670102904,72
8754,2022-12-03 21:29:19,1670102959,55
8777,2022-12-03 21:29:59,1670102999,40

Maybe these differences tell something, more or less, about how long it took for each individual job run inside a GNU Parallel-ised for loop?
GNU Parallel starts a job when there is a free job slot. The number of job slots is given by -j/--jobs and defaults to the number of CPU threads.

Let us assume your server has 8 CPU threads. When you start GNU Parallel it will spawn 8 jobs immediately. When a job finishes, the info is logged (in --joblog), and a new job is spawned. So if all your jobs take exactly the same time, it will seem as if GNU Parallel spawns jobs in batches. But it does not.

This should make it easier to see what is going on:

seq 1000 | parallel --lb --joblog my.log 'echo Starting {};sleep {};echo Ending {}'

In general it seems using gdalmerge_and_clean is a really bad way of learning how to use GNU Parallel. Instead use much simpler examples to learn from, and then apply what you have learned to gdalmerge_and_clean.
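Since each --joblog row carries both Starttime and JobRuntime, per-job end times can be derived by simply adding the two columns; no custom timestamping is needed. A sketch on a fabricated two-row joblog (the column layout mirrors the one shown in the question, but the values are invented):

```shell
# Fabricated --joblog (tab-separated); columns 3 and 4 are Starttime and JobRuntime
log=$(mktemp)
{
    printf 'Seq\tHost\tStarttime\tJobRuntime\tSend\tReceive\tExitval\tSignal\tCommand\n'
    printf '1\t:\t100.0\t2.5\t0\t0\t0\t0\tsleep 2.5\n'
    printf '2\t:\t100.1\t4.0\t0\t0\t0\t0\tsleep 4\n'
} > "$log"

# End time of each job = Starttime + JobRuntime
endtimes=$(awk -F'\t' 'NR > 1 { printf "%.1f\n", $3 + $4 }' "$log")
echo "$endtimes"
rm -f "$log"
```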
Understanding timestamps in a GNU Parallel --joblog output
1,635,625,585,000
Trying two variables with a here-document to generate a list of bash scripts. I ran into the problem that I cannot put the two variables into the script properly and the output file (only one file) has the file name of cut_cat.sh, basically the code is interpreted cat as text, not a function. How can I improve this? This is in the bash shell environment. The operating system is Red Hat Linux. The input file R2_adaptor contains Sample_ID:CGATATTCG (the first half is sample ID and the second half is the sequence to be removed). They are separated by :. for n in cat ${FA_PATH}/R2_adaptor; do first=$(echo $n | cut -d ":" -f 1) second=$(echo $n | cut -d ":" -f 2) cat <<- EOF > ${SCRIPTS}/cut_${first}.sh #!/bin/bash FA_PATH="/home/xxx/test_files" cutadapt -a TTCCTCCT -A echo ${second} --cores=14 \ -o ${FA_PATH}/${first}_R1_trim.fastq.gz \ -p ${FA_PATH}/${first}_R2_trim.fastq.gz \ ${FA_PATH}/${first}_R1_001.fastq.gz ${FA_PATH}/${first}_R2_001.fastq.gz EOF done Here is an example of the ${FA_PATH}/R2_adaptor file Sample2:AGAAGTTT Sample3:TTGGATAT Sample4:GTAGTATT Sample5:ATATATAT Sample6:AACTTGGC Sample7:GATGGTGA Sample8:GTCCCTAT
A lot of this is guesswork since you haven't told us any detail at all, but I think that ${FA_PATH}/R2_adaptor expands to a file name and I think what you are trying to do is iterate over the contents of the file. Which means that what you were probably looking for is for n in $(cat ${FA_PATH}/R2_adaptor). However, and although that would work here if you just have one string per line, it is better to get used to using while loops for things like this.

I am, again, guessing that your input file has two strings separated by :, something like:

ACTTGCTATAGCGAT:CGATATTCGGCGATAT

If so, you can do:

#!/bin/bash

FA_PATH="/home/xxx/test_files"
SCRIPTS="/path/to/scripts"

while IFS=: read -r first second rest_if_any_ignored; do
cat <<- EOF > "${SCRIPTS}"/cut_"${first}".sh
#!/bin/bash
cutadapt -a TTCCTCCT -A '${second}' --cores=14 \
-o '${FA_PATH}/${first}_R1_trim.fastq.gz' \
-p '${FA_PATH}/${first}_R2_trim.fastq.gz' \
'${FA_PATH}/${first}_R1_001.fastq.gz' '${FA_PATH}/${first}_R2_001.fastq.gz'
EOF
done < "${FA_PATH}"/R2_adaptor

Note that I also removed the echo from -A echo $second since the echo would just be a string there and the -A expects an adapter sequence to remove.
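A reduced, runnable sketch of the same pattern, generating one wrapper script from a single hypothetical "first:second" line into a temporary directory (the cutadapt options are cut down here to keep the demo short):

```shell
# Generate one tiny wrapper script from a "first:second" line, then inspect it
d=$(mktemp -d)
line='Sample2:AGAAGTTT'
IFS=: read -r first second <<< "$line"

cat << EOF > "$d/cut_${first}.sh"
#!/bin/bash
cutadapt -A '${second}' -o '${first}_R1_trim.fastq.gz'
EOF

generated=$(cat "$d/cut_${first}.sh")
echo "$generated"
```

Because the here-document delimiter EOF is unquoted, ${first} and ${second} are expanded when the wrapper is written, which is what substitutes the real sample ID and sequence into each generated script.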
Here-document with two variables
1,635,625,585,000
I have a series of files located in a series of folder, for example: ~/BR2_1-3/bin.1.permissive.tsv ~/BR2_1-3/bin.2.permissive.tsv ~/BR2_1-3/bin.3.orig.tsv ~/BR2_2-4/bin.1.strict.tsv ~/BR2_2-4/bin.2.orig.tsv ~/BR2_2-4/bin.3.permissive.tsv ~/BR2_2-4/bin.4.permissive.tsv ~/BR2_3-5/bin.1.permissive.tsv ~/BR2_3-5/bin.2.permissive.tsv ~/BR2_3-5/bin.3.orig.tsv ~/BR2_3-5/bin.4.orig.tsv ~/BR2_3-5/bin.5.permissive.tsv ... What I want to do is to extract the 1st and 5th column from each of the *.tsv files and create a new tab delimited file in the corresponding folder. That I can do separately for each file under its corresponding folder by using the commands below: $ awk -F '\t' 'OFS="\t" {if ($5 != "") print($1,$5)}' bin.1.permissive.tsv > test $ sed -i '1d' test $ mv test BR2_1-bin.1.permissive.ec My question is, because I have over a hundred of this kind of file, is there a way to write a for loop to do this step at the terminal automatically? The naming convention for the folder and files are as follows: "BR(2~5)_(1~6)-(n, as the number of files contained in the folder)" for the folders; "bin.n.(strict/permissive/orig).tsv" for the files. One input file should be mapping to one output file. The name for an output files is "BR2_1-bin.1.permissive.ec" if the corresponding input file was "~/BR2_1-3/bin.1.permissive.tsv". And the name for an output file is "BR2_3-bin.3.orig.ec" if the corresponding input file was "~/BR2_3-5/bin.3.orig.tsv". In addition, the output file is supposed to be written in the same folder with its corresponding input file. Thanks for this question from the comment. Thank you in advance and all suggestions are welcomed!
find and xargs are typically recommended for this:

find "$HOME" -name \*.tsv | xargs awk -F'\t' -v OFS='\t' '$5 != "" {print $1, $5}' >> output.tsv

or, more safely

find "$HOME" -name \*.tsv -print0 | xargs -0 awk -F'\t' -v OFS='\t' '$5 != "" {print $1, $5}' >> output.tsv

find's -print0 directive prints out the matched files separated with a null byte, and xargs's -0 option uses the null byte to separate filenames. This is done because the null byte is not allowed to appear in filenames, while newline is a valid filename character.

OK, for each file to be processed into the corresponding .ec file:

find "$HOME" -name \*.tsv -print0 | xargs -0 awk -F '\t' -v OFS='\t' '
    FNR == 1 {
        if (ec) close(ec)
        ec = gensub(/\.tsv$/, ".ec", 1, FILENAME)
        next
    }
    $5 != "" {print $1, $5 > ec}
'

Notes:

print ... > ec -- similar to redirection in the shell, this redirects the output to the filename contained in the ec variable. Unlike the shell, this does not truncate the file for every "print": only the first print truncates/creates the file, and all subsequent prints append to it.

You can run into "too many open files" errors, so it's best practice to close an open file when you're done with it. Do this when you're at the first record of a file: if the ec variable is not empty, it holds the filename that was used for the previous file that was processed.

gensub is a gawk-specific function, similar to sub and gsub. It's described in the manual. Unlike sub and gsub, gensub returns the transformed value rather than modifying the string in place.
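Since gensub is gawk-only, here is a small portable sketch of deriving the .ec name with plain sub() on a copy of the string (file name invented, matching the question's layout):

```shell
# sub() modifies its target in place, so copy the input
# record first and transform the copy; this works in any
# POSIX awk, not just gawk.
echo 'BR2_1-3/bin.1.permissive.tsv' |
awk '{ ec = $0; sub(/\.tsv$/, ".ec", ec); print ec }'
# prints: BR2_1-3/bin.1.permissive.ec
```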
Use For loop to extract certain columns from a series of files to write new tab-delimited files
1,635,625,585,000
I'm making a script to start up a number of services at server reboot. To do this, I'm looping over the directory, checking that each entry has a start.sh script, and calling that script if it does. However, as part of this, I would like to set up the script to ignore a service with a certain name, in this case mongo:

for service in $HOME/start/* ; do
    if [ -e $HOME/start/$service/start.sh ] && [ $service != mongo ]
    then
        cd $HOME/start/$service
        ./start.sh
    else
        pwd
        echo "No start script found for $service"
    fi
done

This is because mongo is started before this loop is called, as a prerequisite for the services, and you cannot have more than one instance of it at a time. However, the loop still calls mongo. How can I get it to ignore this? Edit: corrected appName to service
When you iterate over files with a glob, each result includes the path, not just the file name. So even if your entry is just mongo, your loop will set service to /home/<YOUR USER>/start/mongo. You are then comparing that full path against mongo; since they are never equal, the != test always succeeds and mongo is never skipped. You could use basename to fix that:

for s in "$HOME/start/"*
do
    service=$(basename "$s")
    if [ -f "$HOME/start/$service/start.sh" ] && [ "$service" != mongo ]
    then
        cd "$HOME/start/$service"
        ./start.sh
    else
        pwd
        echo "No start script found for $service"
    fi
done
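If you'd rather not spawn a basename process per iteration, the same strip can be done with parameter expansion; a quick sketch:

```shell
# ${s##*/} removes the longest prefix matching "*/" --
# i.e. everything up to and including the last slash.
s="/home/user/start/mongo"
service=${s##*/}
echo "$service"
# prints: mongo
```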
How to ignore file in a for loop
1,635,625,585,000
I have a loop in a bash script, test.sh that reads as follows: #!/bin/bash CHOSEN_NQUEUE=0 foo(){ for chunk in $(seq 0 $((${CHOSEN_NQUEUE}-1))); do echo "CHUNK = $(($chunk+1))" done } bar(){ CHOSEN_NQUEUE=10 foo } bar This loop has previously been working fine up till now. If I run the program as . test.sh, I get the following error code in the loop: -bash: 0 1 2 3 4 5 6 7 8 9+1: syntax error in expression (error token is "1 2 3 4 5 6 7 8 9+1") If I run the program as bash test.sh, then the function produces the desired result: CHUNK = 1 CHUNK = 2 CHUNK = 3 CHUNK = 4 CHUNK = 5 CHUNK = 6 CHUNK = 7 CHUNK = 8 CHUNK = 9 CHUNK = 10 This is a snippet from a much larger program; if I run this program with bash program.sh, the error I see in the first case persists. Particularly, if I simply run foo, then the error does not occur. If I run foo from bar, then the error occurs. This occurs irrespective of using bash program.sh or . program.sh. Can someone kindly suggest what I might be doing wrong? Is it poor practice to run functions from inside other functions in bash? Kindest regards! EDIT: Thanks to everyone in the comments! Upon realizing this problem arises from using select for arrays, I attempted the following code: select opt in "${options[@]}" do next=false local IFS=@ case "@${options[*]}@" in (*"@$opt@"*) foo (*) echo "Invalid option: $REPLY" ;; esac echo "" done echo "IFS = $IFS" The problem arises from IFS=@, which should not be @ outside of the loop. However, if I run this code attempting to set IFS locally, i.e. local IFS=@, it appears the global IFS is modified. The code outputs: IFS = @ Does anyone have an idea why this might be? Kind regards again!
The error you get indicates that $chunk contains a multiline value, all the numbers from 0 to 9. That would happen if word-splitting doesn't happen on the result of $(seq ...) in the for. Now, the usual way to prevent word-splitting is to put double-quotes around the expansion, so for chunk in "$(seq ...)" wouldn't expand. But that's not the case here since you'd know if you added double-quotes, and anyway, it works in some cases. But word-splitting isn't always the same, it's based on the value of IFS, which by default contains a space, tab and a newline ($' \t\n' using the C-style quoting). If it contains something different, then those are the characters that will be taken as word separators. And indeed you have modified IFS inside the select, just before calling foo: local IFS=@ case "@${options[*]}@" in (*"@$opt@"*) foo The way the variable scoping works in Bash is that foo also sees the modified value of IFS. local doesn't mean the change is visible only to that function, but instead it's also visible to all subfunctions called from that level too: $ x=999 $ a() { echo "a: x=$x"; } $ b() { local x=$1; a; } $ b 123 a: x=123 This is unlike what you'd have in, say, C. A workaround would be to save IFS to another variable instead, so something like this: local oldifs=$IFS IFS=@ str="@${options[*]}@" IFS=$oldifs case $str in ... or to change it in a subshell (hiding the IFS change there): str=$(IFS=@; echo "@${options[*]}@") case $str in ... 
You could also make a function to do that string join (hiding the IFS change in the function), you just need name references to pass variables by name: # join a b c: # join the values of array 'a' with the character 'b', and put the result in 'c' join() { local -n _arr=$1 local IFS=$2 local -n _res=$3 _res="${_arr[*]}" } src=(11 22 33) join src @ dst echo "$dst" # outputs "11@22@33" (Of course, that's a bit unwieldy for one use, and name references aren't perfect either: a nameref inside a function can't refer to a variable with the same name outside it (at least in Bash 4). The minor upside of this over just using a command substitution is avoiding a fork to start the subshell.) Or, just to be on the safe side, (re)set IFS every time you need it. Inside foo: foo() { local IFS=$' \t\n' # or just IFS=$'\n' for chunk in $(seq ...); do ... }
(Bash) For Loop Not Functioning Properly
1,635,625,585,000
I have a requirement to perform a specific set of tasks based on the server, so I want to have the condition(s) defined per server. Here is the script I came up with; I have read multiple blogs and couldn't find any mistake in my script. Can you tell me what I am overlooking here?

#!/bin/bash
SERVER_NAME=`hostname -s`
hostname -s
DBServer=(servr1 servr2 servr3 servr4)
#AppServer=[hqidlfdiwa01 , hqiqlfdiwa01]
echo "Values of DBServer seeing is ${DBServer[*]}"
Values of DBServer seeing is servr1 servr2 servr3 servr4
for i in ${DBServer[*]}
do
    echo "current value in I is $i"
    echo "The server name found is $SERVER_NAME"
    if [$SERVER_NAME == $i]
    then
        echo "I am on one of the servers and it is $i"
    fi
done

The output I see on the server is:

current value in I is servr1
The server name found is servr1
-bash: [servr1: command not found
current value in I is servr2
The server name found is servr1
-bash: [servr1: command not found
current value in I is servr3
The server name found is servr1
-bash: [servr1: command not found
current value in I is servr4
The server name found is servr1
-bash: [servr1: command not found
In shell, spaces matter. Replace: if [$SERVER_NAME == $i] with: if [ "$SERVER_NAME" = "$i" ] Without the spaces, the shell thinks that you want to run a command named [$SERVER_NAME (such as [servr1) with arguments == and $i]. With the space, the shell runs the test command, denoted by [. Also, always place shell variables in double quotes unless you understand what shell expansions would be applied and you explicitly want them to be applied. Lastly, while bash accepts either == or = to mean string-equal inside [...], other shells only understand =. For portability, it is best practice to use = for string-equal inside [...].
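As an aside, the whole loop can be replaced with a single case membership test; a sketch using the server names from the question:

```shell
SERVER_NAME=servr1
# Pad the list and the match with spaces so "servr1"
# cannot accidentally match inside a longer name like "servr10".
case " servr1 servr2 servr3 servr4 " in
    *" $SERVER_NAME "*) echo "I am on one of the DB servers" ;;
    *)                  echo "Not a DB server" ;;
esac
# prints: I am on one of the DB servers
```

This is pure POSIX sh, needs no array, and avoids the quoting pitfalls of [ entirely.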
If condition on shell not working , not sure what is the mistake I am doing? [duplicate]
1,635,625,585,000
I have 2 text files, "${LinkP}" and "${QuestionP}". I want to read these files and store each complete line in the respective array:

IFS=$'\r\n' GLOBIGNORE='*' command eval "LinkA=($(cat "${LinkP}"))"
IFS=$'\r\n' GLOBIGNORE='*' command eval "QuestionA=($(cat "${QuestionP}"))"

Now I want to operate on these using a for loop:

nLink=${#LinkA[@]} # Size of array
for ((i = 0; i < nLink; i = i + 1)); do
    echo $i
    Question=${QuestionA[i]}
    echo "Question=${QuestionA[i]}"
done

But the Question variable doesn't contain the full line; it breaks at each space character. How can I store each question and link (each a complete line in the respective file) in these variables and process them inside the for loop?
Storing each complete line in the respective array is easy with a different approach:

mapfile -t LinkA < "$LinkP"
mapfile -t QuestionA < "$QuestionP"

See help mapfile for more options; -t (used above) removes the trailing newline delimiter from each line.
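A self-contained sketch of the two-array version (file contents invented; the real script would read "$LinkP" and "$QuestionP"):

```shell
# Stand-in input files for the demo:
tmp=$(mktemp -d)
printf '%s\n' 'http://a.example/1' 'http://a.example/2' > "$tmp/links"
printf '%s\n' 'first question?'   'second question?'   > "$tmp/questions"

mapfile -t LinkA     < "$tmp/links"      # -t drops the trailing newline
mapfile -t QuestionA < "$tmp/questions"

# Lines with spaces stay intact, unlike with word splitting:
for ((i = 0; i < ${#LinkA[@]}; i++)); do
    echo "Question=${QuestionA[i]} Link=${LinkA[i]}"
done
rm -rf "$tmp"
```

Note that mapfile (also spelled readarray) needs bash 4 or later.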
Reading multiple files and operating on stored Arrays
1,635,625,585,000
If a parameter is provided as an input, script should only check number of lines for that input file, otherwise script should display number of lines for each file within that directory. The script file should be ignored i.e. the count for the script file should not be displayed. I tried: #!/bin/bash if [ $# -eq 0 ] then for l in *.txt do echo $(wc -l $l) done else for l in $* do echo $(wc -l $l) done fi But I am not allowed to explicitly mention the file type i.e *.txt - it must check all files except script file.
I think you almost got it. You can use basename "$0" to find the name of the script from within the script, and print the line count of everything except that:

#!/bin/bash
if [ $# -eq 0 ]
then
    for k in *
    do
        if [[ ! -d "$k" && "$k" != `basename "$0"` ]]
        then
            wc -l "$k"
        fi
    done
else
    for k in "$@"
    do
        wc -l "$k"
    done
fi

I took the liberty of:

Using k instead of l, since l is too easily confused with 1 and I.

Using 4 spaces for indentation instead of 8. In the end that is of course your decision, but I'd say 4 is more readable than 8.

Quoting your variables. Highly recommended, to avoid word-splitting values that contain spaces. (For the same reason I changed $* to "$@", so arguments with spaces survive intact.)

Dropping the unnecessary echo like @Serg pointed out.

EDIT: I added double brackets and a test for directories to the if conditional, so directories are ignored as well.
Count number of lines in each file within current directory using a for loop?
1,475,657,579,000
I'm having a problem here on a bash script I made. In a for loop, I iterate over all the arguments to construct a string variable that is later fed to an "eval" command:

for arg in "$*"
do
    if [ $arg != $lastArg ]; then
        findTarget+="-name $arg -o "
    else
        findTarget=$(echo $findTarget | sed 's/-o$//')
        break
    fi
done

The problem stems from the "$*". For example, when I enter "*.c" in the arguments and the current folder contains files that match that pattern, the *.c argument is expanded into those files. I do not want that; I want findTarget to be concatenated with -name *.c -o. I have tried with and without quotes, and using eval; nothing seems to work. Any idea how to do this (simply, if possible)? Note: the total number of arguments can vary. This is an example of how I run the script:

$ trouver.bash *.c *.f90 someString

At the end of my for loop, the variable findTarget should read -name *.c -o -name *.f90. This does not work if the *.c or *.f90 match files in the current folder...
The syntax to loop over the positional parameters is for arg do alone. "$*" is the concatenation of the positional parameters with the first character of $IFS, so you would be looping over one element only. Also, if you want to build a list of arguments for the find command, you need an array, not a string. And don't forget to quote your variables! So: findTarget=() or=() for arg do [ "$arg" = "$lastArg" ] && break findTarget+=("${or[@]}" -name "$arg") or=(-o) done find . \( "${findTarget[@]}" \) Note that when you invoke your script, you need to quote the *.c... patterns as otherwise they would be expanded by the shell before being passed to the script. trouver.bash '*.c' '*.f90' someString If your interactive shell is zsh, you can define an alias for your command where globbing is disabled with: alias trouver.bash='noglob trouver.bash' That way, you can do: trouver.bash *.c *.f90 someString without the shell expanding those *.c *.f90 globs.
Problem with wildcard expansion in for loop range
1,475,657,579,000
I have script here that will list the date that the user will enter and output the date 5 days ago. #!/bin/bash echo "What month?" echo "1 - January" echo "2 - February" echo "3 - March" echo "4 - April" echo "5 - May" echo "6 - June" echo "7 - July" echo "8 - August" echo "9 - September" echo "10 - October" echo "11 - November" echo "12 - December" echo "" echo -n "What month? " read m if [ "$m" == "1" ] then mn="Jan " elif [ "$m" == "2" ] then mn="Feb " elif [ "$m" == "3" ] then mn="Mar " elif [ "$m" == "4" ] then mn="Apr " elif [ "$m" == "5" ] then mn="May " elif [ "$m" == "6" ] then mn="Jun " elif [ "$m" == "7" ] then mn="Jul " elif [ "$m" == "8" ] then mn="Aug " elif [ "$m" == "9" ] then mn="Sep " elif [ "$m" == "10" ] then mn="Oct " elif [ "$m" == "11" ] then mn="Nov " elif [ "$m" == "12" ] then mn="Dec " else echo "Invalid month" fi echo "" #DAY echo -n "What day? " read d if [ "$d" -lt "9" ] then mnd="$mn"" ""$d" elif [ "$d" -gt "31" ] then mnd="1" else mnd="$mn""$d" fi for dy in {0..4}; do date -d "$mn $d - $dy days" +'%b %_d' done Output: What month? 8 What day? 1 Aug 1 Jul 31 Jul 30 Jul 29 Jul 28 What I want now is to store each date in to variable for example the first line must Aug 1 must be stored on variable x1 Jul 31 must be stored on variable x2 etc.. What I mean whatever output on the first list must be stored on x1 and so on.
Use an array instead of trying to create separate variables.

declare -a x
for dy in {0..4}; do
    x+=( "$( date -d "$mn $d - $dy days" +'%b %_d' )" )
done

You may then access the five values in ${x[0]} through ${x[4]}.

For the first part of your script, have you considered using a select statement?

select mn in "Jan" "Feb" "Mar" "Apr" "May" "Jun" \
             "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
do
    if [[ -z "$mn" ]]; then
        echo "Invalid choice" >&2
    else
        break
    fi
done
printf "You selected '%s'\n" "$mn"

This does the following:

1) Jan       3) Mar      5) May      7) Jul      9) Sep     11) Nov
2) Feb       4) Apr      6) Jun      8) Aug     10) Oct     12) Dec
#? 56
Invalid choice
#? 5
You selected 'May'

The value of $mn will be the selected string (May for example).
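Once the values are in the array, you usually don't need individual names at all; a quoted "${x[@]}" expansion walks every element. A sketch with made-up values standing in for the date output:

```shell
x=( "Aug  1" "Jul 31" "Jul 30" )   # stand-ins for the date results

for d in "${x[@]}"; do             # quotes keep "Aug  1" as one word
    echo "stored: $d"
done
echo "count: ${#x[@]}"
# the last line prints: count: 3
```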
Storing the each output into a variable
1,475,657,579,000
Overview: I save my variable in a config file and call them later. Each entry with the name FailOverVM has a number beside it like FailOverVM1 and I want to check to see if it has data and generate a function named FailOverVM1() that later in the script starts $FailOverVM1Name, which happens to be 'Plex Server' I can manually do it like StartVM1() and i works but I may expand to 15 later and want it to adjust accordingly. To clarify I can start the VM with a Case statement later and have but I can't wrap my head around the variable that in itself is a variable. I hope I didn't confuse anyone. Maybe im making this WAY more complicated than it is or needs to be. #!/bin/bash . "${BASH_SOURCE%/*}/configlocation.conf" . $Configuration checkVM1=$(nc -vvz $FailOverVM1IP $FailOverVM1Port 2>&1) VMCount=$(grep "FailOverVM.Name" /media/VirtualMachines/Current/Configuration.conf | wc -l) pinggateway=$(ping -q -w 1 -c 1 `ip r | grep default | cut -d ' ' -f 3` > /dev/null && echo ok || echo error = error) STATE="error"; while [ $STATE == "error" ]; do #do a ping and check that its not a default message or change to grep for something else STATE=$(ping -q -w 1 -c 1 `ip r | grep default | cut -d ' ' -f 3` > /dev/null && echo ok || echo error) #sleep for 2 seconds and try again sleep 2 done for i $VMCount; do if [ -z "$FailOverVM$VMCountName" ]; echo "$FailOverVM$VMCountName" fi done StartVM1(){ if [[ $checkVM1 = "Connection to $FailOverVM1IP $FailOverVM1Port port [tcp/*] succeeded!" ]]; then echo '$FailOverVM1Name is up' else echo "$FailOverVM1Name down" su -c 'VBoxManage startvm $FailOverVM1Name -type headless' vbox fi } Where I'v gotten so far in a test script #!/bin/bash . "${BASH_SOURCE%/*}/configlocation.conf" . 
$Configuration Pre='$FailOverVM' post="FailOverVM" name="Name" VMCount=$(grep "FailOverVM.Name" $Configuration | wc -l) #Count entires in config file wirn FailOverVM*Name while [[ $i -le $VMCount ]] do #if [ -z $Pre$i"Name" ];then #If the variable $FailOverVM*Name is not blank $post$i=$Pre$i$Name echo "$post$i" #print it #else # echo $Pre$i"Name" "was empty" #fi ((i = i + 1)) done Output: ./net2.sh: line 11: FailOverVM=$FailOverVM: command not found FailOverVM ./net2.sh: line 11: FailOverVM1=$FailOverVM1: command not found FailOverVM1 ./net2.sh: line 11: FailOverVM2=$FailOverVM2: command not found FailOverVM2 ./net2.sh: line 11: FailOverVM3=$FailOverVM3: command not found FailOverVM3 ./net2.sh: line 11: FailOverVM4=$FailOverVM4: command not found FailOverVM4 ./net2.sh: line 11: FailOverVM5=$FailOverVM5: command not found FailOverVM5 ./net2.sh: line 11: FailOverVM6=$FailOverVM6: command not found FailOverVM6 The problem here is there is no $FailOverVM without a number beside it, and what is up with "command not found FailOverVM5" (or any other number) I didn't know I issued one. But the biggest problem is its not grabbing the variable $FailOVerVM* form the config file. I need that for the func loop. New modified script with @dave_thompson_085 help #!/bin/bash . "${BASH_SOURCE%/*}/configlocation.conf" . 
$Configuration for i in ${!FailOverName[@]}; do selip=FailOverIP[${i}] selport=FailOverPort[${i}] checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>/devnull) echo ${!selip} echo ${!selport} echo FailOverName[${i}] done StartVM() { # first argument to a function is accessed as $1 or ${1} selname=FailOverName[${i}] if [[ checkVM[$i] =~ 'succeeded' ]]; then # only need to check the part that matters echo number $i name ${!selname} already up else echo starting number $i name ${!selname} echo su -c "VboxManager startvm '${!selname}' -headless" vbox # note " because ' $ fi } #done StartVM 1 # and StartVM 2 # etc Output root@6120:~/.scripts# ./net2.sh -v 192.168.1.6 32400 FailOverName[1] 192.168.1.5 80 FailOverName[2] 192.168.1.7 80 FailOverName[3] 192.168.1.1 1030 FailOverName[4] starting number 4 name finch su -c VboxManager startvm 'finch' -headless vbox starting number 4 name finch su -c VboxManager startvm 'finch' -headless vbox root@6120:~/.scripts# Config file # FailOverVM1IP='192.168.1.6' FailOverVM1Port='32400' FailOverVM1Name='Plex Server' FailOverVM1NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' FailOverVM1LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' FailOverVM2IP='192.168.1.7' FailOverVM2Port='32402' FailOverVM1Name='Plex Server2' FailOverVM2NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' FailOverVM2LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' FailOverVM3IP='192.168.1.8' FailOverVM3Port='32403' FailOverVM3Name='Plex Server3' FailOverVM3NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' FailOverVM3LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' FailOverVM4IP='192.168.1.9' FailOverVM4Port='32404' FailOverVM4Name='Plex Server4' FailOverVM4NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' FailOverVM4LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' FailOverVM5IP='192.168.1.10' 
FailOverVM5Port='32405' FailOverVM5Name='Plex Server5' FailOverVM5NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' FailOverVM5LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' FailOverIP[1]=192.168.1.6 FailOverName[1]=robin FailOverPort[1]=32400 FailOverIP[2]=192.168.1.5 FailOverName[2]=bluejay FailOverPort[2]=80 FailOverIP[3]=192.168.1.7 FailOverName[3]=sparrow FailOverPort[3]=80 FailOverIP[4]=192.168.1.1 FailOverName[4]=finch FailOverPort[4]=1030 VM1LogDirLogDir='/media/VirtualMachines/Logs/Plextstart' PlexServerIP='192.168.1.6' PlexPort='32400' mydate=`date '+%Y-%m-%d_%H%M'` rsyncfrom= NASPlexvmHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk' LocalPlexvmDHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk' PlexVMname='Plex Server' PlexStartLogDir='/media/VirtualMachines/Logs/Plextstart' RouterIp='192.168.1.1' So it sees all the vms but is only executing the last and twice at that. #!/bin/bash . "${BASH_SOURCE%/*}/configlocation.conf" . $Configuration for i in ${!FailOverName[@]}; do selip=FailOverIP[${i}] selport=FailOverPort[${i}] checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>&1) echo ${!selip} echo ${!selport} #echo ${i} #done StartVM() { # first argument to a function is accessed as $1 or ${1} selname=FailOverName[${i}] if [[ $checkVM[$i] =~ 'succeeded' ]]; then # only need to check the part that matters echo number $i name ${!selname} already up else echo starting number $i name ${!selname} echo su -c "VboxManager startvm '${!selname}' -headless" vbox # note " because ' prevents the variable expansion fi } StartVM done Note: Checking of if VM is already running doesn't function yet but that wasnt the question I asked so this meets the criteria.
Asides: you can eliminate the wc -l by using grep -c FailOverVM.Name configfile. But if you want to use numbers over 9 decimal (not e.g. 123456789abcdef) your pattern needs to be FailOverVM[0-9][0-9]?Name or FailOverVM[0-9]{1,2}Name in -E extended mode. Also for i $VMCount is a syntax error; I assume you mean for i in $(seq $VMCount).

You can read a variable indirectly in bash with ! (bang) and another variable containing the name:

for i in $(seq $VMCount); do
    selname=FailOverVM${i}Name
    selip=FailOverVM${i}IP
    selport=FailOverVM${i}Port
    echo name ${!selname} is IP ${!selip} and port ${!selport}
done

which is less of a blunderbuss than eval but still clumsy. But you cannot set a variable this way, so you should use an array for that. And you cannot do this for functions, so instead write one function that accepts an argument to tell it which (set of) variables to use:

for i in $(seq $VMCount); do
    selip=FailOverVM${i}IP
    selport=FailOverVM${i}Port
    checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>&1)
done

StartVM() {
    # first argument to a function is accessed as $1 or ${1}
    selname=FailOverVM${1}Name
    if [[ ${checkVM[$1]} =~ succeeded ]]   # only need to check the part that matters
    then
        echo number $1 name ${!selname} already up
    else
        echo starting number $1 name ${!selname}
        su -c "VBoxManage startvm '${!selname}' -type headless" vbox   # note " because ' prevents the variable expansion
    fi
}
...
StartVM 1
# and StartVM 2
# etc

OTOH if you can change the config to use array variables for everything like this

FailOverIP[1]=10.255.1.1
FailOverName[1]=robin
FailOverIP[2]=10.255.2.2
FailOverName[2]=bluejay

etc., that would make everything much simpler. And then you don't need to re-grep the file to count the entries; you can just use e.g. ${#FailOverName[@]}
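To make that last suggestion concrete, here is a sketch of how small the loop becomes once the config uses arrays (IPs and names taken from the question's updated config; the nc check is left out):

```shell
FailOverIP[1]=192.168.1.6; FailOverName[1]=robin;   FailOverPort[1]=32400
FailOverIP[2]=192.168.1.5; FailOverName[2]=bluejay; FailOverPort[2]=80

# "${!FailOverName[@]}" expands to the indices actually in use,
# so no seq/wc/grep counting is needed at all.
for i in "${!FailOverName[@]}"; do
    echo "VM $i: ${FailOverName[$i]} at ${FailOverIP[$i]}:${FailOverPort[$i]}"
done
echo "total: ${#FailOverName[@]}"
# the last line prints: total: 2
```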
Func name as variable in loop
1,475,657,579,000
I am new to programming with bash scripts. Here is my problem: I am going to open a sort of data whose file name includes the date (format: file_yyyymmddhh.nc). There are some requirements:

mm is from 01 to 12. This must be a two-digit integer.
dd is from 01 to 28, 30, or 31, depending on what month it is.

I tried to solve the problem with an if structure and loops. I know that I could use something like this so that I can apply ${dd} to my filename:

if [${mm} == 01] ; then
    for ((i=1; i<=31; i=i+1))
    do
        ${dd}=i
    done
fi

But I don't know how to make ${dd} a 2-digit integer, especially when ${dd} <= 9. Is there any way to fix the code above?
You can use printf to format your numbers. Here the %02d denotes a two digit integer with leading zeros if appropriate. dd=$(printf "%02d" $i) You can extend this so that if $y, $m, $d, and $h contain your year, month, day, and hour numbers the construct could become this file=$(printf "file_%04d%02d%02d%02d.nc" $y $m $d $h) While we're here, your construct ${dd}=i is incorrect. The $ symbol is prefixed in front of a variable name to get that variable's value (in your case, i is the variable and $i equates to its value). So in your case you would instead have written dd=$i.
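A quick sanity check of the padding (the 2 in %02d is the field width and the 0 the pad character, so wider numbers pass through unchanged):

```shell
for i in 1 9 10 28; do
    printf 'dd=%02d\n' "$i"
done
# prints:
# dd=01
# dd=09
# dd=10
# dd=28
```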
Question about if structure and loops
1,475,657,579,000
I have created a script that will create a few directories and then organize other files from one directory by moving them into specific sub-directories based on their extension (i.e. .gif in media, .jpg in pictures). Now I have to check those directories to make sure they contain only those files with the proper extension. Below is what I've come up with so far with comments explaining where I'm going with this: #!/bin/bash #iterate over each DIRECTORY once #while in DIRECTORY list the files it contains #check the extension of containe files with given list of EXTENSIONS #if file has wrong extension print error message and stop loop #if all files are in corret DIRECTORY print confirmation message echo "Checking: $DIRECTORY for:$EXTENSIONS" for (( i = 0; i < 4; i++ )); do if [[ i -eq 1 ]]; then DIRECTORY="documents" EXTENSIONS="*.txt *.doc *.docx" #list files and check EXTENSIONS elif [[ i -eq 2 ]]; then DIRECTORY="media" EXTENSIONS="*.gif" #if I equals 2 then look into media DIRECTORY #list files and check EXTENSIONS elif [[ i -eq 3 ]]; then DIRECTORY="pictures" EXTENSIONS="*.jpg *.jpeg" #if I quals 3 then look into pictures DIRECTORY #list files and check EXTENSIONS else DIRECTORY="other" EXTENSIONS="*" #statements fi done
How about you just print all the files that don't match your extensions? find documents -type f ! \( -name \*.txt -o -name \*.doc -o -name \*.docx \) find media -type f ! -name \*.gif find pictures -type f ! \( -name \*.jpg -o -name \*.jpeg \) Why do you need to check other at all if anything is allowed in there? By the way, Unix convention is: "no output = good news". So the above commands just print files that don't match the extensions specified; if all is well they won't print anything. P.S.: This is a good example of the evolution of a programmer. ;)
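If the assignment also wants an explicit confirmation message, you can count the offenders; a self-contained sketch (sample tree created on the fly, file names invented):

```shell
dir=$(mktemp -d)                    # throwaway stand-in for the real tree
mkdir "$dir/documents"
touch "$dir/documents/notes.txt" "$dir/documents/stray.pdf"

# Count files that do NOT match any allowed extension:
bad=$(find "$dir/documents" -type f \
      ! \( -name '*.txt' -o -name '*.doc' -o -name '*.docx' \) | wc -l)
if [ "$bad" -eq 0 ]; then
    echo "documents: all files have the correct extension"
else
    echo "documents: $bad file(s) with a wrong extension"
fi
rm -rf "$dir"
```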
Check if the files in a specific directory have the proper extension?
1,475,657,579,000
I use SCP a lot to transfer log files from servers to a jumpbox where I can analyse and troubleshoot etc. If I have a cluster of servers and I want to create a set of subdirectories, I do it like this:

mkdir -p /foo/bar-nnn/{mailserver,dnsserver,minecraftserver,syslogserver}

Let's say 'bar-nnn' is a reference of sorts, be that a ticket number or incident etc. What I want to be able to do is run a script or a shell command which will prompt me for what 'bar-nnn' should be and then go and create all the subfolders required. I'm pretty sure I'm going to need a for loop but can't quite get my head around it.
Try this: IFS= read -r -p "Folder name: " dir mkdir -p "/foo/${dir}/"{mailserver,dnsserver,minecraftserver,syslogserver}
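If you want to guard against an empty answer and keep the subdirectory list in one place, a function version might look like this sketch (make_tree and the demo base directory are my inventions; the real base would be /foo, and the read prompt stays as above):

```shell
make_tree() {   # usage: make_tree <base-dir> <reference>
    local base=$1 ref=$2 d
    [ -n "$ref" ] || { echo "no reference given" >&2; return 1; }
    for d in mailserver dnsserver minecraftserver syslogserver; do
        mkdir -p "$base/$ref/$d"
    done
}

base=$(mktemp -d)          # stand-in for /foo in this demo
make_tree "$base" bar-123
ls "$base/bar-123"         # shows the four subdirectories
rm -rf "$base"
```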
Create subdirectories under a parent but prompt for the name of the parent