date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,386,277,472,000 |
Given a file like:
a
b
c
How do I get an output like:
a 0cc175b9c0f1b6a831c399e269772661
b 92eb5ffee6ae2fec3ad71c777531578f
c 4a8a08f09d37b73795649038408b5f33
in an efficient way? (Input is 80 GB)
|
This could just be a oneliner in perl:
head 80gb | perl -MDigest::MD5=md5_hex -nlE'say"$_\t".md5_hex($_)'
a 0cc175b9c0f1b6a831c399e269772661
b 92eb5ffee6ae2fec3ad71c777531578f
c 4a8a08f09d37b73795649038408b5f33
d 8277e0910d750195b448797616e091ad
e e1671797c52e15f763380b45e841ec32
f 8fa14cdd754f91cc6554c9e71929cce7
g b2f5ff47436671b6e533d8dc3614845d
h 2510c39011c5be704182423e3a695e91
i 865c0c0b4ab0e063e5caa3387c1a8741
j 363b122c528f54df4a0446b6bab05515
If you need to store the output and want a nice progress bar while chewing this huge chunk:
sudo apt install pv #ubuntu/debian
sudo yum install pv #redhat/fedora
pv 80gb | perl -MDigest::MD5=md5_hex -nlE'say"$_\t".md5_hex($_)' | gzip -1 > 80gb-result.gz
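For a spot-check of the expected output, the same per-line digest can be produced with coreutils alone. This is only a sketch for verification: it forks one md5sum per line, so it is far too slow for 80 GB, but it confirms the hashes the perl one-liner should emit:

```shell
printf 'a\nb\nc\n' |
while IFS= read -r line; do
  # md5sum hashes stdin; cut keeps only the hex digest field
  printf '%s\t%s\n' "$line" "$(printf '%s' "$line" | md5sum | cut -d' ' -f1)"
done
```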
| Compute md5sum for each line in a file |
1,386,277,472,000 |
Suppose we want to dispatch jobs to a collection of servers using GNU parallel. What would happen if one of the servers dies (power failure, thermal shutdown, ...) while busy executing a job? Will GNU parallel just dispatch the same job to another server, or will that job be lost forever?
|
It will be lost forever unless you use --retries, in which case it will be retried on another server. Also have a look at --filter-hosts to remove hosts that are down.
| GNU parallel ssh jobs: What happens to an incomplete job if the server dies? |
1,386,277,472,000 |
ls *.txt | parallel 'echo Starting on file {}; mkdir {.}; cd {.}; longCMD3 ../{} > /dev/null; echo Finished file {}'
This one-liner partially works, except that longCMD3 takes about 3 minutes, yet the first and second echo commands are printed almost at the same time.
I tried putting in
wait
before the final echo, but that made no difference.
How can I ensure that the final echo is only printed once longCMD3 is complete?
Here's an example
Assume I only have 4 cores:
ls
foo1.txt foo2.txt foo3.txt foo4.txt foo5.txt foo6.txt
What I expected:
Starting on file foo1.txt
Starting on file foo2.txt
Starting on file foo3.txt
Starting on file foo4.txt
then at least 2 minutes should pass for longCMD3 to finish on one of the files
Finished file foo1.txt
Starting on file foo5.txt
But what I get is:
Starting on file foo1.txt
Finished file foo1.txt
Starting on file foo2.txt
Finished file foo2.txt
Starting on file foo3.txt
Finished file foo3.txt
Starting on file foo4.txt
Finished file foo4.txt
This continues for all 6 files. And the Start and Finished statements are printed simultaneously for each file. But a few minutes are expended between each file.
|
For each file, the commands echo Starting on file foo.txt, mkdir foo, cd foo, longCMD3 ../foo.txt > /dev/null and echo Finished file foo.txt run sequentially, i.e. each command starts after the previous one has finished.
The commands for different files are interspersed. By default, the parallel command runs as many jobs in parallel as you have cores.
However, the output of the commands is not interspersed by default. This is why you don't see a bunch of “Starting” lines and then later the corresponding “Finished” lines: parallel groups the output of each job together, buffering it until the job has finished. See the description of the --group option in the manual. Grouping doesn't make sense in your case, so turn it off with the --ungroup (-u) option, or switch to line grouping with --line-buffer.
Some other corrections:
Parsing ls is not reliable. Pass the file names to parallel directly.
If mkdir fails, you shouldn't proceed. If any command fails, you should arrange for the job to fail. An easy way to do that is to start the job script with set -e.
parallel --line-buffer 'set -e; echo Starting on file {}; mkdir {.}; cd {.}; longCMD3 ../{} > /dev/null; echo Finished file {}' ::: *.txt
| Why does my parallel command print “Starting” and “Finished” at the same time? |
1,386,277,472,000 |
I need to run grep on a couple of million files. Therefore I tried to speed it up, following the two approaches mentioned here: xargs -P -n and GNU parallel. I tried this on a subset of my files (9026 in number), and this was the result:
With xargs -P 8 -n 1000, very fast:
$ time find tex -maxdepth 1 -name "*.json" | \
xargs -P 8 -n 1000 grep -ohP "'pattern'" > /dev/null
real 0m0.085s
user 0m0.333s
sys 0m0.058s
With parallel, very slow:
$ time find tex -maxdepth 1 -name "*.json" | \
parallel -j 8 grep -ohP "'pattern'" > /dev/null
real 0m21.566s
user 0m22.021s
sys 0m18.505s
Even sequential xargs is faster than parallel:
$ time find tex -maxdepth 1 -name "*.json" | \
xargs grep -ohP 'pattern' > /dev/null
real 0m0.242s
user 0m0.209s
sys 0m0.040s
xargs -P n does not work for me because the output from all the processes gets interleaved, which does not happen with parallel. So I would like to use parallel without incurring this huge slowdown.
Any ideas?
UPDATE
Following the answer by Ole Tange, I tried parallel -X, the results are here, for completeness:
$ time find tex -maxdepth 1 -name "*.json" | \
parallel -X -j 8 grep -ohP "'pattern'" > /dev/null
real 0m0.563s
user 0m0.583s
sys 0m0.110s
Fastest solution: Following the comment by @cas, I tried to grep with -H option (to force printing the filenames), and sorting. Results here:
time find tex -maxdepth 1 -name '*.json' -print0 | \
xargs -0r -P 9 -n 500 grep --line-buffered -oHP 'pattern' | \
sort -t: -k1 | cut -d: -f2- > /dev/null
real 0m0.144s
user 0m0.417s
sys 0m0.095s
|
Try parallel -X. As written in the comments, the overhead of starting a new shell and opening files for buffering for each argument is probably the cause.
Be aware that GNU Parallel will never be as fast as xargs because of that. Expect an overhead of 10 ms per job. With -X this overhead is less significant, as you process more arguments in one job.
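The effect of batching can be illustrated with plain xargs, with echo standing in for grep: each invocation receives a batch of file names rather than a single one, so the per-process startup cost is amortised over the batch. This is the same idea -X exploits:

```shell
# Four "files", two per invocation: two processes are spawned instead of four
printf '%s\n' f1.json f2.json f3.json f4.json | xargs -n2 echo batch:
```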
| GNU parallel excessively slow |
1,386,277,472,000 |
I have a bash script that takes as input three arrays with equal length: METHODS, INFILES and OUTFILES.
This script will let METHODS[i] solve problem INFILES[i] and save the result to OUTFILES[i], for all indices i (0 <= i <= length-1).
Each element in METHODS is a string of the form:
$HOME/program/solver -a <method>
where solver is a program that can be called as follows:
$HOME/program/solver -a <method> -m <input file> -o <output file> --timeout <timeout in seconds>
The script solves all the problems in parallel and set the runtime limit for each instance to 1 hour (some methods can solve some problems very quickly though), as follows:
#!/bin/bash
source METHODS
source INFILES
source OUTFILES
start=`date +%s`
## Solve in PARALLEL
for index in ${!OUTFILES[*]}; do
(alg=${METHODS[$index]}
infile=${INFILES[$index]}
outfile=${OUTFILES[$index]}
${!alg} -m $infile -o $outfile --timeout 3600) &
done
wait
end=`date +%s`
runtime=$((end-start))
echo "Total runtime = $runtime (s)"
echo "Total number of processes = ${#OUTFILES[@]}"
In the above I have length = 619. I submitted this bash to a cluster with 70 available processors, which should take at maximum 9 hours to finish all the tasks. This is not the case in reality, however. When using the top command to investigate, I found that only two or three processes are running (state = R) while all the others are sleeping (state = D).
What am I doing wrong please?
Furthermore, I have learnt that GNU parallel would be much better for running parallel jobs. How can I use it for the above task?
Thank you very much for your help!
Update: My first try with GNU parallel:
The idea is to write all the commands to a file and then use GNU parallel to execute them:
#!/bin/bash
source METHODS
source INFILES
source OUTFILES
start=`date +%s`
## Write to file
firstline=true
for index in ${!OUTFILES[*]}; do
(alg=${METHODS[$index]}
infile=${INFILES[$index]}
outfile=${OUTFILES[$index]}
if [ "$firstline" = true ] ; then
echo "${!alg} -m $infile -o $outfile --timeout 3600" > commands.txt
firstline=false
else
echo "${!alg} -m $infile -o $outfile --timeout 3600" >> commands.txt
fi
done
## Solve in PARALLEL
time parallel :::: commands.txt
end=`date +%s`
runtime=$((end-start))
echo "Total runtime = $runtime (s)"
echo "Total number of processes = ${#OUTFILES[@]}"
What do you think?
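As a side note on the loop above: the first-line flag can be avoided entirely by redirecting the loop as a whole, which truncates the file once and then appends each echo. A minimal sketch with placeholder arrays (ignoring the ${!alg} indirection of the real script):

```shell
# Placeholder data standing in for the sourced METHODS/INFILES/OUTFILES arrays
METHODS=('solver -a m1' 'solver -a m2')
INFILES=(in1 in2)
OUTFILES=(out1 out2)
cmds=$(mktemp)
for index in "${!OUTFILES[@]}"; do
  echo "${METHODS[$index]} -m ${INFILES[$index]} -o ${OUTFILES[$index]} --timeout 3600"
done > "$cmds"     # one redirect for the whole loop: no first-line special case
cat "$cmds"
```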
Update 2: I'm using GNU parallel and having the same problem. Here's the output of top:
top - 02:05:25 up 178 days, 8:16, 2 users, load average: 62.59, 59.90, 53.29
Tasks: 596 total, 7 running, 589 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.9%us, 0.9%sy, 0.0%ni, 63.3%id, 22.9%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 264139632k total, 260564864k used, 3574768k free, 4564k buffers
Swap: 268420092k total, 80593460k used, 187826632k free, 53392k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28542 khue 20 0 7012m 5.6g 1816 R 100 2.2 12:50.22 opengm_min_sum
28553 khue 20 0 11.6g 11g 1668 R 100 4.4 17:37.37 opengm_min_sum
28544 khue 20 0 13.6g 8.6g 2004 R 100 3.4 12:41.67 opengm_min_sum
28549 khue 20 0 13.6g 8.7g 2000 R 100 3.5 2:54.36 opengm_min_sum
28551 khue 20 0 11.6g 11g 1668 R 100 4.4 19:48.36 opengm_min_sum
28528 khue 20 0 6934m 4.9g 1732 R 29 1.9 1:01.13 opengm_min_sum
28563 khue 20 0 7722m 6.7g 1680 D 2 2.7 0:56.74 opengm_min_sum
28566 khue 20 0 8764m 7.9g 1680 D 2 3.1 1:00.13 opengm_min_sum
28530 khue 20 0 5686m 4.8g 1732 D 1 1.9 0:56.23 opengm_min_sum
28534 khue 20 0 5776m 4.6g 1744 D 1 1.8 0:53.46 opengm_min_sum
28539 khue 20 0 6742m 5.0g 1732 D 1 2.0 0:58.95 opengm_min_sum
28548 khue 20 0 5776m 4.7g 1744 D 1 1.9 0:55.67 opengm_min_sum
28559 khue 20 0 8258m 7.1g 1680 D 1 2.8 0:57.90 opengm_min_sum
28564 khue 20 0 10.6g 10g 1680 D 1 4.0 1:08.75 opengm_min_sum
28529 khue 20 0 5686m 4.4g 1732 D 1 1.7 1:05.55 opengm_min_sum
28531 khue 20 0 4338m 3.6g 1724 D 1 1.4 0:57.72 opengm_min_sum
28533 khue 20 0 6064m 5.2g 1744 D 1 2.1 1:05.19 opengm_min_sum
(opengm_min_sum is the solver above)
I guess that some processes consume so many resources that the others have nothing left and enter the D state?
|
Summary of the comments: the machine is fast but doesn't have enough memory to run everything in parallel. In addition, the problem needs to read a lot of data and the disk bandwidth is not enough, so the CPUs are idle most of the time, waiting for data.
Rearranging the tasks helps.
It has not yet been investigated whether compressing the data can improve the effective disk I/O bandwidth.
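One quick way to test the compression idea, assuming the solver can read its input from a stream rather than a file (file names here are made up for illustration):

```shell
tmp=$(mktemp)
printf 'problem data\n' > "$tmp"
gzip -1 -c "$tmp" > "$tmp.gz"   # fast compression level, done once per input file
gunzip -c "$tmp.gz"             # the solver would consume this decompressed stream
```

If the inputs compress well, the disk reads shrink accordingly, at the cost of some CPU for gunzip.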
| BASH: parallel run |
1,386,277,472,000 |
I have a 0-byte-delimited file of records.
Record 1, Line 1
Record 1, Line 2
[zero byte]
Record 2, Line 1
Record 2, Line 2
I'd like to run the "process.sh" command once for each record, with the record as standard input:
bash process-one-record-stdin.sh <record-contents
Can I do this with xargs, parallel, or some other tool? (I know how using bash scripting, but I'd prefer to use built-in tools where possible)
Motivation:
magic-xargs-type-command-here -0 all-records.txt -- xargs -d"\n" -- bash process-one-record-arguments.sh
|
If you have GNU Parallel you can do this:
parallel --rrs --recend '\0' -N1 --pipe bash process-one-record-stdin.sh < all-records.txt
All new computers have multiple cores, but most programs are serial in nature and will therefore not use the multiple cores. However, many tasks are extremely parallelizable:
Run the same program on many files
Run the same program for every line in a file
Run the same program for every block in a file
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU.
GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
| xargs, records, and standard input |
1,386,277,472,000 |
GNU Parallel is outputting hidden directories as follows using the --results parameter.
What command do I use on Ubuntu to rename all of these directories so that they are no longer hidden? The directories are called:
'.\_ValidateAll.sh GL 170'/
'.\_ValidateAll.sh GL 190'/
'.\_ValidateAll.sh GL 220'/
'.\_ValidateAll.sh GL 355'/
'.\_ValidateAll.sh GL 357'/
'.\_ValidateAll.sh GL 359'/
'.\_ValidateAll.sh GL 361'/
'.\_ValidateAll.sh GL 363'/
Actually when I do a cat on the directory, I don't see the single quotes
vmdovs@ubuntu:/mnt/out/1$ cat
GL170/ .\_ValidateAll.sh GL 357/ .\_ValidateAll.sh GL 390/ .\_ValidateAll.sh GL 470/ .\_ValidateAll.sh GL 570/
rename.sh .\_ValidateAll.sh GL 359/ .\_ValidateAll.sh GL 400/ .\_ValidateAll.sh GL 480/ .\_ValidateAll.sh GL 572/
.\_ValidateAll.sh GL 190/ .\_ValidateAll.sh GL 361/ .\_ValidateAll.sh GL 410/ .\_ValidateAll.sh GL 500/ .\_ValidateAll.sh GL 574/
.\_ValidateAll.sh GL 220/ .\_ValidateAll.sh GL 363/ .\_ValidateAll.sh GL 420/ .\_ValidateAll.sh GL 530/ .\_ValidateAll.sh GL 590/
.\_ValidateAll.sh GL 355/ .\_ValidateAll.sh GL 368/ .\_ValidateAll.sh GL 440/ .\_ValidateAll.sh GL 540/ .\_ValidateAll.sh GL 710/
Also cd can access the directory as follows
cd .\\_ValidateAll.sh\ GL\ 190/
|
If the only issue is that the directories are hidden, you can just remove the . from the beginning of their name to make them unhidden. For example, using perl-rename (called rename on Ubuntu):
rename 's/^\.//' '.\_Validate'*
Or, with just shell tools:
for dir in '.\_Validate'*; do echo mv "$dir" "${dir#.}"; done
Both of these leave you with horrible directory names though, with spaces and slashes and other unsavory things. Since you're renaming, you may as well rename to something sane:
rename 's/^\.\\//; s/\s+/_/g' '.\_Validate'*
That will result in:
$ ls -d _*
_ValidateAll.sh_GL_100 _ValidateAll.sh_GL_107 _ValidateAll.sh_GL_114
_ValidateAll.sh_GL_101 _ValidateAll.sh_GL_108 _ValidateAll.sh_GL_115
_ValidateAll.sh_GL_102 _ValidateAll.sh_GL_109 _ValidateAll.sh_GL_116
_ValidateAll.sh_GL_103 _ValidateAll.sh_GL_110 _ValidateAll.sh_GL_117
_ValidateAll.sh_GL_104 _ValidateAll.sh_GL_111 _ValidateAll.sh_GL_118
_ValidateAll.sh_GL_105 _ValidateAll.sh_GL_112 _ValidateAll.sh_GL_119
_ValidateAll.sh_GL_106 _ValidateAll.sh_GL_113 _ValidateAll.sh_GL_120
IMPORTANT: note that I am not checking for file name collision. If you rename one of these to something that already exists, then you will overwrite the existing file.
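As an aside, stripping just the leading dot can be done with the `#` form of parameter expansion, which removes the shortest matching prefix and leaves every other dot in the name alone. A standalone illustration with a made-up name:

```shell
dir='.\_ValidateAll.sh GL 170'
printf '%s\n' "${dir#.}"    # removes only the leading dot, nothing else
```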
| Rename all hidden directories created by GNU parallel |
1,386,277,472,000 |
I am copying files from machineB and machineC onto machineA, where I am running the below shell script.
If a file is not on machineB then it should definitely be on machineC, so I try copying it from machineB first; if that fails, I try copying the same file from machineC.
I am copying the files in parallel using GNU Parallel and it is working fine. Currently I am copying 10 files in parallel.
Below is my shell script which I have -
#!/bin/bash
export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
readonly FILERS_LOCATION=(machineB machineC)
export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
export dir3=/testing/snapshot/20140103
find "$PRIMARY" -mindepth 1 -delete
find "$SECONDARY" -mindepth 1 -delete
do_Copy() {
el=$1
PRIMSEC=$2
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
}
export -f do_Copy
parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
wait
echo "All files copied."
Problem Statement:-
With the above script at some point (not everytime) I am getting this exception -
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
And I guess the error is typically caused by too many ssh/scp connections starting at the same time. That leads me to believe /etc/ssh/sshd_config's MaxStartups and MaxSessions are set too low.
But my question is on which server it is pretty low? machineB and machineC or machineA? And on what machines I need to increase the number?
On machineA this is what I can find and they all are commented out -
root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
#MaxStartups 10:30:60
root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config
And on machineB and machineC this is what I can find -
[root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
#MaxStartups 10
[root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
#MaxSessions 10
|
If I understand this code correctly I believe this is your issue:
do_Copy() {
el=$1
PRIMSEC=$2
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data \
$PRIMSEC/. || \
scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data \
$PRIMSEC/.
}
export -f do_Copy
parallel --retries 10 -j 10 do_Copy {} \
$PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
parallel --retries 10 -j 10 do_Copy {} \
$SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
wait
You're running 20 scp's in parallel, but machines B & C can only handle 10 each:
#MaxStartups 10
I'd dial back those parallel lines to say 5 and see if that fixes your issue. If you want to increase the number of MaxStartups on machines B & C you could do that as well:
MaxStartups 15
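MaxStartups also accepts a start:rate:full triple for probabilistic rate-limiting of unauthenticated connections; a sketch of what a raised limit might look like in sshd_config (the values are illustrative, not a recommendation):

```
# /etc/ssh/sshd_config (illustrative values)
MaxStartups 30:30:100   # start refusing 30% of new unauthenticated connections at 30, all at 100
MaxSessions 30          # sessions allowed per network connection
```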
And make sure to restart the sshd service on both B & C:
$ sudo service sshd restart
Confirming config file modifications
You can double check that they're working by running sshd in test mode via the -T switch.
$ sudo /usr/sbin/sshd -T | grep -i max
maxauthtries 6
maxsessions 10
clientalivecountmax 3
maxstartups 10:30:100
| MaxStartups and MaxSessions configurations parameter for ssh connections? |
1,386,277,472,000 |
Possible Duplicate:
How to run a command when a directory's contents are updated?
I'm trying to write a simple etl process that would look for files in a directory each minute, and if so, load them onto a remote system (via a script) and then delete them.
Things that complicate this: the loading may take more than a minute.
To get around that, I figured I could move all files into a temporary processing directory, act on them there, and then delete them from there. Also, in my attempt to get better at command line scripting, I'm trying for a more elegant solution. I started out by writing a simple script to accomplish my task, shown below:
#!/bin/bash
for i in $(find /home/me/input_files/ -name "*.xml"); do
FILE=$i;
done;
BASENAME=`basename $FILE`
mv $FILE /tmp/processing/$BASENAME
myscript.sh /tmp/processing/$BASENAME other_inputs
rm /tmp/processing/$BASENAME
This script removes the file from the processing directory almost immediately (which stops the duplicate processing problem), cleans up after itself at the end, and allows the file to be processed in between.
However, this is U/Linux after all. I feel like I should be able to accomplish all this in a single line by piping and moving things around instead of a bulky script to maintain.
Also, using parallel to concurrent process this would be a plus.
Addendum: some sort of FIFO queue might be the answer to this as well. Or maybe some other sort of directory watcher instead of a cron. I'm open for all suggestions that are more elegant than my little script. Only issue is the files in the "input directory" are touched moments before they are actually written to, so some sort of ! -size -0 would be needed to only handle real files.
|
It sounds as if you should simply write a small processing script and use GNU Parallel for parallel processing:
http://www.gnu.org/software/parallel/man.html#example__gnu_parallel_as_dir_processor
So something like this:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f my_dir |
parallel 'mv {} /tmp/processing/{/};myscript.sh /tmp/processing/{/} other_inputs; rm /tmp/processing/{/}'
Watch the intro videos to learn more: http://pi.dk/1
Edit:
It is required that myscript.sh can deal with zero-length files (e.g. ignore them).
If you can avoid the touch you can even do:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f my_dir |
parallel myscript.sh {} other_inputs
Installing GNU Parallel is as easy as:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
| process files in a directory as they appear [duplicate] |
1,386,277,472,000 |
parallel from moreutils is a great tool for, among other things, distributing m independent tasks evenly over n CPUs. Does anybody know of a tool that accomplishes the same thing for multiple machines? Such a tool of course wouldn't have to know about the concept of multiple machines or networking or anything like that -- I'm just talking about distributing m tasks into N clusters, where in cluster i, N_i tasks are run in parallel.
Today I use my own bash scripts to accomplish the same thing, but a more streamlined and clean tool would be great. Does anybody know of any?
|
GNU Parallel does that and more (using ssh).
It can even deal with mixed speeds of machines, as it simply has a queue of jobs that are started on the list of machines (e.g. one per CPU core). When one job finishes, another one is started.
So it does not divide the jobs into clusters before starting, but does it dynamically.
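The dynamic-queue behaviour can be imitated in plain bash as a local-only sketch (GNU Parallel adds the ssh dispatch, per-core defaults, and much more). `wait -n` (bash 4.3+) blocks until any one background job finishes, so a new task starts as soon as a slot frees up:

```shell
max_jobs=2
for task in 1 2 1 1; do            # stand-in task durations in seconds
  while [ "$(jobs -pr | wc -l)" -ge "$max_jobs" ]; do
    wait -n                        # block until some running job finishes
  done
  sleep "$task" &                  # launch the next task into the freed slot
done
wait                               # drain the remaining jobs
echo 'all tasks done'
```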
Watch the intro videos to learn more: http://pi.dk/1
| Multi-machine tool in the spirit of moreutils' `parallel`? |
1,386,277,472,000 |
I've been trying to process the output of find with parallel, which in turn invoked a shell (some textual substitutions were needed). I observed some strange behaviour, which I cannot really explain to myself.
In each directory there are a bunch of files, call them file1.xtc, file2.xtc. Some of them have names such as file1.part0002.xtc, etc. If the file passed from find had the *.part000x.* name, I need to remove the *.part000x.* bit, such that the resultant command is something like
command -f file1.part0001.xtc -s file1.tpr
I used find and parallel to that effect but parallel's substitutions (in particular, the {.} bit) are not quite sufficient (they remove the .xtc extension, leaving the .part0001 alone), so here's a command I used to check my output:
find 1st 2nd 3rd -name '*.xtc' -print0 | parallel -0 sh -c 'name=""; name="{.}"; echo {.} ${name%.*}.tpr'
If I use the above command, first declaring name and assigning an empty string to it (or anything else for that matter), the result is
file1.part0001 file1.tpr
as required (those are the names I need to use for my command). If, however, I run this
find 1st 2nd 3rd -name '*.xtc' -print0 | parallel -0 sh -c 'name="{.}"; echo {.} ${name%.*}.tpr'
the result is:
file1.part0001 .tpr
or it behaves as if $name didn't exist.
So the my questions are:
-what is the reason for this behaviour?
-what would be the preferred way of dealing with it?
The first question is more important here, as the method I used above is a workaround, which, while not pretty, works. It is not the first time I needed to do a textual substitution like that and this behaviour continues to baffle me.
Output of sh --version
GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin11)
output of a newer version of bash that I installed and used instead of sh in the above command (to the same effect) (/usr/local/bin/bash --version)
GNU bash, version 4.2.0(1)-release (i386-apple-darwin11.4.2)
|
Your problem has nothing to do with bash. In fact, since you're telling parallel to run sh, you may not even be using bash.
The issue is that parallel is not really a drop-in replacement for xargs, as its documentation indicates. Instead, it accumulates its arguments into a single string (with spaces between them) and then interprets that as a series of commands. So, in your case, you have:
sh -c 'name="{.}"; echo {.} ${name%.*}.tpr'
which is interpreted as
sh -c 'name="{.}"'
echo {.} ${name%.*}.tpr
Since those are two separate commands, and the first one executes in a subshell (sh -c), $name is not set in the second one.
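The scoping problem is easy to reproduce on its own: a variable assigned inside `sh -c` lives only in that child process and is gone afterwards (assuming `name` is not already set in the calling shell):

```shell
sh -c 'name="file1.part0001"'      # the assignment happens in a child shell
printf 'name is: [%s]\n' "$name"   # empty: the child's variables do not survive
```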
Now, you could add pretty well anything to the beginning of the string, such as true:
sh -c 'true; name="{.}"; echo {.} ${name%.*}.tpr'
That will be interpreted as:
sh -c 'true'
name="{.}"
echo {.} ${name%.*}.tpr
In this case, the call to sh is essentially a throw-away; then name is set in the environment maintained by parallel and finally echo is called with name set.
So it would appear that the easiest solution is simply to get rid of the unnecessary call to sh:
find 1st 2nd 3rd -name '*.xtc' -print0 |
parallel -0 'name={.}; echo {.} "${name%.*}.tpr"'
Note: Based on a hint given by @StephaneChazelas, I removed the quotes around {.} and added them around ${name%.*}.ptr. parallel does its own quoting of its own substitutions, which interferes in some odd ways with explicit quotes. However, it does not add quoting to shell substitutions, which should be quoted if there is any possibility of the substitution being word-split.
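For reference, the `${name%.*}` expansion used here removes the shortest trailing `.suffix`, which is what turns `file1.part0001` into `file1` before `.tpr` is appended. A standalone illustration:

```shell
name='file1.part0001'
printf '%s\n' "${name%.*}.tpr"   # %.* strips the shortest trailing .suffix
```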
Another option, if you really want to use a subshell for some reason (or a particular subshell), would be to use the -q option:
find 1st 2nd 3rd -name '*.xtc' -print0 |
parallel -0 -q sh -c 'name="{.}"; echo "{.}" "${name%.*}.tpr"'
Note: As above, I adjusted the quotes. In this case, the explicit -q suppresses the quoting of substitutions, so you have to quote them explicitly. However, this is a textual quotation, which is less accurate than shell quoting; if the substitution includes a double-quote character, that character will not be escaped, so it will close the explicit quotes, breaking the command line and effectively introducing a command injection vulnerability (you'd get other problems for file names containing $, `, or \ characters). For this, amongst other reasons, the -q option is discouraged.
| Variable declaration in parallel sh -c … |
1,386,277,472,000 |
If I run a command with nice, then I can see its process having the expected niceness value:
In one terminal:
nice sleep 17
and in another one:
$ ps -aoni,comm | grep sleep
10 sleep
But trying to do the same with GNU parallel (version 20161222, Debian 9.3), I fail:
parallel --nice 10 sleep ::: 17
$ ps -aoni,comm | grep sleep
0 sleep
I am probably missing something obvious, but what ?
update: perhaps it is just a bug, because it worked with older versions...
|
You have found a bug. Thanks.
It was introduced in parallel-20160522, and until now there was no automated testing to check that --nice worked locally.
Next release will have both testing and --nice working.
The workaround for local jobs is to run parallel with nice:
nice -n 18 parallel bzip2 '<' ::: /dev/zero /dev/zero
The bug only affects local jobs: Remote jobs are niced as you would expect.
| Why is parallel --nice not setting the niceness? |
1,494,413,974,000 |
I'm trying to use GNU Parallel to run a command multiple times with a combination of constant and varying arguments. But for some reason the constant arguments are split on whitespace even though I've quoted them when passing them to parallel.
In this example, the constant argument 'a b' should be passed to debug-call as a single argument instead of two:
$ parallel debug-call 'a b' {} ::: {1..2}
[0] = '[...]/debug-call'
[1] = 'a'
[2] = 'b'
[3] = '1'
[0] = '[...]/debug-call'
[1] = 'a'
[2] = 'b'
[3] = '2'
debug-call is a simple script which prints each argument it has been passed in argv. Instead I would expect to see this output:
[0] = '[...]/debug-call'
[1] = 'a b'
[2] = '1'
[0] = '[...]/debug-call'
[1] = 'a b'
[2] = '2'
Is this a bug or is there a option to prevent GNU Parallel from splitting command line arguments before passing them on to the command?
|
parallel runs a shell (which exact one depends on the context in which it is called; generally, when called from a shell, it's that same shell) to parse the concatenation of the arguments.
So:
parallel debug-call 'a b' {} ::: 'a b' c
is the same as
parallel 'debug-call a b {}' ::: 'a b' c
parallel will call:
your-shell -c 'debug-call a b <something>'
Where <something> is the arguments (hopefully) properly quoted for that shell. For instance, if that shell is bash, it will run
bash -c 'debug-call a b a\ b'
Here, you want:
parallel 'debug-call "a b" {}' ::: 'a b' c
Or
parallel -q debug-call 'a b' {} ::: 'a b' c
Where parallel will quote the arguments (in the correct (hopefully) syntax for the shell) before concatenating.
To avoid calling a shell in the first place, you could use GNU xargs instead:
xargs -n1 -r0 -P4 -a <(printf '%s\0' 'a b' c) debug-call 'a b'
That won't invoke a shell (nor any of the many commands run by parallel upon initialisation), but you won't benefit from any of the extra features of parallel, like output reordering with -k.
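A runnable sketch of that xargs route, with printf standing in for the hypothetical debug-call, shows that a space-containing fixed argument survives as a single argument:

```shell
# -0 splits input on NUL; -n1 passes one varying argument per invocation.
# 'fixed arg' is passed unchanged on every run, like the quoted 'a b' in the question.
printf '%s\0' 'a b' c | xargs -0 -n1 printf 'argv: [%s] [%s]\n' 'fixed arg'
```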
You may find other approaches at Background execution in parallel
| Prevent GNU parallel from splitting quoted arguments |
1,494,413,974,000 |
I want to extract frames as images from a video, and I want each image to be named InputFileName_number.bmp.
How can I do this?
I tried the following command:
ffmpeg -i clip.mp4 fr1/$filename%d.jpg -hide_banner
but it is not working as I want.
I want to get, for example, clip_1.bmp, but what I get is 1.bmp.
I am trying to use it with GNU parallel to extract images from multiple videos, and I am new to both, so I want some kind of dynamic file naming: input -> input_number.bmp.
|
$filename is handled as a shell variable.
What about
ffmpeg -i clip.mp4 fr1/clip_%d.jpg -hide_banner
or
mp4filename=clip
ffmpeg -i ${mp4filename}.mp4 fr1/${mp4filename}_%d.jpg -hide_banner
?
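The variable-expansion point can be checked without ffmpeg at all: `$filename` expands to nothing when it was never set, which is why the output names collapsed to bare numbers (a sketch; the second form is the fix from the answer above):

```shell
unset filename                               # reproduce the question's state: variable never set
printf '%s\n' "fr1/$filename%d.jpg"          # expands to fr1/%d.jpg, hence bare-number files
mp4filename=clip
printf '%s\n' "fr1/${mp4filename}_%d.jpg"    # expands to fr1/clip_%d.jpg
```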
Update: For use with gnu parallel, you can use parallel's -i option:
-i
Normally the command is passed the argument at the end of its command line. With this option, any instances of "{}" in the command are replaced with the argument.
The resulting command line could be as simple as
parallel -i ffmpeg -i {} fr1/{}_%d.jpg -hide_banner -- *.mp4
if you can live with the extension in the output files.
Be aware that you may not actually want to run this in parallel on a traditional hard-disk as the concurrent i/o will slow it down.
Edit: Fixed variable reference as pointed out by @DonHolgo.
| How to include input file name in output file name in ffmpeg |
1,494,413,974,000 |
I'm using the following script to copy multiple files into one folder:
{ echo $BASE1; echo $BASE2; echo $BASE3; } | parallel cp -a {} $DEST
Is there any way to use only one echo $BASE with brace expansion?
I mean something like this:
{ echo $BASE{1..3} } | parallel cp -a {} $DEST
|
You could use an array:
BASES[0]=...
BASES[1]=...
BASES[2]=...
# or BASES+=(...)
# or BASES=(foo bar baz)
echo "${BASES[@]}" | parallel cp -a {} $DEST
To make it safer (spaces and newlines in the variable in particular), something like this should work more reliably:
printf "%s\0" "${BASES[@]}" | parallel -0 cp -a {} "$DEST"
Note: arrays aren't in POSIX, this works with current versions of bash and ksh though.
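A quick check (with made-up names) that the NUL-delimited form keeps whitespace-containing elements intact: printf emits exactly one NUL per array element, regardless of spaces inside the names:

```shell
BASES=('dir one' 'dir two' plain)
# keep only the NUL separators and count them: one per element
printf '%s\0' "${BASES[@]}" | tr -cd '\000' | wc -c
```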
| Copy multiple files to one dir with parallel |
1,494,413,974,000 |
GNU Parallel, without any command line options, allows you to easily parallelize a command whose last argument is determined by a line of STDIN:
$ seq 3 | parallel echo
2
1
3
Note that parallel does not wait for EOF on STDIN before it begins executing jobs — running yes | parallel echo will begin printing infinitely many copies of y right away.
This behavior appears to change, however, if STDIN is relatively short:
$ { yes | ghead -n5; sleep 10; } | parallel echo
In this case, no output will be returned before sleep 10 completes.
This is just an illustration — in reality I'm attempting to read from a series of continually generated FIFO pipes where the FIFO-generating process will not continue until the existing pipes start to be consumed. For example, my command will produce a STDOUT stream like:
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.PFcggGR55i
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.UCpTBzI3J6
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.r2EmSLW0t9
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.5TRNeeZLmt
Manually cat-ing each of these files one at a time in a new terminal causes the FIFO-generating process to complete successfully. However, running printfifos | parallel cat does not work. Instead, parallel seems to block forever waiting for input on STDIN — if I modify the pipeline to printfifos | head -n4 | parallel cat, the deadlock disappears and the first four pipes are printed successfully.
This behavior seems to be connected to the --jobs|-j parameter. Whereas { yes | ghead -n5; sleep 10; } | parallel cat produces no output for 10 seconds, adding a -j1 option yields four lines of y almost immediately followed by a 10 second wait for the final y. Unfortunately this does not solve my problem — I need every argument to be processed before parallel can get EOF from reading STDIN. Is there any way to achieve this?
|
A bug in GNU Parallel means that it only starts processing after having read one job for each jobslot. After that it reads one job at a time.
In older versions the output will also be delayed by the number of jobslots. Newer versions only delay output by a single job.
So if you sent one job per second to parallel -j10, it would read 10 jobs before starting them. With older versions you would then have to wait an additional 10 seconds before seeing the output from job 3.
A workaround for the limitation at start is to feed one dummy job per jobslot to parallel:
true >jobqueue; tail -n+0 -f jobqueue | parallel &
seq $(parallel --number-of-threads) | parallel -N0 echo true >> jobqueue
# now add the real jobs to jobqueue
A workaround for the output delay is to use --linebuffer (but this will mix full lines from different jobs).
| Make GNU Parallel not delay before executing arguments from STDIN |
1,494,413,974,000 |
I want to convert a bunch of Webp images to individual PDFs. I'm able to do it with this command:
parallel convert '{} {.}.pdf' ::: *.webp
or I can use this dwebp command:
find ./ -name "*.webp" -exec dwebp {} -o {}.pdf \;
However during the process of conversion the Webp files are decoded and the resultant PDFs have a much bigger file size. When I use the above commands for JPG-to-PDF conversion the PDF size is reasonably close to the JPG image size.
This command works fine with JPGs, but the program img2pdf doesn't work with the Webp format:
find ./ -name "*.jpg" -exec img2pdf {} -o {}.pdf \;
I also tried Webp-to-PDF conversion with this online service, but the PDF was huge.
How can I keep the PDF size down to the Webp file size?
|
Why not just use Imagemagick and Ghostscript?
convert img.webp img.pdf
gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dPDFSETTINGS=/ebook -sOutputFile=img-small.pdf img.pdf
In my test with your sample file I got a pdf result of about 3.2 MB.
EDIT
You could follow these instructions on Ubuntu to make sure that imagemagick was built with webp. Install this package for Windows or on macos do this:
brew install webp
brew install imagemagick
| Convert Webp to PDF |
1,494,413,974,000 |
The GNU parallel grepping n lines for m regular expressions example states the following:
If the CPU is the limiting factor parallelization should be done on
the regexps:
cat regexp.txt | parallel --pipe -L1000 --round-robin grep -f - bigfile
This will start one grep per CPU and read bigfile one time per CPU,
but as that is done in parallel, all reads except the first will be
cached in RAM
So in this instance GNU parallel round robins regular expressions from regex.txt over parallel grep instances with each grep instance reading bigfile separately. And as the documentation states above, disk caching probably ensures that bigfile is read from disk only once.
My question is this - the approach above appears to be seen as better performance-wise than an alternative that has GNU parallel round-robin records from bigfile over parallel grep instances that each read regexp.txt, something like
cat bigfile | parallel --pipe -L1000 --round-robin grep -f regexp.txt -
Why would that be? As I see it assuming disk caching in play, bigfile and regexp.txt would each be read from disk once in either case. The one major difference that I can think of is that the second approach involves significantly more data being passed through pipes.
|
It is due to GNU Parallel --pipe being slow.
cat bigfile | parallel --pipe -L1000 --round-robin grep -f regexp.txt -
maxes out at around 100 MB/s.
In the man page example you will also find:
parallel --pipepart --block 100M -a bigfile grep -f regexp.txt
which does close to the same, but maxes out at 20 GB/s on a 64 core system.
parallel --pipepart --block 100M -a bigfile -k grep -f regexp.txt
should give exactly the same result as grep -f regexp.txt bigfile
| GNU Parallel - grepping n lines for m regular expressions |
1,494,413,974,000 |
I have a directory
~/root/
|-- bar
|-- eggs
|-- foo
|-- hello.txt
|-- script.sh
`-- spam
4 directories, 2 files
Issuing find . -type d while in ~/root/ yields
.
./spam
./eggs
./bar
./foo
However, issuing find . -type d | parallel "echo {}" ::: * yields
bar
eggs
foo
hello.txt
script.sh
spam
Why are the nondirectories hello.txt and script.sh piped here?
|
According to the manual, the ::: * syntax uses the shell's expansion of * as an argument list instead of anything from stdin. So as written, your command ignores the result of find and passes all files in the current directory as arguments. If you leave off the ::: *, it should work as intended.
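A quick way to see what actually reaches parallel is to expand the glob yourself, since the shell expands * before parallel sees anything. A small sketch recreating the question's (hypothetical) layout:

```shell
# Recreate the question's directory layout, then expand the glob the
# way the shell would before handing the list to parallel via ':::'.
mkdir -p root/spam root/eggs root/bar root/foo
touch root/hello.txt root/script.sh
cd root && printf '%s\n' *
# prints: bar eggs foo hello.txt script.sh spam (one per line)
```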
| Why is find piping in non directories when the “-type d” test is used? |
1,494,413,974,000 |
I have files like this on a Linux system:
10S1_S5_L002_chrm.fasta SRR3184711_chrm.fasta SRR3987378_chrm.fasta SRR4029368_chrm.fasta SRR5204465_chrm.fasta SRR5997546_chrm.fasta
13_S7_L003_chrm.fasta SRR3184712_chrm.fasta SRR3987379_chrm.fasta SRR4029369_chrm.fasta SRR5204520_chrm.fasta SRR5997547_chrm.fasta
14_S8_L003_chrm.fasta SRR3184713_chrm.fasta SRR3987380_chrm.fasta SRR4029370_chrm.fasta SRR5208699_chrm.fasta SRR5997548_chrm.fasta
17_S4_L002_chrm.fasta SRR3184714_chrm.fasta SRR3987415_chrm.fasta SRR4029371_chrm.fasta SRR5208700_chrm.fasta SRR5997549_chrm.fasta
3_S1_L001_chrm.fasta SRR3184715_chrm.fasta SRR3987433_chrm.fasta SRR4029372_chrm.fasta SRR5208701_chrm.fasta SRR5997550_chrm.fasta
4_S2_L001_chrm.fasta SRR3184716_chrm.fasta SRR3987482_chrm.fasta SRR4029373_chrm.fasta SRR5208770_chrm.fasta SRR5997551_chrm.fasta
50m_S10_L004_chrm.fasta SRR3184717_chrm.fasta SRR3987489_chrm.fasta SRR4029374_chrm.fasta SRR5208886_chrm.fasta SRR5997552_chrm.fasta
5_S3_L001_chrm.fasta SRR3184718_chrm.fasta SRR3987493_chrm.fasta SRR4029375_chrm.fasta SRR5211153_chrm.fasta SRR6050903_chrm.fasta
65m_S11_L005_chrm.fasta SRR3184719_chrm.fasta SRR3987495_chrm.fasta SRR4029376_chrm.fasta SRR5211162_chrm.fasta SRR6050905_chrm.fasta
6_S6_L002_chrm.fasta SRR3184720_chrm.fasta SRR3987647_chrm.fasta SRR4029377_chrm.fasta SRR5211163_chrm.fasta SRR6050920_chrm.fasta
70m_S12_L006_chrm.fasta SRR3184721_chrm.fasta SRR3987651_chrm.fasta SRR4029378_chrm.fasta SRR5215118_chrm.fasta SRR6050921_chrm.fasta
80m_S1_L002_chrm.fasta SRR3184722_chrm.fasta SRR3987657_chrm.fasta SRR4029379_chrm.fasta SRR5247122_chrm.fasta SRR6050958_chrm.fasta
In all there are 423 files.
I was asked to cut them into 32 parts for optimal parallelisation on 32 CPUs.
So now I have this:
10S1_S5_L002_chrm.part-10.fasta SRR3986254_chrm.part-26.fasta SRR4029372_chrm.part-22.fasta SRR5581526-1_chrm.part-20.fasta
10S1_S5_L002_chrm.part-11.fasta SRR3986254_chrm.part-27.fasta SRR4029372_chrm.part-23.fasta SRR5581526-1_chrm.part-21.fasta
10S1_S5_L002_chrm.part-12.fasta SRR3986254_chrm.part-28.fasta SRR4029372_chrm.part-24.fasta SRR5581526-1_chrm.part-22.fasta
10S1_S5_L002_chrm.part-13.fasta SRR3986254_chrm.part-29.fasta SRR4029372_chrm.part-25.fasta SRR5581526-1_chrm.part-23.fasta
10S1_S5_L002_chrm.part-14.fasta SRR3986254_chrm.part-2.fasta SRR4029372_chrm.part-26.fasta SRR5581526-1_chrm.part-24.fasta
10S1_S5_L002_chrm.part-15.fasta SRR3986254_chrm.part-30.fasta SRR4029372_chrm.part-27.fasta SRR5581526-1_chrm.part-25.fasta
10S1_S5_L002_chrm.part-16.fasta SRR3986254_chrm.part-31.fasta SRR4029372_chrm.part-28.fasta SRR5581526-1_chrm.part-26.fasta
10S1_S5_L002_chrm.part-17.fasta SRR3986254_chrm.part-32.fasta SRR4029372_chrm.part-29.fasta SRR5581526-1_chrm.part-27.fasta
10S1_S5_L002_chrm.part-18.fasta SRR3986254_chrm.part-3.fasta SRR4029372_chrm.part-2.fasta SRR5581526-1_chrm.part-28.fasta
10S1_S5_L002_chrm.part-19.fasta SRR3986254_chrm.part-4.fasta SRR4029372_chrm.part-30.fasta SRR5581526-1_chrm.part-29.fasta
10S1_S5_L002_chrm.part-1.fasta SRR3986254_chrm.part-5.fasta SRR4029372_chrm.part-3.fasta SRR5581526-1_chrm.part-2.fasta
10S1_S5_L002_chrm.part-20.fasta SRR3986254_chrm.part-6.fasta SRR4029372_chrm.part-4.fasta SRR5581526-1_chrm.part-30.fasta
10S1_S5_L002_chrm.part-21.fasta SRR3986254_chrm.part-7.fasta SRR4029372_chrm.part-5.fasta SRR5581526-1_chrm.part-31.fasta
I want to apply a command from the CRISPRCasFinder tool
The command works well when I use it alone on 1 namefile.fasta
The command also works well when I use parallel on namefile.part*.fasta.
But when I try to make the command more general by using basename, nothing works. I want to use basename to keep the name of my input files in the output folder.
I tried this on a smaller data set:
time parallel 'dossierSortie=$(basename -s .fasta {}) ; singularity exec -B $PWD /usr/local/CRISPRCasFinder-release-4.2.20/CrisprCasFinder.simg perl /usr/local/CRISPRCasFinder/CRISPRCasFinder.pl -so /usr/local/CRISPRCasFinder/sel392v2.so -cf /usr/local/CRISPRCasFinder/CasFinder-2.0.3 -drpt /usr/local/CRISPRCasFinder/supplementary_files/repeatDirection.tsv -rpts /usr/local/CRISPRCasFinder/supplementary_files/Repeat_List.csv -cas -def G --meta -out /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/Result{} -in /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/{}' ::: *_chrm.part*.fasta
And it did this
ERR358546_chrm.part-1.fasta SRR4029114_k141_23527.fna.bck SRR5100341_k141_10416.fna.lcp SRR5100345_k141_3703.fna.al1
ERR358546_chrm.part-2.fasta SRR4029114_k141_23527.fna.bwt SRR5100341_k141_10416.fna.llv SRR5100345_k141_3703.fna.bck
ERR358546_chrm.part-3.fasta SRR4029114_k141_23527.fna.des SRR5100341_k141_10416.fna.ois SRR5100345_k141_3703.fna.bwt
ERR358546_chrm.part-4.fasta SRR4029114_k141_23527.fna.lcp SRR5100341_k141_10416.fna.prj SRR5100345_k141_3703.fna.des
ERR358546_chrm.part-5.fasta SRR4029114_k141_23527.fna.llv SRR5100341_k141_10416.fna.sds SRR5100345_k141_3703.fna.lcp
ERR358546_chrm.part-6.fasta SRR4029114_k141_23527.fna.ois SRR5100341_k141_10416.fna.sti1 SRR5100345_k141_3703.fna.llv
ERR358546_k141_26987.fna SRR4029114_k141_23527.fna.prj SRR5100341_k141_10416.fna.suf SRR5100345_k141_3703.fna.ois
ERR358546_k141_33604.fna SRR4029114_k141_23527.fna.sds SRR5100341_k141_10416.fna.tis SRR5100345_k141_3703.fna.prj
ERR358546_k141_90631.fna SRR4029114_k141_23527.fna.sti1 SRR5100341_k141_10942.fna SRR5100345_k141_3703.fna.sds
ResultERR358546_chrm.part-3 SRR4029114_k141_23527.fna.suf SRR5100341_k141_164.fna SRR5100345_k141_3703.fna.sti1
ResultERR358546_chrm.part-4 SRR4029114_k141_23527.fna.tis SRR5100341_k141_3046.fna SRR5100345_k141_3703.fna.suf
ResultSRR4029114_chrm.part-1 SRR5100341_chrm.part-10.fasta SRR5100341_k141_3968.fna SRR5100345_k141_3703.fna.tis
ResultSRR4029114_chrm.part-4 SRR5100341_chrm.part-11.fasta SRR5100341_k141_631.fna SRR5100345_k141_4429.fna
ResultSRR5100341_chrm.part-10 SRR5100341_chrm.part-12.fasta SRR5100341_k141_6376.fna SRR5100345_k141_4832.fna
ResultSRR5100341_chrm.part-11 SRR5100341_chrm.part-13.fasta SRR5100341_k141_8699.fna SRR5100345_k141_6139.fna
ResultSRR5100341_chrm.part-3 SRR5100341_chrm.part-1.fasta SRR5100341_k141_8892.fna SRR5100345_k141_731.fna
ResultSRR5100341_chrm.part-9 SRR5100341_chrm.part-2.fasta SRR5100345_chrm.part-10.fasta SRR5100345_k141_731.fna.al1
ResultSRR5100345_chrm.part-1 SRR5100341_chrm.part-3.fasta SRR5100345_chrm.part-1.fasta SRR5100345_k141_731.fna.bck
ResultSRR5100345_chrm.part-4 SRR5100341_chrm.part-4.fasta SRR5100345_chrm.part-2.fasta SRR5100345_k141_731.fna.bwt
ResultSRR5100345_chrm.part-9 SRR5100341_chrm.part-5.fasta SRR5100345_chrm.part-3.fasta SRR5100345_k141_731.fna.des
SRR4029114_chrm.part-1.fasta SRR5100341_chrm.part-6.fasta SRR5100345_chrm.part-4.fasta SRR5100345_k141_731.fna.lcp
SRR4029114_chrm.part-2.fasta SRR5100341_chrm.part-7.fasta SRR5100345_chrm.part-5.fasta SRR5100345_k141_731.fna.llv
SRR4029114_chrm.part-3.fasta SRR5100341_chrm.part-8.fasta SRR5100345_chrm.part-6.fasta SRR5100345_k141_731.fna.ois
SRR4029114_chrm.part-4.fasta SRR5100341_chrm.part-9.fasta SRR5100345_chrm.part-7.fasta SRR5100345_k141_731.fna.prj
SRR4029114_chrm.part-5.fasta SRR5100341_k141_10416.fna SRR5100345_chrm.part-8.fasta SRR5100345_k141_731.fna.sds
SRR4029114_k141_14384.fna SRR5100341_k141_10416.fna.al1 SRR5100345_chrm.part-9.fasta SRR5100345_k141_731.fna.sti1
SRR4029114_k141_16765.fna SRR5100341_k141_10416.fna.bck SRR5100345_k141_1211.fna SRR5100345_k141_731.fna.suf
SRR4029114_k141_23527.fna SRR5100341_k141_10416.fna.bwt SRR5100345_k141_2884.fna SRR5100345_k141_731.fna.tis
SRR4029114_k141_23527.fna.al1 SRR5100341_k141_10416.fna.des SRR5100345_k141_3703.fna
The names of the folder are not okay because I want for example just ResultERR358546 and not ResultERR358546_chrm.part-2.fasta
And I don't want a result for each part but only for each ID.
|
Your basename command only removes the fixed .fasta extension - as far as I know it cannot remove a variable pattern.
However GNU parallel provides a Perl expression replacement string facility that is much more powerful than basename - ex. given
$ ls *_chrm.part*.fasta
ERR358546_chrm.part-2.fasta ERR358546_chrm.part-5.fasta ERR358546_chrm.part-8.fasta
ERR358546_chrm.part-3.fasta ERR358546_chrm.part-6.fasta ERR358546_chrm.part-9.fasta
ERR358546_chrm.part-4.fasta ERR358546_chrm.part-7.fasta
then
$ parallel echo Result'{= s:_.*$:: =}' ::: *_chrm.part*.fasta
ResultERR358546
ResultERR358546
ResultERR358546
ResultERR358546
ResultERR358546
ResultERR358546
ResultERR358546
ResultERR358546
where the substitution s:_.*$:: replaces everything after an underscore with nothing. Transplanting to your original command:
time parallel '
singularity exec -B "$PWD" /usr/local/CRISPRCasFinder-release-4.2.20/CrisprCasFinder.simg \
perl /usr/local/CRISPRCasFinder/CRISPRCasFinder.pl \
-so /usr/local/CRISPRCasFinder/sel392v2.so \
-cf /usr/local/CRISPRCasFinder/CasFinder-2.0.3 \
-drpt /usr/local/CRISPRCasFinder/supplementary_files/repeatDirection.tsv \
-rpts /usr/local/CRISPRCasFinder/supplementary_files/Repeat_List.csv \
-cas -def G --meta \
-out /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/Result'{= s:_.*$:: =}' \
-in /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/{}
' ::: *_chrm.part*.fasta
If you want to capture and include the part index, you could modify the expression to
Result'{= s:_chrm\.part-(\d+)\.fasta$:_$1: =}'
or
'{= s:_chrm\.part-(\d+)\.fasta$:Result_$1: =}'
for example.
| How can I use basename with parallel? |
1,494,413,974,000 |
I can't undestand well how the parallel command works.
I need to run this simple command: (100 times)
curl https://jsonplaceholder.typicode.com/todos/1
curl https://jsonplaceholder.typicode.com/todos/2
curl https://jsonplaceholder.typicode.com/todos/3
...
curl https://jsonplaceholder.typicode.com/todos/100
end redirect the output to files with the names like these:
1.txt
2.txt
3.txt
....
100.txt
|
Well, this is a somewhat over-engineered Bash solution, but it works and hopefully clarifies the use of the parallel command:
function xx(){ curl "https://jsonplaceholder.typicode.com/todos/$1" > "$1.txt";}
parallel xx -- {1..100}
The first line creates a new "command" or function called xx which - when executed - causes the execution of a curl command that has its stdout redirected to a file. The xx function takes a single number as its argument; inside the body of the function, it is referred to as $1, i.e. the first positional parameter.
The second line demonstrates the use of the parallel command, which runs xx once for (and with) each argument from the list 1, 2, 3, ..., 100 (the list 1 2 3 ... 100 is generated by the shell when it performs brace expansion on {1..100}).
NOTE: this answer relates to the parallel command in the moreutils package on Debian systems, not to the GNU parallel command.
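For GNU parallel the equivalent would be something like parallel 'curl -s "https://jsonplaceholder.typicode.com/todos/{}" > {}.txt' ::: {1..100}. The same fan-out also works with plain xargs -P; in this offline sketch printf stands in for curl so the pattern can be verified without network access:

```shell
# Run up to 4 jobs at a time; each {} is one todo id passed as $1 to
# the child shell. printf replaces the real curl call for testing.
seq 1 5 | xargs -P 4 -I{} sh -c 'printf "todo %s\n" "$1" > "$1.txt"' _ {}
cat 3.txt
# prints: todo 3
```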
| Run parallel command and redirect the output to files with specific name |
1,494,413,974,000 |
I've found answers close to this but fail to understand how to use them in my case (I'm rather new to Bash)... so, I'm trying to process a folder containing a large image sequence (100k+ files) with Imagemagick and would like to use GNU Parallel to speed things up.
This is the code I use (processing 100 frames at a time to avoid running out of ram):
calcmethod1=mean;
allframes=(*.png)
cd out1
for (( i=0; i < "${#allframes[@]}" ; i+=100 )); do
convert "${allframes[@]:i:100}" -evaluate-sequence "$calcmethod1" \
-channel RGB -normalize ../out2/"${allframes[i]}"
done
How would I 'parallelize' this? Most solutions I've found avoid the loop and use piping instead - but doing this I've run into the problem that my script breaks because the argument list gets too long...
I guess what I would want to do is to have parallel splitting the load like handing the first 100 frames to core 1, frames 100-199 to core 2 etc.?
|
Order
Your sample program did not seem to care about the order of the *.png for the allframes array that you were constructing, but your comments led me to believe that order would matter.
I guess what I would want to do is to have parallel splitting the load like handing the first 100 frames to core 1, frames 100-199 to core 2 etc.?
Bash
Therefore I'd start with a modification to your script like so, changing the construction of the allframes array so that the files are stored in numeric order.
allframes=($(printf "%s\n" *.png | sort -V | tr '\n' ' '))
This can be simplified further to this using sort -zV:
allframes=($(printf "%s\0" *.png | sort -zV | tr '\0' ' '))
This has the effect of constructing your convert ... commands so that they look like this now:
$ convert "0.png 1.png 2.png 3.png 4.png 5.png 6.png 7.png 8.png 9.png \
10.png 11.png 12.png 13.png 14.png 15.png 16.png 17.png 18.png \
19.png 20.png 21.png 22.png 23.png 24.png 25.png 26.png 27.png \
28.png 29.png 30.png 31.png 32.png 33.png 34.png 35.png 36.png \
37.png 38.png 39.png 40.png 41.png 42.png 43.png 44.png 45.png \
46.png 47.png 48.png 49.png 50.png 51.png 52.png 53.png 54.png \
55.png 56.png 57.png 58.png 59.png 60.png 61.png 62.png 63.png \
64.png 65.png 66.png 67.png 68.png 69.png 70.png 71.png 72.png \
73.png 74.png 75.png 76.png 77.png 78.png 79.png 80.png 81.png \
82.png 83.png 84.png 85.png 86.png 87.png 88.png 89.png 90.png \
91.png 92.png 93.png 94.png 95.png 96.png 97.png 98.png 99.png" \
-evaluate-sequence "mean" -channel RGB -normalize ../out2/0.png
Parallels
Building off of eschwartz's example I put together a parallel example as follows:
$ printf '%s\n' *.png | sort -V | parallel -n100 --dryrun convert {} \
-evaluate-sequence 'mean' -channel RGB -normalize ../out2/{1}
again, more simply using sort -zV:
$ printf '%s\0' *.png | sort -zV | parallel -0 -n100 --dryrun "convert {} \
    -evaluate-sequence 'mean' -channel RGB -normalize ../out2/{1}"
NOTE: The above uses --dryrun, so parallel prints the commands instead of running them. Doing it this way helps to visualize what's happening:
$ convert 0.png 1.png 2.png 3.png 4.png 5.png 6.png 7.png 8.png 9.png 10.png \
11.png 12.png 13.png 14.png 15.png 16.png 17.png 18.png 19.png \
20.png 21.png 22.png 23.png 24.png 25.png 26.png 27.png 28.png \
29.png 30.png 31.png 32.png 33.png 34.png 35.png 36.png 37.png \
38.png 39.png 40.png 41.png 42.png 43.png 44.png 45.png 46.png \
47.png 48.png 49.png 50.png 51.png 52.png 53.png 54.png 55.png \
56.png 57.png 58.png 59.png 60.png 61.png 62.png 63.png 64.png \
65.png 66.png 67.png 68.png 69.png 70.png 71.png 72.png 73.png \
74.png 75.png 76.png 77.png 78.png 79.png 80.png 81.png 82.png \
83.png 84.png 85.png 86.png 87.png 88.png 89.png 90.png 91.png \
92.png 93.png 94.png 95.png 96.png 97.png 98.png 99.png \
-evaluate-sequence mean -channel RGB -normalize ../out2/0.png
If you're satisfied with this output, simply remove the --dryrun switch to parallel, and rerun it.
$ printf '%s\0' *.png | sort -zV | parallel -0 -n100 convert {} \
    -evaluate-sequence 'mean' -channel RGB -normalize ../out2/{1}
| GNU parallel with for loop? |
1,494,413,974,000 |
Suppose I have a list of commands in file cmd_file.
I run these commands via:
cat cmd_file | parallel -k -I {} "{}"
One of the commands fails. All of the commands use the exact same CLI tool with different inputs.
Right now, I have to run across all of the commands one at a time to find the erroring command by substituting my command list for a command builder loop (much more involved):
for ...; do
# assemble the vars for the command
echo "<command>"
<command>
done
Is there a mechanism for getting parallel to display the command that failed, or the execution order, on stderr, for example?
|
You can instruct parallel to print each command executed either to standard output or to standard error. From the man page:
-v Print the job to be run on stdout (standard output).
Can be reversed with --silent. See also -t.
-t Print the job to be run on stderr (standard error).
So perhaps:
for ...; do
# assemble the vars for the command
echo "<command>"
done |
parallel -v -k
or if you have cmd_file already prepared:
parallel -v -k < cmd_file
or something similar will meet your needs.
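A parallel-free sketch of the same idea: run each line of cmd_file through a shell and report the line itself whenever its exit status is non-zero (the file contents here are hypothetical stand-ins for your real commands):

```shell
# Run each command line; report the failing line on stderr.
printf '%s\n' 'true' 'false' 'echo ok' > cmd_file
while IFS= read -r cmd; do
  sh -c "$cmd" || echo "FAILED: $cmd" >&2
done < cmd_file
# stdout: ok
# stderr: FAILED: false
```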
| Can GNU Parallel be made to output the command line executed when run in linewise mode? |
1,494,413,974,000 |
I am using the following grep script to output all the unmatched patterns:
grep -oFf patterns.txt large_strings.txt | grep -vFf - patterns.txt > unmatched_patterns.txt
patterns file contains the following 12-characters long substrings (some instances are shown below):
6b6c665d4f44
8b715a5d5f5f
26364d605243
717c8a919aa2
large_strings file contains extremely long strings of around 20-100 million characters longs (a small piece of the string is shown below):
121b1f212222212123242223252b36434f5655545351504f4e4e5056616d777d80817d7c7b7a7a7b7c7d7f8997a0a2a2a3a5a5a6a6a6a6a6a7a7babbbcbebebdbcbcbdbdbdbdbcbcbcbcc2c2c2c2c2c2c2c2c4c4c4c3c3c3c2c2c3c3c3c3c3c3c3c3c2c2c1c0bfbfbebdbebebebfbfc0c0c0bfbfbfbebebdbdbdbcbbbbbababbbbbcbdbdbdbebebfbfbfbebdbcbbbbbbbbbcbcbcbcbcbcbcbcbcb8b8b8b7b7b6b6b6b8b8b9babbbbbcbcbbbabab9b9bababbbcbcbcbbbbbababab9b8b7b6b6b6b6b7b7b7b7b7b7b7b7b7b7b6b6b5b5b6b6b7b7b7b7b8b8b9b9b9b9b9b8b7b7b6b5b5b5b5b5b4b4b3b3b3b6b5b4b4b5b7b8babdbebfc1c1c0bfbec1c2c2c2c2c1c0bfbfbebebebebfc0c1c0c0c0bfbfbebebebebebebebebebebebebebdbcbbbbbab9babbbbbcbcbdbdbdbcbcbbbbbbbbbbbabab9b7b6b5b4b4b4b4b3b1aeaca9a7a6a9a9a9aaabacaeafafafafafafafafafb1b2b2b2b2b1b0afacaaa8a7a5a19d9995939191929292919292939291908f8e8e8d8c8b8a8a8a8a878787868482807f7d7c7975716d6b6967676665646261615f5f5e5d5b5a595957575554525
How can we speed up the above script (gnu parallel, xargs, fgrep, etc.)? I tried using --pipepart and --block but it doesn't allow you to pipe two grep commands.
Btw these are all hexadecimal strings and patterns.
|
A much more efficient answer that does not use grep:
build_k_mers() {
k="$1"
slot="$2"
perl -ne 'for $n (0..(length $_)-'"$k"') {
$prefix = substr($_,$n,2);
$fh{$prefix} or open $fh{$prefix}, ">>", "tmp/kmer.$prefix.'"$slot"'";
$fh = $fh{$prefix};
print $fh substr($_,$n,'"$k"'),"\n"
}'
}
export -f build_k_mers
rm -rf tmp
mkdir tmp
export LC_ALL=C
# search strings must be sorted for comm
parsort patterns.txt | awk '{print >>"tmp/patterns."substr($1,1,2)}' &
# make shorter lines: Insert \n(last 12 char before \n) for every 32k
# This makes it easier for --pipepart to find a newline
# It will not change the kmers generated
perl -pe 's/(.{32000})(.{12})/$1$2\n$2/g' large_strings.txt > large_lines.txt
# Build 12-mers
parallel --pipepart --block -1 -a large_lines.txt 'build_k_mers 12 {%}'
# -j10 and 20s may be adjusted depending on hardware
parallel -j10 --delay 20s 'parsort -u tmp/kmer.{}.* > tmp/kmer.{}; rm tmp/kmer.{}.*' ::: `perl -e 'map { printf "%02x ",$_ } 0..255'`
wait
parallel comm -23 {} {=s/patterns./kmer./=} ::: tmp/patterns.??
I have tested this on a full job (patterns.txt: 9GBytes/725937231 lines, large_strings.txt: 19GBytes/184 lines) and on my 64-core machine it completes in 3 hours.
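The final comm -23 step prints lines unique to the first sorted file, i.e. the patterns that do not appear among the generated k-mers. In miniature, with hypothetical file contents:

```shell
# comm -23 keeps lines found only in the first file; both inputs must
# be sorted. 'aaa' is the one unmatched pattern here.
printf '%s\n' aaa bbb ccc > patterns.sorted
printf '%s\n' bbb ccc ddd > kmers.sorted
comm -23 patterns.sorted kmers.sorted
# prints: aaa
```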
| Boosting the grep search using GNU parallel |
1,494,413,974,000 |
1. Summary
I don't understand how I can combine parallel and sequential commands in Linux.
2. Expected behavior
Pseudocode:
chain 1 (sequential): pip install pipenv, then pipenv install --dev
chain 2 (sequential): npm install -g grunt-cli, then npm install
chain 1 and chain 2 should run in parallel with each other
Windows batch working equivalent:
start cmd /C "pip install pipenv & pipenv install --dev"
start cmd /C "npm install -g grunt-cli & npm install"
3. Not helped
I don't think that & and wait can solve this problem, see rsaw's comment.
I read that GNU parallel is a better way to run parallel tasks, but I can't find which syntax I need to use in GNU parallel to solve this task.
I tried parallelshell:
parallelshell "pip install pipenv && pipenv install --dev" "npm install -g grunt-cli && npm install"
Full .sh file:
git clone --depth 1 https://github.com/Kristinita/KristinitaPelican
wait
cd KristinitaPelican
wait
parallelshell "pip install pipenv && pipenv install --dev" "npm install -g grunt-cli && npm install"
But the pipenv install --dev command runs first for me, then npm install. That is sequential, not parallel.
|
Simply with GNU parallel:
parallel ::: 'pip install pipenv && pipenv install --dev' \
'npm install -g grunt-cli && npm install'
| Combine parallel and sequential commands |
1,494,413,974,000 |
On my home machine, I use the script gitpull.sh to concurrently pull all the changes to the git repos under a given directory.
#!/usr/bin/bash
find $1 -name ".git" | sed -r 's|/[^/]+$||' | parallel git -C {} pull origin master
My problem is that parallel is not installed on my work computer. Is it possible to alter my script without the use of parallel?
|
Instead of parallel you could use xargs with the -P flag. Something like:
find $1 -name ".git" | sed -r 's|/[^/]+$||' | xargs -I {} -n 1 -P 0 git -C {} pull origin master
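A hedged variant that survives paths with spaces by using -print0/-0, and strips the trailing /.git with shell parameter expansion instead of sed. Here echo stands in for the git pull so the plumbing can be checked without real repositories; in real use, replace it with git -C "${1%/.git}" pull origin master:

```shell
# Build a toy layout, then fan out (up to 4 jobs) over every .git dir.
mkdir -p "repos/demo repo/.git" repos/other/.git
find repos -name .git -type d -print0 |
  xargs -0 -P 4 -I{} sh -c 'echo "pulling ${1%/.git}"' _ {} |
  sort
# prints: pulling repos/demo repo
#         pulling repos/other
```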
| Can I concurrently pull my git repos without gnu parallel? |
1,494,413,974,000 |
I'd like to just check the status of a bunch of Git repos with a quick command like parallel git -C {} status --short ::: ~/*/.git/... But the Git status doesn't include the repo name or path, so I'd need some way to print either the git command run by parallel or (ideally) just the input (the ~/[…]/.git/.. part of the command) and then the output relevant to that repo. Is this possible? --verbose will print the command, but doesn't print the command output next to the command, so that's not good enough. And --group will keep lines from one job together, but doesn't keep those lines together with the command printed by --verbose, so those two are not enough.
|
Try:
parallel --tagstring {//} git -C {//} status --short ::: ~/*/.git
or:
parallel --plus --tagstring {=s:$HOME.::';s:/.git::'=} git -C {//} status --short ::: ~/*/.git
or:
parallel -v git -C {//} status --short ::: ~/*/.git
It is not exactly what you ask for, but may be an acceptable solution.
A solution matching your requirement would be:
parallel "echo {};git -C {} status --short" ::: ~/*/.git/..
| How to print GNU Parallel command & output together? |
1,494,413,974,000 |
As GNU parallel's manual shows, you can use a zenity progress bar with parallel:
seq 1000 | parallel -j30 --bar '(echo {};sleep 0.1)' \
2> >(zenity --progress --auto-kill) | wc
However, in that example, the cancel button doesn't work. I've read about similar issues with this button when used with more usual commands (i.e. not parallel) as well as some more insight about how that cancel button works but that didn't really help me. Parallel seems to make use of it quite differently and I can't figure out how to get that cancel button to stop the process.
I'm mostly confused by the 2> > and the wc. If I just use a | instead, the cancel button works but now the progress bar goes faster and finishes too early (I guess it only shows the progress of the first split part of the job? But if that was the case it should be 30 times faster, which it's not, so I'm not sure).
PS: Just to let you know, I've reported this issue on the parallel mailing list.
|
Zenity is designed to read two lines, one for the progress bar and one beginning with "#" for the progress text:
for ((i=0;i<=100;i++));do
echo "$i" # bar
echo "#percent done $i" # text
sleep .1
done
| zenity --progress
I guess that the --bar option writes progress to stderr but doesn't close it,
or doesn't write a newline character at the end of the line.
That blocks zenity, which expects a new line.
The workaround is to print newline to stderr which is file descriptor 2 by default.
seq 1000 | parallel -j30 --bar '(echo {}; echo >&2; sleep 0.1)' \
2> >(zenity --progress --auto-kill) | wc
| Zenity Cancel button for GNU parallel progress bar |
1,494,413,974,000 |
I want to extract all lines of $file1 that start with a string stored in $file2.
$file1 is 4 GB large with about 20 million lines, $file2 has 2 million lines, is about 140 MB large and contains two columns separated with ,. The maximal line length of both files is well below 1000, they are sorted with LC_ALL=C and $file1 can contain any additional characters except \0.
Unexpectedly this command
parallel --pipepart -a $file1 grep -Ff $file2
consumes an extreme amount of memory and gets killed by the OS.
The command works if I limit the number of threads:
parallel --pipepart -j 8 -a $file1 grep -Ff $file2
For the last command, htop reveals that each grep -Ff $file2 thread constantly occupies 12.3 GB of memory. I assume this demand comes from the dictionary grep builds from $file2.
How can I achieve such a filter more efficiently?
|
It is covered in man parallel https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Grepping-n-lines-for-m-regular-expressions
EXAMPLE: Grepping n lines for m regular expressions.
The simplest solution to grep a big file for a lot of regexps is:
grep -f regexps.txt bigfile
Or if the regexps are fixed strings:
grep -F -f regexps.txt bigfile
There are 3 limiting factors: CPU, RAM, and disk I/O.
RAM is easy to measure: If the grep process takes up most of your free
memory (e.g. when running top), then RAM is a limiting factor.
CPU is also easy to measure: If the grep takes >90% CPU in top, then
the CPU is a limiting factor, and parallelization will speed this up.
It is harder to see if disk I/O is the limiting factor, and depending
on the disk system it may be faster or slower to parallelize. The only
way to know for certain is to test and measure.
Limiting factor: RAM
The normal grep -f regexs.txt bigfile works no matter the size of
bigfile, but if regexps.txt is so big it cannot fit into memory, then
you need to split this.
grep -F takes around 100 bytes of RAM and grep takes about 500 bytes
of RAM per 1 byte of regexp. So if regexps.txt is 1% of your RAM, then
it may be too big.
If you can convert your regexps into fixed strings do that. E.g. if
the lines you are looking for in bigfile all looks like:
ID1 foo bar baz Identifier1 quux
fubar ID2 foo bar baz Identifier2
then your regexps.txt can be converted from:
ID1.*Identifier1
ID2.*Identifier2
into:
ID1 foo bar baz Identifier1
ID2 foo bar baz Identifier2
This way you can use grep -F which takes around 80% less memory and is
much faster.
If it still does not fit in memory you can do this:
parallel --pipepart -a regexps.txt --block 1M grep -Ff - -n bigfile |
sort -un | perl -pe 's/^\d+://'
The 1M should be your free memory divided by the number of CPU threads
and divided by 200 for grep -F and by 1000 for normal grep. On
GNU/Linux you can do:
free=$(awk '/^((Swap)?Cached|MemFree|Buffers):/ { sum += $2 }
END { print sum }' /proc/meminfo)
percpu=$((free / 200 / $(parallel --number-of-threads)))k
parallel --pipepart -a regexps.txt --block $percpu --compress \
grep -F -f - -n bigfile |
sort -un | perl -pe 's/^\d+://'
If you can live with duplicated lines and wrong order, it is faster to
do:
parallel --pipepart -a regexps.txt --block $percpu --compress \
grep -F -f - bigfile
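The splitting strategy can be rehearsed at a small scale without parallel. Below is a serial, miniature version of the same "split regexps, grep with each chunk, merge by line number" pipeline; the file names and contents are invented purely for illustration, and sed stands in for the perl step above:

```shell
#!/usr/bin/env bash
# Miniature, serial sketch of the "split regexps, grep, merge" pipeline.
tmp=$(mktemp -d)
printf 'line one\nline two\nline three\n' > "$tmp/bigfile"
printf 'one\nthree\n' > "$tmp/regexps.txt"

# split regexps.txt into 1-line chunks (stand-in for --block sized pieces)
split -l 1 "$tmp/regexps.txt" "$tmp/chunk."

# grep with each chunk, keep line numbers, then de-duplicate and restore order
result=$(for c in "$tmp"/chunk.*; do
           grep -F -f "$c" -n "$tmp/bigfile"
         done | sort -un | sed 's/^[0-9]*://')
printf '%s\n' "$result"
rm -rf "$tmp"
```

The sort -un / prefix-stripping step is what removes duplicate hits when a line matches patterns from more than one chunk.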
Limiting factor: CPU
If the CPU is the limiting factor parallelization should be done on
the regexps:
cat regexp.txt | parallel --pipe -L1000 --round-robin --compress \
grep -f - -n bigfile |
sort -un | perl -pe 's/^\d+://'
The command will start one grep per CPU and read bigfile one time per
CPU, but as that is done in parallel, all reads except the first will
be cached in RAM. Depending on the size of regexp.txt it may be faster
to use --block 10m instead of -L1000.
Some storage systems perform better when reading multiple chunks in
parallel. This is true for some RAID systems and for some network file
systems. To parallelize the reading of bigfile:
parallel --pipepart --block 100M -a bigfile -k --compress \
grep -f regexp.txt
This will split bigfile into 100MB chunks and run grep on each of
these chunks. To parallelize both reading of bigfile and regexp.txt
combine the two using --fifo:
parallel --pipepart --block 100M -a bigfile --fifo cat regexp.txt \
\| parallel --pipe -L1000 --round-robin grep -f - {}
If a line matches multiple regexps, the line may be duplicated.
Bigger problem
If the problem is too big to be solved by this, you are probably ready
for Lucene.
| filtering a large file with a large filter |
1,494,413,974,000 |
I have about two hundred sub-directories located within a directory of interest:
$ ls backup
201302
201607
201608
201609
201610
201701
201702
201705
201801
201802
I want to create a 7z archive xyz.7z for each directory xyz:
cd $HOME/backup/
7z a "storage/nas/TBL/compressed_backups/$xyz.7z" "$xyz" -mmt=4
So in the end I will have these archives in storage/nas/TBL/compressed_backups:
201302.7z
201607.7z
201608.7z
201609.7z
201610.7z
201701.7z
201702.7z
201705.7z
201801.7z
201802.7z
Additionally I want to use parallel in order to process five directories at a time. (I have enough computing power for this purpose)
parallel -j5 ::: 7z a "storage/nas/TBL/compressed_backups/$xyz.7z" "$xyz" -mmt=4
How can I wrap this all up together?
|
Use the following approach:
ls backup | parallel -j5 7z a -mmt=4 "storage/nas/TBL/compressed_backups/{}.7z" {}
{} - input line. This replacement string will be replaced by a full line read from the input source.
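Before running the real archiving it can help to preview the generated commands (with parallel itself you would use --dry-run for this). A pure-shell sketch of what parallel will execute per directory; the directory names here are invented:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
mkdir -p "$tmp/backup/201302" "$tmp/backup/201607"

# print the command parallel would run for each sub-directory of backup/
preview=$(cd "$tmp/backup" && for d in */; do
            d=${d%/}
            echo "7z a -mmt=4 storage/nas/TBL/compressed_backups/$d.7z $d"
          done)
printf '%s\n' "$preview"
rm -rf "$tmp"
```

Once the preview looks right, the same command template can be handed to parallel with {} in place of $d.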
| Create separate 7z archives for each directory in the current directory and additionally parallelize through GNU Parallel |
1,494,413,974,000 |
I have a file with many lines, and on each line I have the arguments I want to pass to parallel with a tab separator.
I run this script
cat $hdd_file | grep $ssd | parallel -C '\t' clean_and_destroy
And it works: $hdd_file is the filename, the grep collects the lines whose hdds have a certain $ssd as cache, and then parallel calls a function which destroys their connection.
Now that I made new partitions to the cleaned ssds, I try to call parallel like this:
cat $hdd_file | grep $ssd | parallel -C '\t' create_new_cache :::+ `seq $partitions_per_ssd`
Which should get the arguments from the pipe and pair them with the numbers given, but it does not.
cat $hdd_file | grep $ssd | parallel -C '\t' create_new_cache ::: {} :::+ `seq $partitions_per_ssd`
I also tried this and it still doesn't work. The {} :::+ are passed as arguments for some reason
|
GNU parallel solution:
Sample input.txt (for demonstration):
a b
c d
e f
grep '^[ac]' input.txt will be used to emulate a command (or pipeline) acting as the input source
parallel -C '\t' echo :::: <(grep '^[ac]' input.txt) ::: $(seq 1 3)
The output:
a b 1
a b 2
a b 3
c d 1
c d 2
c d 3
:::: argfiles - treat argfiles as input source. ::: and :::: can be mixed.
To aggregate elements from each input source - add --xapply option:
parallel -C '\t' --xapply echo :::: <(grep '^[ac]' input.txt) ::: $(seq 1 2)
The output:
a b 1
c d 2
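The --xapply pairing can also be reproduced with plain paste, which glues the two input sources together line by line. A parallel-free illustration using the same sample data (tab-separated, as in the question):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
printf 'a\tb\nc\td\ne\tf\n' > "$tmp/input.txt"

# pair each selected line with a sequence number, like --xapply does
paired=$(paste <(grep '^[ac]' "$tmp/input.txt") <(seq 1 2))
printf '%s\n' "$paired"
rm -rf "$tmp"
```

Each output line then carries both the original fields and its paired number, ready to be split back apart by a -C '\t' column separator.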
| gnu parallel pair argument with file input arguments |
1,494,413,974,000 |
I have an input file, names.txt, with the 1 word per line:
apple
abble
aplle
With my bash script I am trying to achieve the following output:
apple and apple
apple and abble
apple and aplle
abble and apple
abble and abble
abble and aplle
aplle and apple
aplle and abble
aplle and aplle
Here is my bash script
#!/usr/bin/env bash
readarray -t seqcol < names.txt
joiner () {
val1=$1
val2=$2
echo "$val1 and $val2"
}
export -f joiner
parallel -j 20 '
line=($(echo {}))
for word in "${line[@]}"; do
joiner "${line}" "${word}"
done
' ::: "${seqcol[@]}"
but it only outputs the following 3 lines, comparing identical elements of the array:
apple and apple
abble and abble
aplle and aplle
I have a script that uses a while read line loop, but it is too slow (my actual data file has about 200k lines). That is why I want to use array elements and gnu parallel at the same time to speed the process up.
I have tried different ways of accessing the array elements within the parallel ' ' command (mainly by modifying the loop for word in "${line[@]}", or by supplying the array to parallel via printf '%s\n' "${seqcol[@]}"), but they either lead to errors or output blank lines.
I would appreciate any help!
|
GNU Parallel can generate all combinations of input sources.
In your case you simply use names.txt twice:
parallel -k echo {1} and {2} :::: names.txt names.txt
Or (if you really have an array):
readarray -t seqcol < names.txt
parallel -kj 20 echo {1} and {2} ::: "${seqcol[@]}" ::: "${seqcol[@]}"
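For comparison, the same Cartesian product in plain bash is just a nested loop over the array; this is exactly what the two-input-source form generates, and it is handy for checking the expected output at small scale:

```shell
#!/usr/bin/env bash
seqcol=(apple abble aplle)

# nested loop = cross product of the array with itself
out=$(for a in "${seqcol[@]}"; do
        for b in "${seqcol[@]}"; do
          echo "$a and $b"
        done
      done)
printf '%s\n' "$out"
```

With 200k lines this serial loop would produce 4×10^10 combinations, which is why handing the job-starting to parallel (or rethinking the algorithm) matters.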
| Iterating over array elements with gnu parallel |
1,494,413,974,000 |
I found a script on GitHub which I've slightly modified to fit the needs of the program I'm trying to run in a queue.
It is not working, however, and I'm not sure why. It never actually echoes the jobs to the queue file.
Here is a link to the GitHub page:
https://gist.github.com/tubaterry/c6ef393a39cfbc82e13b8716c60f7824
Here is the version I modified:
#!/bin/sh
END="END"
true > queue
tail -n+0 -f queue | parallel -j 16 -E "$END"
while read i; do
echo "command [args] > ${i%.inp}.log 2> ${i%.inp}.err" > queue
done < "jobs.txt"
echo "$END" >> queue
echo "Waiting for jobs to complete"
while [ "$(pgrep 'perl /usr/local/bin/parallel' | grep -evc 'grep' | tr -d " ")" -gt "0" ]; do
sleep 1
done
touch killtail
mv killtail queue
rm queue
The only thing I can think of is that one of these steps isn't operating as expected on OpenBSD. I rearranged a step, and everything executes without errors, but it only submits one job. The change was moving tail -n+0 -f queue | parallel -j 16 -E "$END" after the first while loop and changing true > queue to touch queue, since I'm not quite sure what true > queue means.
Any help would be appreciated.
EDIT:
I have a jobs.txt file filled with the path the the input files to the command I plan to run. The files in jobs.txt would be one of the arguments to command and then I output the results of the calculation to a log file and any errors to an error file.
My expectation is that each job will be added to queue and parallel will execute up to 16 jobs, one per core as one of the arguments to command is the utilisation of one core per calculation. This will continue until it reaches the “END” which is signified by the -E argument to parallel.
As written, nothing echoes from jobs.txt to queue. I will try again with a >>
I have questioned quite a few things in the original script. I changed the things I’m sure about but some of the functionality I was very confused by and decided to leave it as is.
One of those things I’m not clear on is tail -n+0
I have no idea what that is doing
EDIT2:
${PROGRAM} ${JOB}.inp ${NCPU} > ${JOB}.log 2> ${JOB}.err
${JOB} is a reference to anywhere between 1 and ∞ calculations depending on how many I need to do at a given time. Currently, jobs.txt has 374 individual tests that I need to run. ${PROGRAM} is the software that takes the parameters from ${JOB}.inp and calculates accordingly. ${NCPU} is how many cores I want to use per job; currently I am trying to run each job in serial on a 16-core processor.
The goal is to queue as many calculations as I need to without ever typing that full command in. I just want to generate a list using find calculations -name '*.inp' -print > jobs.txt and then run a script such as SerialRun.sh or ParallelRun.sh and let it crank out results. The jobs may be nested in many different directories depending on how different users choose to organise their work and this method using find allows me to very quickly submit a job and generate results to the correct paths. As each calculation finishes, I can then analyse the data while the system continues to run through the tests.
The script very well may be over complicated. I was looking for a job queue system and found nqs which became the GNU Parallel project. I cannot find many examples of queueing jobs with parallel but came across that script on GitHub and decided to give it a shot. I have quite a few issues with how it is written but I don't understand parallel well enough to question it.
I figured it should be a bit simpler than this to build a queue for it.
EDIT3:
Maybe the correct way to go about this is to just do:
while read i; do
command "$i" > "${i%.inp}".log 2> "${i%.inp}".err | parallel -j 16
done < "jobs.txt"
Would that work?
|
You don't need this complex script; parallel can do what you want by itself. Just remove the .inp extension from your list of files using sed or any other tool of your choice, and feed the base name to parallel like this:
sed 's/\.inp//' jobs.txt | parallel -j 16 "${PROGRAM} {}.inp > {}.log 2> {}.err"
The {} notation is part of parallel's basic functionality, described in man parallel as follows:
{} Input line.
This replacement string will be replaced by a full line read from the input source. The input source is normally stdin (standard input), but can also
be given with --arg-file, :::, or ::::.
So it is simply replaced by whatever you pass to parallel, in this case the list of file names with their extension removed by sed.
Alternatively, you can use {.} which is:
{.} Input line without extension.
This replacement string will be replaced by the input with the
extension removed. If the input line contains . after the last /,
the last . until the end of the string will be removed and {.}
will be replaced with the remaining. E.g. foo.jpg becomes foo,
subdir/foo.jpg becomes subdir/foo, sub.dir/foo.jpg becomes
sub.dir/foo, sub.dir/bar remains sub.dir/bar. If the input line
does not contain . it will remain unchanged.
The replacement string {.} can be changed with --extensionreplace
With this, you don't even need the jobs.txt file. If all of your files are in the same directory, you can do:
parallel -j 16 "${PROGRAM} {.}.inp > {.}.log 2> {.}.err" ::: *.inp
Or, to make it recursively descend into subdirectories, assuming you are using bash, you can do:
shopt -s globstar
parallel -j 16 "${PROGRAM} {.}.inp > {.}.log 2> {.}.err" ::: **/*.inp
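To see what the extension-stripping step feeds to parallel, you can run the sed part on its own. The sample paths below are invented; note that anchoring the pattern with $ avoids touching .inp if it ever appears elsewhere in a path:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
printf 'calculations/run1.inp\ncalculations/sub/run2.inp\n' > "$tmp/jobs.txt"

# strip the trailing .inp extension from every job path
bases=$(sed 's/\.inp$//' "$tmp/jobs.txt")
printf '%s\n' "$bases"
rm -rf "$tmp"
```

Each resulting base name then becomes {} in the parallel command, producing base.log and base.err next to base.inp.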
| A GNU Parallel Job Queue Script |
1,494,413,974,000 |
I need to use parallel and set the rate limit per second, because I need to query an API that has a "5 per second" rate limit.
Must I combine -n5 and --timeout 1?
Thank you
|
You are looking for --delay 0.2.
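With --delay 0.2, parallel waits 0.2 seconds between starting jobs, which caps the start rate at 5 per second. For comparison, a plain-shell rate limiter does the same thing serially; the URL and query_api below are placeholders, not a real API:

```shell
#!/usr/bin/env bash
# With parallel (URL is a placeholder):
#   parallel --delay 0.2 'curl -s https://api.example.com/items/{}' ::: 1 2 3 4 5
# Plain-shell equivalent; query_api stands in for the real call.
query_api() { echo "result for $1"; }

results=$(for id in 1 2 3 4 5; do
            query_api "$id"
            sleep 0.2   # at most 5 calls per second
          done)
printf '%s\n' "$results"
```

Note that --delay limits the job *start* rate; if individual requests can be slow and the API counts in-flight requests, you may also want -j to bound concurrency.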
| gnu parallel: how to set the limit per second |
1,494,413,974,000 |
I want to run four processes in parallel, but not spawn any new jobs until all of these four have finished.
EDIT: My command looks like this:
find . -name "*.log" | parallel -j 4 './process.sh {}'
|
That very question has recently been put on the mailing list:
https://lists.gnu.org/archive/html/parallel/2019-06/msg00003.html
The short answer is: As GNU Parallel is now, it cannot do this. A (crappy) workaround is here:
https://lists.gnu.org/archive/html/parallel/2019-06/msg00008.html
in-line:
#!/bin/bash
parpar() {
. `which env_parallel.bash`
env_parallel --session
inner() {
parallel "${command[@]}"
}
export -f inner
command=()
while [[ $# -gt 0 ]]; do
if [[ $1 == ",,," ]] ; then
break
fi
command+=("$1")
shift
done
printf "%s\n" "$@" |
env_parallel --pipe --recend ',,,\n' --rrs -j1 -N1 inner
}
# Example (,,, = wait for completion)
parpar -v sleep {}\; echo {} ,,, 3.9 4 4.1 ,,, 1 2 3 ,,, 0.2 0.3 0.1
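If batches of exactly four are acceptable, a plain-bash workaround needs no parallel at all: start four background jobs and wait between batches. In this sketch the touch stands in for ./process.sh, and the file names are invented:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
files=(a b c d e f g h i)   # stand-ins for the *.log files
batch=4

for ((i = 0; i < ${#files[@]}; i += batch)); do
  for f in "${files[@]:i:batch}"; do
    ( sleep 0.1; touch "$tmp/$f.done" ) &   # one background "job" per file
  done
  wait   # block until the whole batch of four has finished
done

n=$(ls "$tmp" | wc -l)
echo "$n jobs completed"
rm -rf "$tmp"
```

The trade-off versus parallel is that a slow job stalls its whole batch, which is precisely the behaviour the question asks for.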
| Is there a way to tell GNU parallel to hold off spawning new jobs until all jobs in a batch has finished? |
1,494,413,974,000 |
I have a list of java reserved words, first letter capitalised.
$ tail -n5 ~/reservedjava.txt
Break
While
True
False
Null
I'm trying to look through all my java source code to find methods that look like getWhile().
cat ~/reservedjava.txt | parallel 'ag "get{}\(\)$"'
This shows me nothing. Now, I know that I have a method getBreak():
$ ag "getBreak\(\)$"
src/main/java/Foo.java
154: public Break getBreak()
Here's what a dry run looks like:
$ cat ~/reservedjava.txt | parallel --dry-run 'ag "get{}\(\)$"' | tail -n5
ag "getBreak\(\)$"
ag "getWhile\(\)$"
ag "getTrue\(\)$"
ag "getFalse\(\)$"
ag "getNull\(\)$"
I'm using gnu parallel (v. 20130722) and the silver searcher (ag) (v. 0.18.1). If it makes a difference, I'm on Fedora 19, but have compiled these utilities myself. I get the same result with ack (v. 2.12).
|
cat ~/reservedjava.txt | parallel 'ag "get{}\(\)$"'
This doesn't work because ag wants a path argument, i.e. where to search.
This works, recursively search starting at the current directory:
cat ~/reservedjava.txt | parallel 'ag "get{}\(\)$" ./'
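A quick way to sanity-check the patterns without parallel or ag is a plain grep -rn loop over the word list; in grep's default BRE syntax the parentheses are literal, so no escaping is needed. The file contents below are invented for the demonstration:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
printf 'public Break getBreak()\npublic void other()\n' > "$tmp/src/Foo.java"
printf 'Break\nWhile\n' > "$tmp/reservedjava.txt"

# search the tree for getWord() at end of line, one word at a time
hits=$(while read -r w; do
         grep -rn "get${w}()$" "$tmp/src" || true
       done < "$tmp/reservedjava.txt")
printf '%s\n' "$hits"
rm -rf "$tmp"
```

Once this finds the expected matches, swapping the grep for ag (with a path argument) and the loop for parallel gives the original one-liner back.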
| No output using parallel in tandem with ag or ack |
1,494,413,974,000 |
Suppose I have a file like this:
COLUMN
1
2
3
4
I want to process it with GNU parallel while skipping the first line, i.e. the header. I tried this:
parallel -a test.txt -k --pipepart --will-cite --skip-first-line cat
However, --skip-first-line is not working as I expect:
parallel -a test.txt -k --pipepart --will-cite --skip-first-line cat
COLUMN
1
2
3
4
I expected this:
1
2
3
4
Is it possible to skip the first line using pipepart in parallel?
|
I found a solution using the replacement string {#}, aka the sequence number. The header is always in the chunk with sequence number 1, hence we can treat that chunk specially when we parse it. For example with a script:
#!/bin/bash
_test()
{
if [[ "$1" == 1 ]]; then
:
else
cat
fi
}
export -f _test
parallel -a demo.txt -k --pipepart --will-cite _test {#}
Running this script yields the expected results:
1
2
3
4
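If exact --pipepart behaviour is not essential, a simpler route is to strip the header before parallel ever sees the data, e.g. tail -n +2 test.txt | parallel --pipe cat (note this reads through a pipe, so you lose --pipepart's disk-seeking speed advantage). The header removal itself is just:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
printf 'COLUMN\n1\n2\n3\n4\n' > "$tmp/demo.txt"

# drop the first line (the header) before feeding the data onward
body=$(tail -n +2 "$tmp/demo.txt")
printf '%s\n' "$body"
rm -rf "$tmp"
```

tail -n +2 means "start printing from line 2", which is the standard idiom for skipping one header line.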
| --skip-first-line in GNU parallel not working with --pipepart? |
1,494,413,974,000 |
I am trying to run code across a network of external nodes.
I have access to the 'main' node through ssh, and can execute a parallel script that divides the jobs over the cluster of 5 available nodes.
I have a bash script that contains the parallel command, among other necessary components.
The final command I am using looks similar to this.
parallel -S node0,node1,node2,node3,node4 --ssh-delay 0.25 --delay 0.5 'run {1} {2}' ::: foo ::: bar
However, the cluster that I am working on is known for hanging up relatively frequently, e.g., after ~5 minutes of idle time I get a Broken pipe error and the ssh connection is broken. That is why I execute the above bash script (which contains the parallel line) using nohup, which should keep the ssh connection alive.
But because my actual code takes considerable computation time, I get errors related to broken ssh-connections:
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory^M
Permission denied, please try again.
even though connecting from (e.g. node0 to node1), the boss node to the other nodes does not require a password.
This results in parallel complaining that there are no more job slots available and a warning that there are no logins possible:
parallel: Warning: There are no job slots available. Increase --jobs.
parallel: Warning: Using only -1 connections to avoid race conditions.
parallel: Warning: ssh to node0 only allows for 0 simultaneous logins.
parallel: Warning: You may raise this by changing /etc/ssh/sshd_config: MaxStartups and MaxSessions on node0.
parallel: Warning: You can also try --sshdelay 0.1
I believe that there is something finicky going on that prematurely closes the ssh-connection to the other nodes within the cluster, possibly the result of closing the connection to the boss node, node0.
I have tried to establish a connection using ssh-agent, ssh-copy-id, and sshpass as per the GNU parallel tutorial, as well as setting the MaxStartups and MaxSessions parameters in /etc/ssh/sshd_config, but to no avail.
Moreover, if I reduce the computation time of the code, the parallel command is perfectly executed and works as expected.
Is there something I can do to ensure that the ssh-connection does not break when the program executed by parallel takes considerable time, or is there something else going on?
|
To keep the connection alive you can often use ServerAliveInterval. It can be set in .ssh/config.
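For example, a client-side ~/.ssh/config entry that sends a keep-alive probe every 30 seconds and gives up after 4 missed replies (the Host pattern is an assumption; adjust it to your node names):

```
Host node*
    ServerAliveInterval 30
    ServerAliveCountMax 4
```

The server-side equivalents are ClientAliveInterval and ClientAliveCountMax in sshd_config.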
| ssh_askpass Permission denied, when using GNU parallel, even when using nohup |
1,494,413,974,000 |
This is my case scenario:
luis@Balanceador:~$ echo ${array[@]}
a b
luis@Balanceador:~$ echo ${array[1]}
a
luis@Balanceador:~$ echo ${array[2]}
b
luis@Balanceador:~$ parallel echo ${array[]} ::: 1 2
-bash: ${array[]}: bad substitution
luis@Balanceador:~$ parallel echo ${array[{}]} ::: 1 2
-bash: {}: syntax error: operand expected (error token is "{}")
luis@Balanceador:~$ parallel echo ${array[{1}]} ::: 1 2
-bash: {1}: syntax error: operand expected (error token is "{1}")
luis@Balanceador:~$ parallel echo ${array[{#}]} ::: 1 2
-bash: {#}: syntax error: operand expected (error token is "{#}")
How can I reference the individuals elements of some array on GNU Parallel?
Sure this is an easy one, but I have not been able to find it on the manual.
This question arose while answering another one, but after asking it, I decided they were two different questions.
|
While it looks easy, it is really very hard.
Jobs started by GNU Parallel are not started inside the same shell as GNU Parallel is run from. So it looks like this:
bash[1]---perl(running parallel)---bash[2]
$array is defined in bash[1] but you want to use it in bash[2]. It is impossible to do completely (i.e. if you want write access to the array), but we can make a copy of $array available:
env_parallel 'echo ${array[{}]}' ::: 1 2
env_parallel (introduced in GNU Parallel 20140822) copies the entire environment of bash[1] to bash[2] (so your environment has to be kinda small) where it is initiated before the job is run.
env_parallel is quite unstable, so if you find bugs, please report them.
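If you only need the element values (not index-based access), you can sidestep the problem entirely by expanding the array in the parent shell, so the jobs receive the values as plain arguments: parallel echo {} ::: "${array[@]}". A pure-bash sketch of what the child shells then see:

```shell
#!/usr/bin/env bash
array=(a b)

# each job gets one value as $1; no array needs to exist in the child shell
out=$(for v in "${array[@]}"; do
        bash -c 'echo "$1"' _ "$v"
      done)
printf '%s\n' "$out"
```

This avoids copying the environment altogether, at the cost of losing the index-to-value mapping inside the job.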
| GNU Parallel: How can I reference array elements? |
1,494,413,974,000 |
I'm compiling a huge list of commands (all doing the same thing) I want executed, but as it takes a long time to compile that list, I would like execution to begin before I'm done (execution of each command typically takes longer than creating another, so there's no real risk of the list running dry).
The normal way to execute a list of commands, is to write a shell script listing the commands, but when I start execution of a script I can't add to it anymore.
The way I've found so far is to put the commands in command.list and run parallel --jobs 1 --line-buffer :::: command.list, but as it involves using parallel (I'm using GNU parallel; I don't know if it will work with the parallel from moreutils) for non-parallel execution of things, I think it's a bit of an abuse of parallel.
Is there a simpler way of doing it? Something that tracks which commands have been executed in case I screw up something and the list does run out, would be nice.
|
From: https://www.gnu.org/software/parallel/man.html#example-gnu-parallel-as-queue-system-batch-manager
true >jobqueue; tail -n+0 -f jobqueue | parallel --joblog my.log &
echo my_command my_arg >> jobqueue
my_job_generator >> jobqueue
This will give you a record (my.log) of which jobs have completed.
GNU Parallel version 20220222 will only output job n (and add it to my.log) when job n+1 has been added. If that is unacceptable, just add another dummy-job:
echo true dummy >> jobqueue
The behaviour is slightly different in older versions.
| Adding to while executing a list of commands |
1,494,413,974,000 |
I have some files
Joapira___BERLINA_DEL_HIERRO.mp4
Joapira___EL_BAILE_DEL_VIVO.mp4
Joapira___EL_CONDE_CABRA.mp4
Joapira___FLAIRE.mp4
Joapira___MAZULKA_DEL_HIERRO.mp4
Joapira___MEDA_A_MANOLITO_DIAZ_ARTESANO_TALLISTA.mp4
that I want to convert to some other formats with ffmpeg and GNU parallel. For example to convert them to flac I do
parallel --bar ffmpeg -i "{}" -map_metadata 0 "{/.}.flac" ::: *
or to convert them to mp3 I do
parallel --bar ffmpeg -i "{}" -vn -ar 44100 -ab 128k -map_metadata 0 "{/.}.mp3" ::: $@
but the process continues forever and the first file is always missing. Why?
Info
I am on Fedora 22 using
GNU parallel 20160222
and
ffmpeg version N-80953-gd4c8e93-static http://johnvansickle.com/ffmpeg/
Update
Fascinating, I tried it with ffmpeg version 2.6.8 (comes with Fedora) and it works!! And even with the most recent static build from git it does not. :-(
Update 2
When I run ps auxwww and search for ffmpeg I see all the jobs with the state Rl, except for the command of the file that is missing, which has the state T.
GNU parallel has the state S+, but sometimes during the processing of the working files it changes to R+.
The man page of ps says the following about the states:
D uninterruptible sleep (usually IO)
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped by job control signal
t stopped by debugger during the tracing
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not reaped by its parent
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group
Maybe this helps to understand the problem.
|
The solution, as suggested by @OleTange in a comment, is to update to a newer version of parallel, i.e. GNU parallel 20161122. Everything works again.
And it is better to protect the commands from shell interaction with single quotes, i.e.:
parallel --bar 'ffmpeg -i {} -map_metadata 0 {/.}.flac' ::: *
and
parallel --bar 'ffmpeg -i {} -vn -ar 44100 -ab 128k -map_metadata 0 {/.}.mp3' ::: $@
| gnu parallel with ffmpeg does not process first file |
1,494,413,974,000 |
This command works fine for dir with "regular" names as dir1 mydir my-dir, etc
ls | parallel 'echo -n {}" "; ls {}|wc -l'
gives me the number of files for each dir,
but for dirs with white space like "my dir" or my long dir name it doesn't work and gives an error.
How to quote/escape the white spaces?
|
Assuming that you only want to list the number of regular files in the sub-directories from where you execute your cmd, here is a one-liner that will do that for you:
$ find . -maxdepth 1 -type d ! -name "." -print0 2>/dev/null \
| xargs -0 -I {} sh -c 'printf "%20s: %d\n" "{}" "$(find "{}" -maxdepth 1 -type f 2>/dev/null| wc -l)"'
Example output:
./Maildir: 0
./.dvisvgm: 0
./.pyenv: 5
./.ipython: 0
./.ipynb_checkpoints: 3
./.tmux: 1
./.virtualenvs: 12
./seaborn-data: 2
./.local: 2
./bgpix: 12
./.vim: 7
...
I added 2>/dev/null in each find block only to avoid some unwanted file access issues on the platform I used to run tests. You may do away with it if you foresee no such file permission issues when stat'ing your files as part of your find cmds.
I also suppress all output concerning $PWD (your present working directory, denoted .) in keeping with my assumption stated above, which was that you are only interested in counting regular files in current 1st-level subdirectories.
to count regular files in your whole sub-directory tree, starting at $PWD, just omit the -maxdepth 1 global option in the first find block above (but keep it in the 2nd one).
To better highlight the similarity with the solution relying on parallel (below), the above can be rewritten:
$ xargs -0 -I {} sh -c 'printf "%20s: %d\n" "{}" "$(find "{}" -maxdepth 1 -type f 2>/dev/null| wc -l)"' \
< <(find . -maxdepth 1 -type d ! -name "." -print0 2>/dev/null)
Relying on parallel instead of xargs, as in the above, requires escaping some quotes as follows (output is exactly as before):
$ parallel -0 -I {} \
'sh -c "printf \"%20s: %d\n\" \"{}\" \"$(find {} -maxdepth 1 -type f 2>/dev/null | wc -l)\""' \
:::: < <(find -maxdepth 1 -type d ! -name "." -print0 2>/dev/null)
xargs and parallel use the same two arguments -0 -I {} ,
the shell command to execute is the Bourne shell between single quotes, 'sh -c "printf ..."', and
the input to parallelize, introduced by ::::, is the process substitution input file <(...) which contains the output of the "find all subdirectories at first sublevel starting at $PWD" cmd.
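The underlying quoting issue can also be checked in plain bash, where quoting every expansion makes spaces safe. A small self-contained sketch that counts regular files per sub-directory, including one with a space in its name (directory and file names are invented):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
cd "$tmp"
mkdir "my dir" plain
touch "my dir/a.txt" "my dir/b.txt" plain/c.txt

# every expansion is quoted, so "my dir" survives intact
out=$(for d in */; do
        d=${d%/}
        printf '%s: %d\n' "$d" "$(find "$d" -maxdepth 1 -type f | wc -l)"
      done)
printf '%s\n' "$out"
```

The same principle carries over to parallel: quote {} (or use -0 with NUL-terminated input) so that a directory name is always passed as a single argument.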
| Parallel and ls with white spaces |
1,494,413,974,000 |
I am trying to execute a simple 'parallel' command
parallel -S server1,server2,server3 echo "Number {}: Running on \`hostname\`" ::: 1 2 3
It asks me for passwords to the three servers, but then nothing happens. Usual ssh to these servers works fine.
Once I logged in to one of the servers, system warned me about failed login to 'notty'.
How can I achieve a correct execution?
|
"It asks me for passwords to the three servers"
Looking at the documentation for GNU Parallel:
"The sshlogin must not require a password"
Since you are using the -S (--sshlogin) flag this is a problem.
So you get asked for a password, this means GNU Parallel will not run.
You need to set up ssh keys to ensure you can have password-less connections.
You can follow the steps in this link to set up keys.
You do say in your post that normal ssh works fine. So you maybe have ssh set up to ask for password. You can use your existing keys and add them to the authorized_keys file if this is the case.
| Using 'parallel' to execute command on remote hosts - nothing is returned, failed logins |
1,494,413,974,000 |
I am learning GNU parallel and I wonder if this makes sense:
bash:
IFS=" "
while read field || [ -n "$field" ]; do
targets_array+=("$field")
done </location/targets
parallel -Dall bash myScript.sh ::: $targets_array
I wonder if it makes sense because my output seems to stop at some point...
I have 30,000 targets that I scan with myScript.sh,
then I update info about them in the DB, also using myScript.sh.
I tried some options, like writing to a logfile, but I could not make them work.
From a performance point of view, does it make sense to run one target at a time?
|
$targets_array is equivalent to ${targets_array[0]}. To get all elements you need ${targets_array[@]}. And you should quote right.
So it could be:
parallel … ::: "${targets_array[@]}" # but don't
parallel is an external command. If the array is large enough then you will hit argument list too long. Use this instead:
printf '%s\n' "${targets_array[@]}" | parallel … # still not the best
It will work better because in Bash printf is a builtin and therefore everything before | is handled internally by Bash.
I notice you didn't use read -r (I assume it was an educated decision), so a backslash-newline pair (if any) in /location/targets can result in a newline character actually in some array element. Therefore separating with newlines while passing data to parallel may be a bug. Separate with null bytes:
printf '%s\0' "${targets_array[@]}" | parallel -0 …
Hopefully /location/targets does not contain null bytes. If it does then they won't get to the array in the first place.
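The difference between the two separators is easy to demonstrate: with an element containing an embedded newline, the newline-separated stream mis-counts records while the null-separated one does not. (GNU tr is assumed here for counting the NUL bytes.)

```shell
#!/usr/bin/env bash
arr=('one' $'two\nlines' 'three')

# newline-separated: the embedded newline splits one element into two records
n_newline=$(printf '%s\n' "${arr[@]}" | wc -l)

# null-separated: still exactly three records
n_null=$(printf '%s\0' "${arr[@]}" | tr -dc '\0' | wc -c)

echo "$n_newline $n_null"
```

This is exactly why parallel -0 (read NUL-terminated arguments) pairs with printf '%s\0'.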
| GNU parallel bash |
1,516,741,576,000 |
I have a db of files to download containing fields for filename and download_url with the format:
"foo-1.23.4.jar", "http://example.com/files/12345678/download"
"bar-5.67.8.jar", "http://example.com/files/9876543/download"
"baz-3.31.jar", "http://example.com/files/42424242/download"
The URLs all tend to be the same except for the number. I tried exporting the list of URLs and downloading these with wget -i, but every file gets named download, and I have no way to tell them apart.
Normally I would use the -O parameter to specify the correct output file, but I'm not sure how to combine that with -i
How can I format the input file and command line to wget -i such that each line can specify both a download url and an output filename?
|
Concurrently with GNU parallel:
parallel -a files_list.txt -j0 -C ", *" 'wget -q -O {1} {2}'
-a input-file - use input-file as input source
-j N - run up to N jobs in parallel. 0 means as many as possible
-C regex - column separator. The input will be treated as a table with regexp separating the columns. The n'th column can be accessed using {n} or {n.}. E.g. {3} is the 3rd column
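A parallel-free sketch of the same parsing in plain bash, stripping the quotes and padding spaces and echoing the wget command instead of running it (note: the space-stripping assumes the filenames themselves contain no spaces):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
cat > "$tmp/files_list.txt" <<'EOF'
"foo-1.23.4.jar", "http://example.com/files/12345678/download"
EOF

cmds=$(while IFS=, read -r name url; do
         name=${name//\"/}; name=${name// /}   # drop quotes and spaces
         url=${url//\"/};  url=${url// /}
         echo "wget -q -O $name $url"          # echo for illustration only
       done < "$tmp/files_list.txt")
printf '%s\n' "$cmds"
rm -rf "$tmp"
```

Once the echoed commands look right, dropping the echo runs the downloads serially, and switching to the parallel one-liner runs them concurrently.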
| wget using -i with -O |
1,516,741,576,000 |
I am trying to run concurrent curl, but it easily reports "Could not resolve host". To run curl in parallel, I use "parallel".
parallel :::: ./a.sh ./a.sh
from api server
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16 0 16 0 0 13781 0 --:--:-- --:--:-- --:--:-- 16000
from api server
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16 0 16 0 0 14925 0 --:--:-- --:--:-- --:--:-- 16000
from api server
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16 0 16 0 0 15009 0 --:--:-- --:--:-- --:--:-- 16000
from api server
from api server
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16 0 16 0 0 14324 0 --:--:-- --:--:-- --:--:-- 16000
curl: (6) Could not resolve host: curl
100 16 0 16 0 0 44198 0 --:--:-- --:--:-- --:--:-- 44198
a.sh
#!/bin/bash
curl http://127.0.0.1:81/a.php
a.php
<?php
echo "from some server\n";
How could resolving the host fail for just 4 concurrent curl processes? I simulate this because the original problem I encounter is described in php curl localhost is slow when making concurrent requests. I really don't think it's an open file limit issue since there are just 4 concurrent curl processes. Can anyone explain why this occurs? By the way, the OS is Ubuntu 16.04.
Well, the correct way to use parallel was suggested by rudimeier. After using yes | head -n4 | parallel ./a.sh the issue no longer exists. Still, my original issue remains.
|
You should try using --dryrun when you are confused by what GNU Parallel runs:
$ parallel --dryrun :::: ./a.sh ./a.sh
#!/bin/bash #!/bin/bash
#!/bin/bash
#!/bin/bash curl http://127.0.0.1:81/a.php
#!/bin/bash
curl http://127.0.0.1:81/a.php
curl http://127.0.0.1:81/a.php #!/bin/bash
curl http://127.0.0.1:81/a.php
curl http://127.0.0.1:81/a.php curl http://127.0.0.1:81/a.php
This is clearly not what you intended. So what is going on?
If you analyse the output you see each line from the first file is combined with each line from the second file. It is even more obvious with:
file-b:
1
2
3
$ parallel --dryrun :::: b b
1 1
1 2
1 3
2 1
2 2
2 3
3 1
3 2
3 3
The mistake is in ::::. :::: makes GNU Parallel read the content of the file as arguments. And since you gave 2 files, it makes the cross-product of those.
What you wanted was :::
$ parallel --dryrun ::: ./a.sh ./a.sh
./a.sh
./a.sh
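The cross product that :::: produces can be reproduced with a plain nested read loop, which makes it clear why two 3-line files yield 9 argument lines (a sketch of the pairing logic only, using a throwaway temp file; no parallel required):

```shell
# Simulate the argument cross product of: parallel :::: b b
# Every line of the first file is combined with every line of the second.
printf '1\n2\n3\n' > /tmp/b_demo_$$
cross=$(
  while read -r x; do
    while read -r y; do
      echo "$x $y"
    done < /tmp/b_demo_$$
  done < /tmp/b_demo_$$
)
rm -f /tmp/b_demo_$$
echo "$cross"
```

parallel would then additionally run one job per produced line.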
| concurrent curl could not resolve host |
1,516,741,576,000 |
I am taking two lists having filenames with paths and using gnu-parallel to process the files from the two lists.
However, the command is able to use only files from the first list and not the second list, when I check my output. I have tried various options for this, e.g. giving the filetype in --readFilesIn (which is where the error is).
reads_list=/comb_fastq/fq1.list
reads_list2=/comb_fastq/fq2.list
for fastq in `cat $reads_list`;do
rsync -av $fastq $TMPDIR/input/
done
for fastq in `cat $reads_list2`;do
rsync -av $fastq $TMPDIR/input2/
done
parallel -j $NSLOTS --xapply \
"STAR \
--genomeDir $TMPDIR/reference_genome \
--genomeLoad LoadAndKeep \
--runThreadN 4 \
--readFilesIn ../input/{1} ../input2/{1}
|
You are not telling GNU Parallel $reads_list and $reads_list2. So I am puzzled how you would expect GNU Parallel to guess that it should use these.
By rsyncing in parallel as we go (instead of rsyncing everything before running the first job) it might be faster, too. My guess is that this is enough:
parallel -j $NSLOTS --xapply \
"rsync {1} $TMPDIR/input/{1};\
rsync {2} $TMPDIR/input2/{2};\
STAR \
--genomeDir $TMPDIR/reference_genome \
--genomeLoad LoadAndKeep \
--runThreadN 4 \
--readFilesIn ../input/{1} ../input2/{2}" :::: $reads_list $reads_list2
Consider walking through the tutorial http://www.gnu.org/software/parallel/parallel_tutorial.html which covers this and more. Your command line will love you for it.
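The pairing that --xapply performs - line 1 of the first list with line 1 of the second, and so on - is the same pairing paste does, which is a quick way to preview the {1}/{2} combinations before running the real command (the file names below are made up for illustration):

```shell
# Preview how --xapply / --link pairs two argument files.
fq1=$(mktemp); fq2=$(mktemp)
printf 'a_R1.fq\nb_R1.fq\n' > "$fq1"
printf 'a_R2.fq\nb_R2.fq\n' > "$fq2"
pairs=$(paste -d' ' "$fq1" "$fq2")
rm -f "$fq1" "$fq2"
echo "$pairs"
```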
| Reading two lists containing filenames |
1,516,741,576,000 |
I have the following script which gets the absolute paths of some directories and pipes them into GNU parallel for zipping. I keep getting a signal 13 error and I'm not sure why.
find $directory -maxdepth 1 | \
grep -v "^${directory}$" | \
xargs realpath | \
parallel -j 4 zip -r ${new_directory}/{/.}.zip {}
The error is terminated with signal 13; aborting. Sometimes the error is not raised but no zip file is present in the output, new_directory.
Any help would be appreciated.
|
Before getting to the actual failure that you're having, there are few things you need to fix in your command to handle some corner cases
find "$directory" -maxdepth 1 -mindepth 1 -print0 | \
xargs -0 realpath | \
parallel -j 4 zip -r "$new_directory"/{/.}.zip {}
The changes in find are:
-mindepth 1 - this will exclude your top directory, so you won't need your grep -v command.
-print0 - This will solve a possible problem in case any of your files include spaces or other special characters. All the file names will be delimited by the null character instead of a newline.
That's why you also need to add the -0 to your xargs command, so it would read the input delimited by the null characters.
Further troubleshooting
In case it doesn't solve your issue, you'll need to debug it further.
First, remove the parallel command altogether and check if the input for this command looks as expected.
If it seems ok, add verbosity to the parallel command to capture exactly what it's doing:
parallel -t -j 4 zip -r ${new_directory}/{/.}.zip {} 2> parallel.log
Then you'll have the list of commands it's running in the parallel.log file. Check if the zip commands seem to be generated correctly.
If you still don't see anything unusual at the list of zip commands in the log file, try the commands in the file:
bash -x parallel.log
At some point during the process you'll see at what stage you receive the error.
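The -print0/-0 point is easy to demonstrate: with a file name containing a space, null-delimiting keeps the name as one argument, while xargs' default whitespace splitting breaks it in two (a minimal check in a temp directory):

```shell
# One file name with a space: -print0 | xargs -0 keeps it whole.
d=$(mktemp -d)
touch "$d/plain.txt" "$d/with space.txt"
safe=$(find "$d" -mindepth 1 -print0 | xargs -0 -n1 printf '%s\n' | wc -l)
naive=$(find "$d" -mindepth 1 | xargs -n1 printf '%s\n' | wc -l)
rm -rf "$d"
echo "null-delimited: $safe arguments, whitespace-split: $naive arguments"
```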
| Piping xargs into GNU parallel |
1,516,741,576,000 |
I need to run an executable a large number of times, each time with two command line arguments. I used to use xargs for this purpose, but lately I've been made aware of the existence of GNU parallel, which in principle seems like a better tool (more features, more up-to-date, more extensive documentation, etc.).
Also, a strong selling point for me was the claim that it can be used as "a drop in replacement for xargs" (https://www.gnu.org/software/parallel/history.html). However, I'm having a bit of trouble with that last point.
Say that I have a text file, args.txt, with several lines, where each line contains two numbers separated by a space, for example:
1 2
7 9
11 13
I want to run my program, run, once for each line (i.e., once for each pair of arguments). With xargs I would do
cat args.txt | xargs -n2 run
where -n2 indicates that xargs should pass 2 arguments to run at each invocation. xargs then interprets each number as one argument, so each line of args.txt is interpreted as two arguments.
However, when I try using parallel as a drop-in replacement for xargs in the case above, I get different behaviour.
To illustrate, I will use the following small python script in place of my program run:
printer.py:
import sys
print([x for x in sys.argv[1:]])
Now, with xargs I get:
> cat args.txt | xargs -n2 python printer.py
['1', '2']
['7', '9']
['11', '13']
while with parallel I get
> cat args.txt | parallel -n2 python printer.py
['1 2', '7 9']
['11 13']
So while xargs calls the python script with the individual (space-separated) numbers as arguments, parallel interprets instead each line as a single argument, meaning that for example at the first call, the first argument is "1 2" instead of just "1".
I'm a little bit confused by this, as I had expected parallel to work as a drop-in replacement for xargs, but apparently it's a bit more subtle than that. I suppose my question is how I should use parallel to achieve the same thing that I'm doing with xargs, but I'm also just curious about why there is a difference in behaviour here, and if it's intentional.
|
You have hit one of the few incompatibilities between xargs and parallel which is by design.
GNU Parallel will make sure the input is quoted as a single argument, whereas xargs will not. It was one of the original driving forces for writing the first versions of GNU Parallel.
$ echo '9" nails in 10" boxes' | xargs echo
9 nails in 10 boxes
$ echo '9" nails in 10" boxes' | parallel echo
9" nails in 10" boxes
You can, however, force GNU Parallel not to quote the input:
cat args.txt | parallel python printer.py {=uq=}
This will take one line from args.txt and insert it in the command without quoting it.
(Version 20190722 or later).
Another option is to split the columns on a single space (as mentioned in the comments):
cat args.txt | parallel --colsep ' ' python printer.py
Or white space:
cat args.txt | parallel --colsep '\s+' python printer.py
(Version 20100822 or later).
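The xargs half of the comparison can be checked directly: -n2 splits on any whitespace and regroups two tokens per invocation, ignoring the original line boundaries:

```shell
# xargs -n2 regroups whitespace-separated tokens, two per command.
out=$(printf '1 2\n7 9\n11 13\n' | xargs -n2 echo)
echo "$out"
```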
| Difference between xargs and GNU parallel |
1,516,741,576,000 |
I want to use ffmpeg across two PCs; I know parallel can do this.
I use this cli
parallel --trc {.}.mkv -S virtual,: 'ffmpeg -y -i {} -vf yadif,crop=720:550:0:12,scale=640:352 -c:v libx264 -c:a aac -b:v 1500k -b:a 128k -metadata language=eng -metadata title="example" -aspect 16:9 {.}.mkv' ::: example.mpg
It transfers the file example.mpg to virtual (OK) and then runs the command only on the remote PC (virtual)! But shouldn't the line -S virtual,: tell parallel to run on the local PC as well?
My purpose is to use GNU parallel to distribute the load/CPU use over two or more PCs, for example 50% of the load on the local PC and 50% on the remote one. Is that possible?
Or I need something more complex like the old good openmosix cluster?
|
-S virtual,: will indeed make GNU Parallel spawn jobs both on the local machine (called :) and the server (virtual).
In your example you only give one mpg-file as input. So given that, GNU Parallel will only run one job. In other words: GNU Parallel will not magically split your single mpg-file into multiple files and process those.
If you want to use all your cores, you should at least give GNU Parallel the same number of files as inputs.
| Why my gnu parallel with ffmpeg execute ffmpeg only on remote host? |
1,516,741,576,000 |
Let's say that I have 10 GBs of RAM and unlimited swap.
I want to run 10 jobs in parallel (gnu parallel is an option but not the only one necessarily). These jobs progressively need more and more memory but they start small. These are CPU hungry jobs, each running at 1 core.
For example, assume that each job runs for 10 hours and starts at 500MB of memory and when it finishes it needs 2GBs, memory increasing linearly. So, if we assume that they increase linearly, at 6 hours and 40 minutes these jobs will exceed the 10GBs of ram available.
How can I manage these jobs so that they always run in RAM, pausing the execution of some of them while letting the others run?
Can GNU parallel do this?
|
Things have changed since June.
Git version e81a0eba now has --memsuspend
--memsuspend size (alpha testing)
Suspend jobs when there is less than 2 * size memory free. The size can be
postfixed with K, M, G, T, P, k, m, g, t, or p which would multiply the size
with 1024, 1048576, 1073741824, 1099511627776, 1125899906842624, 1000,
1000000, 1000000000, 1000000000000, or 1000000000000000, respectively.
If the available memory falls below 2 * size, GNU parallel will suspend some
of the running jobs. If the available memory falls below size, only one job
will be running.
If a single job takes up at most size RAM, all jobs will complete without
running out of memory. If you have swap available, you can usually lower
size to around half the size of a single jobs - with the slight risk of
swapping a little.
Jobs will be resumed when more RAM is available - typically when the oldest
job completes.
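Suspending and resuming jobs boils down to sending stop/continue signals, which you can try by hand on any background process (a sketch of the underlying mechanism only, not of parallel's actual implementation; Linux /proc is assumed):

```shell
# Stop and resume a background job; watch its state in /proc.
sleep 5 &
pid=$!
sleep 0.2                                        # let the child start
kill -STOP "$pid"
sleep 0.2
stopped=$(awk '{print $3}' "/proc/$pid/stat")    # T = stopped
kill -CONT "$pid"
sleep 0.2
resumed=$(awk '{print $3}' "/proc/$pid/stat")    # S/R = running again
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
echo "stopped=$stopped resumed=$resumed"
```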
| parallel: Pausing (swapping out) long-running progress when above memory limit threshold |
1,516,741,576,000 |
I created this script out of boredom with the sole purpose of using/testing GNU parallel so I know it's not particularly useful or optimized, but I have a script that will calculate all prime numbers up to n:
#!/usr/bin/env bash
isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
for ((f=1;f<=$1;f++)); do
isprime "$f"
done
When run with the loop:
$ time ./script.sh 5000 >/dev/null
real 0m28.875s
user 0m38.818s
sys 0m29.628s
I would expect replacing the for loop with GNU parallel would make this run significantly faster but that has not been my experience. On average it's only about 1 second faster:
#!/usr/bin/env bash
isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
export -f isprime
seq 1 $1 | parallel -j 20 -N 1 isprime {}
Run with parallel:
$ time ./script.sh 5000 >/dev/null
real 0m27.655s
user 0m38.145s
sys 0m28.774s
I'm not really interested in optimizing the isprime() function, I am just wondering if there is something I can do to optimize GNU parallel?
In my testing seq actually runs faster than for ((i=1...)) so I don't think that has much if anything to do with the runtime
Interestingly, if I modify the for loop to:
for ((f=1;f<=$1;f++)); do
isprime "$f" &
done | sort -n
It runs even quicker:
$ time ./script.sh 5000 >/dev/null
real 0m5.995s
user 0m33.229s
sys 0m6.382s
|
GNU Parallel spends 2-10 ms overhead per job. It can be lowered a bit by using -u, but that means you may get output from different jobs mixed.
GNU Parallel is not ideal if your jobs are in the ms range and runtime matters: The overhead will often be too big.
You can spread the overhead to multiple cores by running multiple GNU Parallels:
seq 5000 | parallel --pipe --round-robin -N100 parallel isprime
You still pay the overhead, but now you at least have more cores to pay with.
A better way would be to change isprime so that it takes multiple inputs and thus takes longer to run:
isprime() {
_isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
for t in "$@"; do
_isprime $t
done
}
export -f isprime
seq 5000 | parallel -X isprime
# If you do not care about order, this is faster because higher numbers always take more time
seq 5000 | parallel --shuf -X isprime
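The per-job overhead mentioned above is not specific to GNU Parallel - any scheme that spawns one process per tiny work item pays it. A rough measurement with plain bash (GNU date with %N assumed):

```shell
# Compare 200 process spawns against a single one.
t0=$(date +%s%N); i=0; while [ $i -lt 200 ]; do bash -c :; i=$((i+1)); done; t1=$(date +%s%N)
many=$((t1 - t0))
t0=$(date +%s%N); bash -c :; t1=$(date +%s%N)
one=$((t1 - t0))
echo "200 spawns: $((many / 1000000)) ms, 1 spawn: $((one / 1000000)) ms"
```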
| How to optimize GNU parallel for this use? |
1,516,741,576,000 |
I want to use variables as input while passing the arguments in GNU parallel. For instance, I have three bash scripts that I want to run in parallel using GNU parallel
"par1.sh","par2.sh","par3.sh".
My script looks like this:
Filecount=$(grep -c "if" $1)
echo $Filecount
parallel -j0 sh ::: par$(seq 1 $Filecount).sh
mkdir $2
mv par$(seq 1 $Filecount).sh ./$2
I tried every possible way to run this code, but it is not working. Therefore, my question is: how should I provide the variable to GNU parallel? I also tried this:
par{1..$Filecount}.sh
But that is not working either; I tried "seq" as well.
|
The issue isn't with parallel but with the "variable" you're passing. This is what par$(seq 1 $Filecount).sh will expand to (assuming that Filecount=10):
$ echo par$(seq 1 $Filecount).sh
par1 2 3 4 5 6 7 8 9 10.sh
You want it to work like a brace expansion:
$ echo par{1..10}.sh
par1.sh par2.sh par3.sh par4.sh par5.sh par6.sh par7.sh par8.sh par9.sh par10.sh
However, variables aren't expanded inside brace expansions:
$ echo par{1..$Filecount}.sh
par{1..10}.sh
The good news is that none of this is actually needed. You can do one of these:
Use normal globs
parallel -j0 sh ::: par*sh
or perhaps
parallel -j0 sh ::: par[0-9]*.sh
Build the variable beforehand
targets=""; for ((num=1;num<=$Filecount;num++)); do targets="$targets par$num.sh"; done
parallel -j0 sh ::: $targets
So, using the second approach, your script would become (modified slightly to make it safe with arbitrary file names; not relevant in your case but it might be for future visitors):
Filecount=$(grep -c "if" "$1")
echo "$Filecount"
targets=( "par1.sh" );
for ((num=2;num<=$Filecount;num++)); do
targets=("${targets[@]}" par"$num".sh);
done
parallel -j0 sh ::: "${targets[@]}"
mkdir "$2"
mv "${targets[@]}" ./"$2"
| Using variables as inputs in GNU parallel |
1,516,741,576,000 |
I have a simple script where I want to copy and rename the files listed in file.lst based on the list of names in name.lst
**name.lst**
100GV200.vcf
150GV200.vcf
14300GV200.vcf
**file.lst**
file1.txt
file2.txt
file3.txt
My script so far looks like this:
parallel --link -k "cp {} {}" :::: file.lst :::: name.lst
Unfortunately I get back:
cp: target `100GV200.vcf` is not a directory
When I run a single cp command in the terminal it works perfectly
cp file1.txt 100GV200.vcf
Where am I going wrong in understanding how GNU parallel reads in arguments?
|
Use {1} and {2} notation:
parallel --link -k cp {1} {2} :::: file.lst :::: name.lst
Works for me, it will work with the quotes as well
parallel --link -k "cp {1} {2}" :::: file.lst :::: name.lst
To get it to work with {}, you would have had to do something like this:
parallel --link -k "cp {}" :::: file.lst :::: name.lst
Because parallel will then fill in the linked arguments from the two files itself.
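For comparison, the same linked copy can be done without parallel by pasting the two lists side by side - handy for double-checking the pairing (a sketch run in a scratch directory):

```shell
# Pair file.lst with name.lst line by line and copy accordingly.
d=$(mktemp -d)
olddir=$PWD
cd "$d"
printf 'file1.txt\nfile2.txt\n' > file.lst
printf '100GV200.vcf\n150GV200.vcf\n' > name.lst
touch file1.txt file2.txt
paste file.lst name.lst | while read -r src dst; do
  cp "$src" "$dst"
done
n=$(ls *.vcf | wc -l)
cd "$olddir"
rm -rf "$d"
echo "created $n renamed copies"
```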
| Copying & Renaming Files with GNU Parallel |
1,516,741,576,000 |
The following for loop runs thousands of jobs in parallel
OSMSOURCE=europe-latest.o5m
for SHAPEFILE in URBAN_[A-Z]*[0-9] ;do
cd $SHAPEFILE
for POLYGON in *.poly ;do
osmconvert --drop-version $OSMSOURCE -B=$POLYGON --out-o5m > $(basename $OSMSOURCE .o5m |tr "-" "_")_$(basename $POLYGON .poly).o5m &
done
cd ..
done
I want to learn how GNU parallel performs and understand if it is worth using.
|
Well, GNU parallel will do the same and it's just as easy to use. Its advantage is that it will take care of the number of CPU cores on your machine and by default it will not execute more jobs than that (*).
Your program doesn't. If you have hundreds of .poly files, you will spawn hundreds of osmconvert jobs, which at best may not be optimum, and at worst may put your system down (it depends on your resources).
Your program would be something like (not tested):
OSMSOURCE=europe-latest.o5m
OSMBASENAME="$(echo "${OSMSOURCE%.o5m}" | tr - _)"
for SHAPEFILE in URBAN_[A-Z]*[0-9]; do
cd "$SHAPEFILE"
for POLYGON in *.poly; do
echo "cd '$SHAPEFILE'; osmconvert --drop-version '$OSMSOURCE' -B='$POLYGON' --out-o5m > '${OSMBASENAME}_${POLYGON%.poly}.o5m'"
done
cd ..
done | parallel # You may want to add a -j option
(*) You can give it your own threshold. You may want to keep a few spare CPU cores for something else. On the other hand, if I/Os are the bottleneck, you may want to give a higher number than the default one.
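One thing worth hardening when generating command strings like the echo above: quoting. bash's printf %q escapes arbitrary names so the generated line survives being re-parsed by the shell (a sketch with a made-up directory containing a space):

```shell
# Build a shell-safe command string for a path with a space in it.
d=$(mktemp -d '/tmp/demo dir.XXXXXX')
cmd=$(printf 'cd %q && pwd' "$d")
echo "$cmd"
result=$(eval "$cmd")   # re-parse the generated string, as the receiving shell would
rm -rf "$d"
```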
| Worth using parallel instead of forking processes in a for loop? |
1,516,741,576,000 |
I am trying to copy files from machineB and machineC into machineA as I am running my below shell script on machineA.
If a file is not there on machineB then it should be on machineC for sure, so I will try copying the files from machineB first; if a file is not there on machineB then I will try copying it from machineC.
I am copying the files in parallel using GNU Parallel library and it is working fine. Currently I am copying two files in parallel.
Earlier, I was copying the PRIMARY_PARTITION files into the PRIMARY folder using GNU parallel, and only once that was done was I copying the SECONDARY_PARTITION files into the SECONDARY folder using the same GNU parallel, so as of now it is sequential w.r.t. the PRIMARY and SECONDARY folders.
Now I decided to copy files in PRIMARY and SECONDARY folder simultaneously. Meaning, I will copy two files in PRIMARY folder along with two files in SECONDARY folder simultaneously.
Below is my shell script which I have -
#!/bin/bash
export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
readonly FILERS_LOCATION=(machineB machineC)
export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
export dir3=/testing/snapshot/20140103
find "$PRIMARY" -mindepth 1 -delete
find "$SECONDARY" -mindepth 1 -delete
do_CopyInPrimary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary
do_CopyInSecondary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}" &
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}" &
wait
echo "All files copied."
Problem Statement:-
With the above script at some point I am getting this exception -
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
Is there any better way of doing the same thing as the way I am doing currently? I guess, I can still use GNU Parallel to make it work?
|
The error is typically caused by too many ssh/scp sessions starting at the same time. That is a bit odd as you run at most 4. That leads me to believe /etc/ssh/sshd_config:MaxStartups and MaxSessions on $FILERS_LOCATION_1+2 are set too low.
Luckily we can ask GNU Parallel to retry if a command fails:
do_Copy() {
el=$1
PRIMSEC=$2
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
}
export -f do_Copy
parallel --retries 10 -j 2 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
parallel --retries 10 -j 2 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
wait
echo "All files copied."
| How to copy in two folders simultaneously using GNU parallel by spawning multiple threads? |
1,516,741,576,000 |
I have a below shell script from which I am trying to copy 5 files in parallel. I am running my below shell script on machineA which tries to copy the file from machineB and machineC.
If the file is not there in machineB, then it should be there in machineC for sure.
I am using GNU Parallel here to download five files in parallel.
#!/bin/bash
readonly PRIMARY=/tech01/primary
readonly FILERS_LOCATION=(machineB machineC)
readonly MEMORY_MAPPED_LOCATION=/techbat/data/be_t1_snapshot
PRIMARY_PARTITION=(550 274 2 546 278 6 558 282 10 554 286 14) # this will have more file numbers
dir1=/techbat/data/be_t1_snapshot/20140501
find "$PRIMARY" -mindepth 1 -delete
do_copy() {
el=$1
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[0]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[1]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/.
}
export -f do_copy
parallel -j 5 do_copy ::: "${PRIMARY_PARTITION[@]}"
Problem Statement:-
The problem I am facing with the above script is that it is not able to recognize ${FILERS_LOCATION[0]}, ${FILERS_LOCATION[1]}, $dir1 and $PRIMARY inside the do_copy method, and I am not sure why.
If I try to print them out like this inside the do_copy method, nothing is printed:
echo ${FILERS_LOCATION[0]}
echo ${FILERS_LOCATION[1]}
But if I print the same thing just above the do_copy method, it works fine.
Anything I am missing here?
Update:-
Below is the code I am using -
#!/bin/bash
export PRIMARY=/tech01/primary
export FILERS_LOCATION=(machineB machineC)
export MEMORY_MAPPED_LOCATION=/techbat/data/be_t1_snapshot
PRIMARY_PARTITION=(0 548 272 4 544 276 8 556 280)
export dir1=/techbat/data/be_t1_snapshot/20140501
find "$PRIMARY" -mindepth 1 -delete
do_copy() {
el=$1
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[0]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[1]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/.
}
export -f do_copy
parallel -j 8 do_copy ::: "${PRIMARY_PARTITION[@]}"
Another Update:-
This is what I got after running the below script -
#!/bin/bash
export PRIMARY=/tech01/primary
export FILERS_LOCATION=(slc4b03c-407d.stratus.slc.ebay.com chd1b02c-0db8.stratus.phx.ebay.com)
export MEMORY_MAPPED_LOCATION=/techbat/data/be_t1_snapshot
PRIMARY_PARTITION=(0 548 272 4 544)
export dir1=/techbat/data/be_t1_snapshot/20140501
find "$PRIMARY" -mindepth 1 -delete
echo ${FILERS_LOCATION[0]}
echo ${FILERS_LOCATION[1]}
do_copy() {
el=$1
echo "scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 bullseye@${FILERS_LOCATION[0]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 bullseye@${FILERS_LOCATION[1]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/."
}
export -f do_copy
parallel -j 3 do_copy ::: "${PRIMARY_PARTITION[@]}"
Output I got -
david@tvxdbx1143:/home/david$ ./scp_files5.sh
machineB
machineC
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
This helps funding further development; and it won't cost you a cent.
To silence this citation notice run 'parallel --bibtex' once or use '--no-notice'.
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_0_200003_5.data /tech01/primary/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_0_200003_5.data /tech01/primary/.
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_548_200003_5.data /tech01/primary/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_548_200003_5.data /tech01/primary/.
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_272_200003_5.data /tech01/primary/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_272_200003_5.data /tech01/primary/.
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_4_200003_5.data /tech01/primary/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_4_200003_5.data /tech01/primary/.
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_544_200003_5.data /tech01/primary/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@:/techbat/data/be_t1_snapshot/20140501/t1_weekly_1680_544_200003_5.data /tech01/primary/.
|
Try exporting them and removing the array, as bash cannot export arrays:
export PRIMARY=/data01/primary
export FILERS_LOCATION_1=machineB
export FILERS_LOCATION_2=machineC
export MEMORY_MAPPED_LOCATION=/bexbat/data/be_t1_snapshot
export dir1=/bexbat/data/be_t1_snapshot/20140501
Or simply put all the constant variables into the function:
#!/bin/bash
PRIMARY_PARTITION=(0 548 272 4 544 276 8 556 280)
PRIMARY=/data01/primary
find "$PRIMARY" -mindepth 1 -delete
do_copy() {
el=$1
PRIMARY=/data01/primary
FILERS_LOCATION=(machineB machineC)
MEMORY_MAPPED_LOCATION=/bexbat/data/be_t1_snapshot
dir1=/bexbat/data/be_t1_snapshot/20140501
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[0]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 david@${FILERS_LOCATION[1]}:$dir1/t1_weekly_1680_"$el"_200003_5.data $PRIMARY/.
}
export -f do_copy
parallel -j 8 do_copy ::: "${PRIMARY_PARTITION[@]}"
Depending on what kinds of files you are copying you should look into rsync -z
instead of scp. And consider running parallel --bibtex once (as suggested by parallel).
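The point that bash cannot export arrays is easy to verify: a child shell sees an exported scalar but not an exported array (a minimal check):

```shell
# Exported scalar survives into a child shell; exported array does not.
export SCALAR=machineB
ARR=(machineB machineC)
export ARR
child_scalar=$(bash -c 'echo "$SCALAR"')
child_array=$(bash -c 'echo "${ARR[@]}"')
echo "scalar='$child_scalar' array='$child_array'"
```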
| Variable value is not recognized after using gnu parallel? |
1,516,741,576,000 |
I have a very large SQL dumpfile (30GB) that I need to edit (do some find/replace) before loading back into the database.
Besides being large, the file also contains very long lines. Except for the first 40 and last 12 lines, all other lines have lengths of ~1MB. These lines are all INSERT INTO commands that all look alike:
cat bigdumpfile.sql | cut -c-100
INSERT INTO `table1` VALUES (951068,1407592,0.0267,0.0509,0.121),(285
INSERT INTO `table1` VALUES (238317,1407664,0.008,0.0063,0.1286),(241
INSERT INTO `table1` VALUES (938922,1407739,0.0053,0.0024,0.031),(226
INSERT INTO `table1` VALUES (44678,1407886,0.0028,0.0028,0.0333),(234
INSERT INTO `table1` VALUES (910412,1407961,0.001,0.0014,0),(911017,1
INSERT INTO `table1` VALUES (903890,1408050,0.0066,0.01,0.0287),(9095
INSERT INTO `table1` VALUES (257090,1408136,0.0023,0.0037,0.0196),(56
INSERT INTO `table1` VALUES (593367,1408237,0.0066,0.0117,0.0286),(95
INSERT INTO `table1` VALUES (870488,1408339,0.0131,0.009,0.0135),(870
INSERT INTO `table1` VALUES (282798,1408414,0.0015,0.014,0.014),(2830
...
Parallel ends with an error on long lines:
parallel -a bigdumpfile.sql -k sed -i.bak 's/table1/newtable/'
parallel: Error: Command line too long (1018952 >= 63543) at input 0: INSERT INTO `table1...
Because all lines are similar and I only need the find/replace to happen at the beginning of the line, I've followed the advice in this similar question here with a nice suggestion to use --recstart and --recend. However these are not working:
parallel -a bigdumpfile.sql -k --recstart 'INSERT' --recend 'VALUES' sed -i.bak 's/table/newtable/'
parallel: Error: Command line too long (1018952 >= 63543) at input 0: INSERT INTO `table1...
I tried a number of variations using --block but could not get it working. I am a GNU parallel newbie, and I'm probably doing something wrong or just missing something obvious. Any help appreciated. Thanks!
This is using GNU parallel 20240122.
|
You should be using --pipe (or --pipepart). If your disks are fast:
parallel -a bigdumpfile.sql --pipe-part --block 100M -k -q sed 's/table1/newtable/' | sql ...
If they are slow:
parallel -j1 -a bigdumpfile.sql --pipe-part --block 100M -k -q sed 's/table1/newtable/' | sql ...
Adjust -j to find the best for your disks.
If you are really trying to run multiple inserts in parallel:
# Create the table
head -n 40 bigdumpfile.sql | sql ...
# do the INSERTs in parallel
do_ins() {
grep 'INSERT INTO' |
sed s/table1/newtable/ |
sql ...
}
export -f do_ins
parallel -a bigdumpfile.sql --pipe-part --block -1 do_ins
But as Stéphane Chazelas suggests: It may be faster to just do:
sed s/table1/newtable/ bigdumpfile.sql | sql some-database
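Whichever route is taken, the sed expression can be sanity-checked on a small generated sample before touching the 30GB file (the sample lines mimic the dump's shape):

```shell
# Two fake INSERT lines; confirm the table name is rewritten on each.
sample=$(printf 'INSERT INTO `table1` VALUES (1,2,0.1),(3,4,0.2);\nINSERT INTO `table1` VALUES (5,6,0.3);')
out=$(printf '%s\n' "$sample" | sed 's/table1/newtable/')
echo "$out"
```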
| Use GNU parallel with very long lines |
1,516,741,576,000 |
Scenario:
$ cat libs.txt
lib.a
lib1.a
$ cat t1a.sh
f1()
{
local lib=$1
stdbuf -o0 printf "job for $lib started\n"
sleep 2
stdbuf -o0 printf "job for $lib done\n"
}
export -f f1
cat libs.txt | SHELL=$(type -p bash) parallel --jobs 2 f1
Invocation and output:
$ time bash t1a.sh
job for lib.a started
job for lib.a done
job for lib1.a started
job for lib1.a done
real 0m2.129s
user 0m0.117s
sys 0m0.033s
Here we see that execution of f1 was indeed in parallel (real 0m2.129s).
However, diagnostic output looks like execution was sequential.
I expected the following diagnostic output:
job for lib.a started
job for lib1.a started
job for lib.a done
job for lib1.a done
Why does diagnostic output look like sequential execution rather than parallel execution?
How to fix the diagnostic output so that it does look like parallel execution?
|
From the man pages of GNU parallel:
--group
Group output.
Output from each job is grouped together and is only printed when the
command is finished. Stdout (standard output) first followed by stderr
(standard error).
This takes in the order of 0.5ms CPU time per job and depends on the
speed of your disk for larger output.
--group is the default.
See also: --line-buffer --ungroup --tag
[...]
--line-buffer
--lb
Buffer output on line basis.
--group will keep the output together for a whole job. --ungroup allows output to mixup with half a line coming from one job and half a
line coming from another job. --line-buffer fits between these two:
GNU parallel will print a full line, but will allow for mixing lines
of different jobs.
So you should add either --line-buffer or --ungroup to your parallel command (according to your preferred behavior):
$ grep parallel t1a.sh
cat libs.txt | SHELL=$(type -p bash) parallel --line-buffer --jobs 2 f1
$ bash t1a.sh
job for lib.a started
job for lib1.a started
job for lib.a done
job for lib1.a done
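For contrast, plain background jobs never buffer per-job output, so the interleaved ordering the question expected appears immediately (a sketch with a shortened sleep):

```shell
# Two background jobs writing unbuffered to the same stdout.
f1() {
  printf 'job for %s started\n' "$1"
  sleep 1
  printf 'job for %s done\n' "$1"
}
out=$(f1 lib.a & f1 lib1.a & wait)
echo "$out"
```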
| GNU parallel: why does diagnostic output look like sequential execution rather than parallel execution? |
1,516,741,576,000 |
Consider the data from the GNU parallel manual's example for --group-by:
cat > table.csv <<"EOF"
UserID, Consumption
123, 1
123, 2
12-3, 1
221, 3
221, 1
2/21, 5
EOF
Is there a way to group records by one column and write all the values from another column in the group as command-line arguments?
This command doesn't group but otherwise gives me the output structure I want.
cat table.csv | parallel --colsep , --header : -kN1 echo UserID {1} Consumption {2}
UserID 123 Consumption 1
UserID 123 Consumption 2
UserID 12-3 Consumption 1
UserID 221 Consumption 3
UserID 221 Consumption 1
UserID 2/21 Consumption 5
What command would give me output like this?
UserID 123 Consumption 1 2
UserID 12-3 Consumption 1
UserID 221 Consumption 3 1
UserID 2/21 Consumption 5
I also want to limit the number of "Consumption" values.
Say there were more than 4 in one of the groups.
cat > table.csv <<"EOF"
UserID, Consumption
123, 1
123, 2
123, 3
123, 4
123, 5
123, 6
123, 7
12-3, 1
221, 3
221, 1
2/21, 5
EOF
I want the command line to contain no more than 4 "Consumption" values.
UserID 123 Consumption 1 2 3 4
UserID 123 Consumption 5 6 7
UserID 12-3 Consumption 1
UserID 221 Consumption 3 1
UserID 2/21 Consumption 5
The manual shows how to use --group-by to select the correct groups.
cat table.csv | \
parallel --pipe --colsep , --header : --group-by UserID -kN1 wc
4 lines of wc output mean that it operates on 4 groups. The first group for example has 3 lines, 6 words, and 40 characters.
3 6 40
2 4 30
3 6 40
2 4 30
To make the group input clearer I swap wc for cat.
cat table.csv | \
parallel --pipe --colsep , --header : --group-by UserID -kN1 cat
The cat output shows that parallel passes the original input lines to the job and copies the header line as the first line of each group.
UserID, Consumption
123, 1
123, 2
UserID, Consumption
12-3, 1
UserID, Consumption
221, 3
221, 1
UserID, Consumption
2/21, 5
The problem is that --group-by makes Parallel use standard input instead of command-line arguments. I don't see a way around that.
Do I need to change the way I pass the arguments to GNU parallel? Do I need to use another tool to create the correct format before using GNU parallel to execute?
I'm using GNU parallel version 20231122.
|
In Bash you can do:
doit() { parallel --header : --colsep , -n4 echo UserID {1} Consumption {2} {4} {6} {8}; }
export -f doit
cat table.csv | parallel --pipe --colsep , --header : --group-by UserID -kN1 doit
I do not see how you can do it in a single parallel instance. What you want is to mix --pipe and normal mode, and GNU Parallel can't really do that.
| Can I use GNU parallel to group command arguments by a column value? |
1,516,741,576,000 |
I want to run some script over powers of two in parallel. Doing so by giving GNU Parallel a list of the powers of two I want works well:
%>parallel echo {} ::: 32, 64, 128, 256, 512, 1024
32
64
128
256
512
1024
%>
I can also give GNU Parallel a range of values without issue:
%>parallel echo {} ::: {5..10}
5
6
7
8
9
10
%>
But once I include the bit of arithmetic in the GNU Parallel command, I am met with a syntax error:
%>parallel echo $((2**{})) ::: {5..10}
bash: 2**{}: syntax error: operand expected (error token is "{}")
%>
This surprises me because I can generate these values in a for loop as so:
%>for N in {5..10}; do echo $((2**N)); done
32
64
128
256
512
1024
%>
What is the way to do this using GNU Parallel? I am not concerned with order.
|
You need to quote the entire command being run by parallel, for example:
$ parallel 'echo $((2**{}))' ::: {5..10}
32
64
128
256
512
1024
Actually, just quoting the bash arithmetic part of the command works too:
$ parallel echo '$((2**{}))' ::: {5..10}
32
64
128
256
512
1024
The reason is that without quotes, bash will try to expand & evaluate the arithmetic before passing it to parallel, and 2**{} doesn't mean anything to bash. The error message is actually from bash, not parallel:
$ echo $((2**{}))
-bash: 2**{}: syntax error: operand expected (error token is "{}")
| Arithmetic with GNU parallel |
1,516,741,576,000 |
GNU Parallel worked fine, but suddenly I get the following error message whenever I run any parallel command:
parallel: This should not happen. You have found a bug.
Please contact <[email protected]> and include:
* The version number: 20160222
* The bugid: pidtable format: 10390 1
* The command line being run
* The files being read (put the files on a webserver if they are big)
If you get the error on smaller/fewer files, please include those instead.
kill_sleep TERM
kill_sleep TERM
kill_sleep TERM
kill_sleep KILL
Example:
$ ls -1 | parallel echo
parallel: This should not happen. You have found a bug.
Please contact <[email protected]> and include:
* The version number: 20160222
* The bugid: pidtable format: 10390 1
* The command line being run
* The files being read (put the files on a webserver if they are big)
If you get the error on smaller/fewer files, please include those instead.
Steps taken:
GNU parallel reinstalled
machine rebooted
tested with sudo and without
changed ulimit "open files" to default 1024
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256635
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Another trial:
$ parallel -Dall -j0 ping -nc 3 ::: qubes-os.org gnu.org freenetproject.org
parallel: This should not happen. You have found a bug.
Please contact <[email protected]> and include:
* The version number: 20160222
* The bugid: pidtable format: 3694 1
* The command line being run
* The files being read (put the files on a webserver if they are big)
If you get the error on smaller/fewer files, please include those instead.
kill_sleep TERM
kill_sleep TERM
kill_sleep TERM
kill_sleep KILL
|
20160222 is 4 years old. It is a known problem.
Upgrade to 20201222
| GNU Parallel stopped working |
1,516,741,576,000 |
I have a large dataset (>200k files) that I would like to process (convert files into another format). The algorithm is mostly single-threaded, so it would be natural to use parallel processing. However, I want to do an unusual thing. Each file can be converted using one of two methods (CPU- and GPU-based), and I would like to utilize both CPU and GPU at the same time.
Speaking abstractly, I have two different commands (foo and bar), which are supposed to produce equivalent results. I would like to organize two thread pools with fixed capacity that would run up to N instances of foo and M instances of bar respectively, and process each input file with either of those pools depending on which one has free slots (determinism is not required or desired).
Is it possible to do something like that in GNU parallel or with any other tool?
|
Something like this:
gpus=2
find files |
parallel -j +$gpus '{= $_ = slot() > '$gpus' ? "foo" : "bar" =}' {}
Less scary:
parallel -j +$gpus '{=
if(slot() > '$gpus') {
$_ = "foo"
} else {
$_ = "bar"
}
=}' {}
-j +$gpus Run one job per CPU thread + $gpus
{= ... =} Use perl code to set $_.
slot() Job slot number (1..cpu_threads+$gpus).
| Process multiple inputs with multiple equivalent commands (multiple thread pools) in GNU parallel |
1,516,741,576,000 |
As in title. I've got a lot of ZIP archives that I want to extract.
All archives have their own unique name.
All archives contain files only (inside archives there are NOT folder(s) at all: no parent / main folder).
I'd like to process all these ZIP archives via GNU parallel.
To sum up:
archivename(s).zip has NOT folder(s) inside
extract content of archivename(s).zip into archivename(s)/ folder (this folder needs to be created!)
keep archivename(s).zip after extracting it
repeat this for all the ZIP archivename(s).zip
I was wondering which utility best fits ZIP extraction: gunzip? unzip? bsdtar? 7z?
P. S.: I'd like to take advantage of GNU parallel for speeding up the whole operation (I'm using SATA SSD devices).
|
Removing file extension when processing files:
parallel 'mkdir {.} && cd {.} && unzip ../{}' ::: *.zip
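If you first want to sanity-check the name handling, here is a sequential plain-bash sketch of the same operation; the ${z%.zip} parameter expansion plays the role of parallel's {.}:

```shell
# Sequential bash equivalent, useful for checking the name handling first.
shopt -s nullglob                # do nothing if there are no *.zip files
for z in *.zip; do
  d=${z%.zip}                    # strip the .zip extension, like {.}
  mkdir -p "$d" && ( cd "$d" && unzip -q "../$z" )
done
```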
| GNU parallel + gunzip (or 7z, or bsdtar, or unzip): extract every "archivename.zip" into (to-be-created) its "archivename" subfolder |
1,516,741,576,000 |
Not being a perl programmer I'm thinking this would be fairly easy with --tagstring, but basically I'd like to time-stamp each line of output from each job individually from Parallel. Something like, with the right replacement for “STUFF”, the output may look like, assuming millisecond resolution (although nanosecond resolution would be nice too):
$ seq 8 | parallel --tags 'sequence {} {=STUFF=}' -j2 'sleep=$((1 + RANDOM % 2)); echo sleeping $sleep; sleep $sleep; echo done; echo $sleep {#} {%} {}'
sequence 1 0.001 sleeping 1
sequence 1 1.001 done
sequence 1 1.002 1 1 1 1
sequence 2 0.001 sleeping 2
sequence 2 2.001 done
sequence 2 2.002 2 2 2 2
sequence 3 0.001 sleeping 2
sequence 3 2.001 done
sequence 3 2.002 2 3 1 3
sequence 5 0.001 sleeping 1
sequence 5 1.001 done
sequence 5 1.002 1 5 1 5
sequence 4 0.001 sleeping 2
sequence 4 2.001 done
sequence 4 2.002 2 4 2 4
sequence 6 0.001 sleeping 1
sequence 6 1.001 done
sequence 6 1.002 1 6 1 6
sequence 7 0.001 sleeping 2
sequence 7 2.001 done
sequence 7 2.002 2 7 2 7
sequence 8 0.001 sleeping 2
sequence 8 2.001 done
sequence 8 2.002 2 8 1 8
|
You might understandably think this is easy, and I can't blame you for that.
It is, however, not possible for normal output.
This is because the tagstring is only computed twice, and it is only added after the job is completed.
GNU Parallel runs:
job1 > tmpout1 2> tmperr1
job2 > tmpout2 2> tmperr2
job3 > tmpout3 2> tmperr3
(This is of course not 100% true, but it is close enough).
When the job is done, GNU Parallel reads the tmp* files in nice big chunks, prepends --tagstring and outputs the job.
The important part here is: Tagging is not done while running. And --tagstring is only computed twice: before the job starts and after the job finishes (and it is the final result that will be added).
This design is selected because it is CPU consuming to compute the tagstring, and if your output is 3600000 lines, then even a delay of 1 ms/line would be 1 hour(!) of waiting.
There is, however, one exception: --line-buffer.
--line-buffer does compute the tagstring for every line of output. This design was chosen because --line-buffer already takes up more CPU time (having to poll for new data from each running job and not being able to only deal with large chunks of data).
So this works:
$ seq 8 | parallel --lb --tagstring 'sequence {} {= $start{$job}||=::now(); $_=sprintf"%06.3f",::now()-$start{$job} =}' -j2 'sleep 1; echo Begin {}; sleep 0.{}; echo End {}'|sort
sequence 1 01.027 Begin 1
sequence 1 01.116 End 1
sequence 2 01.024 Begin 2
sequence 2 01.216 End 2
sequence 3 01.098 Begin 3
sequence 3 01.312 End 3
sequence 4 01.049 Begin 4
sequence 4 01.411 End 4
sequence 5 01.031 Begin 5
sequence 5 01.509 End 5
sequence 6 01.039 Begin 6
sequence 6 01.613 End 6
sequence 7 01.048 Begin 7
sequence 7 01.711 End 7
sequence 8 01.071 Begin 8
sequence 8 01.811 End 8
| GNU Parallel time stamp output |
1,516,741,576,000 |
I am attempting to look into multiple subdirectories within a single directory, and plot the files in each subdirectory using a python script that I am calling within this bash script using gnu-parallel.
The arguments the python script takes are -p: path to files (i.e. the subdirectory) and -o: outpath for plots (which I want to be the same as -p).
I have this bash script:
#!/bin/bash
script="/path/to/python/script/plot.py"
files=($(find . -maxdepth 2 -mindepth 2 | sort))
pat=$files
out=$files
filt="True"
chan="Z"
parallel --jobs 4 python $script -f {} -p $pat -o $out -fl $filt -c $chan ::: ${files[@]}
However, there are no plots in each subdirectory, and the script is running really fast, so I assume that nothing is being piped into the pat or out arguments. What am I doing wrong? Thanks!
|
I don't think $pat and $out have the values you think they do
$ echo ${files[@]}
./d1/file1 ./d1/file2 ./d1/file3 ./d2/file4 ./d2/file5 ./d2/file6
$ pat=$files
$ echo ${pat[@]}
./d1/file1
$ out=$files
$ echo ${pat[@]}
./d1/file1
I think what you're trying to accomplish requires different values for $pat and $out depending on the value of parallel's internal “variable” ‘{}’ (which would be each member of the $files array), but what you've got only puts the first member of $files on the command line.
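In case it helps, a minimal plain-bash sketch of why pat=$files only captures one element, and how to copy the whole array:

```shell
files=(./d1/file1 ./d1/file2 ./d2/file3)
pat=$files                  # scalar assignment: only the FIRST element
copy=("${files[@]}")        # array assignment: copies every element
echo "$pat"                 # -> ./d1/file1
echo "${#copy[@]}"          # -> 3
```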
Among the other “variables” GNU Parallel supports, one is ‘{//}’, which is replaced by the directory name of the ‘{}’ value (if any).
Given that, I think you want something more like
#!/bin/bash
script="/path/to/python/script/plot.py"
files=($(find . -maxdepth 2 -mindepth 2 | sort))
filt="True"
chan="Z"
parallel --jobs 4 python $script -f {} -p {//} -o {//} -fl $filt -c $chan ::: ${files[@]}
Is this something more like what you want?
$ parallel echo python $script -f {} -p {//} -o {//} -f $filt -c $chan ::: ${files[@]}
python /path/to/python/script/plot.py -f ./d1/file1 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d1/file2 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d1/file3 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file5 -p ./d2 -o ./d2 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file4 -p ./d2 -o ./d2 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file6 -p ./d2 -o ./d2 -f True -c Z
Since you're using sort on the find, I think you may also want to use the --keep-order parameter:
$ parallel --keep-order echo python $script -f {} -p {//} -o {//} -f $filt -c $chan ::: ${files[@]}
python /path/to/python/script/plot.py -f ./d1/file1 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d1/file2 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d1/file3 -p ./d1 -o ./d1 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file4 -p ./d2 -o ./d2 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file5 -p ./d2 -o ./d2 -f True -c Z
python /path/to/python/script/plot.py -f ./d2/file6 -p ./d2 -o ./d2 -f True -c Z
Which only makes sure the order of the output is the same as the input, not that the jobs are run in that or any deterministic order.
| How can I recursively find directories and parse them into python script call within bash script? |
1,516,741,576,000 |
I am running command-line software on multiple folders/samples. Each folder contains files matching *fastq.gz.
Below is an example of a folder.
Sample_EC_only/EC_only_S1_L005_I1_001.fastq.gz
Sample_EC_only/EC_only_S1_L005_R1_001.fastq.gz
Sample_EC_only/EC_only_S1_L005_R2_001.fastq.gz
Sample_EC_only/EC_only_S1_L006_I1_001.fastq.gz
Sample_EC_only/EC_only_S1_L006_R1_001.fastq.gz
I am trying to run this using GNU parallel for multiple programs, but I am having issues with extracting the "ID" of the folder.
parallel -j $NSLOTS --xapply \
" echo {1} \
/home/rob2056/software/cellranger-2.2.0/cellranger count --id = "{basename} {1}" \
--transcriptome=$ref_data \
--fastqs={1} \
" ::: $TMPDIR/FASTQ/Sample*
I want to extract, e.g., "Sample_EC_only" as a pattern from the folder inside GNU parallel. --fastqs is able to get the path using {1}, but I am having issues with the --id option. I have tried various options to extract a pattern from the paths in {1}, but none work.
The --id parameter needs a pattern extracted from the path in {1} so that it can create an output dir.
Each {1} consists of e.g. (shown below only for one sample)
/tmp/FASTQ/Sample_EC_only
|
If I understand you correctly, all you are looking for is {1/} instead of {1}. It is the "basename" of the argument. See man parallel_tutorial and the discussion of --rpl where we have that replacement strings are implemented as
--rpl '{/} s:.*/::'
and The positional replacement strings can also be modified using / etc.
So {1/} is like removing all characters up to the final /.
You can create your own replacement shorthand strings using --rpl followed by a string which begins with a tag ({/} in the example above), then a perl expression, such as the substitute command above (s:pattern:replacement:).
I'm not sure what is allowed as tags, but we can use the tutorial example {..} for a positional tag, i.e. one that can be used with {number}. The perl expression to remove everything up to the last / followed by the word "Sample_" would be: s:.*/Sample_:: so you need to add before --xapply the arguments
--rpl '{..} s:.*/Sample_::'
and then use --id={1..} to apply this replacement to arg 1.
If, for example, you want to remove the word up to the first underline _, rather than a fixed word Sample, you can use a pattern such as
--rpl '{..} s:.*/[^_]*_::'
The final command should look something like this:
parallel -j $NSLOTS --rpl '{..} s:.*/Sample_::' --xapply \
" echo {1} \
/home/rob2056/software/cellranger-2.2.0/cellranger count --id={1/} \
--id2={1..} \
--transcriptome=$ref_data \
--fastqs={1} \
" ::: $TMPDIR/FASTQ/Sample*
| Extract the pattern of Directory in GNU parallel |
1,516,741,576,000 |
Suppose I have a file "Analysis.C" which takes a data file as input. The data file is named as "a.00001.txt" through "a.01000.txt". One way to loop over all the files is to write a shell script where I use sed to change the input file name in "Analysis.C" over an iteration from 0001 to 1000. However, I have to do this one input file at a time.
What I want is to run multiple instances of the file "Analysis.C" in parallel where it takes different inputs in each instance (the constraint here is the number of cores I can spare on my PC, I suppose), and executes the different instances at the same time. How do I do that?
|
With GNU Parallel you can do this:
parallel analysis.C ::: *.txt
Or if you have really many .txt-files:
printf '%s\0' *.txt | parallel -0 analysis.C
It will default to run one job per CPU thread. This can be adjusted with -j20 for 20 jobs in parallel.
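The NUL-delimited form matters when file names contain spaces or newlines. A quick stand-in demonstration using xargs -0 (only to show how the \0 delimiters split the stream; parallel -0 splits it the same way):

```shell
# File names with spaces survive NUL delimiting intact.
printf '%s\0' 'a b.txt' 'c.txt' | xargs -0 -n1 echo
# a b.txt
# c.txt
```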
Contrary to the parallel solution from moreutils, you can post-process the output: the output is serialized, so you will never see output from two jobs mix.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU.
GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time.
Installation
For security reasons you should install GNU Parallel with your package manager, but if GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Read the book: https://doi.org/10.5281/zenodo.1146014
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
| Parallely running multiple copies of the same file with different inputs using shell script |
1,516,741,576,000 |
I have the following zip archive structure:
$ unzip -l Undetermined_S0_L004_R1_001_fastqc.zip
Archive: Undetermined_S0_L004_R1_001_fastqc.zip
Length Date Time Name
-------- ---- ---- ----
0 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/
0 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Icons/
0 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/
1197 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Icons/fastqc_icon.png
1450 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Icons/warning.png
1561 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Icons/error.png
1715 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Icons/tick.png
782 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/summary.txt
9095 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_base_quality.png
14381 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_tile_quality.png
23205 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_sequence_quality.png
30978 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_base_sequence_content.png
31152 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_sequence_gc_content.png
7861 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/per_base_n_content.png
18356 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/sequence_length_distribution.png
23040 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/duplication_levels.png
9096 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/adapter_content.png
58683 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/Images/kmer_profiles.png
355919 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/fastqc_report.html
301092 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/fastqc_data.txt
10117 10-10-14 14:44 Undetermined_S0_L004_R1_001_fastqc/fastqc.fo
-------- -------
899680 21 files
How is it possible to use fastqc_data.txt with crimson in parallel, because at the moment I get the following error:
find `pwd`/*_fastqc.zip -type f | parallel -j 3 unzip -c {} {}/fastqc_data.txt | crimson fastqc {} | less
Usage: crimson fastqc [OPTIONS] INPUT [OUTPUT]
Error: Invalid value for "input": Path "{}" does not exist.
|
You have a pipeline made of four commands:
find, which lists zip files.
parallel, which invokes unzip to extract one file in each zip file. Given that {} is replaced by the path to the zip file, you attempt to extract files like home/user977828/stuff/Undetermined_S0_L004_R1_001_fastqc.zip/fastqc_data.txt from the archive (if the current directory is /home/user977828/stuff).
crimson, which receives a jumble of the extracted files on standard input, and is invoked with the arguments fastqc and {},
less.
parallel only substitutes {} in its arguments. It can't do anything about the other parts of your pipeline. If you want to invoke crimson on each fastqc_data.txtfile separately, you need to pass a pipeline from unzip to crimson as an argument to parallel.
find *_fastqc.zip -type f | sed 's/\.zip$//' |
parallel -j 3 'unzip -c {}.zip {}/fastqc_data.txt | crimson fastqc /dev/stdin' |
less
| Parallel read the contents of a zipped file without extraction |
1,516,741,576,000 |
I tried to use parallel command in the following way:
cat asm.contig.fasta | parallel -k --block 1k --recstart '>' --pipe 'blat -t=dnax -q=prot - ../swissprot.fasta out{#}.psl -noHead'
but unfortunately I got this error:
mustOpen: Can't open - to read: No such file or directory
What did I do wrong?
|
The error is not from GNU Parallel, so it is from blat. I have not used blat for years, so I am not 100% sure of the following.
My guess is that you cannot use use - to denote STDIN for the database in blat.
There are several ways of tickling blat. Use /dev/stdin which will give the standard input as a fifo on many systems:
cat asm.contig.fasta | parallel -k --block 1k --recstart '>' --pipe 'blat -t=dnax -q=prot /dev/stdin ../swissprot.fasta out{#}.psl -noHead'
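You can verify that your system exposes standard input as /dev/stdin with a trivial command that, like blat here, takes a file name rather than -:

```shell
# If this prints "hello", /dev/stdin works as a file-name stand-in for STDIN.
printf 'hello\n' | cat /dev/stdin
# -> hello
```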
Use --fifo which will make a fifo/named pipe which will give the standard in put as a fifo on all supported systems. After the command is completed the fifo will be removed:
cat asm.contig.fasta | parallel --fifo -k --block 1k --recstart '>' --pipe 'blat -t=dnax -q=prot {} ../swissprot.fasta out{#}.psl -noHead'
Use --cat which will make a regular file containing the 1k block of data. After the command is completed the file will be removed.
cat asm.contig.fasta | parallel --cat -k --block 1k --recstart '>' --pipe 'blat -t=dnax -q=prot {} ../swissprot.fasta out{#}.psl -noHead'
--cat is generally the slowest (for --block 1k expect an additional 1 ms per job), but almost guaranteed to work.
Let us know which one worked.
| Parallel caused this "error mustOpen: Can't open - to read: No such file or directory" |
1,516,741,576,000 |
I have 4 cores, and 4 python script files preprocess0.py, preprocess1.py, preprocess2.py, preprocess3.py. I would like to run these 4 processes in parallel using GNU parallel. I do not have input files. The input file is hardcoded inside each *.py file (it's read only, so it's ok). I would like to output the results to a file0.csv ... file3.csv files. This is as far as I've come:
parallel -j4 --progress python preprocess*.py ::: '>' ./file{}.csv
But it just stays there without writing anything, as if waiting for some input.
|
The syntax is:
parallel -j4 --progress 'python {} > ./file{}.csv' ::: preprocess*.py
That would create files called filepreprocess1.py.csv... You could use
parallel -j4 --progress 'python {} > ./file{#}.csv' ::: preprocess*.py
instead to use the job number instead and get some file1.csv... files. Or if you want to extract the number for the file names:
parallel -j4 --progress 'python {} > ./file{=s/[^\d]//g=}.csv' ::: preprocess*.py
| gnu parallel with no argument script |
1,516,741,576,000 |
When I run the following command:
parallel --max-procs 4 echo ::: {1..4}
on my PC, it produces the expected output, 1, 2, 3, 4 (on different lines). However, when I run the same command on another computer (which has parallel installed), it doesn't produce output. Both PCs have Ubuntu 14.04 installed (the one where the command works has Ubuntu desktop and the other has Ubuntu server). I know this is a broad question, but what could be the problem?
Running echo {1..4} produces output in both computers.
Additional info: Running help on the console on both computers (the one where parallel works and the one where it doesn't produce output) gives:
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
on both computers. Running parallel --version gives:
GNU parallel 20130922
Copyright (C) 2007,2008,2009,2010,2011,2012,2013 Ole Tange and Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
GNU parallel comes with no warranty.
Web site: http://www.gnu.org/software/parallel
When using GNU Parallel for a publication please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
on both computers.
It should be noted that I am accessing both computers remotely through ssh, from a third computer, but I don't think that's important (or is it?). Any additional information you need just ask.
|
It turns out that I had zero free disk space on the PC that was giving trouble! Maybe this is too specific, but I am going to leave this here anyway, in case someone else runs into a similar problem.
| parallel --max-procs 4 echo ::: {1..4} produces no output? |
1,516,741,576,000 |
How can I get reasonable parallelisation on multi-core nodes without saturating resources? As in many other similar questions, the question is really how to learn to tweak GNU Parallel to get reasonable performance.
In the following example, I can't get to run processes in parallel without saturating resources or everything seems to run in one CPU after using some -j -N options.
From inside a Bash script running in a multi-core machine, the following loop is passed to GNU Parallel
for BAND in $(seq 1 "$BANDS") ;do
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
done |parallel
This saturates, however, the machine and slows down processing.
In man parallel I read
--jobs -N
-j -N
--max-procs -N
-P -N
Subtract N from the number of CPU threads.
Run this many jobs in parallel. If the evaluated number is less than 1 then 1
will be used.
See also: --number-of-threads --number-of-cores --number-of-sockets
and I've tried to use
|parallel -j -3
but this, for some reason, uses only one CPU out of the 40. Checking with [h]top, only one CPU is reported high-use, the rest down to 0. Should -j -3 not use 'Number of CPUs' - 3 which would be 37 CPUs for example?
and I extended the previous call then
-j -3 --use-cores-instead-of-threads
blindly doing so, I guess. I've read https://unix.stackexchange.com/a/114678/13011, and I know from the admins of the cluster I used to run such parallel jobs, that hyperthreading is disabled. This is still running in one CPU.
I am now trying to use the following:
for BAND in $(seq 1 "$BANDS") ;do
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
done |parallel -j 95%
or with |parallel -j 95% --use-cores-instead-of-threads.
Note
For the record, this is part of a batch job, scheduled via HTCondor and each job running on a separate node with some 40 physical CPUs available.
Above, I kept only the essential -- the complete for loop piped to parallel is:
for BAND in $(seq 1 "$BANDS") ;do
# Do not extract, unscale and merge if the scaled map exists already!
SCALED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged_scaled.nc"
MERGED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged.nc"
if [ ! -f "${SCALED_MAP+set}" ] ;then
echo "log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)"
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
else
echo "warning \"Scaled map $SCALED_MAP exists already! Skipping merging.\""
fi
done |parallel -j 95%
log "$LOG_FILE" "Action=Merge, End=$(timestamp)"
where `log` and `warning` are custom functions
|
To debug this I will suggest you first run this with something simpler than gdalmerge_and_clean.
Try:
seq 100 | parallel 'seq {} 100000000 | gzip | wc -c'
Does this correctly run one job per CPU thread?
seq 100 | parallel -j 95% 'seq {} 100000000 | gzip | wc -c'
Does this correctly run 19 jobs for every 20 CPU threads?
My guess is that gdalmerge_and_clean is actually run in the correct number of instances, but that it depends on I/O and is waiting for this. So your disk or network is pushed to the limit while the CPU is sitting idle and waiting.
You can verify the correct number of copies is started by using ps aux | grep gdalmerge_and_clean.
You can see if your disks are busy with iostat -dkx 1.
| GNU Parallel with -j -N still uses one CPU |
1,516,741,576,000 |
I have several files containing POST body requests.
I'd like to send those requests in parallel.
Related curl command is like:
curl -s -X POST $FHIR_SERVER/ -H "Content-Type: application/fhir+json" --data "@patient-bundle-01.json"
Request bodies are files like patient-bundle-xx, where xx is a number. Currently, I'd like to send up to 1500 requests using this incremental pattern.
How could I send above requests using incremental pattern?
How could I do this in parallel?
|
With GNU Parallel:
doit() {
bundle="$1"
curl -s -X POST $FHIR_SERVER/ -H "Content-Type: application/fhir+json" --data "@patient-bundle-$bundle.json"
}
export -f doit
export FHIR_SERVER
seq -w 99 | parallel -j77 doit
Adjust -j77 if you do not want 77 jobs in parallel.
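seq -w zero-pads every number to the width of the last one, which is what makes it line up with names like patient-bundle-01.json; for files numbered up to 1500 you would use seq -w 1500 instead:

```shell
seq -w 99 | head -n 3
# 01
# 02
# 03
seq -w 1500 | head -n 1
# 0001
```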
| Make parallel http requests using raw data files |
1,516,741,576,000 |
I pass the variables from a script to the main script in a parallel command like
./script1 | parallel -u --jobs 3 "./script2 {}"
How can I pass the job number as the second argument of ./script2?
something like
./script1 | parallel -u --jobs 3 "./script2 {} {}" ::: {1..3}
but the first {} should come from ./script1.
Note that I do not want the combination of arguments. Instead, the running jobs should be
./script2 var1 1
./script2 var2 2
./script2 var3 3
./script2 var4 1
./script2 var5 2
./script2 var6 3
./script2 var7 1
Imagine `./script2` writes to files `file1.txt`, `file2.txt`, and `file3.txt` where the number is `{1..3}` or the job number.
I want to make sure only one script is writing to its corresponding file. In other words, three parallel jobs write to three designated files.
|
You are most likely looking for {%} which is not the job number, but the job slot number.
LC_ALL=C seq 10 -0.1 1 | shuf |
parallel --lb 'echo Job {} grabs {%}; sleep {}; echo Job {} releases {%}'
Notice how when a number is released it is grabbed by the next job. So no 2 jobs running in parallel will have the same {%} value.
This is covered in chapter 5 of GNU Parallel 2018 (http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html or https://doi.org/10.5281/zenodo.1146014).
| How to pass job number in GNU Parallel? |
1,516,741,576,000 |
I am copying files from remote server to my local server using below scp command. I just type below command on the terminal and it starts copying.
scp -r user@machineA:/data/process/* /data/process/
Now, since the remote server has around 100 files and each file is around 11 GB, the above command will copy one file at a time. Is there any way I can copy 5 files at a time in parallel with some command that I can run directly on the terminal?
I also have GNU parallel installed but not sure how can I use it here which can help me copy files in parallel by running directly on terminal? Or if there is any other way then I am open for that as well.
|
Here's the command to be run on the remote server, involving find and parallel:
find /data/process/ -type f | parallel scp {} user@machineB:/data/process/
Edit:
See the documentation on how to control the number of jobs to be executed in parallel.
The number of concurrent jobs is given with --jobs or the equivalent -j.
By default --jobs is the same as the number of CPU cores.
--jobs 0 will run as many jobs in parallel as possible.
Edit:
This should be another question, and has already been asked and answered: how to run a command on a remote machine?
ssh user@machineA 'find /data/process/ -type f | parallel scp {} user@machineB:/data/process/'
| Copy files in parallel from remote servers using some command on terminal? |
1,516,741,576,000 |
If I have already started a job with GNU parallel in a similar fashion to:
$ cat jobs | parallel -j 70 "program {};"
is it possible, e.g. via some signal, to adjust the number of jobs of this parallel invocation, so that I could indicate to parallel that it should now run at most 75 sub-jobs?
| ERROR: type should be string, got "\nhttps://www.gnu.org/software/parallel/parallel_tutorial.html#Number-of-simultaneous-jobs\n\nNumber of simultaneous jobs\n:\n--jobs can read from a file which is re-read when a job finishes:\necho 50% > my_jobs\n/usr/bin/time parallel -N0 --jobs my_jobs sleep 1 :::: num128 &\nsleep 1\necho 0 > my_jobs\nwait\n\nThe first second\nonly 50% of the CPU cores will run a job. Then 0 is put into my_jobs\nand then the rest of the jobs will be started in parallel.\n\nI highly recommend spending an hour walking through the tutorial. Your command line will love you for it.\n" | Is it possible to adjust number of sub-jobs for GNU parallel after invocation? |
1,516,741,576,000 |
Due to my lack of Bash knowledge I've tried for a few hours now to get something like this to work:
find Directories -mindepth 4 -type d -print0 | parallel -0 -j0 ./MyScript -d {Found Directory} {1} ::: a b c d
Where a,b,c and d are different arguments that my script needs to execute commands (in my case being -rb, -s, -is 20 44, -ib 13 25 .... and so on).
I need to execute the script once per argument for each found subdirectory while keeping the subdirectory information intact. The -d tells my script the target directory to execute itself in which works fine if {1} ::: a b c d is not there, if it is there then it just runs 4 times with a, b, c and d as arguments.
What I have is a large hierarchy of directories, which at depth 4 contains files that the script should execute different commands on based on what the argument {1} is.
Since this is a very resource and time consuming script I thought it would be nice to automate it with something like this, but I've clearly misunderstood something completely, anyone who can point me in the right direction ?
|
find Directories -mindepth 4 -type d -print0 | parallel -0 -j0 ./MyScript -d {2} {1} ::: a b c d :::: -
| Find | parallel executing script with path from find + other arguments |
1,516,741,576,000 |
My scripts are having trouble with correctly running things in GNU parallel.
I have a sub_script like so (all these are actually simplified versions):
#! /bin/bash
input=$1
# input is a date in YYYYMMDD format
mkdir -p $input
cd $input
filename=$input'.txt'
echo 'line1' > $filename
echo 'The date is: '$input >> $filename
Then I have a file multi.sh like so:
cd /home/me/scripts; ./sub_script 20141001
cd /home/me/scripts; ./sub_script 20141002
cd /home/me/scripts; ./sub_script 20141003
cd /home/me/scripts; ./sub_script 20141004
cd /home/me/scripts; ./sub_script 20141005
I am trying to use GNU parallel to execute all these functions with multiple cores using this command
parallel -j 3 --delay 1 < multi.sh
to run on 3 cores. I've tried to implement a 1 second delay between running each line to prevent problems, but this does not work.
I am having problems with the new directories containing improper files. I think this only happens when there are more lines in multi.sh than cores specified by -j, and it only happens sporadically (it's not always reproducible). I can rerun the parallel line 2 times in a row and get different results. Sometimes I might get 20141002.txt files in the 20141005 directory instead of the 20141005.txt files. Other times I may only get the 20141002.txt files in the 20141005 directory.
Are there any suggestions on how I can fix this? GNU parallel is preferred, but I can try other commands as well.
|
Why the extra batchfile, if you use parallel?
parallel -j3 --delay 1 ./sub_script ::: 20141001 20141002 20141003 20141004 20141005
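Independent of how the jobs are launched, the property the question relies on is that each date gets its own directory and its own file. A minimal stand-alone sketch of the sub_script logic, run in a throwaway directory (paths here are made up for the demo):

```shell
# Each input date gets a private directory and a private file,
# so two concurrent runs with different dates cannot collide.
work=$(mktemp -d)
for input in 20141001 20141002; do
    mkdir -p "$work/$input"
    {
        echo 'line1'
        echo "The date is: $input"
    } > "$work/$input/$input.txt"
done
cat "$work/20141002/20141002.txt"
```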
| Concurrency problems with GNU parallel |
1,516,741,576,000 |
I have a short script that uses scp to copy files to a number of remote hosts (yes, I know about rdist and rsync; they both fail to work for a few of the hosts - that's not the point here; I'm only copying a few non-critical files anyway).
The meat of the script looks something like this:
for h in $HOSTS; do
echo $h
echo '----------------------------------------'
scp -r $FILES ${h}:
echo ''
done
Here is partial output from running this script:
protector
----------------------------------------
.bash_profile 100% 555 0.5KB/s 00:00
.bashrc 100% 2124 2.1KB/s 00:00
.zshenv 100% 561 0.6KB/s 00:00
.zshrc 100% 2354 2.3KB/s 00:00
.shrc 100% 1887 1.8KB/s 00:00
.bash_logout 100% 17 0.0KB/s 00:00
.logout 100% 64 0.1KB/s 00:00
.zlogout 100% 17 0.0KB/s 00:00
.vimrc 100% 717 0.7KB/s 00:00
pup
----------------------------------------
.bash_profile 100% 555 0.5KB/s 00:00
.bashrc 100% 2124 2.1KB/s 00:00
.zshenv 100% 561 0.6KB/s 00:00
.zshrc 100% 2354 2.3KB/s 00:00
.shrc 100% 1887 1.8KB/s 00:00
.bash_logout 100% 17 0.0KB/s 00:00
.logout 100% 64 0.1KB/s 00:00
.zlogout 100% 17 0.0KB/s 00:00
.vimrc 100% 717 0.7KB/s 00:00
Since this script copies to hundreds of hosts, it takes a little while, so I decided to try to use GNU Parallel to speed it up. Here is the revised version of the script utilizing parallel (please don't comment on the echo hack at the beginning, it's not relevant to the problem):
(for host in $(echo $HOSTS); do echo $host; done) | parallel "echo {}; echo '----------------------------------------' ; scp -r $FILES {}: ; echo ''"
The problem is that the output from using scp in this way looks like this:
pup
----------------------------------------
soils
----------------------------------------
Because the commands are running in parallel, the ordering is different, and parallel seems to complete much faster than a for loop of scps, but as you can see, there is no output to stdout from scp. I can't figure out why - output to stdout from echo is not being suppressed, for instance. When I try scp -v inside parallel, I get partial verbose output, but I don't get any of the usual size/time information.
Is there anybody who knows why scp output is getting suppressed?
|
To trick scp into thinking it has a tty connected you can use script:
script -c 'scp foo server1:/bar/'
So to run that in with parallel you need something like:
parallel -j30 "echo {}; echo '----------------------------------------' ; script -c 'scp -r $FILES {}:' ; echo ''" ::: $HOSTS
To learn more check out the intro videos (https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1), the EXAMPLEs (http://www.gnu.org/software/parallel/man.html#example__working_as_xargs_n1_argument_appending), and the tutorial (http://www.gnu.org/software/parallel/parallel_tutorial.html).
| scp does not display output when used with gnu parallel |
1,582,061,970,000 |
I would like the timing pause to be replaced with the equivalent of a getchar() in a GNU parallel execution:
parallel -j2 --halt 2 ::: 'sleep 5m; return 1' './runMe'
However the following does not work (it finishes the execution of the first job immediately):
parallel -j2 --halt 2 ::: 'read -n1 kbd; return 1' './runMe'
Is there another way than just waiting?
NB: ./runMe contains an infinite loop.
|
GNU Parallel can run interactively using -p.
parallel -p echo ::: 1 2 3
You will have to answer y every time, but maybe that is good enough.
Also be aware that any output will be delayed. When running 3 jobs in parallel, the output of job 1 will be printed after starting job 3.
| Pausing in GNU parallel and waiting for character |
1,582,061,970,000 |
I am trying to run the FreeSurfer preproc command recon-all with GNU parallel. I have a bash array with a list of patients and want to run 8 patients simultaneously:
root@4d8896dfec6c:/tmp# echo ${ids[@]}
G001 G002 G003 G004 G005 G006 G007 G008
and try to run with command:
echo ${ids[@]} | parallel --jobs 28 recon-all -s {.} -all -qcache
it doesn't work because, I suppose, I need to have the bash array in an ls-like representation, something like:
ls ${ids[@]} | parallel --jobs 28 recon-all -s {.} -all -qcache
How can i do that?
|
The problem is that parallel wants the input to be separated by newlines but when you use echo it is separated by spaces. In order to print some words separated by newlines you can try one of these
echo one two three | tr ' ' '\n' # in case your input can not be controlled by you
printf '%s\n' one two three # if you can control the words eg if you have an array
So you should probably do it like this:
printf '%s\n' "${ids[@]}" | parallel --jobs 28 recon-all -s {.} -all -qcache
Remember to quote your array substitutions and variables in general in order to prevent accidental word splitting and other side effects if your values contain special characters.
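A runnable sketch of the difference (using positional parameters so it stays POSIX; with a bash array the same printf '%s\n' "${ids[@]}" applies):

```shell
set -- G001 G002 G003        # stand-in for the ids array
joined=$(echo "$@")          # echo: one space-separated line
split=$(printf '%s\n' "$@")  # printf: format string reused per argument
lines=$(printf '%s\n' "$@" | wc -l)
echo "$joined"
echo "$split"
```

Only the printf form gives parallel the one-argument-per-line input it expects.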
| gnu parallel with bash array |
1,582,061,970,000 |
I'm learning GNU parallel and tried the following:
$ for i in {1.txt,2.txt}; do time wc -l $i; done
100 1.txt
real 0m0.010s
user 0m0.000s
sys 0m0.010s
10000012 2.txt
real 0m0.069s
user 0m0.050s
sys 0m0.018s
I then reran the above command with parallel, but it slowed things down. Why?
$ for i in {1.txt,2.txt}; do time parallel --nonall wc -l $i; done
100 1.txt
real 0m0.325s
user 0m0.192s
sys 0m0.042s
10000012 2.txt
real 0m0.305s
user 0m0.220s
sys 0m0.043s
|
In your case you're calling it from a for loop so you aren't really running anything in parallel. All you're doing is adding the overhead of calling parallel in the second example, but it's still only running the files through serially, one at a time.
Example
This might help you see what's going on.
without parallel
$ time for i in {1..2}; do sleep 2;done
real 0m4.004s
user 0m0.001s
sys 0m0.002s
with parallel
$ time for i in {1..2}; do parallel "sleep 2" < /dev/null;done
real 0m4.574s
user 0m0.245s
sys 0m0.089s
An alternative
You could call parallel like this instead.
$ time parallel --gnu time wc -l ::: 1.txt 2.txt
real 0m0.007s
user 0m0.001s
sys 0m0.000s
1000 1.txt
real 0m0.003s
user 0m0.000s
sys 0m0.001s
1000 2.txt
real 0m0.207s
user 0m0.120s
sys 0m0.052s
Here we can see there is overhead in having to call parallel, with the 3rd time grouping showing the "overall" amount of time taken to run the entire parallel command.
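The speed-up only appears when jobs actually overlap, which a loop that invokes parallel once per file never allows. A parallel-free sketch of the same point using plain background jobs (timings assume an otherwise idle machine):

```shell
# Run sequentially these two sleeps would take ~2s;
# backgrounded and waited on, wall-clock time is ~1s.
start=$(date +%s)
sleep 1 &
sleep 1 &
wait
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"
```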
References
Using GNU parallel
| Why does GNU Parallel slow down? |
1,582,061,970,000 |
I am running GNU parallel and after some time I am getting:
parallel: Warning: no more file handles. Raising ulimit -n or /etc/security/limits.conf may help.
What argument would be appropriate to add to the parallel command in order to overcome this?
I changed limits.conf to unlimited but then I could not use sudo or login as root or ssh to my box same issue like here
here is the piece of code that I am using. I have 2 files, one with passwords second with hosts.
passPasswords_and_hosts() {
`sudo sshpass -p "$1" ssh -o ConnectTimeout=2 root@"$2" -o StrictHostKeyChecking=no "$command_linux"`
}
export -f passPasswords_and_hosts
export command_linux
parallel --tag -k passPasswords_and_hosts :::: "$passwords" "$linux_hosts"
|
To avoid having tmp files lying around after abnormal termination (think kill -9 or system crash), GNU Parallel opens tmp files and removes them immediately. But it keeps the files open.
To meet the requirement of --keep-order it has to keep all the files open that have not been printed yet. So if you have 1000000 commands and command 2 is stuck forever, then GNU Parallel will happily run command 3 and beyond, but if command 2 does not come unstuck, then GNU Parallel will eventually run out of file handles (it uses 4 file handles per job).
In your case your passPasswords_and_hosts probably becomes stuck at some point. In your output it would be the job following the last output (i.e. the job not yet printed).
So try running that job by hand and see if there is some obvious problem.
You can also remove -k. Then your stuck job will still use 4 file handles, but all the following jobs that completes, will not, as they will be printed when they are done.
Finally you can use --timeout. I normally use --timeout 1000%. This means that if a job takes longer than 10x the median run time of all successful jobs, then it is killed. It works for a remarkable range of situations.
| parallel: Warning: No more file handles |
1,582,061,970,000 |
I would like to speed up my archiving operations, I am usually doing 23 GiB (one Blu-Ray) backups.
I have found this: How to do large file parallel encryption using GnuPG and GNU parallel?
Since I don't understand this code at all (have never used parallel):
tar --create --format=posix --preserve-permissions --same-owner --directory $BASE/$name --to-stdout . |
parallel --pipe --recend '' --keep-order --block-size 128M "xz -9 --check=sha256 |
gpg --encrypt --recipient $RECIPIENT;echo bLoCk EnD" |
pv > $TARGET/$FILENAME
I would like to ask if anyone could kindly parse it for me. Thank you.
|
tar run command tar.
--create create a tar archive.
--format=posix use the POSIX format of tar archive. This means you can extract it on other systems that support the POSIX format.
--preserve-permissions keep the same permissions on the files
--same-owner keep the same owner of the file (only relevant when extracting as root)
--directory $BASE/$name change to the dir $BASE/$name before starting
--to-stdout instead of saving to a file, send output to stdout
. tar the whole directory
| pipe stdout to next command
parallel run parallel
--pipe use pipe mode, so input on stdin will be given as input on stdin to the command to run (and not as command line arguments, which is the normal mode).
--recend '' Normally GNU Parallel splits on \n. Disable that because input is not text, but binary data.
--keep-order Make sure the output of the first command run is printed before the output of the second command - even if the second command finishes first.
--block-size 128M Pass a block of 128 MB of data to the command.
"..." the command to run
| pipe stdout to next command
pv show the speed of which data is sent
> $TARGET/$FILENAME redirect stdout to $TARGET/$FILENAME
GNU Parallel starts this command for each 128MB block:
xz the command xz
-9 compress level 9
--check=sha256 include integrity check in the output to be able to catch bit errors (e.g. on faulty disks).
| pipe stdout to next command
gpg the GNU Privacy Guard command
--encrypt encrypt data coming on stdin
--recipient $RECIPIENT use $RECIPIENT's key for encryption
; command separator
echo bLoCk EnD print bLoCk EnD
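A cheap sanity check of the pipeline idea: whatever goes through the compress stage must come back bit-identical after decompression. The sketch below uses gzip as a stand-in for the xz/gpg pair, purely because it is almost universally installed:

```shell
# Compress a sample "block" and decompress it again; the checksums
# must match, or the archive pipeline would be corrupting data.
src=$(mktemp)
printf 'block one\nblock two\n' > "$src"
sum_in=$(cksum < "$src")
sum_out=$(gzip -1 < "$src" | gzip -d | cksum)
echo "$sum_in"
echo "$sum_out"
```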
| GNU parallel proper usage in conjunction with tar, xz, gpg |
1,582,061,970,000 |
I want to parallelize splitting of many directories into subdirectories using parallel or using another tool or method.
E.g. I have 1 000 000 directories with content, but it's too much for one directory, so I want to create 10 dirs in the main dir and move 100 000 of the original dirs into each of them. I also want to use sorting by date. I already asked a similar question here, but this isn't a duplicate, because I tried new commands, got new results, and have now reformulated the question.
So, I already tried this
ls -tr|parallel -n100000 mkdir "dir_{#}"\;mv {} "dir_{#}"
and this
ls -tr | parallel -j10 -n100000 --no-notice -k 'mkdir -p dir_{#}; mv {} -t dir_{#}'
commands, but it moves only ~10 000 into one subdir(sometimes ~6200, sometimes ~12 500) and create too much subdirs - sometimes in 10 times more than I need.
I also tried to use this:
ls -dtr * | parallel -j10 -n100000 --no-notice -k 'mkdir -p dir_{#}; mv {} -t dir_{#}'
but it gave bash: /bin/ls: Argument list too long.
Of course, I don't need exactly 100 000 dirs in each subdir, it can be 101 000 or 98 500 dirs, it should be a number in the range of 100 000
How can I execute this task in parallel or using parallel?
|
This problem deals with heavy IO. I doubt that parallel is really useful in this situation.
Anyway I suggest that you consider a "traditional" approach:
mkdir dir_{1..10}
ls -tr | nl | \
awk '$2 !~ /^dir_/ {i=1+int($1/100000); print $2 | "xargs mv -t dir_"i}'
where
ls -tr | nl sorts the directories by date and adds an auxiliar dir-number
$2 !~ /^dir_/ is used to skip the just-created folders.
i=1+int($1/100000) calculates the number of the folder based on the dir-number
print $2 | "xargs mv -t dir_"i moves without process proliferation
If possible compare also the respective times: time .... (and share the results with us ☺)
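The bucketing formula is easier to inspect at toy scale. Here the chunk size is 3 instead of 100000 (note nl starts numbering at 1, so the first bucket comes out one item short, which is harmless at this scale):

```shell
# Map 7 items into dir_1..dir_3, roughly three per bucket:
buckets=$(printf '%s\n' a b c d e f g | nl |
          awk '{ print $2 " -> dir_" 1+int($1/3) }')
echo "$buckets"
first=$(echo "$buckets" | head -n 1)
last=$(echo "$buckets" | tail -n 1)
```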
| Use parallel to split many directories into subdirectories or parallelize this task |
1,582,061,970,000 |
I'm trying to use GNU Parallel to run a command for each input argument, using that argument as the command's working directory (without appending it to the command line).
Basically, what I need to do is:
/foo -> "cd /foo; mycmd"
/bar -> "cd /bar; mycmd"
/baz -> "cd /baz; mycmd"
Parallel has --workdir which seems to do what I want by virtue of supporting {} replacement strings:
--workdir mydir
--wd mydir
Jobs will be run in the dir mydir. The default is the current dir for the local machine, and the login dir for remote computers.
<...>
mydir can contain GNU parallel's replacement strings.
To prevent the argument from being appended to the command line, I tried to use -n0 or -N0:
--max-args max-args
-n max-args
Use at most max-args arguments per command line.
<...>
-n 0 means read one argument, but insert 0 arguments on the command line.
However, that doesn't seem to work:
$ mkdir -p $HOME/{foo,bar,baz}
$ printf '%s\n' $HOME/{foo,bar,baz} | parallel --workdir '{}' -n0 'pwd'
parallel: Error: Cannot change into non-executable dir : No such file or directory
Note the space before the :, which is not a typo in GNU parallel, but an indication that the workdir got evaluated to an empty string. This becomes evident if I prepend a fixed string to {}, in which case all pwds print that fixed string:
$ printf '%s\n' $HOME/{foo,bar,baz} | parallel --workdir '/{}' -N0 'pwd'
/
/
/
What am I doing wrong?
|
I believe the easiest would be:
printf '%s\n' $HOME/{foo,bar,baz} | parallel 'cd {} && myprg'
You can use --workdir, but it is clearly not meant for this situation. GNU Parallel normally wants a replacement string in the command template, and you want {} to contain the argument for --workdir.
Using -n0 will not help, because then {} will be empty and then --workdir will fail: {} evaluates to the same string no matter where it is used.
So the workaround is to use {}, but use it where it does no harm:
parallel --workdir {} 'true dummy {}; myprg' ::: $HOME/{foo,bar,baz}
Or use {==} to generate a static part of the command:
parallel --workdir {} '{= $_="myprg" =}' ::: $HOME/{foo,bar,baz}
Personally, I prefer the cd {} version: I think it makes it easier to see what you are trying to do.
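The reason cd {} is safe here is that each job runs in its own shell, so the cd never leaks into the parent. The same behavior can be seen with a plain subshell (a sketch; the directory names are made up):

```shell
base=$(mktemp -d)
mkdir -p "$base/foo" "$base/bar"
before=$PWD
# each (cd ...) affects only its own subshell:
out_foo=$( cd "$base/foo" && basename "$(pwd)" )
out_bar=$( cd "$base/bar" && basename "$(pwd)" )
echo "$out_foo $out_bar"
```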
| Use each input argument in GNU Parallel as a working directory |
1,582,061,970,000 |
#!/usr/bin/bash
TARGETS=(
"81.176.235.2"
"81.176.70.2"
"78.41.109.7"
)
myIPs=(
"185.164.100.1"
"185.164.100.2"
"185.164.100.3"
"185.164.100.4"
"185.164.100.5"
)
for t in "${TARGETS[@]}"
do
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done
done
I want this code to start with echo commands for each IP in TARGETS executing them in parallel. At the same time the script is not meant to proceed with echo commands for more than one address in myIPs simulteously, hence I introduced wait in the internal loop.
I want to have pairs of echo (each for the port 80 and 443) executed in parallel for each target in TARGETS. In other words I want to accomplish this (but sadly it does not work):
for t in "${TARGETS[@]}"
do &
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done
done
wait
Yet, because it would increase my load averages too much, I do not want this:
for t in "${TARGETS[@]}"
do
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
done
done
wait
How might I accomplish my objective?
P.S. This is just a snippet of a more complex script. I wanted isolate the relevant issue, hence the use of echo instead of one of the networking commands.
|
I find your question hard to understand: you seem to want both parallel and sequential execution.
Do you want this?
for t in "${TARGETS[@]}"; do
(
for a in "${myIPs[@]}"; do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done
) &
done
each target's for loop is run in a subshell in the background.
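A runnable miniature of the same shape: two "targets", two "IPs", with output collected in a temp file since the background ordering is nondeterministic:

```shell
tmp=$(mktemp)
for t in A B; do
    (
        # inner loop runs serially inside its own background subshell
        for a in 1 2; do
            echo "$a $t" >> "$tmp"
        done
    ) &
done
wait    # parent blocks until both targets finish
sort "$tmp"
```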
| How might I execute this nested for loop in parallel? |
1,582,061,970,000 |
With this loop we sequential update on all servers (server list = consul members | grep awk {'print $2'} | cut -d ":" -f1) the package consul.
for i in $(consul members | grep awk {'print $2'} | cut -d ":" -f1) ; do sshpass -p $PASSWORD ssh -oStrictHostKeyChecking=no -q root@$i "hostname && yum clean all && yum -y update consul && systemctl restart consul.service" ; done
We have over 1000 servers, so we wish to run the sshpass in parallel on 10 servers. I found GNU Parallel for this task.
How to use it with sshpass and make sure no server (from the server list) is done twice?
|
Indeed, pssh sounds like the better solution. If you must use parallel it should be fairly simple: pipe the hostnames one per line into a single command that uses {} as a placehold. Eg:
consul members | ... awk {'print $2'} | cut -d ":" -f1 |
parallel -j 10 sshpass -p "$PASSWORD" ssh -oStrictHostKeyChecking=no -q root@{} "hostname && yum clean all && yum -y update consul && systemctl restart consul.service"
Using sshpass should not make any difference. Test it first with a simple command such as just hostname.
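The host-extraction part of the pipeline can be tested offline. A toy stand-in for the consul members output (the node names and addresses here are made up):

```shell
# column 2 is addr:port; awk picks the column, cut drops the port
hosts=$(printf 'node1 10.0.0.1:8301 alive\nnode2 10.0.0.2:8301 alive\n' |
        awk '{print $2}' | cut -d ':' -f 1)
echo "$hosts"
```

Once this prints one bare hostname per line, piping it into parallel -j 10 ... root@{} ... works as shown above.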
| GNU Parallel and sshpass with server list in a loop |
1,582,061,970,000 |
Scenario:
$ process(){ echo "[$1] [$2] [$3]" ; } ; export -f process
$ process "x" "" "a.txt"
[x] [] [a.txt]
Here we see that the 2nd argument is empty string (expected).
$ find -name "*.txt" -print | SHELL=$(type -p bash) parallel process "x" ""
[x] [./a.txt] []
[x] [./b.txt] []
[x] [./c.txt] []
Here we see that the 2nd argument is the output of find (unexpected).
Expected output:
[x] [] [./a.txt]
[x] [] [./b.txt]
[x] [] [./c.txt]
How to fix?
Note: if the 2nd argument is changed from "" to "y", then the output of find is present as the 3rd argument (expected):
$ find -name "*.txt" -print | SHELL=$(type -p bash) parallel process "x" "y"
[x] [y] [./a.txt]
[x] [y] [./b.txt]
[x] [y] [./c.txt]
Why isn't the output of find present as the 3rd argument with ""?
UPD: It seems that the solution is \"\":
$ find -name "*.txt" -print | SHELL=$(type -p bash) parallel process "x" \"\"
[x] [] [./a.txt]
[x] [] [./b.txt]
[x] [] [./c.txt]
However, I'm not sure that this is the correct general solution. Here is the counterexample:
$ VAR="" ; find -name "*.txt" -print | SHELL=$(type -p bash) parallel process "x" "$VAR"
[x] [./a.txt] []
[x] [./b.txt] []
[x] [./c.txt] []
|
So, parallel runs the command through a shell, instead of executing it directly. Well, it has to, since otherwise the shell function you're using wouldn't work.
It also means that arguments with whitespace will get split:
$ echo x | parallel process "foo bar" ""
[foo] [bar] [x]
And it doesn't really matter if you quote the individual args or the whole command:
$ echo x | parallel "process foo bar"
[foo] [bar] [x]
And you can do things like this:
$ echo x | parallel process '$(date +%F)'
[2024-02-29] [x] []
$ echo x | parallel "process foo bar > test.out"
$ cat test.out
[foo] [bar] [x]
To pass arbitrary values, you'd need to get them quoted for the shell. In Bash, you could use the ${var@Q} expansion for variables:
$ var="foo bar"; echo x | parallel process "${var@Q}"
[foo bar] [x] []
And parallel looks to have an option to do just this:
--quote
-q Quote command. If your command contains special characters that should not be interpreted by the shell (e.g. ; \ | *), use --quote to escape these. The command must be a
simple command (see man bash) without redirections and without variable assignments.
See the section QUOTING. Most people will not need this. Quoting is disabled by default.
$ var="foo bar"; echo x | parallel --quote process "$var"
[foo bar] [x] []
Of course that will also trash redirections and such:
$ var="foo bar"; echo x | parallel --quote process "$var" ">test.out"
[foo bar] [>test.out] [x]
Of course, it will quote whitespace, so if you try to pass the command arguments as a single string, it will fail.
Note that when you do this
$ VAR="" ; ... parallel process "x" "$VAR"
the variable still contains just the empty string, which is what gets passed to parallel as an argument. To make it the same as parallel process "x" \"\", you'd need to have hard quotes in the variable, i.e. VAR=\"\", or VAR='""' or equivalent. Or use something like parallel process "x" "'$VAR'" instead. And remember that you can't blindly wrap things in quotes if the variable itself can also contain quotes. This will fail:
$ var="ain't so"; echo x | parallel process "'$var'"
/usr/bin/bash: -c: line 1: unexpected EOF while looking for matching `''
/usr/bin/bash: -c: line 2: syntax error: unexpected end of file
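A round trip with printf %q shows what "one layer of shell quoting" means in practice (a sketch run through bash explicitly, since %q, like ${var@Q}, is a bashism):

```shell
var="ain't so"
# one layer of quoting added...
quoted=$(bash -c 'printf "%q" "$1"' _ "$var")
echo "$quoted"
# ...and removed again by one round of shell parsing:
roundtrip=$(bash -c "printf '%s' $quoted")
```

The quoted form differs from the original but parses back to exactly it, which is why hand-wrapping in single quotes (as in the failing example above) is not a substitute.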
| GNU parallel: how to call exported bash function with empty string argument? |
1,582,061,970,000 |
I use GNU Parallel along a for loop like this:
for BAND in $(seq 1 "$BANDS") ;do
# Do not extract, unscale and merge if the scaled map exists already!
SCALED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged_scaled.nc"
MERGED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged.nc"
if [ ! -f "${SCALED_MAP+set}" ] ;then
echo "log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)"
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
else
echo "warning "Scaled map "$SCALED_MAP" exists already! Skipping merging.-""
fi
done |parallel -j 20 --joblog "parallel.${JOB_CLUSTER_PROCESS}.log"
log "$LOG_FILE" "Action=Merge, End=$(timestamp)"
(for the record: ${JOB_CLUSTER_PROCESS} is a variable given by HTCondor).
In the logs I see only entries of the first command
echo "log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel >size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)"
which is actually a custom way to log actions during this loop in the following way:
# tell what you are doing
function log {
echo "${@: 2}" 2>&1 >> "$1" ;
}
export -f log
Is it possible to get the second line
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X >$OUTPUT_PIXEL_SIZE_Y"
either alone or together with the first, included in the .log file created by --joblog?
|
--joblog only adds to the joblog when the job is finished.
You are giving GNU Parallel two jobs:
log ...
gdalmerge_and_clean ...
log finishes fast and is added to joblog, but gdalmerge_and_clean probably takes longer to run.
I think you should consider rewriting your job as a function and call that:
doit() {
BAND=$1
# Do not extract, unscale and merge if the scaled map exists already!
SCALED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged_scaled.nc"
MERGED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged.nc"
if [ ! -f "${SCALED_MAP+set}" ] ;then
log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)
gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y
else
warning "Scaled map "$SCALED_MAP" exists already! Skipping merging.-"
fi
}
export -f doit
seq 1 "$BANDS" |
parallel -j 20 --joblog "parallel.${JOB_CLUSTER_PROCESS}.log" doit {}
log "$LOG_FILE" "Action=Merge, End=$(timestamp)"
I recommend you try --dry-run if GNU Parallel does something you do not expect. It will tell you what commands it intends to run.
I think it will be time well spent if you read chapter 1+2 of GNU Parallel 2018 (https://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html or download it at: https://doi.org/10.5281/zenodo.1146014)
It should take you less than 20 minutes, and your command line will love you for it.
| GNU Parallel --joblog logs only first line of commands inside a for loop |
1,582,061,970,000 |
I'm working on a computer with 10k cores, but only allowed to access 1,000 at a time.
I have a script that could benefit from GNU parallel in multiple places. Processing level A in parallel, and within that script, do something 30x.
It will take a lot of work to re-write the entire script to use the parallel --link and ::: A B C ::: $(seq 30) syntax.
Is there a way that two independent calls to parallel can communicate enough that they limit the total number of jobs to 1000 between the two of them?
|
If I understand correctly, you want to run 2 (or more) instances of GNU Parallel, and you want the total number of running jobs to be <1000.
So at some point one of them may run 300 jobs, and then the other should be limited to 700 jobs.
--limit is made for this kind of specialized situation: It will run a script of your choice and based on the exit value it will limit the number of jobs.
So now you need some way of determining the total number of jobs running.
Maybe some kind of [ $(ps aux | grep myprogram | wc -l) -gt 1000 ]?
(What kind of monster has 10k cores?)
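One caveat with the ps aux | grep myprogram | wc -l sketch: the grep matches its own command line, inflating the count by one. pgrep sidesteps that (a sketch using sleep as the counted program, assuming procps is installed):

```shell
sleep 30 &
pid=$!
# -x: match the exact process name; -c: print the count
count=$(pgrep -c -x sleep)
kill "$pid"
echo "running sleep processes: $count"
```

A --limit script could simply exit non-zero when such a count exceeds the budget.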
| Limit total number of GNU-parallel jobs between mutliple calls |
1,582,061,970,000 |
I am trying to download multiple files parallelly in bash and I came across GNU parallel. It looks very simple and straight forward. But I am having a hard time getting GNU parallel working. What am I doing wrong? Any pointers are appreciated. As you can see the output is very sequential and I expect output to be different each time. I saw a similar question in SO (GNU parallel not working at all) but that solutions mention there did not work for me.
svarkey@svarkey-Precision-5510:~$ seq 1 3 | xargs -I{} -n 1 -P 4 kubectl version --short=true --context cs-prod{} --v=6
I0904 11:33:10.635636 24861 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:10.640718 24863 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:10.640806 24862 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:11.727974 24863 round_trippers.go:443] GET https://kube-api.awsw3.cld.dtvops.net/version?timeout=32s 200 OK in 1086 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:11.741985 24861 round_trippers.go:443] GET https://kube-api.awsw1.cld.dtvops.net/version?timeout=32s 200 OK in 1105 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:11.859882 24862 round_trippers.go:443] GET https://kube-api.awsw2.cld.dtvops.net/version?timeout=32s 200 OK in 1218 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
svarkey@svarkey-Precision-5510:~$ seq 1 3 | parallel -j 4 -I{} kubectl version --short=true --context cs-prod{} --v=6
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.584076 24923 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.662197 24923 round_trippers.go:443] GET https://kube-api.awsw1.cld.dtvops.net/version?timeout=32s 200 OK in 1077 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.591033 24928 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.691343 24928 round_trippers.go:443] GET https://kube-api.awsw3.cld.dtvops.net/version?timeout=32s 200 OK in 1099 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.591033 24924 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.775152 24924 round_trippers.go:443] GET https://kube-api.awsw2.cld.dtvops.net/version?timeout=32s 200 OK in 1183 milliseconds
svarkey@svarkey-Precision-5510:/tmp/parallel-20200822$ parallel --version
GNU parallel 20200822
Copyright (C) 2007-2020 Ole Tange, http://ole.tange.dk and Free Software
Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
GNU parallel comes with no warranty.
Web site: https://www.gnu.org/software/parallel
|
parallel's output is sequential because it captures each process's output and prints it only when that process has finished, unlike xargs, which lets processes print their output immediately.
From man parallel
GNU parallel makes sure output from the commands is the same output as
you would get had you run the commands sequentially. This makes it
possible to use output from GNU parallel as input for other programs.
| Bash parallel command is executing commands sequentially |
1,582,061,970,000 |
I made a Python simulator that runs on the basis of user-provided arguments. To use the program, I run multiple random simulations (controlled with a seed value). I use GNU parallel to run the simulator with arguments in a similar manner as shown below:
parallel 'run-sim --seed {1} --power {2}' ::: <seed args> ::: <power args>
Now, there is a third argument --num that I want to use, but I want to link that argument with the seed value, so that for every seed value only one num value is used. However, the same num argument should not be used with every power value.
In a nutshell, this table should make you understand better:
| Power | Seed | num |
|:-----------|------------:|:------------:|
| 10 | 0 | 100 |
| 10 | 1 | 150 |
| 10 | 2 | 200 |
| 10 | 3 | 250 |
|:-----------|------------:|:------------:|
| 20 | 0 | 300 |
| 20 | 1 | 350 |
| 20 | 2 | 400 |
| 20 | 3 | 450 |
....
(The table format may not be suitable for mobile devices)
If I were to write the above implementation using a for loop, I would do something like:
for p in power:
for s, n in (seed, num[p])
simulate(p, s, n)
Where power is a 1D array, seed is a 1D array and num is a 2D array where a row depicts the corresponding num values for a power p.
My Solution:
Use multiple parallel statements for each power value, and use the --link parameter of parallel to bind the seed and num arguments.
parallel --link 'run-sim --seed {1} --num {2} --power 10' ::: 0 1 2 3 ::: 100 150 200 250
parallel --link 'run-sim --seed {1} --num {2} --power 20' ::: 0 1 2 3 ::: 300 350 400 450
...
The problem with this solution would be that I would have to limit the number of jobs for each statement based upon the number of power values. My computer can handle 50 extra processes before going into cardiac arrest, therefore for 3 power values, I would have to limit the jobs for each statement to 12.
What I am looking for
A one-liner such that I don't have to run multiple parallel statements and can fix the number of jobs at 50.
|
For part of the answer you want this, correct?
$ parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50}
0 100
1 150
2 200
3 250
0 300
1 350
2 400
3 450
If so, one way to do what I think you want would be
$ parallel -k echo {1} {2} ::: {10..20..10} ::: "$(parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50})"
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450
Another way would be (with a sort thrown in to show it in the order you want; it wouldn't be necessary in the actual run):
$ parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50} | parallel -a- echo {2} {1} ::: {10..20..10} | sort -k 1,1 -k3,3 -k2,2
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450
Yet another way would be to have parallel invoke parallel:
$ parallel parallel --link --arg-sep ,,, echo {1} ,,, {0..3} ,,, {100..450..50} ::: {10..20..10}
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450
This works because the “inner” parallel uses commas instead of colons for argument separators, so the “outer” parallel doesn't “see” the linked argument.
While I was working on a way to make that more understandable (there's an assumed ‘{}’ in there) I realized that that last example wouldn't exactly work for you, because the 2nd and 3rd arguments are one string. So I added the clarification, and (yet another!) parallel, to demonstrate how you'd run your Python simulator.
$ parallel parallel --link --arg-sep ,,, -I [] echo {1} [] ,,, {0..3} ,,, {100..450..50} ::: {10..20..10} | parallel -C' ' echo foo {1} bar {2} blat {3}
foo 10 bar 0 blat 100
foo 10 bar 1 blat 150
foo 10 bar 2 blat 200
foo 10 bar 3 blat 250
foo 10 bar 1 blat 350
foo 10 bar 0 blat 300
foo 10 bar 2 blat 400
foo 10 bar 3 blat 450
foo 20 bar 0 blat 100
foo 20 bar 1 blat 150
foo 20 bar 2 blat 200
foo 20 bar 3 blat 250
foo 20 bar 0 blat 300
foo 20 bar 1 blat 350
foo 20 bar 2 blat 400
foo 20 bar 3 blat 450
For any enumerated list of values
$ parallel parallel --link --arg-sep ,,, -I [] echo {1} [] ,,, {0..3} ,,, v0.0 v0.1 v0.2 v0.3 v1.0 v1.1 v1.2 v1.3 ::: {10..20..10} | parallel -C' ' echo power {1} seed {2} num {3}
power 20 seed 0 num v0.0
power 20 seed 1 num v0.1
power 20 seed 2 num v0.2
power 20 seed 3 num v0.3
power 20 seed 0 num v1.0
power 20 seed 1 num v1.1
power 20 seed 2 num v1.2
power 20 seed 3 num v1.3
power 10 seed 0 num v0.0
power 10 seed 1 num v0.1
power 10 seed 2 num v0.2
power 10 seed 3 num v0.3
power 10 seed 0 num v1.0
power 10 seed 1 num v1.1
power 10 seed 2 num v1.2
power 10 seed 3 num v1.3
This is getting to be a very long answer. I think maybe you want something more like this, where 1 through 12 (number of powers times number of seeds) are the unique values for each combination of power and seed, and could be an enumerated list of values rather than {1..12}? Note I'm linking power and seed rather than num and seed.
$ parallel --link echo {1} {2} ::: "$(parallel echo {1} {2} ::: {10..30..10} ::: {0..3})" ::: {1..12} | parallel -C' ' echo run-sim --power {1} --seed {2} --num {3}
run-sim --power 10 --seed 0 --num 1
run-sim --power 10 --seed 1 --num 2
run-sim --power 10 --seed 2 --num 3
run-sim --power 10 --seed 3 --num 4
run-sim --power 20 --seed 0 --num 5
run-sim --power 20 --seed 1 --num 6
run-sim --power 20 --seed 2 --num 7
run-sim --power 20 --seed 3 --num 8
run-sim --power 30 --seed 0 --num 9
run-sim --power 30 --seed 1 --num 10
run-sim --power 30 --seed 2 --num 11
run-sim --power 30 --seed 3 --num 12
| GNU Parallel linking arguments with alternating arguments |
1,582,061,970,000 |
Currently I have the following script for using the HaplotypeCaller program on my Unix system, in a repeatable environment I created:
#!/bin/bash
#parallel call SNPs with chromosomes by GATK
for i in 1 2 3 4 5 6 7
do
for o in A B D
do
for u in _part1 _part2
do
(gatk HaplotypeCaller \
-R /storage/ppl/wentao/GATK_R_index/genome.fa \
-I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
-L chr$i$o$u \
-O GATK/HaplotypeCaller/HaploSample.chr$i$o$u.raw.vcf &)
done
done
done
gatk HaplotypeCaller \
-R /storage/ppl/wentao/GATK_R_index/genome.fa \
-I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
-L chrUn \
-O GATK/HaplotypeCaller/HaploSample.chrUn.raw.vcf&
How can I change this piece of code to use parallel, at least partially?
Is it worth doing? I am trying to incorporate this whole script into a different script that you can see in a different question here.
Should I?
Will I get much of a performance boost?
|
parallel echo HaploSample.chr{1}{2}{3}.raw.vcf ::: 1 2 3 4 5 6 7 ::: A B D ::: _part1 _part2
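The three ::: groups form a 7×3×2 cross product, replacing the nested loops. A sketch that verifies the 42 region names with plain loops, followed by a hedged full invocation (paths are copied from the question; the -j8 job-slot count is an assumption):

```shell
# Generate the same 42 region names the original nested loops produce.
for i in $(seq 1 7); do
  for o in A B D; do
    for u in _part1 _part2; do echo "chr$i$o$u"; done
  done
done > /tmp/regions.txt
wc -l < /tmp/regions.txt      # should count 42 region names
head -n 1 /tmp/regions.txt    # first name: chr1A_part1

# A hedged full run; the chrUn call stays separate, as in the original:
# parallel -j8 gatk HaplotypeCaller \
#   -R /storage/ppl/wentao/GATK_R_index/genome.fa \
#   -I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
#   -L chr{1}{2}{3} \
#   -O GATK/HaplotypeCaller/HaploSample.chr{1}{2}{3}.raw.vcf \
#   ::: $(seq 1 7) ::: A B D ::: _part1 _part2
```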
| Converting for loops on script that is called by another script into GNU parallel commands |
1,582,061,970,000 |
I tried performing the full installation from: http://git.savannah.gnu.org/cgit/parallel.git/tree/README
The installation was successful. It's working well when installed on Mac OS, but on Amazon Linux (RHEL64) I am facing the issues below:
On running just parallel the command exits silently.
dev-dsk % parallel
dev-dsk %
On running any command even parallel --version gives following error:
dev-dsk % parallel --version
parallel: invalid option -- '-'
parallel [OPTIONS] command -- arguments
for each argument, run command with argument, in parallel
parallel [OPTIONS] -- commands
run specified commands in parallel
Same error with running parallel --gnu.
IMO, there is no conflict with Tollef's parallel from the moreutils package, as moreutils doesn't exist on my machine.
How to make GNU Parallel work on RHEL64?
|
Not sure why it was not working, as for me there was just one executable named parallel in my system path.
But I was able to fix it as below:
Run whereis parallel. This gives all the paths where executables named parallel is present. For my case there was just one path /usr/local/bin/parallel. Running using this path works just fine.
You can add an alias for this in ~/.bashrc or ~/.zshrc file like alias parallel='/usr/local/bin/parallel'
And now parallel works like charm.
dev-dsk % parallel --version
GNU parallel 20190322
Copyright (C) 2007-2019 Ole Tange and Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
GNU parallel comes with no warranty.
| GNU Parallel facing silent exit and invalid option error |
1,582,061,970,000 |
I want to run a task where I specify two commands which will be run in an alternating fashion with different parameters. E.g:
1. exec --foo $inputfile1 $inputfile.outfile1
2. exec --bar $inputfile2 $inputfile.outfile2
3. exec --foo $inputfile3 $inputfile.outfile3
4. exec --bar $inputfile4 $inputfile.outfile4
I could probably get away with specifying two parallel commands or with specifying two inputs, but I need something more universal. The files will be specified using a pipelined "find" command.
EDIT:
My command for one action would look like this:
find . -name 'somefiles*' -print0 | parallel -0 -j10 --verbose 'exec --foo {} {.}.outfile'
I just do not know how to do this in alternate fashion between two commands
So basically what I need parallel -j10 to do is run 5 of these commands with the foo parameter and 5 of them with the bar parameter on a single set of files. I could probably get away with it not being alternating, but I want parallel to ensure an exact 5/5 split so I don't end up with more foos or more bars.
|
You can first put all parameters in a file and then use
parallel -a filename command
For example:
echo "--fullscreen $(find /tmp -name '*MAY*.pdf') $(find /tmp -name '*MAY*.pdf').out" >> /tmp/a
echo "--page-label=3 $(find /tmp -name '*MAY*.pdf') $(find /tmp -name '*JUNE*.pdf').out" >> /tmp/a
echo "--fullscreen $(find /tmp -name '*MAY*.pdf') $(find /tmp -name '*JULY*.pdf').out" >> /tmp/a
Then run the command:
parallel -a /tmp/a evince
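For the original --foo/--bar case, the argument file can be generated from the file list itself. A sketch, assuming the placeholder command from the question and {.}-style extension stripping (file names with spaces would need a different column separator):

```shell
# Alternate --foo/--bar over the file list: a guaranteed 50/50 split.
printf '%s\n' a.txt b.txt c.txt d.txt |
  awk '{
    base = $0
    sub(/\.[^.\/]*$/, "", base)   # mimic the {.} replacement string
    printf "%s %s %s.outfile\n", (NR % 2 ? "--foo" : "--bar"), $0, base
  }' > /tmp/jobs.txt
cat /tmp/jobs.txt
# Then run 10 at a time, splitting each line into three columns:
#   parallel -j10 --colsep ' ' exec {1} {2} {3} :::: /tmp/jobs.txt
```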
| GNU Parallel alternating jobs |
1,582,061,970,000 |
I want to print the filename/s together with the matching pattern but only once even if the pattern match has multiple occurrence in the file.
E.g. I have a list of patterns; list_of_patterns.txt and the directory I need to find the files is /path/to/files/*.
list_of_patterns.txt:
A
B
C
D
E
/path/to/files/
/file1
/file2
/file3
Let's say /file1 has the pattern A multiple times, like this:
/file1:
A
4234234
A
435435435
353535
A
(The same goes for other files with multiple pattern matches.)
I have this grep command running but it prints the filename every time a pattern matches.
grep -Hof list_of_patterns.txt /path/to/files/*
output:
/file1:A
/file1:A
/file1:A
/file2:B
/file2:B
/file3:C
/file3:B
... and so on.
I know sort can do this when you pipe it after the grep command, grep -Hof list_of_patterns.txt /path/to/files/* | sort -u, but it only runs once grep is finished. In the real world, my list_of_patterns.txt has hundreds of patterns inside. It sometimes takes an hour to finish the task.
Is there a better way to speed up the process?
UPDATE: some files have more than a hundred occurrences of a matching pattern. E.g. /file4 contains pattern A 900 times. That's why grep takes an hour to finish: it prints every occurrence of the pattern match together with the filename.
E.g. output:
/file4:A
/file4:A
/file4:A
/file4:A
/file4:A
/file4:A
/file4:A
/file4:A
... and so on til' it reach 900 occurrences.
I want it printed only once.
E.g. Desired output:
/file4:A
/file1:A
/file2:B
/file3:A
/file4:B
|
Is there a better way to speed up the process?
Yes, it's called GNU parallel:
parallel -j0 -k "grep -Hof list_of_patterns.txt {} | sort -u" ::: /path/to/files/*
-j N - number of jobslots. Run up to N jobs in parallel. 0 means as many as possible.
-k (--keep-order) - keep the sequence of output the same as the order of input
::: arguments - use arguments from the command line as input source instead of stdin (standard input)
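What each job slot does can be reproduced on tiny stand-in files; the loop below is the sequential equivalent of one parallel job per file:

```shell
# Throwaway demo files mirroring the question's layout.
mkdir -p /tmp/demo
printf 'A\n4234234\nA\nA\n' > /tmp/demo/file1
printf 'B\nB\nC\n'          > /tmp/demo/file2
printf 'A\nB\nC\n'          > /tmp/patterns.txt
for f in /tmp/demo/file1 /tmp/demo/file2; do
  grep -Hof /tmp/patterns.txt "$f" | sort -u   # one line per (file, pattern)
done
```

Because sort -u now sees only one file's matches at a time, the 900 duplicate /file4:A lines collapse per file rather than after all files finish, which is where the speedup comes from.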
| How to print only 1 filename together with the matching pattern? |
1,582,061,970,000 |
I have a process like this that generates a predefined number of files but at random intervals:
#!/bin/bash
for i in {1..10}
do
sleep $(shuf -i 20-60 -n 1)
echo $i > file_$i.txt
done
I have another process that runs on each of those files independently using GNU Parallel like so:
parallel wc -l ::: file_{1..10}.txt
As expected, Parallel runs on the files that are currently available. Is there a way to make Parallel wait for the remaining files to be available and run as soon as they are?
|
Look at https://www.gnu.org/software/parallel/parallel_examples.html#example-gnu-parallel-as-queue-system-batch-manager
Terminal 1:
true >jobqueue; tail -n+0 -f jobqueue | parallel -u
(-u is needed if you want output on screen immediately. Otherwise the output will be delayed until the next job completes. In both cases the job is run immediately).
Terminal 2:
#!/bin/bash
for i in {1..10}
do
sleep $(shuf -i 20-60 -n 1)
echo $i > file_$i.txt
echo file_$i.txt >> jobqueue
done
If the files are the only files created in my_dir look at https://www.gnu.org/software/parallel/parallel_examples.html#example-gnu-parallel-as-dir-processor
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -u echo
This way you do not need the jobqueue file.
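When inotify-tools is unavailable, a polling variant of the same queue idea works too. A single-pass sketch (wrap it in `while sleep 5; do ...; done` for continuous operation; the .done marker files are an assumption, not part of the original):

```shell
# Fresh scratch area for the demo.
rm -rf /tmp/my_dir && mkdir -p /tmp/my_dir
: > /tmp/jobqueue
touch /tmp/my_dir/file_1.txt /tmp/my_dir/file_2.txt
for f in /tmp/my_dir/file_*.txt; do
  [ -e "$f.done" ] && continue    # skip files already queued
  echo "$f" >> /tmp/jobqueue      # consumed by: tail -n+0 -f jobqueue | parallel -u
  touch "$f.done"
done
cat /tmp/jobqueue
```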
| GNU Parallel: Run while partial files available and wait for the rest |
1,582,061,970,000 |
To set the stage, here is an example file structure:
test/1
test/2
test/3
I want to do this:
find -P -O3 "test/" -type f |
parallel --use-cpus-instead-of-cores -j+0 --tmpdir "test/" --files ./test.bash
And pass another manual parameter to test.bash such that if it is:
#!/usr/bin/env bash
main() {
echo "yay $1 $2!"
}
main "$1" "$2"
The output from cat test/*.par is:
yay test/1 param!
yay test/2 param!
yay test/3 param!
I've tried the following:
parallel --use-cpus-instead-of-cores -j+0 --tmpdir "test/" --files ./test.bash ::: 'param'
parallel --use-cpus-instead-of-cores -j+0 --tmpdir "test/" --files ./test.bash {} ::: 'param'
But they all fail and prioritize the manual parameter over the piped input.
Is there a way to perform this?
|
This is a bit unsafe, because file names can contain delimiters (newlines):
find -P -O3 test/ -type f \
| parallel -j+0 --use-cpus-instead-of-cores \
--tmpdir test/ \
--files \
-m ./test.bash {} 'param'
use instead:
find -P -O3 test/ -type f -print0 \
| parallel -0 \
-j+0 --use-cpus-instead-of-cores \
--tmpdir test/ \
--files \
-m ./test.bash {} 'param'
to make find separate results by the 0 byte (which can not appear in file names), and to make parallel use these.
However, why use find at all? Use zsh instead of bash:
setopt extendedglob # usually on by default
setopt nullglob # don't fail if no files found
parallel -j+0 --use-cpus-instead-of-cores \
--tmpdir test/ \
--files \
-m ./test.bash {} 'param' \
::: test/**/*(.)
### ^^------- look recursively into all folders including the current
### (doesn't follow symlinks!)
### ^----- for anything
### ^^^-- as long as it's a file
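The essential fix in all three variants above is placing an explicit {} before the literal 'param'. The pairing can be sanity-checked without parallel, using xargs as a sequential stand-in (all paths here are throwaway):

```shell
# Recreate the question's layout in a scratch directory.
mkdir -p /tmp/pairdemo && cd /tmp/pairdemo
printf '1' > 1; printf '2' > 2; printf '3' > 3
printf '#!/bin/sh\necho "yay $1 $2!"\n' > test.sh && chmod +x test.sh

# NUL-delimited, like the parallel -0 variant; -I{} makes the found file
# the first argument and "param" the second, exactly as desired.
find . -type f ! -name test.sh -print0 | sort -z |
  xargs -0 -I{} ./test.sh {} param
```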
| Passing parameters to GNU Parallel manually and by piping |