date: int64 (1,220B to 1,719B)
question_description: string, lengths 28 to 29.9k
accepted_answer: string, lengths 12 to 26.4k
question_title: string, lengths 14 to 159
1,712,316,189,000
I do not think that this is Linux's disk cache. In htop, the memory bar is green (not orange for cache) and I removed the files stored in zram. No processes seem to be using a lot of memory. The load was compiling software with its build files stored in zram (PORTAGE_TMPDIR which is /var/tmp/portage in Gentoo), with s...
Short answer: Use discard mount options when mounting file systems or turning on swap created on the Zram devices. Extended: When mounting a file system, use discard as a mount option; you can set mount options with -o, with multiple options separated by a comma and no space between them. It should be supported on most Linux file systems, ...
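A minimal sketch of both cases described above (device names and mount point are hypothetical, and the file system is assumed to support discard):
    # file system on a zram device: pass discard via -o
    mount -o discard,noatime /dev/zram0 /mnt/zram
    # swap on a zram device: ask swapon to discard freed pages
    mkswap /dev/zram1 && swapon --discard /dev/zram1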
After a heavy I/O load, and storing many things in Zram, used space is close to total in `free`
1,712,316,189,000
I have a little Linux system with 256MB RAM. I'm a little bit confused: where may the RAM be lost? It is running an old Linux kernel, 2.6.38, and I'm not able to upgrade it (some specific ARM board). SHM and all tmpfs mounted filesystems are almost empty (shmem: 448kB). Everything is consumed by active_anon pages but running proces...
Total_vm was badly calculated by me and the OOM report is correct. app has allocated 59739 pages which is 233MB. So, this is the correct reason of OOM.
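For reference, the arithmetic behind that figure assumes 4 KiB pages:
    echo $((59739 * 4 / 1024))   # 59739 pages x 4 KiB = roughly 233 MiB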
Embedded Linux OOM - help with lost RAM
1,712,316,189,000
I'm asking specifically about Linux, but an answer that applies to Unix in general (i.e. POSIX or similar) would be even better, obviously. Linux uses free memory (i.e. that memory which is not yet allocated to processes) for caching filesystem metadata (and maybe other things). When processes request additional memor...
You could perhaps use madvise(2)’s MADV_FREE for this — it marks pages as available for reclaim, but doesn’t necessarily drop them immediately, and the data can be read back. You’ll know the pages are gone if you get all zeroes back (per page).
Can a process allocate cache memory such that the kernel can seize it when necessary?
1,712,316,189,000
I am trying to compile the mainline Linux kernel with a custom config. This one! Running on a 64 bit system. At the last step, when linking the Kernel, it fails because it goes OOM (error 137). [...] DESCEND objtool INSTALL libsubcmd_headers CALL scripts/checksyscalls.sh LD vmlinux.o Killed make[2]: **...
"It's getting unaffordable to develop Linux": I am afraid it has always been. 32GB RAM is common on kernel devs' desktops. And yet some of them started encountering OOMs when building their allyesconfig-ed kernel. Lucky you… who are apparently not allyesconfig-ing… you should not need more than 32G… ;-) On a side note, ...
Linux build with custom config using all RAM (8GB)?
1,712,316,189,000
Edit 1: The freezing just happened and I was able to recover from it. Log (syslog) from the freezing until 'now': https://ufile.io/ivred Edit 2: It seems to be a bug/problem with GDM3. I'll try Xubuntu. Edit 3: Now I'm using Xubuntu. The problem still happens, but a lot less often. So.. it is indeed a memory issue. I'm currentl...
The solution that "fra-san" gave at the comments here fitted perfectly. Using "cgroup-tools" package I was able to limit Chrome memory usage successfully. I tested it opening dozens of tabs at the same time and I could see the memory limit in action. However, I had to leave my script running, since, as much as I mostl...
Shell-script to periodically free some memory up with Ubuntu 18.10 LiveCD with only 4GB of RAM. Need some improvement
1,509,018,664,000
We build a system that's intended to be on all the time - it collects and displays graphs of data. If we leave it without changing anything for long enough, we end up with an oom-killer event. That kills our main process (it's got the high oom-score) and our software gets restarted. Basics: The system is CentOS 6, ker...
You've asked two questions. 1) If the OOM Killer runs + you have no swapping, likely this relates to your vm.swappiness setting. Try setting this to 1. On your antiquated + highly hackable kernel (shudder), setting to 0 (as I recall), disables swapping completely, which likely isn't what you're after. 2) Determining y...
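For reference, a sketch of applying the suggested setting at runtime and making it persistent:
    sudo sysctl -w vm.swappiness=1
    echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf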
Debugging Linux oom-killer - little to no swap use
1,509,018,664,000
Running ArchLinux uname -a: Linux localhost 4.7.2-1-ARCH #1 SMP PREEMPT Sat Aug 20 23:02:56 CEST 2016 x86_64 GNU/Linux 16gb of RAM 14gb swap When I run big ansible jobs, it triggers my oom-killer. I would think 16gb is enough to run such jobs but I'm no oom log expert (or linux memory expert for that matter), here...
Ansible should definitely not be using that much memory. Could you elaborate on the jobs you are running? (How many, what are they doing, modules used, examples, etc.) I see Firefox get killed in there; are you doing a lot with Firefox?
Ansible triggering oom-killer
1,509,018,664,000
The system has killed a process due to "Out of memory" but I cannot understand these messages. I am not able to find the memory issue. [Mon Jul 20 21:20:39 2020] crawler invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0 [Mon Jul 20 21:20:39 2020] crawler cpuset=/ mems_allowed=0 [Mon Jul 20 21:20:39 2020...
The kernel killed: Killed process 24355 (crawler) total-vm:9099416kB, anon-rss:7805456kB, file-rss:0kB The process tried to allocate close to 9GB of RAM which is more than your system can handle. Looks like you have just 2GB of RAM and you've got SWAP disabled. I'd say in advance that having SWAP in a situation like ...
Explanation for "Killed Process"
1,509,018,664,000
What happens if a Linux, let’s say Arch Linux or Debian, is installed with no swap partition or swap file? Then, when running the OS while almost out of RAM, the user opens a new application. Considering that this new application needs more RAM than what is still available, what will happen? What part of the operating s...
The Linux kernel has a component called the OOM killer (out of memory). As Patrick pointed out in the comments the OOM killer can be disabled but the default setting is to allow overcommit (and thus enable the OOM killer). Applications ask the kernel for more memory and the kernel can refuse to give it to them (becaus...
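A short illustration of the overcommit knob the answer refers to (values follow the kernel documentation; verify the defaults on your own system):
    sysctl vm.overcommit_memory             # 0 = heuristic overcommit (the usual default)
    sudo sysctl -w vm.overcommit_memory=2   # 2 = strict accounting: allocations beyond the commit limit fail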
What happens if a Linux distro is installed with no swap and when it’s almost out of RAM executes a new application? [duplicate]
1,509,018,664,000
I use Linux Mint 21.2 and my machine is an Intel Core i7-6700 3.4GHz. I wrote a prime test in Python, and I checked it with large numbers. Like many tests, it is a variation of Fermat's little theorem, so I do some modular exponentiation like powmod(3, n-1, n). I could verify that n = k * 2^k+1 is prime for k = 6679881. This is ...
You're trying to use 512GB of RAM. Either you optimize your program or you rent a server. I don't think swap could help you.
What does Memory allocation mean?
1,509,018,664,000
Hardware specs : Product Name HP ProBook 430 G4 Processor 1 Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz (x86) Memory Size 4096 MB System BIOS P85 Ver. 01.03 12/05/2016 Serial Number 5CD7097FPZ I wrote the MX image to removable media i.e dd if=mxlinux.iso of=/dev/sda status=progress && sudo eject /dev/s...
I think HP doesn't play well with Linux : https://www.quora.com/Which-laptop-is-a-better-option-for-running-Linux-Dell-HP-or-Lenovo https://h30434.www3.hp.com/t5/Notebook-Software-and-How-To-Questions/Can-t-install-ubuntu-on-HP-probook-440-G4/td-p/6933204 I sync and eject the media, verified my downloads, even so any ...
System ran out of memory during MX Linux installation
1,509,018,664,000
The question is simple but I haven't found information (to be more precise, I have found information about both options (options below) but without saying which one is used in each situation). Option 1: The kernel decides which is the best page that can evict from memory and swap to disk and eviction occurs so that th...
Both options are used, depending on the circumstances. When the kernel needs to allocate pages, and there are none available (or the watermarks have been reached), it will try to reclaim pages from the inactive lists (look for “Inactive” in /proc/meminfo). Reclaiming a page there doesn’t necessarily involve swap: non...
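To see the lists mentioned above:
    grep -E '^(Active|Inactive)' /proc/meminfo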
What is done when memory gets filled: One page eviction or an entire process is killed?
1,509,018,664,000
I am using a CentOS 7.5 instance in AWS which has 16 CPUs and 32GB memory. I found when I run the following command, the whole system will be unresponsive, I cannot run any commands on it anymore, cannot even establish a new SSH session (but can still ping it). And I do not see OOM killer triggered at all, it seems th...
Oh, this topic has been discussed ad nauseam already. See: The Linux kernel's inability to gracefully handle low memory pressure Let's talk about the elephant in the room - the Linux kernel's inability to gracefully handle low memory pressure The solution? Various daemons like earlyoom (installed and enabled by defaul...
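A minimal sketch of installing and enabling earlyoom on a Debian/Ubuntu-style system (package and unit names may differ on other distributions):
    sudo apt install earlyoom
    sudo systemctl enable --now earlyoom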
Why does Linux become unresponsive when a large number of memory is used (OOM cannot be triggered)
1,509,018,664,000
I am trying to configure my .service file to limit how much memory a given service can use up before being terminated, by percentage of system memory (10% as an upper limit in this case): [Unit] Description=MQTT Loop After=radioLoop.service [Service] Type=simple Environment=PYTHONIOENCODING=UTF-8 ExecStart=/usr/bin/p...
Yes, turning that option on was enough to enable the MemoryMax to work as expected.
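The visible text does not say which option was toggled; one plausible candidate is memory accounting (it may equally have been the cgroup v2 / unified hierarchy switch). A hedged sketch of the relevant unit directives:
    [Service]
    MemoryAccounting=yes
    MemoryMax=10%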
system MemoryMax by percentage not working?
1,509,018,664,000
I strongly despise any kinds of automatic OOM killers, and would like to resolve such situations manually. So for a long time I have vm.overcommit_memory=1 vm.overcommit_ratio=200 But this way, when the memory is overflowed, the system becomes unresponsive. On my old laptop with HDD and 6 GB of RAM, I sometimes had t...
For your use case try the mlockall system call to force a specific process to never be swapped, thus avoid swap thrashing slowdown. I would recommend earlyoom with custom rules over this hack.
Is it possible to reserve resources for an always-up emergency console?
1,509,018,664,000
First and foremost, I'm trying to purge an improperly uninstalled version of Intellij-Idea on Raspberry Pi and reinstall. I installed IntelliJ-Idea on the Raspberry Pi using the below page as a guide: Install Intellij-Idea on Raspberry Pi After progressive success decreasing CPU/memory usage, and countless lock-up and...
I manually downloaded a new installation since the Add/Remove software option wasn't letting me go further. It wasn't perfect, but it was better than a dead-end. In my search, I stumbled across this: JetBrains ends 32-bit OS support JetBrains only supported 32bit up to 2021.1.x, and the current version is 2021.3.x. ...
How to fix improperly uninstalled software on Raspberry Pi (Buster)
1,433,885,782,000
I want to compile as fast as possible. Go figure. And would like to automate the choice of the number following the -j option. How can I programmatically choose that value, e.g. in a shell script? Is the output of nproc equivalent to the number of threads I have available to compile with? make -j1 make -j16
nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT. The number of jobs you can run in parallel with make using the -j option depends on a number of factors: the amount of available memory the amount of memory used by each make job the extent to which make jobs are ...
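A common starting point, before tuning for the memory and I/O constraints described above:
    make -j"$(nproc)"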
How to determine the maximum number to pass to make -j option?
1,433,885,782,000
There is a list of IP addresses in a .txt file, ex.: 1.1.1.1 2.2.2.2 3.3.3.3 Behind every IP address there is a server, and on every server there is an sshd running on port 22. Not every server is in the known_hosts list (on my PC, Ubuntu 10.04 LTS/bash). How can I run commands on these servers, and collect the outpu...
Assuming that you are not able to get pssh or others installed, you could do something similar to: tmpdir=${TMPDIR:-/tmp}/pssh.$$ mkdir -p $tmpdir count=0 while IFS= read -r userhost; do ssh -n -o BatchMode=yes ${userhost} 'uname -a' > ${tmpdir}/${userhost} 2>&1 & count=`expr $count + 1` done < userhost.lst wh...
Automatically run commands over SSH on many servers
1,433,885,782,000
Is there any tool/command in Linux that I can use to run a command in more than one tab simultaneously? I want to run the same command: ./myprog argument1 argument2 simultaneously in more than one shell to check if the mutexes are working fine in a threaded program. I want to be able to increase the number of instance...
As mavillan already suggested, just use terminator. It allows you to display many terminals in a tiled way. When you enable the broadcasting feature by clicking on the grid icon (top-left) and choosing "Broadcast All", you can enter the very same command simultaneously in each terminal. Here is an example with the date comm...
How do I run the same linux command in more than one tab/shell simultaneously?
1,433,885,782,000
I need to download a large file (1GB). I also have access to multiple computers running Linux, but each is limited to a 50kB/s download speed by an admin policy. How do I distribute downloading this file on several computers and merge them after all segments have been downloaded, so that I can receive it faster?
The common protocols HTTP, FTP and SFTP support range requests, so you can request part of a file. Note that this also requires server support, so it might or might not work in practice. You can use curl and the -r or --range option to specify the range and eventually just catting the files together. Example: curl -r ...
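A minimal two-part sketch of the range approach (the URL, offsets and file names are made up; in practice each curl would run on a different machine):
    curl -r 0-536870911 -o part1 "http://example.com/big.iso"   # first 512 MiB
    curl -r 536870912-  -o part2 "http://example.com/big.iso"   # the rest
    cat part1 part2 > big.iso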
How do I distribute a large download over multiple computers?
1,433,885,782,000
Under the assumption that disk I/O and free RAM is a bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once? I am particularly interested in calculating the MD-5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have trie...
Check out pee ("tee standard input to pipes") from moreutils. This is basically equivalent to Marco's tee command, but a little simpler to type. $ echo foo | pee md5sum sha256sum d3b07384d113edec49eaa6238ad5ff00 - b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c - $ pee md5sum sha256sum <foo.iso f10...
Simultaneously calculate multiple digests (md5, sha256)?
1,433,885,782,000
I have a bunch of PNG images on a directory. I have an application called pngout that I run to compress these images. This application is called by a script I did. The problem is that this script does one at a time, something like this: FILES=(./*.png) for f in "${FILES[@]}" do echo "Processing $f file..." ...
If you have a copy of xargs that supports parallel execution with -P, you can simply do printf '%s\0' *.png | xargs -0 -I {} -P 4 ./pngout -s0 {} R{} For other ideas, the Wooledge Bash wiki has a section in the Process Management article describing exactly what you want.
Four tasks in parallel... how do I do that?
1,433,885,782,000
Suppose that I have three (or more) bash scripts: script1.sh, script2.sh, and script3.sh. I would like to call all three of these scripts and run them in parallel. One way to do this is to just execute the following commands: nohup bash script1.sh & nohup bash script2.sh & nohup bash script3.sh & (In general, the s...
for((i=1;i<100;i++)); do nohup bash script${i}.sh & done
Calling multiple bash scripts and running them in parallel, not in sequence
1,433,885,782,000
Consider the following scenario. I have two programs A and B. Program A outputs to stdout lines of strings while program B process lines from stdin. The way to use these two programs is of course: foo@bar:~$ A | B Now I've noticed that this eats up only one core; hence I am wondering: Are programs A and B sharing t...
A problem with split --filter is that the output can be mixed up, so you get half a line from process 1 followed by half a line from process 2. GNU Parallel guarantees there will be no mixup. So assume you want to do: A | B | C But that B is terribly slow, and thus you want to parallelize that. Then you can do: A | ...
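A sketch of the idea, assuming B reads records on stdin and writes to stdout:
    A | parallel --pipe --block 10M B | C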
Executing piped commands in parallel
1,433,885,782,000
I'm using xargs with the option --max-args=0 (alternatively -P 0). However, the output of the processes is merged into the stdout stream without regard for proper line separation. So I'll often end up with lines such as: <start-of-line-1><line-2><end-of-line-1> As I'm using egrep with ^ in my pattern on the whole xar...
This should do the trick: echo -n $IPs | xargs --max-args=1 -I {} --delimiter ' ' --max-procs=0 \ sh -c "wget -q -O- 'http://{}/somepage.html' | egrep --count '^string'" | \ { NUM=0; while read i; do NUM=$(($NUM + $i)); done; echo $NUM; } The idea here is to make separate counts and sum these at the end. Might fa...
How to stop xargs from badly merging output from multiple processes?
1,433,885,782,000
In a common Linux distribution, do utilities like rm, mv, ls, grep, wc, etc. run in parallel on their arguments? In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on dual-core CPU?
You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library. ldd /bin/grep | grep -F libpthread.so So for example on Ubuntu: for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); d...
Are basic POSIX utilities parallelized?
1,433,885,782,000
I have a shell scripting problem where I'm given a directory full of input files (each file containing many input lines), and I need to process them individually, redirecting each of their outputs to a unique file (aka, file_1.input needs to be captured in file_1.output, and so on). Pre-parallel, I would just iterate ...
GNU Parallel is designed for this kind of task: parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output ::: *.input or: ls | parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output It will run one job per CPU core. You can install GNU Parallel simply by: wget https://git.savannah.gnu.org/cgit/paral...
using parallel to process unique input files to unique output files
1,433,885,782,000
I have a bash shell script in which I pipe some data through about 5 or 6 different programs then the final results into a tab delimited file. I then do the same again for a separate similar dataset and output to a second file. Then both files are input into another program for comparative analysis. e.g. to simplify ...
Use wait. For example: Data1 ... > Data1Res.csv & Data2 ... > Data2Res.csv & wait AnalysisProg will: run the Data1 and Data2 pipes as background jobs wait for them both to finish run AnalysisProg. See, e.g., this question.
How to run parallel processes and combine outputs when both finished
1,433,885,782,000
I've found only puf (Parallel URL fetcher) but I couldn't get it to read urls from a file; something like puf < urls.txt does not work either. The operating system installed on the server is Ubuntu.
Using GNU Parallel, $ parallel -j ${jobs} wget < urls.txt or xargs from GNU Findutils, $ xargs -n 1 -P ${jobs} wget < urls.txt where ${jobs} is the maximum number of wget you want to allow to run concurrently (setting -n to 1 to get one wget invocation per line in urls.txt). Without -j/-P, parallel will run as ma...
Is there parallel wget? Something like fping but only for downloading?
1,433,885,782,000
Suppose I have two resources, named 0 and 1, that can only be accessed exclusively. Is there any way to recover the "index" of the "parallel processor" that xargs launches in order to use it as a free mutual exclusion service? E.g., consider the following parallelized computation: $ echo {1..8} | xargs -d " " -P 2 -I ...
If you're using GNU xargs, there's --process-slot-var: --process-slot-var=environment-variable-name Set the environment variable environment-variable-name to a unique value in each running child process. Each value is a decimal integer. Values are reused once child processes exit. This can be used in a rudime...
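Applied to the question's example, a sketch (with -P 2 the variable takes the values 0 and 1, so it can index the two resources):
    echo {1..8} | xargs -d " " -P 2 --process-slot-var=SLOT -I{} \
        sh -c 'echo "task {} uses resource $SLOT"; sleep 1'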
How can I get the index of the xargs "parallel processor"?
1,433,885,782,000
I can ssh into a remote machine that has 64 cores. Lets say I need to run 640 shell scripts in parallel on this machine. How do I do this? I can see splitting the 640 scripts into 64 groups each of 10 scripts. How would I then run each of these groups in parallel, i.e. one group on each of one of the available cores. ...
This looks like a job for gnu parallel: parallel bash -c ::: script_* The advantage is that you don't have to group your scripts by cores, parallel will do that for you. Of course, if you don't want to babysit the SSH session while the scripts are running, you should use nohup or screen
How to run scripts in parallel on a remote machine?
1,433,885,782,000
I need to upload a directory with a rather complicated tree (lots of subdirectories, etc.) by FTP. I am unable to compress this directory, since I do not have any access to the destination apart from FTP - e.g. no tar. Since this is over a very long distance (USA => Australia), latency is quite high. Following the adv...
lftp would do this with the command mirror -R -P 20 localpath - mirror syncs between locations, and -R uses the remote server as the destination , with P doing 20 parallel transfers at once. As explained in man lftp: mirror [OPTS] [source [target]] Mirror specified source directory to local target directory. I...
How can I parallelise the upload of a directory by FTP?
1,433,885,782,000
I know that on the command line I can use & to run a command in the background. But I'm wondering if I can do it in a script. I have a script like this: date_stamp=$(date +"%Y-%m-%d" --date='yesterday') shopt -s extglob cd /my/working/directory/ sh ./stay/get_it_ios.sh sh ./stay/get_it_mix.sh cd stay zip ../stay_$...
Yes, it is. If you want to do two things concurrently, and wait for them both to complete, you can do something like: sh ./stay/get_it_ios.sh & PIDIOS=$! sh ./stay/get_it_mix.sh & PIDMIX=$! wait $PIDIOS wait $PIDMIX Your script will then run both scripts in parallel, and wait for both scripts to complete before co...
Is it possible to run two commands at the same time in a shell script?
1,433,885,782,000
I've read that the color red indicates "kernel processes." Does that mean little daemons that are regulating which task gets to use the CPU? And by extension, transaction costs in an oversubscribed system? I'm running some large-scale geoprocessing jobs, and I've got two scripts running in parallel at the same time...
Red represents the time spent in the kernel, typically processing system calls on behalf of processes. This includes time spent on I/O. There’s no point in trying to reduce it just for the sake of reducing it, because it’s not time that’s wasted — it’s time that’s spent by the kernel doing useful stuff (as long as you...
Lots of red in htop -- does that mean my tasks are tripping over each other?
1,433,885,782,000
I have written a bash script which is in following format: #!/bin/bash start=$(date +%s) inFile="input.txt" outFile="output.csv" rm -f $inFile $outFile while read line do -- Block of Commands done < "$inFile" end=$(date +%s) runtime=$((end-start)) echo "Program has finished execution in $runtime seconds." ...
GNU parallel is made for just this sort of thing. You can run your script many times at once, with different data from your input piped in for each one: cat input.txt | parallel --pipe your-script.sh By default it will spawn processes according to the number of processors on your system, but you can customise that wi...
Multi-Threading/Forking in a bash script
1,433,885,782,000
I need to run performance tests for my concurrent program and my requirement is that it should be run on only one CPU core. (I don't want cooperative threads - I always want context switching.) So I have two questions: The best solution - how to assign and reserve only one CPU core for my program (to for...
On linux, the system call to set the CPU affinity for a process is sched_setaffinity. Then there's the taskset tool to do it on the command line. To have that single program run on only one CPU, I think you'd want something like taskset -c 1 ./myprogram (set any CPU number as an argument to the -c switch.) That sho...
Using only one cpu core
1,433,885,782,000
I have the following bash script: for i in {0800..9999}; do for j in {001..032}; do wget http://example.com/"$i-$j".jpg done done All photos exist and in fact each iteration does not depend on another. How do I parallelize it with the possibility of controlling the number of threads?
Confiq's answer is a good one for small i and j. However, given the size of i and j in your question, you will likely want to limit the overall number of processes spawned. You can do this with the parallel command or some versions of xargs. For example, using an xargs that supports the -P flag you could paralleliz...
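A hedged sketch along those lines: generate the URLs and let xargs bound the number of simultaneous downloads (8 here, adjust to taste):
    for i in {0800..9999}; do
      for j in {001..032}; do
        printf 'http://example.com/%s-%s.jpg\n' "$i" "$j"
      done
    done | xargs -n 1 -P 8 wget -q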
Download several files with wget in parallel
1,433,885,782,000
Say I have multiple bash scripts that run in parallel, with code like the following: #!/bin/bash tail -f /dev/null & echo "pid is "$! Is $! guaranteed to give me the PID of the most recent background task in that script, or is it the most recent background task globally? I'm just curious if relying on this feature c...
$! is guaranteed to give you the pid of the process in which the shell executed that tail command. Shells are single threaded, each shell lives in its own process with its own set of variables. There's no way the $! of one shell is going to leak into another shell, just like assigning a shell variable in one shell is ...
Can $! cause race conditions when used in scripts running in parallel?
1,433,885,782,000
Is this the correct way to start multiple sequential processings in the background? for i in {1..10}; do for j in {1..10}; do run_command $i $j; done & done; All j should be processed after each other for a given i, but all i should be processed simultaneously.
The outer loop that you have is basically for i in {1..10}; do some_compound_command & done This would start ten concurrent instances of some_compound_command in the background. They will be started as fast as possible, but not quite "all at the same time" (i.e. if some_compound_command takes very little time, t...
Bash: Multiple for loops in Background
1,433,885,782,000
I have an embarrassingly parallel process that creates a huge amount of nearly (but not completely) identical files. Is there a way to archive the files "on the fly", so that the data does not consume more space than necessary? The process itself accepts command-line parameters and prints the name of each file created...
It seems tar wants to know all the file names upfront. So it is less on-the-fly and more after-the-fly. cpio does not seem to have that problem: | cpio -vo 2>&1 > >(gzip > /tmp/arc.cpio.gz) | parallel rm
Virtual write-only file system for storing files in archive
1,433,885,782,000
I have a small script that loops through all files of a folder and executes a (usually long lasting) command. Basically it's for file in ./folder/*; do ./bin/myProgram $file > ./done/$file done (Please Ignore syntax errors, it's just pseudo code). I now wanted to run this script twice at the same time. Obviously,...
This is possible and does occur in reality. Use a lock file to avoid this situation. An example, from said page: if mkdir /var/lock/mylock; then echo "Locking succeeded" >&2 else echo "Lock failed - exit" >&2 exit 1 fi # ... program code ... rmdir /var/lock/mylock
Parallel execution of a program on multiple files
1,433,885,782,000
In bash script, I have a program like this for i in {1..1000} do foo i done Where I call the function foo 1000 times with parameter i If I want to make it run in multi-process, but not all at once, what should I do? So if I have for i in {1..1000} do foo i & done It would start all 1000 processes at once, whic...
#!/bin/bash jobs_to_run_num=10 simult_jobs_num=3 have_runned_jobs_cntr=0 check_interval=0.1 while ((have_runned_jobs_cntr < jobs_to_run_num)); do cur_jobs_num=$(wc -l < <(jobs -r)) if ((cur_jobs_num < simult_jobs_num)); then ./random_time_script.sh & echo -e "cur_jobs_num\t$((cur_jobs_num +...
Start 100 process at a time in bash script
1,433,885,782,000
I've a problem modifying the files' names in my Music/ directory. I have a list of names like these: $ ls 01 American Idiot.mp3 01 Articolo 31 - Domani Smetto.mp3 01 Bohemian rapsody.mp3 01 Eye of the Tiger.mp3 04 Halo.mp3 04 Indietro.mp3 04 You Can't Hurry Love.mp3 05 Beautiful girls.mp3 16 Apologize.mp3 16 Christma...
To list all files starting with a number in a directory: find . -maxdepth 1 -regextype "posix-egrep" -regex '.*/[0-9]+.*\.mp3' -type f The problem with your approach is that find returns the relative path of a file while you are expecting just the filename itself.
Remove numbers from filenames
1,433,885,782,000
I have 3 functions, like function WatchDog { sleep 1 #something } function TempControl { sleep 480 #something } function GPUcontrol { sleep 480 #something } And I am running it like WatchDog | TempControl | GPUcontrol This script is in the rc.local file. So, logically, it should run automatically. The thing is that ...
Pipe sends the output of one command to the next. You are looking for the & (ampersand). This forks processes and runs them in the background. So if you ran: WatchDog & TempControl & GPUcontrol It should run all three simultaneously. Also when you run sudo bash /etc/rc.local I believe that is running them in serie...
Parallel running of functions
1,433,885,782,000
I have a script I'd always like to run as 'x' instances in parallel. The code looks like this: for A in do for B in do (script1.sh $A $B;script2.sh $A $B) & done #B done #A The scripts themselves run DB queries, so it would benefit from parallel running. Problem is 1) 'wait' doesn't work (because it finished all ...
Using GNU Parallel it looks like this: parallel script1.sh {}';' script2.sh {} ::: a b c ::: d e f It will spawn one job per CPU. GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop. If you...
How to run x instances of a script parallel?
1,433,885,782,000
Here is what I do right now, sort -T /some_dir/ --parallel=4 -uo file_sort.csv -k 1,3 file_unsort.csv the file is 90GB,I got this error message sort: close failed: /some_dir/sortmdWWn4: Disk quota exceeded Previously, I didn't use the -T option and apparently the tmp dir is not large enough to handle this. My current ...
The problem is that you seem to have a disk quota set up and your user doesn't have the right to take up so much space in /some_dir. And no, the --parallel option shouldn't affect this. As a workaround, you can split the file into smaller files, sort each of those separately and then merge them back into a single fil...
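A rough sketch of that split-sort-merge workaround (the line count is arbitrary, and the chunks still have to live somewhere the quota allows):
    split -l 100000000 file_unsort.csv chunk_
    for f in chunk_*; do
        sort -u -k 1,3 -o "$f.sorted" "$f" && rm "$f"
    done
    sort -m -u -k 1,3 -o file_sort.csv chunk_*.sorted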
Sort large CSV files (90GB), Disk quota exceeded
1,433,885,782,000
With the following Makefile, GNU make runs the two commands in parallel. Since the first one takes time to finish, rm *.log is run before the log file is created, and fails. dummy.pdf: dummy.tex tex dummy.tex &> /dev/null; rm *.log The file dummy.tex contains one line: \bye (a short empty file for TeX). Replacing tex ...
Actually, you don't have a problem with make, but with your command: tex dummy.tex &> /dev/null; Runs 'tex' in the background. You don't need to remove '>/dev/null', but '&' is sending 'tex' to the background. Try this, it must be fine for you: tex dummy.tex > /dev/null; or run everything in the same subshell, like ...
Forcing GNU make to run commands in order
1,433,885,782,000
I would like to ask if there is an out-of-the-box multicore equivalent for a '| sort | uniq -c | sort -n' command? I know that I can use the procedure below: split -l5000000 data.tsv '_tmp'; ls -1 _tmp* | while read FILE; do sort $FILE -o $FILE & done; sort -m _tmp* -o data.tsv.sorted But it feels a bit overwhelming.
GNU sort has a --parallel flag: sort --parallel=8 data.tsv | uniq -c | sort --parallel=8 -n This would use eight concurrent processes/threads to do each of the two sorting steps. The uniq -c part will still be using a single process. As Stéphane Chazelas points out in comments, the GNU implementation of sort is alre...
multicore equivalent for '| sort | uniq -c | sort -n' command
1,433,885,782,000
I am thinking of methods to make the search faster and/or better, which principally uses fgrep or ag. Code which searches for the word case-insensitively under $HOME, and redirects a list of matches to vim: find -L $HOME -xtype f -name "*.tex" \ -exec fgrep -l -i "and" {} + 2>/dev/null | vim -R - It is faster with ag bec...
You could probably make it a little bit faster by running multiple find calls in parallel. For example, first get all toplevel directories and run N find calls, one for each dir. If you run the in a subshell, you can collect the output and pass it to vim or anything else: shopt -s dotglob ## So the glob also finds hid...
How to make this search faster in fgrep/Ag?
1,433,885,782,000
I'm trying to set several environment variables with the results from command substitution. I want to run the commands in parallel with & and wait. What I've got currently looks something like export foo=`somecommand bar` & export fizz=`somecommand baz` & export rick=`somecommand morty` & wait But apparently when usi...
After some ruminations, I came up with an ugly workaround: #!/bin/bash proc1=$(mktemp) proc2=$(mktemp) proc3=$(mktemp) /path/to/longprocess1 > "$proc1" & pid1=$! /path/to/longprocess2 > "$proc2" & pid2=$! /path/to/longprocess3 > "$proc3" & pid3=$! wait "$pid1" "$pid2" "$pid3" export var1="$(<"$proc1")" export var2="$(<...
How to assign environment variables in parallel in bash
1,433,885,782,000
I have a words.txt with 10000 words (one to a line). I have 5,000 documents. I want to see which documents contain which of those words (with a regex pattern around the word). I have a script.sh that greps the documents and outputs hits. I want to (1) split my input file into smaller files (2) feed each of the files t...
You can use the split tool: split -l 1000 words.txt words- will split your words.txt file into files with no more than 1000 lines each named words-aa words-ab words-ac ... words-ba words-bb ... If you omit the prefix (words- in the above example), split uses x as the default prefix. For using the generated files wit...
split a file, pass each piece as a param to a script, run each script in parallel
1,539,810,852,000
I understand that find -tests -execdir <command> '{}' ';' runs command for every matching file against the test(s) specified. The command, when using -execdir, is executed in the same parent directory as the matching file (for every matching file), as {} stands for the basename of the matching file. Now the question i...
Let's say your find locates the following files: ./foo/bar ./foo/baz ./foo/quux If you use -execdir [...]+, the effective resultant command will be: ( cd ./foo; command bar baz quux ) As opposed to (effectively) this, if you use -execdir [...] \;: ( cd ./foo; command bar ) ( cd ./foo; command baz ) ( cd ./foo; comma...
How does find -execdir <command> + work?
1,539,810,852,000
Possible Duplicate: Parallelizing a for loop The original code might look like this: for i in *; do something.py $i; done I was wondering whether I can run these jobs in parallel in the background, such as: for i in *; do something.py $i &; done I tried and found the & here won't work. Moreover, a better way mi...
Simply remove the ; character, so finally: for i in *; do something.py $i & done And for running N instances of your script at the same time, see man 1 parallel See http://www.gnu.org/software/parallel/
Is there a way to run process parallelly in the loop of a bash script [duplicate]
1,539,810,852,000
I am doing a build on a Linux machine with Ubuntu 10.04 on it. How can I really speed up my build? I have 4 CPUs and lots of RAM. I already reniced the process group to -20. Is there something else I can do?
Most software build processes use make. Make sure make uses the -j argument with a number usually about twice the number of CPUs you have, so make -j 8 would be appropriate in your case.
How to speed up my build
1,539,810,852,000
Sometimes I need to remove all the contents of a directory and create new files there. Can I do something like this and expect all new files to remain intact: % rm -rf regression/* & ( sleep 10 ; run_regression ) where run_regression timestamps its output files so that they would have unique names and places them in ...
This is not safe. You have not specified what the problem is that you are trying to solve. If your problem is that you want your directory to always be there but be cleaned up from time to time, I would suggest explicitly removing files older than a check file (the sleep 1 is me being paranoid): touch regression.delet...
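A sketch of that check-file idea, using the directory name from the question (the check-file name is made up, and the sleep mirrors the answer's paranoia about timestamp resolution):
    touch regression.delete-before   # hypothetical marker file name
    sleep 1
    find regression -type f ! -newer regression.delete-before -delete &
    run_regression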
How safe it is to output to <dir> simultaneously with rm <dir>/*
1,539,810,852,000
I want to run a sequence of command pipelines with pv on each one. Here's an example: for p in 1 2 3 do cat /dev/zero | pv -N $p | dd of=/dev/null & done The actual commands in the pipe don't matter (cat/dd are just an example)... The goal being 4 concurrently running pipelines, each with their own pv output. Howev...
Found that I can do this with xargs and the -P option: josh@subdivisions:/# seq 1 10 | xargs -P 4 -I {} bash -c "dd if=/dev/zero bs=1024 count=10000000 | pv -c -N {} | dd of=/dev/null" 3: 7.35GiB 0:00:29 [ 280MiB/s] [ <=> ...
How can I run multiple pv commands in parallel?
1,539,810,852,000
What I'm really trying to do is run X number of jobs, with X amount in parallel for testing an API race condition. I've come up with this echo {1..10} | xargs -n1 | parallel -m 'echo "{}"'; which prints 7 8 9 10 4 5 6 1 2 3 but what I really want to see is (note order doesn't actually matter). 1 2 3 4 5 6 7 8 9 10 ...
I want 4 processes at a time, each process should process 1 record parallel -j4 -k --no-notice 'echo "{}"' ::: {1..10} -j4 - number of jobslots. Run up to 4 jobs in parallel -k - keep sequence of output same as the order of input. Normally the output of a job will be printed as soon as the job completes ::: - argu...
How can I run GNU parallel in record per job, with 1 process per core
1,539,810,852,000
I have a text file containing the following commands: command1 file1_input; command2 file1_output command1 file2_input; command2 file2_output command1 file3_input; command2 file3_output command1 file4_input; command2 file4_output command1 file5_input; command2 file5_output command1 file6_input; command2 file6_output command...
Make the lines as such: (command1 file1_input; command2 file1_output) & (command1 file2_input; command2 file2_output) & ... And each line will execute its two commands in sequence, but each line will be forked off as parallel background jobs. If you want the second command to execute only if the first command complet...
Running commands at once
1,539,810,852,000
I am trying to write a script whose purpose is to parallelize an execution (a program that creates some files), running the processes in the background and, when all commands in the for loop are done, performing an extra command (namely moving all produced files into another folder). This is what I came up with for the ...
If the csv files are generated by the java command, this will fail because the mv will run before any files have been generated. Since all java processes are sent to the background, the loop will finish almost immediately, so the script continues to the mv which finds no files to move and so does nothing. A simple sol...
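A sketch of that fix; the command and file names are placeholders standing in for the question's loop:
    for f in ./*.input; do
        java -jar generate.jar "$f" &   # hypothetical job that writes a csv file
    done
    wait                                # block until every background job has finished
    mv ./*.csv /path/to/other/folder/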
Bash background execution not returning
1,539,810,852,000
I have 1000 gzipped files which I want to sort. Doing this sequentially, the procedure looks pretty straightforward: find . -name *.gz -exec zcat {} | sort > {}.txt \; Not sure that the code above works (please correct me if I did a mistake somewhere), but I hope you understand the idea. Anyway, I'd like to paralleli...
A quick trip to Google reveals this interesting approach: http://pebblesinthesand.wordpress.com/2008/05/22/a-srcipt-for-running-processes-in-parallel-in-bash/ for ARG in $*; do command $ARG & NPROC=$(($NPROC+1)) if [ "$NPROC" -ge 4 ]; then wait NPROC=0 fi done
How to create a bounded queue for shell tasks?
1,539,810,852,000
Is there any limit for parallel execution? If yes, how do I find out the maximum limit? I am creating a script which creates a string of scripts concatenated by '&' and uses eval to execute them all together. Something like this: scriptBuilder="ksh -x script1.sh & ksh -x script2.sh & ksh -x script3.sh"; eval $scriptBuild...
The command ulimit -u shows the maximum number of processes that you can start. However, do not actually start that many processes in the background: your machine would spend time switching between processes and wouldn't get around to getting actual work done. For CPU-bound tasks, run as many tasks as there are cores ...
How to find max parallel execution limit?
1,539,810,852,000
I have a bash script that takes as input three arrays of equal length: METHODS, INFILES and OUTFILES. This script will let METHODS[i] solve problem INFILES[i] and save the result to OUTFILES[i], for all indices i (0 <= i <= length-1). Each element in METHODS is a string of the form: $HOME/program/solver -a <method...
Summary of the comments: The machine is fast but doesn't have enough memory to run everything in parallel. In addition the problem needs to read a lot of data and the disk bandwidth is not enough, so the cpus are idle most of the time waiting for data. Rearranging the tasks helps. Not yet investigated compressing the ...
BASH: parallel run
1,539,810,852,000
I would like to solve the following issue about submitting a job that has been parallelised to a specific node. Let me start with explaining the structure of my problem I have two very simple Matlab scripts 1) main.m clear rng default P=2; grid=randn(4,3); jobs=1; 2) f.m sgetasknum_grid=grid(jobs*(str2double(getenv(...
What about not using wait at all, in the while loop? while [ "$SGE_TASK_ID" -le "$J" ]; do # grep count of matlab processes out of list of user processes n=$(ps ux | grep -c "matlab") ## if [ "$n" -le "$N" ]; then if [ "$n" -eq "$N" ]; then # sleep 1 sec if already max processes started ...
Alternative to wait -n (because server has old version of bash)
1,539,810,852,000
Say I have a 4-core workstation, what would Linux (Ubuntu) do if I execute mpirun -np 9 XXX Will 9 run immediately together, or they will run 4 after 4? I suppose that using 9 is not good, because the remainder 1 will make the computer confused, (I don't know is it going to be confused at all, or the "head" of the c...
They will all run at the same time The load will be distributed by your OS to be worked on as many cores as there are available. The time might not be proportional to the number of threads. Here is a silly example why. Assume you have one job that you want to do three times, and it takes the same amount of time every...
`mpirun -np N`: what if `N` is larger than my physical cores?
1,539,810,852,000
I have a few Linux machines laying around and I wanted to make a cluster computer network. There will be 1 monitor that would be for the controller. The controller would execute a script that would perform a task and split the load onto the computers. Lets say I have 4 computers that are all connected to the controlle...
You could try making a Beowulf Cluster. You set up one host as a master and the rest as nodes. It's been done in the past by others, including NASA as the wikipedia entry on Beowulf Cluster says. Building your own cluster computer farm might cost more in power than you'd gain in compute resources. I have not tried...
Remotely execute commands but still have control of the host
1,539,810,852,000
Normally, pipelines in Unix are used to connect two commands and use the output of the first command as the input of the second command. However, I recently came up with the idea (which may not be new, but I didn't find much by Googling) of using a pipeline to run several commands in parallel, like this: command1 | command...
Command pipelines already run in parallel. With the command: command1 | command2 Both command1 and command2 are started. If command2 is scheduled and the pipe is empty, it blocks waiting to read. If command1 tries to write to the pipe and it's full, command1 blocks until there's room to write. Otherwise, both comm...
Pipeline as parallel command
1,539,810,852,000
I'm running something like this: find . -maxdepth 1 -type f -note -iname "*.gpg" | sort | while read file ; do echo "Encrypting $file..." gpg --trust-model always --recipient "[email protected]" --output "$file.gpg" \ --encrypt "$file" && rm "$file" done This runs great, but it seems that GPG is not ...
If you install the GNU Parallel tool you can make pretty easy work of what you're trying to accomplish: $ find . -maxdepth 1 -type f -note -iname "*.gpg" | sort | \ parallel --gnu -j 8 --workdir $PWD ' \ echo "Encrypting {}..."; \ gpg --trust-model alway...
Running up to X commands in parallel
1,539,810,852,000
Possible Duplicate: How to run a command when a directory's contents are updated? I'm trying to write a simple etl process that would look for files in a directory each minute, and if so, load them onto a remote system (via a script) and then delete them. Things that complicate this: the loading may take more than...
It sounds as if you simply should write a small processing script and use GNU Parallel for parallel processing: http://www.gnu.org/software/parallel/man.html#example__gnu_parallel_as_dir_processor So something like this: inotifywait -q -m -r -e CLOSE_WRITE --format %w%f my_dir | parallel 'mv {} /tmp/processing/{/};m...
process files in a directory as they appear [duplicate]
1,539,810,852,000
parallel from moreutils is a great tool for, among other things, distributing m independent tasks evenly over n CPUs. Does anybody know of a tool that accomplishes the same thing for multiple machines? Such a tool of course wouldn't have to know about the concept of multiple machines or networking or anything like tha...
GNU Parallel does that and more (using ssh). It can even deal with mixed speed of machines, as it simply has a queue of jobs, that are started on the list of machines (e.g. one per CPU core). When one jobs finishes another one is started. So it does not divide the jobs into clusters before starting, but does it dynami...
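A hedged sketch of driving two remote machines with GNU Parallel (host names, per-host job counts and the command are made up; ':' adds the local machine, and the inputs are assumed to be reachable from every host):
    parallel --sshlogin 8/server1,8/server2,: do_one_task {} ::: *.input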
Multi-machine tool in the spirit of moreutils' `parallel`?
1,539,810,852,000
GNU Parallel, without any command line options, allows you to easily parallelize a command whose last argument is determined by a line of STDIN: $ seq 3 | parallel echo 2 1 3 Note that parallel does not wait for EOF on STDIN before it begins executing jobs — running yes | parallel echo will begin printing infinitely ...
A bug in GNU Parallel means that it only starts processing after having read one job for each jobslot. After that it reads one job at a time. In older versions the output will also be delayed by the number of jobslots. Newer versions only delay output by a single job. So if you sent one job per second to parallel -j10...
Make GNU Parallel not delay before executing arguments from STDIN
1,539,810,852,000
I've run into a couple of similar situations where I can break a single-core bound task up into multiple parts and run each part as a separate job in bash to parallelize it, but I struggle with collating the returned data back into a single data stream. My naive method so far has been to create a temporary folder, track the PI...
My naive method so far has been to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned. This is almost exactly what GNU Parallel does. parallel do_stuff ::: job1 job2 job3 ... jobn ...
How to merge data from multiple background jobs back to a single data stream in bash
1,539,810,852,000
I plan to get a new notebook and try to find out if a quad-core processor gives me any advantages over a regular dual-core machine. I use common Linux Distributions (Ubuntu, Arch etc.) and mostly Graphics Software: Scribus, Inkscape, Gimp. I want to use this new processor for a few years. I've done a lot of research ...
You haven't found any reliable answer because there is no widely applicable reliable answer. The performance gain from multiple cores is hard to predict except for well-defined tasks, and even then it can depend on many other factors such as available memory (no benefit from multiple cores if they're all waiting for s...
Linux & Linux-Software: Advantages of a multi core processor [closed]
1,539,810,852,000
I want to run some simulations using a Python tool that I had made. The catch is that I would have to call it multiple times with different parameters/arguments and everything. For now, I am using multiple for loops for the task, like: for simSeed in 1 2 3 4 5 do for launchPower in 17.76 20.01 21.510 23.76 do ...
GNU parallel has several options to limit resource usage when starting jobs in parallel. The basic usage for two nested loops would be parallel python sim -a {1} -p {2} ::: 1 2 3 4 5 ::: 17.76 20.01 21.510 23.76 If you want to launch at most 5 jobs at the same time, e.g., you could say parallel -j5 python <etc.> Al...
Bash consecutive and parallel loops/commands
1,539,810,852,000
I work on a shared cluster. I've seen people run parallelized c code on this cluster which, when I use top to see what processes are running, are shown to be using (for example) 400% of the CPU, since they are using four processors for a single instance of their code. Now someone is running (what I hear to be) a para...
What you see in C is using threads, so the process usage is the total of all its threads. If there are 4 threads with 100% CPU usage each, the process will show as 400% What you see in python is almost certainly parallelism via the multiprocess model. That's a model meant to overcome Python's threading limitations. Py...
How does a parallelized Python program look with top command?
1,539,810,852,000
I have two scripts running parallelly and they are echoing to the same file. One script is echoing +++++++++++++++ to the file while the other script is echoing =========== to the file. Below is the first script #!/bin/bash while [ 1==1 ]; do echo "+++++++++++++++" >> log.txt # commands done Below is the seco...
Simply, the echo command triggers one write syscall, which is atomic. Note that write doesn’t guarantee to write all the bytes it is given, but in this case (little data), it does. So in theory write(fd, buffer, n) can write fewer than n bytes and return the actual number of bytes written, to enable the program to wri...
Race condition not seen while two scripts write to a same file
1,539,810,852,000
When I'm running a task on a computer network: I've just started to realize that if I qsub any task, then the task won't hog my terminal, and I can do other things on the same terminal (which is quite useful even if the task only takes a single minute to finish). And then I run qstat to see which tasks have finishe...
In these cases I'd rather open another terminal. What is the reason that you don't want to do that? Downside of running qsub, is that you have to write a tiny script file for a trivial operation, which takes you some time. I don't know how many other users are working on the same network, but the purpose is meant as a...
Are there any disadvantages against using qsub to run tasks all the time?
1,539,810,852,000
I have a simple Bash script I am running to parallelize and automate the execution of a program written in Sage MATH: #!/bin/bash for i in {1..500}; do echo Spinning up threads... echo Round $i for j in {1..8}; do ../sage ./loader.sage.py & done wait done 2>/dev/null I would like to add a timeout so tha...
Using GNU Parallel: parallel --timeout 5 -j 8 -N0 ../sage ./loader.sage.py ::: {1..4000} 2>/dev/null This will execute ../sage ./loader.sage.py 4000 times, 8 jobs at a time, each with a timeout of 5 seconds From the parallel man page: --timeout duration Time out for command. If the command runs for longer ...
Adding a timeout to a parallelized call in Bash
1,539,810,852,000
files: $ ls a.md b.md c.md d.md e.md Command: pandoc file.md -f markdown file.pdf How would I run two pandoc instances simultaneously, processing the files in parallel? Possibly with xargs or parallel. It would work like Iteration/ cmd 1 / cmd 2 1 / pandoc a.md -f markdown a.pdf / pandoc b.md -f markdown b.pdf 2 / pandoc c.md -f markdow...
Crudely, #!/bin/sh set -- *.md while [ $# -gt 0 ] do pandoc "${1}" -f markdown -o "${1%.md}.pdf" & shift if [ $# -gt 0 ] then pandoc "${1}" -f markdown -o "${1%.md}.pdf" & shift fi wait done With xargs: find . -type f -name '*.md' -print0 | xargs -0 -n2 -P2 -I{} pandoc {} -f markdown -o {}.pdf you w...
How to process multiple files with pandoc?
1,539,810,852,000
1. Summary I don't understand, how I can combine parallel and sequential commands in Linux. 2. Expected behavior Pseudocode: pip install pipenv sequential pipenv install --dev parallel task npm install -g grunt-cli sequential npm install Windows batch working equivalent: start cmd /C "pip install pipenv & pipenv inst...
Simply with GNU parallel: parallel ::: 'pip install pipenv && pipenv install --dev' \ 'npm install -g grunt-cli && npm install'
Combine parallel and sequential commands
1,539,810,852,000
I have this code to check the status of all of my git folders. find / -maxdepth 3 -not -path / -path '/[[:upper:]]*' -type d -name .git -not -path "*/Trash/*" -not -path "*/Temp/*" -not -path "*/opt/*" -print 2>/dev/null | { while read gitFolder; do ( parent=$(dirname $gitFolder); S...
GNU Parallel is built for exactly this: doit() { gitFolder="$1" parent=$(dirname $gitFolder); Status=$(git -C $parent status) if [[ $Status == *Changes* ]]; then echo $parent; git -C $parent status --porcelain echo "" elif [[ $Status == *ahead...
Is it possible to bundle subshell output?
1,446,735,876,000
I have the following in a shell script: for file in $local_dir/myfile.log.*; do file_name=$(basename $file); server_name=$(echo $file_name | cut -f 3 -d '.'); file_location=$(echo $file); mv $file_location $local_dir/in_progress1.log mysql -hxxx -P3306 -uxxx -pxxx -e "s...
One would assume that "60 seconds" (and even "5 minutes") is just a good estimate, and that there is a risk that the first batch is still in progress when the second batch is started. If you want to separate the batches (and if there is no problem aside from the log-files in an occasional overlap), a better approach ...
query r.e running for loop scripts in parallel
1,446,735,876,000
I am running my below shell script from machineA, which copies the files from machineB and machineC onto machineA. If the files are not there on machineB, then they should be there on machineC. The below shell script will copy the files into the TEST1 and TEST2 directories on machineA. #!/bin/bash set -e readonly TEST1=/data...
In addition to sending them to the background, use the wait built in to wait for all background processes to finish before continuing. for el in $test1_partition do (scp david@${SERVER_LOCATION[0]}:$dir1/pp_monthly_9800_"$el"_200003_5.data $TEST1/. || scp david@${SERVER_LOCATION[1]}:$dir2/pp_monthly_9800_"$el"_200...
How to parallelize the for loop while scp the files? [duplicate]
1,446,735,876,000
In some unix shells there is the time command which prints how much time a given command takes to be executed. The output looks like real  1m0.000s user 10m0.000s sys   0m0.000s If I write a program that uses parallelization on multiple cores, the user-time can be a multiple of the real-time. My question is, ...
In one simple word: No. What wastes a lot of effort is switching between kernel space and user space; such switching is where the most waste is produced. There is (a lot of) work done just to get to where the real operation needs to be executed. The fewer switches needed, the more efficient an operation should be. Ther...
Is an optimal user-time to real-time ratio an indicator for efficient parallelization?
1,446,735,876,000
I have a machine where apt-get update hangs for ever with a "Waiting for headers" message, suggesting that one source is not responding. From this question I know I can do sudo apt-get -o Debug::Acquire::http=true update to identify the culprit. However, it is still complicated to find out which query is not respondin...
Put this in your apt configuration file: Acquire::Queue-Mode "access"; or use it on the command line like this: apt-get -o Acquire::Queue-mode=access update
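To make the setting persistent rather than a one-off -o flag, it can be dropped into a snippet under /etc/apt/apt.conf.d/; the file name 99queue-mode below is only an illustrative choice:

# write the option into a dedicated apt configuration snippet
echo 'Acquire::Queue-Mode "access";' | sudo tee /etc/apt/apt.conf.d/99queue-mode
sudo apt-get update   # downloads are now serialized, one file at a time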
How can I tell `apt-get update` to download only one file at a time?
1,446,735,876,000
I am computing Monte-Carlo simulations using GNU Octave 4.0.0 on my 4-core PC. The simulation takes almost 4 hours to compute the script for 50,000 times (specific to my problem), which is a lot of time spent for computation. I was wondering if there is a way to run Octave on multiple cores simultaneously to reduce th...
GNU Parallel will not do multithreading, but it will do multiprocessing, which might be enough for you: seq 50000 | parallel my_MC_sim --iteration {} It will default to 1 process per CPU core and it will make sure the output of two parallel jobs will not be mixed. You can even put this parallelization in the Octave s...
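If the simulation is an Octave function rather than a standalone program, a hedged sketch of invoking it per iteration could look like this (my_MC_sim and its argument handling are assumptions carried over from the answer):

# one Octave process per iteration, at most one job per CPU core
seq 50000 | parallel octave --no-gui --eval 'my_MC_sim({})'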
Run GNU Octave script on multiple cores
1,446,735,876,000
So I have .lzo files in the /test01/primary folder which I need to uncompress and then delete all the .lzo files. I need to do the same thing in the /test02/secondary folder as well. I will have around 150 .lzo files in each folder, so around 300 files in total. From the command line I was running this to uncompress one file lzop...
Per the man page: -U, --unlink, --delete Delete input files after successful compression or decompression. so you could simply run lzop -dU -- {"$PRIMARY","$SECONDARY"}/*.lzo to delete each lzo file as soon as it's successfully decompressed. lzop is single-threaded so if you want parallel processing ...
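The cut-off sentence presumably points at GNU parallel or xargs; a hedged sketch of a parallel variant, assuming $PRIMARY and $SECONDARY hold the two folder paths:

# decompress (and unlink) the .lzo files from both folders, one job per CPU core
parallel lzop -dU -- ::: "$PRIMARY"/*.lzo "$SECONDARY"/*.lzo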
Uncompress .lzo files in parallel and then delete the original .lzo files
1,446,735,876,000
I would like to run tasks a_1, a_2, b_1, b_2, c_1, c_2 in the following fashion: a_i, b_j, c_k (where i, j, k are 1 or 2) can be run in parallel. But a_2 should be run right after a_1 completes (they use the same resources, so a_2 should wait for a_1 to free the resources). Same with b, c. How can I do this in bash?
( a_1; a_2 ) & ( b_1; b_2 ) & ( c_1; c_2 ) & wait This would run three background jobs and then wait for all to finish. Each of the three background jobs would run its commands one after the other. For a slightly more complicated variation: for task in a b c; do for num in 1 2; do "${task}_$num"; done & done wai...
Running multiple jobs: a combination of parallel and serial
1,446,735,876,000
I want to do the following conversion: for f in *.m4a; do ( ffmpeg -i "$f" -f wav - | opusenc --bitrate 38 - "${f%.m4a}.opus" ) & done I know I could use ffmpeg directly to convert to opus, but I want to use opusenc in this case, since it's a newer version. When I run opusenc after the ffmpeg it works fine, bu...
If you use GNU Parallel then this works: parallel 'ffmpeg -i {} -f wav - | opusenc --bitrate 38 - {.}.opus' ::: *m4a Maybe that is good enough? It has the added benefit that it only runs 1 job per cpu thread, so if you have 1000 files you will not overload your machine.
How do I run an on-the-fly ffmpeg (pipe) conversion in parallel?
1,446,735,876,000
Apologies if this is off topic - it concerns the relative efficiencies of running I/O-heavy Perl/Java scripts in parallel on a Ubuntu system. I have written two simple versions of a file copy script (Perl and Java) - see below. When I run the scripts on a 15GB file, each takes a similar amount of time on a 48-core mac...
The performance difference is most likely in how buffering works between Perl and Java. In this case, you used a BufferedReader in Java, which gives it an advantage. Perl buffers around 4k from disk. You could try a few things here. One is to use the read function in Perl to get larger blocks at a time. That ...
Running jobs in parallel on Ubuntu - I/O contention differences between Perl and Java
1,446,735,876,000
Let's say I have a command accepting a single argument which is a file path: mycommand myfile.txt Now I want to execute this command over multiple files in parallel, more specifically, file matching pattern myfile*. Is there an easy way to achieve this?
With GNU xargs and a shell with support for process substitution xargs -r -0 -P4 -n1 -a <(printf '%s\0' myfile*) mycommand Would run up to 4 mycommands in parallel. If mycommand doesn't use its stdin, you can also do: printf '%s\0' myfile* | xargs -r -0 -P4 -n1 mycommand Which would also work with the xargs of moder...
Execute command on multiple files matching a pattern in parallel
1,446,735,876,000
How would I execute a bash script in parallel for each line? Actually, I will be tailing a log file and, for each line found, I want to execute a script in the background; something like the example below: tailf logfile.log | grep 'patternline' | while read line ; do bash scriptname.sh "$line" & ; done I woul...
You'd like to read the xargs manual and look up the -L and the -P flags in there. tail -f logfile.log | grep 'patternline' | xargs -P 4 -L 1 bash scriptname.sh This will execute at most four instances of the command at a time (-P 4), and with one line of input for each invocation (-L 1). Add -t to xargs to see what g...
parallel processing using xargs
1,446,735,876,000
Say I have a great number of jobs (dozens or hundreds) that need doing, but they're CPU intensive and only a few can be run at once. Is there an easy way to run X jobs at once and start a new one when one has finished? The only thing I can come up with is something like below (pseudo-code): jobs=(...); MAX_JOBS=4; c...
If you have GNU Parallel you can do this: parallel do_it {} --option foo < argumentlist GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. If you have 32 different jobs you want to run on 4 CPUs, a straightforward way t...
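If GNU Parallel is not available, bash 4.3 or newer can build a small pool with wait -n; a minimal sketch, reusing the jobs array and do_it command from the question and answer:

MAX_JOBS=4
for arg in "${jobs[@]}"; do
    # refill the pool: block until one background job exits before starting another
    while (( $(jobs -rp | wc -l) >= MAX_JOBS )); do
        wait -n
    done
    do_it "$arg" --option foo &
done
wait   # let the last few jobs drain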
Can you make a process pool with shell scripts?
1,446,735,876,000
If I wanted to know how long systemd actually needed to boot the default target, how would I do that? And then, is it possible to create a graph showing which unit takes how much time to initialize and to what degree they are run in parallel?
Use the built-in systemd-analyze tool. You are especially interested in two subcommands: blame and plot. systemd-analyze blame systemd-analyze plot > graph.svg blame: print a list of running units ordered by time to initialize; plot: output an SVG graphic showing service initialization.
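Two related views can also help when reading the numbers: the default time verb prints the headline figure, and critical-chain shows the chain of units that gated the boot:

systemd-analyze time             # overall startup time (kernel and userspace, plus firmware/loader where reported)
systemd-analyze critical-chain   # tree of the time-critical chain of units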
How can I measure the time systemd needs to boot the default target and then graph it?
1,446,735,876,000
As part of my research project I'm processing a huge amount of data split up into many files. All files in folder foo have to be processed by the script myScript, involving all elements of folder bar. This is myScript: for f in bar/* do awk 'NR==FNR{a[$0]=$0;next}!a[$0]' $f $1 > tmp cp tmp $1 done The first i...
While you can do this with a shell script, this is going at it the hard way. Shell scripts aren't very good at manipulating multiple background jobs. My recommendation is to use GNU make or some other version of make that has a -j option to execute multiple jobs in parallel. Write each subtask as a makefile rule. I th...
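The answer is cut off before an example; a hedged sketch of what such a makefile might look like for this task (the .done stamp files and the FOO variable are illustrative, not part of the answer; recipe lines must start with a tab). Note that myScript's fixed tmp file would have to be made unique per invocation (e.g. with mktemp) before jobs can safely overlap:

# process every file in foo/ once; run with: make -j 8
FOO := $(wildcard foo/*)

all: $(FOO:%=%.done)

%.done: %
	./myScript "$<"   # run myScript on one foo/ file
	touch "$@"        # stamp file marks that file as done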
control number of started programs in bash
1,446,735,876,000
So, I have 10 CPU cores and 20 pieces of data to process. I want to process the data in parallel, but I am afraid that if I just process all 20 at once it will cause problems. So, I want to process 10 at a time, in two batches. Is there any command to do this? Add info: the data are files, and they are quite large; each file can reach 10GB. In my e...
Using GNU Parallel: parallel my_process {} ::: files* This will run one my_process per file, with one job per CPU thread. You can tell GNU Parallel to make sure there is 10G of RAM free before it starts the next job: parallel --memfree 10G my_process {} ::: files* If the free mem goes below 5G then GNU Parallel will kill the newest j...
Processing command parallel per batch
1,446,735,876,000
I want to recursively delete all files that end with .in. This is taking a long time, and I have many cores available, so I would like to parallelize this process. From this thread, it looks like it's possible to use xargs or make to parallelize find. Is this application of find possible to parallelize? Here is my cur...
Replacing -delete with -print (which is the default) and piping into GNU parallel should mostly do it: find . -name '*.in' -type f | parallel rm -- This will run one job per core; use -j N to use N parallel jobs instead. It's not completely obvious that this will run much faster than deleting in sequence, since delet...
Parallelize recursive deletion with find
1,446,735,876,000
I want to run two commands in a terminal on my virtual machine at the same time. I have this as of now: sudo ptpd -c -g -b eth1 -h -D; sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap However, the tcpdump command only starts running when I press Ctrl-C, and I don't want to cancel the first command. If I ju...
Running each command in a different terminal will work; you can also start them in a single terminal with & at the end of the first to put it in the background (see Run script and not lose access to prompt / terminal): sudo ptpd -c -g -b eth1 -h -D & sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap
Running multiple commands at the same time
1,446,735,876,000
I need this to be more efficient. Right now it takes up to 20 hrs depending on the line (these are fairly large MCS datasets). The script: splits the large data file into its "shots"; creates a list of each shot name to be used in the for loop; loops through each shot and performs the same processes; appends each shot to a new data file, so...
I assume it is the for loop you want parallelized: #! /bin/bash # Split the input file into one file for each shot. NB must close each o/p file at the earliest opportunity, otherwise it will crash! susplit <$1 key=fldr stem=fldr_ verbose=1 close=1 sucit() { i=$1 echo $i suchw key1=tstat key2=tstat a=200...
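The answer is cut off inside sucit(); presumably it ends by exporting the function and handing the shot list to parallel. A hedged sketch of that wiring (the shot-list file name is an assumption):

export -f sucit                     # make the function callable from parallel's subshells
parallel sucit :::: shot_list.txt   # one shot per job, one job per CPU core by default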
How can I run this bash script in parallel?
1,446,735,876,000
I have a large input file which contains 30M lines, new lines in \r\n. I want to process this file in parallel by sending chunks of 1000 lines (or less, for the remainder of the file) to a REST API with curl. I tried the following: < input.xt tr -d '\r' | xargs -P 8 -r -d '\n' -n 1000 -I {} curl -s -X POST --data-bina...
GNU Parallel is built for this: < input.xt parallel -P 8 -d '\r\n' -n 1000 curl -s -X POST --data-binary '{}' http://... If you want to keep the \r\n, use --pipe. This defaults to passing chunks of ~1 MB: < input.xt parallel -P 8 --pipe curl -s -X POST --data-binary @- http://...
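If the API really needs chunks of exactly 1000 lines rather than ~1 MB blocks, --pipe can be combined with -N; a hedged sketch, with the URL left as a placeholder exactly as in the answer:

# -N 1000 hands each curl invocation exactly 1000 input records on stdin
< input.xt parallel -P 8 --pipe -N 1000 curl -s -X POST --data-binary @- http://...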
How to combine multiple lines with xargs
1,446,735,876,000
Let's say that I have 10 GBs of RAM and unlimited swap. I want to run 10 jobs in parallel (gnu parallel is an option but not the only one necessarily). These jobs progressively need more and more memory but they start small. These are CPU hungry jobs, each running at 1 core. For example, assume that each job runs for ...
Things have changed since June. Git commit e81a0eba now has --memsuspend: --memsuspend size (alpha testing) Suspend jobs when there is less than 2 * size memory free. The size can be postfixed with K, M, G, T, P, k, m, g, t, or p, which would multiply the size by 1024, 1048576, 1073741824, 1099511627776, 11258999068...
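A hedged usage sketch of the new option, assuming 10 parallel jobs and an illustrative 1G threshold (my_job and the input_* arguments are placeholders):

# suspend the newest jobs when free memory drops below 2 * 1G and
# resume them once memory recovers
parallel -j 10 --memsuspend 1G my_job {} ::: input_*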
parallel: Pausing (swapping out) long-running processes when above a memory limit threshold