1,327,107,993,000
I want to assign the result of an expression (i.e., the output from a command) to a variable and then manipulate it – for example, concatenate it with a string, then echo it. Here's what I've got:

```shell
#!/bin/bash
cd ~/Desktop;
thefile= ls -t -U | grep -m 1 "Screen Shot";
echo "Most recent screenshot is: "$thefile;
```

But that outputs:

```
Screen Shot 2011-07-03 at 1.55.43 PM.png
Most recent screenshot is:
```

So, it looks like that isn't getting assigned to $thefile, and is being printed as it's executed. What am I missing?
A shell assignment is a single word, with no space after the equal sign. So what you wrote assigns an empty value to thefile; furthermore, since the assignment is grouped with a command, it makes thefile an environment variable and the assignment is local to that particular command, i.e. only the call to ls sees the assigned value.

You want to capture the output of a command, so you need to use command substitution:

```shell
thefile=$(ls -t -U | grep -m 1 "Screen Shot")
```

(Some literature shows an alternate syntax thefile=`ls …`; the backquote syntax is equivalent to the dollar-parentheses syntax except that quoting inside backquotes is sometimes weird, so just use $(…).)

Other remarks about your script:

- Combining -t (sort by time) with -U (don't sort, with GNU ls) doesn't make sense; just use -t.
- Rather than using grep to match screenshots, it's clearer to pass a wildcard to ls and use head to capture the first file:

  ```shell
  thefile=$(ls -td -- *"Screen Shot"* | head -n 1)
  ```

- It's generally a bad idea to parse the output of ls. This could fail quite badly if you have file names with nonprintable characters. However, sorting files by date is difficult without ls, so it's an acceptable solution if you know you won't have unprintable characters or backslashes in file names.
- Always use double quotes around variable substitutions, i.e. here write `echo "Most recent screenshot is: $thefile"`. Without double quotes, the value of the variable is re-expanded, which will cause trouble if it contains whitespace or other special characters.
- You don't need semicolons at the end of a line. They're redundant but harmless.
- In a shell script, it's often a good idea to include set -e. This tells the shell to exit if any command fails (by returning a nonzero status).
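As a self-contained illustration of the command substitution fix (using a throwaway directory rather than ~/Desktop; the file name is made up for the example):

```shell
# Demo of command substitution: the assignment captures grep's output.
dir=$(mktemp -d)
touch "$dir/Screen Shot 2011-07-03.png"
thefile=$(ls -t "$dir" | grep -m 1 "Screen Shot")
echo "Most recent screenshot is: $thefile"
rm -rf "$dir"
```

Note the lack of spaces around `=` and the double quotes around `$thefile` in the final echo.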
If you have GNU find and sort (in particular if you're running non-embedded Linux or Cygwin), there's another approach to finding the most recent file: have find list the files and their dates, and use sort and read (here assuming bash or zsh for -d '' to read a NUL-delimited record) to extract the youngest file:

```shell
IFS=/ read -rd '' ignored thefile < <(
  find -maxdepth 1 -type f -name "*Screen Shot*" -printf "%T@/%p\0" |
  sort -rnz)
```

If you're willing to write this script in zsh instead of bash, there's a much easier way to catch the newest file, because zsh has glob qualifiers that permit wildcard matches not only on names but also on file metadata. The (om[1]) part after the pattern is the glob qualifiers: om sorts matches by increasing age (i.e. by modification time, newest first) and [1] extracts the first match only. The whole match needs to be in parentheses because it's technically an array: globbing returns a list of files, even if the [1] means that in this particular case the list contains (at most) one file.

```shell
#!/bin/zsh
set -e
cd ~/Desktop
thefile=(*"Screen Shot"*(om[1]))
print -r "Most recent screenshot is: $thefile"
```
How can I assign the output of a command to a shell variable?
1,327,107,993,000
Is there an easy way to substitute/evaluate environment variables in a file? Let's say I have a file config.xml that contains:

```
<property>
    <name>instanceId</name>
    <value>$INSTANCE_ID</value>
</property>
<property>
    <name>rootPath</name>
    <value>/services/$SERVICE_NAME</value>
</property>
```

...etc. I want to replace $INSTANCE_ID in the file with the value of the INSTANCE_ID environment variable, and $SERVICE_NAME with the value of the SERVICE_NAME env var. I won't know a priori which environment vars are needed (or rather, I don't want to have to update the script if someone adds a new environment variable to the config file).
You could use envsubst (part of GNU gettext):

```shell
envsubst < infile
```

This will replace the environment variables in your file with their corresponding values. The variable names must consist solely of alphanumeric or underscore ASCII characters, must not start with a digit, and must be nonempty; otherwise such a variable reference is ignored.

Some alternatives to gettext's envsubst that support ${VAR:-default} and extra features: a Rust alternative, a Go alternative, and a Node.js alternative. To replace only certain environment variables, see this question.
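A minimal, hedged demonstration (guarded in case envsubst isn't installed; the config.xml file written here is illustrative and removed afterwards):

```shell
# Substitute an environment variable in a small config fragment.
if command -v envsubst >/dev/null 2>&1; then
  export INSTANCE_ID=i-1234
  printf '<value>$INSTANCE_ID</value>\n' > config.xml
  result=$(envsubst < config.xml)
  rm -f config.xml
else
  result='<value>i-1234</value>'   # envsubst unavailable; skip the demo
fi
echo "$result"
```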
Replace environment variables in a file with their actual values?
1,327,107,993,000
If there's a "First World Problems" for scripting, this would be it. I have the following code in a script I'm updating:

```shell
if [ $diffLines -eq 1 ]; then
    dateLastChanged=$(stat --format '%y' /.bbdata | awk '{print $1" "$2}' | sed 's/\.[0-9]*//g')
    mailx -r "Systems and Operations <sysadmin@[redacted].edu>" -s "Warning Stale BB Data" jadavis6@[redacted].edu <<EOI
Last Change: $dateLastChanged
This is an automated warning of stale data for the UNC-G Blackboard Snapshot process.
EOI
else
    echo "$diffLines have changed"
fi
```

The script sends email without issues, but the mailx command is nested within an if statement, so I appear to be left with two choices: put EOI on a new line and break indentation patterns, or keep the indentation but use something like an echo statement to get mailx to suck up my email. I'm open to alternatives to heredoc, but if there's a way to get around this, it's my preferred syntax.
You can change the here-doc operator to <<-. You can then indent both the here-doc and the delimiter with tabs:

```shell
#! /bin/bash
cat <<-EOF
	indented
	EOF
echo Done
```

Note that you must use tabs, not spaces, to indent both the here-doc and the delimiter. This means the above example won't work if copied verbatim (Stack Exchange replaces tabs with spaces). If you put any quotes around the first EOF delimiter, then parameter expansion, command substitution, and arithmetic expansion will not be in effect.
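To sidestep the tabs-vs-spaces copy problem, the behaviour can be verified with a script generated by printf, where \t guarantees real tabs (a sketch; demo.sh is a throwaway file name):

```shell
# <<- strips leading tabs from both the here-doc body and the delimiter.
printf 'if true; then\n\tcat <<-EOF\n\t\tindented with tabs\n\tEOF\nfi\n' > demo.sh
out=$(sh demo.sh)
echo "$out"
rm -f demo.sh
```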
Can't indent heredoc to match code block's indentation
1,327,107,993,000
I'm searching for a keyword in a file with grep, but I don't want to display the match. I just want to know the exit status of the grep.
Any POSIX-compliant version of grep has the switch -q for quiet:

```
-q  Quiet. Nothing shall be written to the standard output, regardless of
    matching lines. Exit with zero status if an input line is selected.
```

In GNU grep (and possibly others) you can use long-option synonyms as well:

```
-q, --quiet, --silent    suppress all normal output
```

Example, string exists:

```shell
$ echo "here" | grep -q "here"
$ echo $?
0
```

String doesn't exist:

```shell
$ echo "here" | grep -q "not here"
$ echo $?
1
```
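The usual reason to want this is branching on the exit status, for example:

```shell
# grep -q prints nothing; the if statement reads its exit status directly.
if printf 'alpha\nbeta\n' | grep -q 'beta'; then
  found=yes
else
  found=no
fi
echo "$found"
```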
How can I suppress output from grep, so that it only returns the exit status?
1,445,607,289,000
I'm trying to copy a batch of files with scp but it is very slow. This is an example with 10 files:

```
$ time scp cap_* user@host:~/dir
cap_20151023T113018_704979707.png    100%  413KB 413.2KB/s   00:00
cap_20151023T113019_999990226.png    100%  413KB 412.6KB/s   00:00
cap_20151023T113020_649251955.png    100%  417KB 416.8KB/s   00:00
cap_20151023T113021_284028464.png    100%  417KB 416.8KB/s   00:00
cap_20151023T113021_927950468.png    100%  413KB 413.0KB/s   00:00
cap_20151023T113022_567641507.png    100%  413KB 413.1KB/s   00:00
cap_20151023T113023_203534753.png    100%  414KB 413.5KB/s   00:00
cap_20151023T113023_855350640.png    100%  412KB 411.7KB/s   00:00
cap_20151023T113024_496387641.png    100%  412KB 412.3KB/s   00:00
cap_20151023T113025_138012848.png    100%  414KB 413.8KB/s   00:00
cap_20151023T113025_778042791.png    100%  413KB 413.4KB/s   00:00

real    0m43.932s
user    0m0.074s
sys     0m0.030s
```

The strange thing is that the transfer rate is about 413KB/s and the file size is about 413KB, so really it should transfer one file per second, yet it's taking about 4.3 seconds per file. Any idea where this overhead comes from, and is there any way to make it faster?
@wurtel's comment is probably correct: there's a lot of overhead establishing each connection. If you can fix that you'll get faster transfers (and if you can't, just use @roaima's rsync workaround). I did an experiment transferring similar-sized files (head -c 417K /dev/urandom > foo.1 and made some copies of that file) to a host that takes a while to connect (HOST4) and one that responds very quickly (HOST1):

```
$ time ssh $HOST1 echo

real    0m0.146s
user    0m0.016s
sys     0m0.008s

$ time scp * $HOST1:
foo.1    100%  417KB 417.0KB/s   00:00
foo.2    100%  417KB 417.0KB/s   00:00
foo.3    100%  417KB 417.0KB/s   00:00
foo.4    100%  417KB 417.0KB/s   00:00
foo.5    100%  417KB 417.0KB/s   00:00

real    0m0.337s
user    0m0.032s
sys     0m0.016s

$ time ssh $HOST4 echo

real    0m1.369s
user    0m0.020s
sys     0m0.016s

$ time scp * $HOST4:
foo.1    100%  417KB 417.0KB/s   00:00
foo.2    100%  417KB 417.0KB/s   00:00
foo.3    100%  417KB 417.0KB/s   00:00
foo.4    100%  417KB 417.0KB/s   00:00
foo.5    100%  417KB 417.0KB/s   00:00

real    0m6.489s
user    0m0.052s
sys     0m0.020s
```
Why is scp so slow and how to make it faster?
1,445,607,289,000
How to use wget to download files from Onedrive? (and batch files and entire folders, if possible)
Using Chrome (but Firefox will probably also work):

1. Open DevTools.
2. Click the Download button, but cancel the download immediately.
3. Open the 'Network' tab in DevTools.
4. Search for 'Zip?authKey=' in DevTools and open it (click). This is a POST request.
5. Click 'View source' to the right of 'Form data' at the bottom.
6. Construct the command as follows:

```shell
wget --post-data='<raw form data>' '<Download URL>'
```

Or:

```shell
wget --post-data='resIds=xxx&canary=yyy&authkey=zzz' 'https://cid--foobar.users.storage.live.com/downloadfiles/V1/Zip?authKey=zzz'
```

This even works on a different host (with a different IP address).
How to download files and folders from Onedrive using wget?
1,445,607,289,000
When I run several jobs on a head node, I like to monitor the progress using the command top. However, when I'm using PBS to run several jobs on a cluster, top will of course not show these jobs, and I have resorted to using 'qstat'. However the qstat command needs to be run repeatedly in order to continue monitoring the jobs. top updates in real-time, which means I can have the terminal window open on the side and glance at it occasionally while doing other work. Is there a way to monitor in real-time (as the top command would do) the jobs on a cluster that I've submitted using the PBS command qsub? I was surprised to see so little, after extensive searching on Google.
If you want to be a super-boss, you can always use pbstop. It's basically a PBS cluster version of what htop is for local processes. (Note that your cluster may not have this installed; ask the admins for it!) It also supports interactive filtering by user, queue, etc.
PBS equivalent of 'top' command: avoid running 'qstat' repeatedly
1,445,607,289,000
After importing several thousand files from a camera onto a hard drive, I realized that the counter used in the process of renaming the files does not start from 0. This leads to a file structure like this:

```
My vacation 2018-05-03 2345.jpg
My vacation 2018-05-03 2346.jpg
My vacation 2018-05-04 2347.jpg
```

I would like to batch rename all those files in a way that the index starts from 0:

```
My vacation 2018-05-03 0001.jpg
My vacation 2018-05-03 0002.jpg
My vacation 2018-05-04 0003.jpg
```

I already went through some topics dealing with batch renaming files and adding a counter/index (bash loop) or using rename/prename, but I was not able to get a working solution for my case. Basically, I would like to match the part of the filename with the description and the date using the regular expression .*(\d\d\d\d\-\d\d\-\d\d){1} and add a suffix counter at the end.
In the file names, we need to substitute a sequence of digits followed by a dot (\d+\.) with a 4-zero-padded counter followed by a dot (sprintf("%04d.", ++$c)):

```shell
rename -n -- 'our $c; s/\d+\./sprintf("%04d.", ++$c)/e' *.jpg
```

For no zero padding, we don't need sprintf, only to concatenate the counter and the dot. Since the concatenation operator is also a dot:

```shell
rename -n -- 'our $c; s/\d+\./++$c . "."/e' *.jpg
```

Notes:

- Remove the -n when convinced it works correctly.
- In some distributions rename may be called perl-rename.
- our $c; was introduced to solve the 'Global symbol "$c" requires explicit package name' error. On my system it is not necessary... ¯\_(ツ)_/¯
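If perl-rename isn't available, a plain shell loop can do the zero-padded renumbering. This is only a sketch against scratch files (the names are invented to match the question's pattern, and it assumes the counter is always the last space-separated field):

```shell
# Renumber "<description> <date> <counter>.jpg" files with a 0-padded counter.
dir=$(mktemp -d)
touch "$dir/My vacation 2018-05-03 2345.jpg" "$dir/My vacation 2018-05-04 2347.jpg"
c=0
for f in "$dir"/*.jpg; do
  c=$((c + 1))
  # ${f% *} drops the last space-separated field (old counter + extension)
  mv -- "$f" "${f% *} $(printf '%04d' "$c").jpg"
done
result=$(ls "$dir")
echo "$result"
rm -rf "$dir"
```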
Replacing counter in a filename for all files in a directory
1,445,607,289,000
I'm used to IBM i (AS/400) batch processing, with its flexible queueing configuration possibilities. I'm searching for a similar facility for Linux batch processing. Most important is that one job at a time is taken from the queue and executed; after that, the queue is scanned for more entries, and if there are any, the next task is fetched to run. at -b comes relatively close, but if a job takes so little resources that the load stays down, more tasks get executed in parallel, which is not desired. run-parts would be another option, but it is too static in behavior and doesn't run as a daemon to look for new entries in a given queue. I could run it as a regular cron job, but I'd need to take care of deleting jobs from the queue myself. Do I have to code my own, or extend at, to achieve my desired functionality, or are there alternative batch queueing mechanisms available?
ts, the task spooler, a simple Unix batch system, should be up for the task. It runs a task spooler/queue server, and you can add a new task with a simple command like:

```shell
tsp yourcommand
```

You can specify the number of slots, i.e. how many jobs should be executed at a time. By default this is set to one, AFAIK. It also has some advanced functionality, like running a command only if a certain previous command succeeds. You can view the queue and details for each job as well. You can find more details in the manpage.
Batch Processing with one task at a time
1,445,607,289,000
I am launching non-interactive jobs using batch, and I would like to increase the load limiting factor in order to use all 8 of my cores. I am on Ubuntu 16.04 LTS. From what I understand, batch uses atd to do the jobs. Jobs start when the load factor goes under a threshold, called the load limiting factor. The man page of atd says that we can change this factor using the -l option.

My question: how can I use this atd -l XX option? When I type, for instance, atd -l 7.2 before batch, it doesn't seem to change anything.

What I have found so far:

- In the question How to run bash script via multithreading, one contributor proposes to do this in the 'atd service starting script'. I guess that refers to /etc/init.d/atd, but I do not know what to change there; see the next bullet point.
- I have found pages, such as http://searchitchannel.techtarget.com/feature/Understanding-run-level-scripts-in-Fedora-11-and-RHEL, where they propose to "modify the following line (in the start section) of the /etc/init.d/atd script: daemon /usr/sbin/atd. Replace it with this line, using the -l argument to specify the new minimum system load value: daemon /usr/sbin/atd -l 1.6". However, there is no such line in /etc/init.d/atd. It seems that it can be introduced into /etc/init.d/atd, but I do not know where; I have never changed such files.

So, how can I change the load limiting factor used by the batch command?
Found a solution:

1. Create a file /etc/init/atd.override
2. Add the line exec atd -l 7.2
3. Run sudo service atd restart

It has to do with how the Upstart init daemon works; explanations here: http://linux.die.net/man/5/init. If the file /etc/init/atd.override already exists with a line starting with exec, edit that line.
atd, batch // Setting the load limiting factor
1,445,607,289,000
I have a surveillance camera and a program records videos when motion is detected. Basically, this program saves the video in a very heavy format. My solution was to call a script that converts the video using ffmpeg. The script is called each time a video is made. The conversion takes a relatively short time, but a lot of CPU use. The problem comes when a new video arrives to be converted and the conversion of the other hasn't finished. In that situation, the script is called again, resulting in two or more ffmpeg processes that make the machine almost hang during a considerable time. How can I queue the conversion job, to be executed after the prior one is done? What I want is to have just one ffmpeg process at a time.
You can use the batch program, which is part of the at package (tools for job queuing). It is installed by default on many systems.
How can I queue processes?
1,445,607,289,000
It seems that I cannot use ./ in qsub, as in qsub -q hpc-pool ./myScript.sh, where myScript.sh contains several ./ references. After checking, it seems ./ is effectively translated to ~/. Why is this the case?
Batch jobs submitted by qsub are executed in your home directory by default. Some versions of qsub support the -d option to specify a different directory. To execute the script in the same directory where you ran qsub, use:

```shell
qsub -d "$PWD" -q hpc-pool ./myScript.sh
```

If the -d option is not available, you can access the directory where you ran qsub in your script, in the PBS_O_WORKDIR variable. So add this line near the beginning of your script:

```shell
cd "$PBS_O_WORKDIR" || exit $?
```
Current directory ./ in qsub?
1,445,607,289,000
I've got an old Mac with 24 cores, and I'd like to run several hundred/thousand one-core jobs automatically. I've made a bash script that runs the processes in the background, but if I set too many going at once the computer freezes (apparently 300 is okay, 400 too much...). Ideally, what I'd like to do is run 24, then when one finishes, the 25th, then when the next finishes, the 26th, and so on. Unfortunately each job can take a different, variable run time, so I can't use some kind of cron to set them going at staggered times. I've seen some things with "wait", but I'm not sure: if I sent 24 and then, say, 976 with a wait command, would it give me the desired behaviour, or would it just run the 976 in series after the first of the 24 finish?

EDIT: Thanks, this could very well be a duplicate, but as I see that question's answers only point towards parallel, can I please continue to explore here how to do it with xargs? The reason is that the Mac in question is currently on another continent and I absolutely need it to work for the next few days and run all these jobs; installing something always has the potential to mess up the machine, and I don't want to install parallel at this point while I can't physically get to it. But it has xargs in bash, so I'm exploring using that. Thus far, I've rewritten my bash script to meet what appears to be the situation expected by both xargs and parallel, so that I can run it with a variety of input. So now, what I have is a bash script that runs my jobs on each file in a folder. I've currently tried:

```shell
ls -d myfolder/* | xargs -P 2 -L 1 ~/bin/myscript.sh
```

But this still seems to run them all simultaneously, so I'm not sure what I've done wrong. (Here I'm using max 2 just so I can keep looking and testing! I put only 4 files in the folder, since I didn't want to send hundreds by accident.)

FINAL EDIT: Ahah!!! MUCH later I figured out what I'd done wrong. xargs was likely running my script in parallel, but not the program I'd written the script to run. I wrote a script because I hadn't been able to figure out how to insert the filename into the arguments list, which expected parameter=value pairs. I eventually figured out how I could do this with the -I flag in xargs. This finally worked:

```shell
ls -d myfolder/* | xargs -I foo -P 2 -L 1 myprogram arg1 arg2 arg3=foo arg4
```

(I think -I and -L 1 are redundant, but as it works I'm not messing with it...) Here, foo was replaced in the arguments list to myprogram with each filename. I note that one reason it took me ages to figure this out is that most instructions with -I use {} as the element to replace, and for some reason my Macs couldn't handle that. So I thought -I wasn't working, but it worked fine with foo.
I encountered a similar problem recently. As far as I know you have two options: xargs -0 -P 24 -L 1, and GNU Parallel. For example, to convert every flac file found by the find command to ogg, I tried running:

```shell
find -name "*.flac" -print0 | xargs -0 -P 24 -L 1 oggenc
```

This runs up to 24 processes at a time (-P 24), using one line (-L 1) from the find command per invocation. I'm sure you can customize this to your needs, but we would need more details from your question.
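The effect is easy to observe with a toy command. This sketch feeds six NUL-delimited arguments through xargs with at most four concurrent echo processes (output order may vary because of the parallelism):

```shell
# -P 4 caps concurrency; -n 1 passes one argument per invocation.
out=$(printf '%s\0' a b c d e f | xargs -0 -P 4 -n 1 echo processed)
echo "$out"
```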
queue-like behaviour for multiple one-core jobs on single machine? [duplicate]
1,445,607,289,000
I need to get a clean txt document, and my first approach is to use aspell. The issue is I need it in batch, not interactive mode. Every txt file is piped to aspell, and a new document must be produced with the non-dictionary words deleted. I've found just the inverse behaviour: listing the non-dictionary words using

```shell
cat $file | aspell list | sort -u -f
```

Is aspell the correct tool to achieve that cleaned document folder? What about automatic substitution of misspelled words (using a predefined list file)?
```shell
sed -E -e "s/$(aspell list <file | sort -u | paste -s -d'|' |
    sed -e 's/^/\\b(/; s/$/)\\b/')//g" \
    file > newfile
```

This uses command substitution $(...) to insert the output of aspell list <$file into a sed search-and-replace operation. aspell's output is unique-sorted, and paste is used to join each line with |. Finally it is piped through sed to add \b word-boundary anchors as well as open and close parentheses. All of which constructs a valid extended regular expression like \b(word1|word2|word3|...)\b to use as the search regexp in the sed search-and-replace command.

You can test the result of the entire command with, e.g.:

```shell
diff -u file newfile
```

AFAIK, aspell doesn't have an auto-correct mode. This is probably a Good Thing.
filter document via aspell
1,445,607,289,000
Consider a simple processing queue like:

```shell
cat list.txt | xargs -n1 -P20 process.sh
```

(-P or --max-procs) How can I have something like that on AIX?
You could emulate the same thing by replacing your xargs with a ksh script. E.g.:

```shell
#!/bin/ksh
nproc=0
max=20
trap 'let nproc--' sigchld
while read file
do
    while [ $nproc -ge $max ]
    do
        sleep 1
    done
    process.sh "$file" &
    let nproc++
done
wait
```

The shell variable nproc counts the number of processes it has run in the background. When a process ends, the shell traps the SIGCHLD signal to decrement the variable. A sleep polling loop stops more than max processes being started.
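Without SIGCHLD traps, a cruder but portable variant simply launches jobs in batches and waits between batches. This is a sketch where sleep stands in for the real process.sh; it is coarser than the trap-based approach because a whole batch must finish before the next one starts:

```shell
# Run at most $max background jobs at a time, batch-wise.
max=4
n=0
for i in 1 2 3 4 5 6 7 8; do
  sleep 1 &              # placeholder for process.sh "$i"
  n=$((n + 1))
  if [ "$n" -ge "$max" ]; then
    wait                 # block until the whole batch has finished
    n=0
  fi
done
wait
batches_done=yes
echo "$batches_done"
```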
How to have a process queue in AIX like xargs with "--max-procs"?
1,445,607,289,000
I am still pretty new to Linux and have been trying very hard to get this right. Please help me get this right. I am trying to merge 2 videos (1 from each folder) multiple times in a batch process, automatically, 1 set after the next. I am trying to do it with ffmpeg and a for loop in order to take one file from the top of the list of folder-1 and merge it with one file from the top of the list of folder-2, and then repeat the process all the way down the folder lists until all videos have been paired up with one another. To imagine this, picture 2 folders side by side with files in each, and line up the files from the folder on the left with the folder on the right. I want to merge 2 videos into 1, multiple times. Here is my code; I have changed this so many times, but my latest attempt was the following (I ran this in the directory of folder-1, hoping it would read the files and join 1 to 1 from folder-2, but sadly no luck):

```shell
for filename in *.mp4; do
    folder2="/path/to/folder2"
    xout="/output/file/as/$(date +'%Y-%m-%d_%H:%M:%S').mp4"
    ffmpeg -f concat -i "${filename%}" -i "$vid2/${filename%}" -acodec copy -vcodec copy "$xout"
done
```

Here is another attempt that is giving me the same No such file or directory errors:

```shell
for filename in *.mp4; do
    vid1="/path/folder-1/${filename%.*}"
    vid2="path/folder-2/${filename%.*}"
    out1="/path/output/$(date +'%Y-%m-%d_%H:%M:%S').mp4"
    ffmpeg -f concat -i "$vid1" -i "$vid2" -acodec copy -vcodec copy "$out1"
done
```

Can anyone please tell me what I am doing wrong? I can't get this right; it has been around 4 hours and I have tried so many things and read up on so many articles regarding for loops, while loops, ffmpeg commands, etc. Thank you so much for your precious time, it is greatly appreciated!
In your first example, ${filename%} doesn't change $filename at all, and as you told ffmpeg to open the .mp4 file with the concat demuxer (-f concat), the error message should have been <actual name of $filename>: Invalid data found when processing input. But you received No such file or directory, so I suspect the glob was not working: maybe your working directory wasn't actually folder-1, in which case the full 'file not found' error message from ffmpeg would have been *.mp4: No such file or directory. The glob didn't match any files, so the parameter filename was set to <literal asterisk><dot>mp4.

In your second example, the parameter substitution ${filename%.*} is removing .mp4 from the end of the names you are providing to ffmpeg, which is possibly why you received No such file or directory.

In addition, in both your examples the usage of ffmpeg's concat demuxer is incorrect. The concat demuxer requires a text file as its input (or appropriate shell substitution, as used in the example below with <()). In your examples, input files were specified directly, which is what you might do if you wanted to mux all the streams into a single container; the streams would be in "parallel" (e.g. to add subtitle streams, or a secondary audio track). Concatenation will join the files sequentially.

If what you want is to concatenate an .mp4 file in folder-1 with another .mp4 file in folder-2, where the filenames match, here's an example which I have tested, using absolute paths: /tmp/a and /tmp/b are used instead of /path/folder-1 and /path/folder-2, /tmp as /path/output:

```shell
seconddirectory="/tmp/b"
for i in /tmp/a/*.mp4
do
    if ! [[ -e "$seconddirectory/${i##*/}" ]]
    then
        >&2 echo "no matching file in $seconddirectory for $i"
        continue
    fi
    out="${i##*/}"
    out="/tmp/${out%.*}-$(date +'%Y-%m-%d_%H:%M:%S')"
    ffmpeg -f concat -safe 0 -i <(printf '%s\n' "file '$i'" "inpoint 0" \
        "file '$seconddirectory/${i##*/}'" "inpoint 0") -c copy "$out.mp4"
done
```

This will output all .mp4 files in /tmp/a for which a matching filename exists in /tmp/b to /tmp/*-date.mp4, concatenating the matching files. (Note: don't use only the date for the output, as it will cause conflicting filenames; use something unique to avoid this. In this example the basename of the input file is used.) The ${i##*/} substitutions remove the path component from the absolute paths, leaving just the filename component; using absolute paths means that the current working directory won't interfere with * glob matching.

If you wanted to join files with non-matching filenames, you would need to work something different out, e.g. to match the first file from each folder, then the second, etc. (in the order that the bash glob sorts them):

```shell
a=(/tmp/a/*.mp4)
b=(/tmp/b/*.mp4)
a=("${a[@]:0:${#b[@]}}")
b=("${b[@]:0:${#a[@]}}")
for (( i=0; i<${#a[@]}; i++ ))
do
    out="${a[i]##*/}"
    out="${out%.*}-${b[i]##*/}"
    out="/tmp/${out%.*}-$(date +'%Y-%m-%d_%H:%M:%S')"
    ffmpeg -f concat -safe 0 -i <(printf '%s\n' "file '${a[i]}'" "inpoint 0" \
        "file '${b[i]}'" "inpoint 0") -c copy "$out.mp4"
done
```

This uses array variables, in conjunction with globbing, to build lists of .mp4 files in each directory, and will concatenate a pair (one from each list) until no more pairs exist.

Instead of the process substitution <(), you could match pairs of input files and write them to a text file in the format required by ffmpeg, then process that file. E.g. for the first example it would look like:

```shell
seconddirectory="/tmp/b"
for i in /tmp/a/*.mp4
do
    if ! [[ -e "$seconddirectory/${i##*/}" ]]
    then
        >&2 echo "no matching file in $seconddirectory for $i"
        continue
    fi
    out="${i##*/}"
    out="/tmp/${out%.*}-$(date +'%Y-%m-%d_%H:%M:%S')"
    printf '%s\n' "file '$i'" "inpoint 0" \
        "file '$seconddirectory/${i##*/}'" "inpoint 0" > "$out.ffcat"
done
for i in /tmp/*.ffcat
do
    ffmpeg -f concat -safe 0 -i "$i" -c copy "${i/%.ffcat/.mp4}"
done
```

An alternative to ffmpeg for this job would be mkvmerge (from mkvtoolnix). It offers a way to concatenate files without needing a text file as input. In the first example above, the entire ffmpeg line could be replaced with:

```shell
mkvmerge -o "$out.mkv" "$i" + "$seconddirectory/${i##*/}"
```

The resulting output file will be in a .mkv matroska container, instead of the .mp4 container used in the ffmpeg examples above.

Putting all this together in a reusable function:

```shell
function concatenation_example() {
    local a b c i out mf
    if type mkvmerge >/dev/null
    then
        mf=m
    elif type ffmpeg >/dev/null
    then
        mf=f
    else
        >&2 echo "This function won't work without either mkvmerge or ffmpeg installed."
        return 1
    fi
    if [[ ! -d "$1" || ! -d "$2" || ! -d "$3" ]]
    then
        >&2 printf '%s\n' "concatenation_example FIRSTDIR SECONDDIR OUTDIR" \
            "all arguments must be directories"
        return 1
    fi
    for i in "$1"/*.mp4
    do
        if ! [[ -e "$2/${i##*/}" ]]
        then
            >&2 echo "no matching file in $2 for $i"
            continue
        fi
        out="${i##*/}"
        out="$3/${out%.*}-$(date +'%Y-%m-%d_%H:%M:%S')"
        case "$mf" in
            (m) mkvmerge -o "$out.mkv" "$i" + "$2/${i##*/}" ;;
            (f) ffmpeg -f concat -safe 0 -i <(printf '%s\n' "file '$i'" "inpoint 0" \
                "file '$2/${i##*/}'" "inpoint 0") -c copy "$out.mp4" ;;
        esac
    done
}
```
ffmpeg merge multiple sets of 2 videos in for loop
1,445,607,289,000
How can I count the unique log lines in a text file, only up to the first "-", and print each line with its count?

```
org.springframework. - initialization started
org.springframework. - initialization started
pushAttemptLogger - initialization started
pushAttemptLogger - initialization started
```

Example result:

```
org.springframework. 2
pushAttemptLogger 2
```

Reviewed: https://stackoverflow.com/questions/6712437/find-duplicate-lines-in-a-file-and-count-how-many-time-each-line-was-duplicated
```shell
cut -f1 -d'-' inputfile | sort | uniq -c
```

cut -f1 -d'-' treats the file as dash-delimited and returns only the first column of each line. sort is necessary for uniq to work properly. uniq -c shows only unique lines from the sorted input, including a count.
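The pipeline can be reproduced against the question's sample input (note that cut keeps the trailing space before the dash, and uniq -c puts the count on the left):

```shell
# Reproduce the question's example log and count the prefixes.
printf '%s\n' \
  'org.springframework. - initialization started' \
  'org.springframework. - initialization started' \
  'pushAttemptLogger - initialization started' \
  'pushAttemptLogger - initialization started' > input.log
counts=$(cut -f1 -d'-' input.log | sort | uniq -c)
echo "$counts"
rm -f input.log
```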
Count unique lines only to a set pattern
1,445,607,289,000
I would like to write a batch script that checks used or available memory and allows me to run commands if available memory is less than X MB. I googled, but the pages referenced didn't work for me. I am using CentOS 7. Basically, I would like to do:

```
if availablememory < 26000m
    do command=forever stopall
    do command=pkill -f checkurl.php
end
```

Before program start:

```
[root@www ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          32002        3471         802        1121       27728       26529
Swap:         38112         234       37878
```

After program start:

```
[root@www ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          32002       13913         200        1121       17887       16381
Swap:         38112         234       37878
```
```shell
if [ $(awk '/^MemAvailable:/ { print $2; }' /proc/meminfo) -lt 123456 ]; then
    : # do something
fi
```

Note that /proc/meminfo reports values in kB, so compare against your threshold in kB, not MB.
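The awk extraction can be sanity-checked against a fake meminfo file (sample numbers roughly converted from the question's free -m output; the file name is a throwaway):

```shell
# /proc/meminfo reports kB; 26529 MB available is 27165696 kB.
printf 'MemTotal:       32770048 kB\nMemAvailable:   27165696 kB\n' > meminfo.sample
avail=$(awk '/^MemAvailable:/ { print $2 }' meminfo.sample)
if [ "$avail" -lt 26624000 ]; then   # 26000 MB threshold expressed in kB
  low=yes
else
  low=no
fi
echo "available: $avail kB, low: $low"
rm -f meminfo.sample
```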
batch script run command if available memory less than X mb
1,445,607,289,000
I have a bunch of images that I'd like to rename as follows:

```
*.png.png  --> *.png
*.jpeg.jpg --> *.jpg
*.JPEG     --> *.jpg
```

The only thing I've tried thus far is mv *.png.png *.png; I knew that wouldn't work, but took a chance nevertheless. Is there a simple (or maybe not) way to batch rename files with this pattern?
Here is something using find to rename *.png.png -> *.png:

```shell
find ./ -name '*.png.png' -type f \
    -exec sh -c 'mv {} ./$(basename -s .png.png {}).png' \;
```

It isn't really generic, so you have to customize it for the other file extensions.
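A variant worth trying in a scratch directory first: it avoids basename and quotes the file name (so names with spaces survive), using sh parameter expansion instead. The paths here are throwaway:

```shell
# Rename *.png.png -> *.png using parameter expansion inside sh -c.
dir=$(mktemp -d)
touch "$dir/a.png.png" "$dir/b.png.png"
find "$dir" -name '*.png.png' -type f \
  -exec sh -c 'mv "$1" "${1%.png.png}.png"' sh {} \;
result=$(ls "$dir")
echo "$result"
rm -rf "$dir"
```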
Mac Terminal - rename *.png.png to *.png
1,445,607,289,000
In batch: @echo off :step1 if exist "file1.txt" (GOTO step2) ELSE (GOTO X) :step2 if exist "file2.txt" (GOTO Z) ELSE (GOTO Y) :X run the script from the beginning and file1.txt and file2.txt are created and the rest of the script is executed :Y run the commands to file1.txt and run the rest of the script :Z run the commands to file2.txt and run the rest of the script I know "goto" and "labels" don't exist in bash, but what is the alternative to do something similar to the above in bash? attempt: #!/bin/bash if [ -e file1.txt ]; then echo "file1.txt ok" if [ -e file2.txt ]; then echo "file2.txt ok" alternative to goto label Z else alternative to goto label Y fi else echo "file1.txt doesn't exist" alternative to goto label X fi PD: rudimentary scripts only for the purpose of conveying the idea What does the batch (and should also do the bash) (Important to avoid confusion): The full script runs a few commands and creates the files file1.txt and file2.txt. So since the script is very long and can be interrupted at some point, what I want is to start the script from where it was interrupted. This is what verification is for: If file1.txt exists (Y), then the script has already created it, and goes on to check if file2.txt (Z) exists (another part of the script). If file2.txt exists (Z), it means that the commands start from that point If file2.txt (Z) does not exist, but file1.txt (Y) exists, it means that the script was interrupted somewhere between creating file1.txt (Y) and file2.txt (Z), then it starts at file1.txt (Y) If neither file1.txt (Y) nor file2.txt (Z) exists then I have to start from the beginning of script (X)
After knowing exactly what your batch script should do, I would start with simplifying it a bit: @echo off if exist "file1.txt" goto skip_part1 run the script from the beginning and file1.txt and file2.txt are created and the rest of the script is executed :skip_part1 if exist "file2.txt" goto skip_part2 run the commands to file1.txt and run the rest of the script :skip_part2 run the commands to file2.txt and run the rest of the script which can be equivalently written as: @echo off if not exist "file1.txt" ( run the script from the beginning and file1.txt and file2.txt are created and the rest of the script is executed ) if not exist "file2.txt" ( run the commands to file1.txt and run the rest of the script ) run the commands to file2.txt and run the rest of the script and this can be directly translated to the following bash script: #!/bin/bash if [ ! -e file1.txt ]; then run the script from the beginning and file1.txt and file2.txt are created and the rest of the script is executed fi if [ ! -e file2.txt ]; then run the commands to file1.txt and run the rest of the script fi run the commands to file2.txt and run the rest of the script
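The checkpoint pattern can be tried end to end in a throwaway directory; echo stands in for the real commands, and each phase drops its marker file so a rerun resumes where the previous run stopped:

```shell
cd "$(mktemp -d)"

run_script() {
    if [ ! -e file1.txt ]; then
        echo "running part 1"
        : > file1.txt        # checkpoint: part 1 done
    fi
    if [ ! -e file2.txt ]; then
        echo "running part 2"
        : > file2.txt        # checkpoint: part 2 done
    fi
    echo "running part 3"
}

run_script    # first run: parts 1, 2 and 3
run_script    # second run: only part 3
```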
how to create a bash script, with alternative of "goto" and "labels"
1,445,607,289,000
I'm on OpenSUSE 12.1, so no tmux, and we're not allowed to install anything - wget is too old to download a binary as well. Often I and other users have to run long scripts that take several hours, and our SSH client will crash in the middle. I'm aware that this is a bad practice but my opinion isn't valued. What's a good way to "schedule" or somehow run these long scripts without the danger of them ending if the client crashes? Cron jobs maybe?
One option would be screen, if it is available. (You mentioned tmux, but not screen) Another option would be to run the script with "nohup" which will disassociate it from your shell. You would then need to use its pid to monitor it. Redirecting the output to files would also be recommended.
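A minimal nohup sketch (the script and log file names are placeholders, and a trivial command stands in for the long-running job); $! gives the pid for later monitoring:

```shell
cd "$(mktemp -d)"    # just so the demo log lands somewhere disposable

# a trivial stand-in for the long-running script
nohup sh -c 'echo "long job output"' > job.log 2>&1 &
job_pid=$!
echo "started as pid $job_pid"

wait "$job_pid"      # in real use you would log out instead of waiting
cat job.log
```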
What's the best way to run a long script without the SSH client crashing?
1,445,607,289,000
I've been reading the recent blogpost "Winding down my Debian involvement" by Michael Stapelberg. Sad details aside, it's been mentioned that within Debian infrastructure batch jobs run four times a day at XX:52 UTC: When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC). Is there a reason to choose XX:52 UTC exactly and not to use time rounded to the nearest hour, e.g 02:00, 08:00, 08:00 and 14:00? Should I also start my cron jobs slightly before the new hour starts, or this was a random choice by the Debian team?
It is not random, and it is something that a system administrator should think about. Notice that your cron.hourly, your cron.daily, your cron.weekly, and your cron.monthly are all run at different times. These times have varied over the years, and have been moved back and forth, because these jobs interact with one another, sometimes badly. The same is true of other Debian infrastructure. This is a thing to think about in general with scheduled jobs running in batch. (It's not just cron jobs, but this sort of job in general. And it's not just Debian.) The job that cleans up files might interact with the job that scans the filesystem for stuff, which might interact with the job that makes temporary files as it is working, … Further reading bam (1998-05-31). cron: race condition from run-parts --report /etc/cron.daily. Debian bug #23023. Christoph Anton Mitterer (2012-01-22). cron: align /etc/cron.{daily, hourly, monthly, weekly} with @daily, @hourly, @monthly, @weekly. Debian bug #656835. https://wiki.debian.org/ItsSixAmAndIveBeenCracked
Starting batch jobs at exact time slightly before the new hour starts
1,445,607,289,000
I have a submit script looks like below, it tries to run a large number of instances of csce.py in backgrounds with 3 nodes.... in a laptop, this usually could successfully automatically distribute all the background tasks into 16 cores.... However, I am not sure if in a cluster, it would also automatically distribute the 4*13*9 tasks in 3 nodes (48 cores). #!/bin/bash #SBATCH -N 3 # Total number of nodes requested (16 cores/node) #SBATCH -n 48 # Total number of mpi tasks requested for simplify in 0.1 0.15 0.2 0.25 do for lmbda in 0.5 1 2 5 10 20 50 100 200 500 1000 2000 5000 do for mu in 0.005 0.01 0.05 0.1 0.5 1 5 10 50 do rm eci.out csce.py --mu $mu --lmbda $lmbda --simplify $simplify --favor-low-energy 0.01 --bias-stable --save-energies lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_ce-energies.dat --save-weights lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_ce-weights.dat --casm-eci-file eci.in lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_eci.out --save-hull lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_ce-hull.dat --preserve-ground-state 10000 2> lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_error 1> lmbda_$lmbda\_mu_$mu\_simplify_$simplify\_output & done done done wait
No: even though you request multiple nodes (machines), there is nothing in the script that takes advantage of them; everything will run on the machine where the script executes. The & at the end of the csce.py line just makes the command run in the background on the current machine, so with this setup you will get 4x13x9 tasks running in parallel there. GNU parallel supports remote execution; for that you need to set up automated access to the other machines and think about how any input data is accessed (if it is not stored for reading on some volume shared by all machines you might need to copy the data to work on).
submitting jobs to get 3 nodes running parallel executions
1,445,607,289,000
I have multiple text files, and I wish to extract specific columns from these files and save them to *_2.txt files. awk '{print $(NF-3), $5}' *.txt > *_2.txt But this command is not working. How can I achieve this batch column extraction using awk? Input: a.txt aaa bbb ccc 109.6136 93.1900 1.0000 269.7332 35703.1790 ddd eee fff 48.8760 34.2100 1.0000 215.0926 35918.2717 ggg hhh iii 17.3588 -65.4900 0.7000 14008.0228 49926.2945 ... b.txt qq ss rr 105 71.6239 68.1500 3.0000 1.3408 4329.5373 aa bb nn 110 271.3443 231.4200 10.0000 15.9395 4345.4768 rr uu ii 115 338.2163 415.6700 25.0000 9.5985 4355.0753 zz xx yy 120 536.0957 584.7900 50.0000 0.9485 4356.0238 ... Target output: a_2.txt 109.6136 93.1900 1.0000 48.8760 34.2100 1.0000 17.3588 -65.4900 0.7000 ... b_2.txt 105 71.6239 68.1500 110 271.3443 231.4200 115 338.2163 415.6700 120 536.0957 584.7900 ... I wish to extract specific columns from each text file and save them to each text file with _2 added to the name. Target column is $(NF-5), $(NF-4), $(NF-3)
You have to do this within the AWK script itself, like this (using the $(NF-5), $(NF-4), $(NF-3) target columns you name):

awk 'FNR == 1 { sub(/\.txt$/, "_2.txt", FILENAME) } { print $(NF-5), $(NF-4), $(NF-3) > FILENAME }' *.txt
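A variant you can try on sample data leaves FILENAME alone and builds the output name in a separate variable (assigning to FILENAME works in GNU awk but its effect is unspecified by POSIX); close() keeps the number of open files down when there are many inputs:

```shell
cd "$(mktemp -d)"
printf '%s\n' 'qq ss rr 105 71.6239 68.1500 3.0000 1.3408 4329.5373' > b.txt

# Build the output name in a separate variable instead of rewriting FILENAME
awk 'FNR == 1 { if (out) close(out); out = FILENAME; sub(/\.txt$/, "_2.txt", out) }
     { print $(NF-5), $(NF-4), $(NF-3) > out }' *.txt

cat b_2.txt    # -> 105 71.6239 68.1500
```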
batch awk print from multiple input file to multiple output file
1,445,607,289,000
I'm trying to rename multiple numbered files according to a list of names. Example: 1.pdf, 2.pdf, …, n.pdf And a file called names.txt, with a value per line: Fabio Joao n-name So we will have 1.pdf → Fabio.pdf 2.pdf → Joao.pdf n.pdf → n-name.pdf Any ideas on how to accomplish this?
If the files are really just "lineNumber.pdf", then this is very easy to do. In the shell: c=0 while IFS= read -r name; do ((c++)) echo mv -- $c.pdf "$name.pdf" done < names.txt Once you're sure that works as you want it, remove the echo from the mv command. If you have very many files, you might want to consider doing it in Perl instead which will be much faster: perl -lne 'rename("$..pdf","$_.pdf")' names.txt
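Here is the loop exercised on throwaway files (a POSIX-sh version of the arithmetic, with the echo removed since the result is checked directly):

```shell
cd "$(mktemp -d)"
printf 'Fabio\nJoao\n' > names.txt
: > 1.pdf
: > 2.pdf

c=0
while IFS= read -r name; do
    c=$((c + 1))
    mv -- "$c.pdf" "$name.pdf"
done < names.txt

ls    # Fabio.pdf, Joao.pdf, names.txt
```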
Rename multiple files according using a names-list
1,445,607,289,000
I have submitted a job on a PBS managed cluster using qsub -l nodes=5:ppn=16. My job is queued and is waiting for other jobs to be completed. Is there a command that allows me change the number of allocated nodes for queued jobs? I can simply delete the job using qdel and submit it again with more number of nodes. However, doing this would mean that I will lose my position in the queue (my job will go to the bottom of the queue).
You could try qalter, but there may be restrictions on what you can change. qalter -f <jobID> shows the list of resources available. qalter -l nodes=5:ppn=16 <jobID> can then be used to alter the node request.
Increase allocated nodes for queued jobs on a PBS cluster
1,445,607,289,000
I have a bunch of files in a single directory that I would like to rename as follows, example existing names: 1937 - Snow White and the Seven Dwarves.avi 1940 - Pinocchio.avi Target names: Snow White and the Seven Dwarves (1937).avi Pinocchio (1940).avi Cheers
If you have the Perl based rename (sometimes known as prename) this is indeed possible. If you understand Regular Expressions it's even straightforward. rename -n 's!^(\d+) - (.*)\.(...)$!$2 ($1).$3!' *.avi What this does is split the source filename into three components. Using your first example these would be 1937 Snow White and the Seven Dwarves avi These are assigned to $1, $2, $3 within the rename command. (These are not bash variables.) It then puts them back together again in a different order. When you are happy with the proposed result, change the -n to -v, or even remove it entirely.
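If the Perl rename isn't installed, the same three-way split can be done with plain parameter expansion, a sketch assuming every name matches "YEAR - Title.avi":

```shell
cd "$(mktemp -d)"
: > '1937 - Snow White and the Seven Dwarves.avi'
: > '1940 - Pinocchio.avi'

for f in *.avi; do
    year=${f%% - *}        # "1937"
    rest=${f#* - }         # "Snow White and the Seven Dwarves.avi"
    title=${rest%.avi}     # drop the extension
    mv -- "$f" "$title ($year).avi"
done

ls
```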
Batch rename long filenames of variable length with years
1,445,607,289,000
I have following image files in some directory which I want to be renamed: 8 -rw-rw-r-- 1 6661 sep 24 10:28 dbConnectionOkBostjan.png 8 -rw-rw-r-- 1 6548 sep 24 10:29 dbConnectionErrorBostjan.png 8 -rw-rw-r-- 1 5708 sep 24 10:29 btConnectionErrorBostjan.png 8 -rw-rw-r-- 1 5911 sep 24 10:30 btConnectionOkBostjan.png 8 -rw-rw-r-- 1 6916 sep 24 10:31 userLogOkBostjan.png 8 -rw-rw-r-- 1 6924 sep 24 10:44 userLogErrorBostjan.png Now, I know how to use mv command to rename file and I even know how to rename multiple files, but in this case I want to rename every this file with new name same as original file, but without word Bostjan. For example, dbConnectionOkBostjan.png must rename to dbConnectionOk.png and same for all other files. How do achieve this task using terminal? I wish to solve this using ordinary mv command. If I use proposed solution from Answer 1, I get following errors: user@testcomp:~/Pictures/testAppIcons$ for i in *Bostjan*; do mv $i $(echo $i | sed @Bostjan@@); done sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘btConnectionErrorBostjan.png’ Try 'mv --help' for more information. sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘btConnectionOkBostjan.png’ Try 'mv --help' for more information. sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘dbConnectionErrorBostjan.png’ Try 'mv --help' for more information. sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘dbConnectionOkBostjan.png’ Try 'mv --help' for more information. sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘userLogErrorBostjan.png’ Try 'mv --help' for more information. sed: -e expression #1, char 1: unknown command: `@' mv: missing destination file operand after ‘userLogOkBostjan.png’ Try 'mv --help' for more information. 
user@testcomp:~/Pictures/testAppIcons$ I am using KUbuntu 15.04.
That can be done in one line, though for legibility I'll split it. I echo the filename and modify it using sed in the target argument of mv (note the quoting, and the s command that was missing from your failed attempt):

for i in *Bostjan*; do
    mv -- "$i" "$(echo "$i" | sed 's@Bostjan@@')"
done
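Run against scratch copies of two of the file names from the question, the loop behaves like this:

```shell
cd "$(mktemp -d)"
: > dbConnectionOkBostjan.png
: > userLogErrorBostjan.png

for i in *Bostjan*; do
    mv -- "$i" "$(echo "$i" | sed 's@Bostjan@@')"
done

ls    # dbConnectionOk.png, userLogError.png
```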
multiple files rename - filenames patterns [duplicate]
1,445,607,289,000
I want to share my script to do this with Media Info CLI and python. At first I tried with pure bash but should have just gone python at first, much quicker and adaptable (for me). My task was to recursively go through all files in a specified folder (in this case on a NAS), and print as well as store in a txt file all the video codec and profile level used in each. The reason being I found some older Samsung TVs won't play H264 with a profile level greater than 4.1, so some re-encoding was in order; also the latest Samsung TVs have dropped support for xvid/divx.
usage: ./your_script_name.py ./your_path | tee output.txt

if you want different/additional details from media info check those available with "mediainfo --Info-Parameters"

#! /usr/bin/env python3
from glob import glob
import os
import sys
import subprocess

codecSummary = set()  # a set, so each codec/profile combination is kept once
path = sys.argv[1]
print(path)
files = [f for f in glob(path + '/**', recursive=True) if os.path.isfile(f)]
# print(files)
for file in files:
    result = subprocess.check_output(
        'mediainfo "' + file + '" "--Inform=Video;%Format% %Format_Profile%"',
        shell=True).decode().rstrip()
    if result:
        codecSummary.add(result)
        print(result + ' ' + file)
print(codecSummary)
Recursive (batch) video codec details with MediaInfo CLI
1,445,607,289,000
This is a bit of a crazy one, but I am wondering if it is even possible. I have maybe 300 folders in my /var/www folder that were all renamed to the same name [Name](Num) so kindred (19) for example. Almost every single one of these folders has a file called package.json which all have a name key value pair which then has the name of the project in side that. { "name": "a useful project name", ...... "main": "src/index.js", } I would like the folder that contains the package.json to be renamed to whatever the value of "name" is inside the package.json file.
One of the better tools to use when you need to parse json files is jq. That makes it easy to extract the name field from the package.json file in a directory. Performing the rename is a simple bit of shell scripting: $ cd /var/www $ for d in */; do # *1 > if [ -f "${d}package.json" ]; then # *2 > new_name=$(jq -e -M -r .name "${d}package.json") # *3 > if [ $? -eq 0 ] && ! [ -e "${new_name}" ]; then # *4 > mv "${d}" "${new_name}" # *5 > fi > fi > done Some notes: *1: */ expands to all the directories in the current directory. Each directory name will include a / on the end, so we do not put one in later at *2 and *3. *2: Only process directories that have a package.json file. *3: Invoke jq to extract the name field from package.json. We invoke jq with -r to output raw strings (i.e. leave off the double quotes), with -M to not have colored output, and -e to have jq exit with an error if there is no name field. *4: Check that jq ran successfully (there was a name field) and that the new name for the directory does not already exist. You may want to split these up and add an else if you want to output an error message for the two cases where you're skipping the rename. *5: Rename the directory. For a test run, I'd put echo in front of the mv command at *5 and check the output to see that the renaming looks right. I haven't tested this myself as I don't have a bunch of directories with package.json files.
batch rename folders with value from json value in package.json
1,445,607,289,000
I am trying to batch rename the following files: art-faculty-3_29060055362_o.jpeg fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg theatre-faculty-2016-2017-1_29132529356_o.jpeg art-history-faculty-2016-2017-1_29060057642_o.jpeg music-faculty-2016-2017-1_29132523816_o.jpeg I would like to rename them to: art-faculty.jpeg fine-arts-division-faculty.jpeg theatre-faculty.jpeg art-history-faculty.jpeg music-faculty.jpeg Here is what I have so far: rename -n -D '/faculty(.*)/g' -X -v * This returns: Using expression: sub { use feature ':5.18'; s/\/faculty\(\.\*\)\/g//g; s/\. ([^.]+)\z//x and do { push @EXT, $1; $EXT = join ".", reverse @EXT } } 'art-faculty-3_29060055362_o.jpeg' unchanged 'art-history-faculty-2016-2017-1_29060057642_o.jpeg' unchanged 'fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg' unchanged 'music-faculty-2016-2017-1_29132523816_o.jpeg' unchanged 'theatre-faculty-2016-2017-1_29132529356_o.jpeg' unchanged Is it possible to use REGEX with the delete (-D) transformation? If so, how would I use it to make the transformation I show above? If not, please point me in the right direction for performing transformations with rename using REGEX.
for i in *.jpeg; do echo mv "$i" "${i%faculty*}faculty.jpeg" ; done if okay as per requirements, remove echo to change the file names The perl rename command on my system has only the options -v -f -n $ rename -n 's/faculty\K.*(?=\.jpeg)//' *.jpeg art-faculty-3_29060055362_o.jpeg renamed as art-faculty.jpeg art-history-faculty-2016-2017-1_29060057642_o.jpeg renamed as art-history-faculty.jpeg fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg renamed as fine-arts-division-faculty.jpeg music-faculty-2016-2017-1_29132523816_o.jpeg renamed as music-faculty.jpeg theatre-faculty-2016-2017-1_29132529356_o.jpeg renamed as theatre-faculty.jpeg
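The ${i%faculty*} expansion removes the shortest suffix matching faculty* before "faculty.jpeg" is re-appended; tried on scratch copies of two of the names:

```shell
cd "$(mktemp -d)"
: > art-faculty-3_29060055362_o.jpeg
: > music-faculty-2016-2017-1_29132523816_o.jpeg

for i in *.jpeg; do
    mv -- "$i" "${i%faculty*}faculty.jpeg"
done

ls    # art-faculty.jpeg, music-faculty.jpeg
```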
Rename command to delete substring
1,445,607,289,000
Hi there I'm trying to download a large number of files at once; 279 to be precise. These are large BAM (~90GB) each. The cluster where I'm working has several nodes and fortunately I can allocate multiple instances at once. Given this situation, I would like to know whether I can use wget from a batch file (see example below) to assign each download to a separate node to carry out independently. batch_file.txt <https_link_1> -O DNK07.bam <https_link_2> -O mixe0007.bam <https_link_3> -O IHW9118.bam . . In principle, this will not only speed up things but also prevent the run from failing since the wall-time for this execution is 24h, and it won't be enough to download all those files on a single machine consecutively. This is what my BASH script looks like: #!/bin/bash # #SBATCH --nodes=279 --ntasks=1 --cpus-per-task=1 #SBATCH --time=24:00:00 #SBATCH --mem=10gb # #SBATCH --job-name=download #SBATCH --output=sgdp.out ##SBATCH --array=[1-279]%279 # #SBATCH --partition=<partition_name> #SBATCH --qos=<qos_type> # #SBATCH --account=<user_account> #NAMES=$1 #d=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES) wget -i sgdp-download-list.txt As you can see I was thinking to use an array job (not sure whether will work); alternatively, I thought about allocating 279 nodes hoping SLURM would haven been clever enough to send each download to a separate node (not sure about it...). If you are aware of a way to do so efficiently, any suggestion is welcome. Thanks in advance!
Expand the command into multiple wget commands so you can send them to SLURM as a list:

while IFS= read -r url; do
    printf 'wget "%s"\n' "$url"
done < sgdp-download-list.txt > wget.sh

Or, if your sgdp-download-list.txt is just a list of wget arguments missing the wget at the beginning (which is what your example suggests), just use:

sed 's/^/wget /' sgdp-download-list.txt > wget.sh

Then, submit the wget.sh as the job.
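The sed transformation itself can be checked without touching the network (the URLs here are placeholders):

```shell
printf '%s\n' \
  'https://example.org/f1 -O DNK07.bam' \
  'https://example.org/f2 -O mixe0007.bam' |
  sed 's/^/wget /'
# -> wget https://example.org/f1 -O DNK07.bam
# -> wget https://example.org/f2 -O mixe0007.bam
```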
wget — download multiple files over multiple nodes on a cluster
1,445,607,289,000
I need to convert a lot of *.wav in different folder, i try to use find and flac to convert all this without missing name of the track in the folder. This is the line I try to use : find ./ -type f -iname "*.wav" -exec sh -c flac -8 *.wav \; I don't know what missing but flac show me the man. I think I need some help :) ps : I find a "solution" but with ffmpeg, I prefer using flac.
There is no need to call a subshell for the find arguments here, you can call directly flac. Also the command you execute accepts multiple arguments, so you can call one flac process to convert all your files, like this: find . -type f -iname "*.wav" -exec flac -8 {} +
Need to convert a batch of wav in flac in different folder
1,445,607,289,000
For one file I know I can (being my video and sub files in the same directory): mkvmerge -o output-file.mkv --default-track 0 --language 0:es subtitle-file.ass video-file.mkv But how can I do the same for 50 files. My video and subtitle files name are the same: video-1.mkv video-2.mkv video-3.mkv video-1.ass video-2.ass video-3.ass And my output file should be something like video-1-sub-mkv, video-2-sub-mkv, etc.
If each XYZ.mkv has a corresponding XYZ.ass, it's possible to use a for loop: for i in *.mkv; do if [ -f "${i%.*}".ass ] && [ ! -e "${i%.*}"-sub.mkv ]; then mkvmerge -o "${i%.*}"-sub.mkv "$i" --default-track 0 --language 0:es "${i%.*}".ass fi done Note: I did rearrange the order of your input files, so that the ass track is added after tracks from the mkv
Batch mergin mkv video with subtitles using MKVToolNix
1,524,491,145,000
The Linux kernel minimal building requirements specifies that the calculator bc is required to build kernel v4.10, the minimal version of the tool being 1.06.95. What use is made of bc in this context, and why isn't the C language directly used instead of bc for these operations?
bc is used during the kernel build to generate time constants in header files. You can see it invoked in Kbuild, where it processes kernel/time/timeconst.bc to generate timeconst.h. This could be implemented as a C program which is built and run during the build, but it’s easier to use bc (which is small and common; in fact it’s part of the set of tools which are mandatory on a POSIX system — the kernel does expect GNU bc though). bc is used here instead of Perl. The commit message suggests that bc was used previously, but I can’t find a trace of that; Perl has been used since 2008 (much to some people’s chagrin, although that patch set was never merged).
Why is 'bc' required to build the Linux kernel?
1,524,491,145,000
It looks like bc doesn't support float operations. When I do echo 1/8 | bc it gets me a zero. I checked the manual page bc (1), but it doesn't even mention float, so I wonder if it's supported?
bc doesn't do floating point but it does do fixed precision decimal numbers. The -l flag Hauke mentions loads a math library for e.g. trig functions but it also means [...] the default scale is 20. scale is one of a number of "special variables" mentioned in the man page. You can set it: scale=4 anytime you want (whether -l was used or not). It sets the number of digits kept after the decimal point. In other words, subsequent results will be truncated to that number of digits after the decimal (== fixed precision). The default scale without -l is 0, meaning results are truncated to whole numbers.
Are operations on floats supported with bc?
1,524,491,145,000
What are the differences between dc and bc calculators? When should I use dc and when bc?
dc is a very archaic tool and somewhat older than bc. To quote the Wikipedia page: It is one of the oldest Unix utilities, predating even the invention of the C programming language; like other utilities of that vintage, it has a powerful set of features but an extremely terse syntax. The syntax is a reverse polish notation, which basically means that the arguments (ie numbers) come first followed by the operator. A basic example of the dc usage is: echo '3 4 * p' | dc Where the p is required to print the result of the calculation. bc on the other hand uses the more familiar infix notation and thus is more intuitive to use. Here is an example of bc usage: echo '3 * 4' | bc Which one to use? bc is standardised by POSIX and so is probably the more portable of the two (at least on modern systems). If you are doing manual calculator work then it is definitely the choice (unless you are somewhat of a masochist). dc can still have its uses though, here is a case where the reverse polish notation comes in handy. Imagine you have a program which outputs a stream of numbers that you want to total up, eg: 23 7 90 74 29 To do this with dc is very simple (at least with modern implementations where each operator can take more than two numbers) since you only have to append a +p to the stream, eg: { gen_nums; echo +p } | dc But with bc it is more complex since we not only need to put a + between each number and make sure everything is on the same line, but also make sure there is a newline at the end: { gen_nums | sed '$ !s/$/+/' | tr -d '\n'; echo; } | bc
How is bc different from dc?
1,524,491,145,000
I have a simple bash function dividing two numbers: echo "750/12.5" | bc I'd like to take the output from bc and append /24 and pipe said result to another instance of bc. Something like: echo "750/12.5" | bc | echo $1 + "/24" | bc Where $1 is the piped result. P.S. I realize I could just do echo "750/12.5/24" | bc my question is more in regards to the appending of text to a pipe result.
In the simplest of the options, this does append to the pipe stream: $ echo "750/12.5" | { bc; echo "/24"; } 60 /24 However that has an unexpected newline, to avoid that you need to either use tr: $ echo "750/12.5" | { bc | tr -d '\n' ; echo "/24"; } 60/24 Or, given the fact that a command expansion removes trailing newlines: $ printf '%s' $( echo "750/12.5" | bc ); echo "/24" 60/24 But probably, the correct way should be similar to: $ echo "$(echo "750/12.5" | bc )/24" 60/24 Which, to be used in bc, could be written as this: $ bc <<<"$(bc <<<"750/12.5")/24" 2 Which, to get a reasonable floating number precision should be something like: $ bc <<<"scale=10;$(bc <<<"scale=5;750/12.5")/24" 2.5000000000 Note the need of two scale, as there are two instances of bc. Of course, one instance of bc needs only one scale: $ bc <<<"scale=5;750/12.5/24" In fact, what you should be thinking about is in terms of an string: $ a=$(echo "750/12.5") # capture first string. $ echo "$a/24" | bc # extend the string 2 The comment about scale from above is still valid here.
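The key mechanism in the last forms, that command substitution strips the trailing newline so more text can be appended cleanly, is easy to check on its own (no bc required for this part):

```shell
result=$(echo "60")            # $(...) removes the trailing newline
printf '%s\n' "$result/24"     # -> 60/24
```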
Append to a pipe and pass on?
1,524,491,145,000
I often use bc utility for converting hex to decimal and vice versa. However, it is always bit trial and error how ibase and obase should be configured. For example here I want to convert hex value C0 to decimal: $ echo "ibase=F;obase=A;C0" | bc 180 $ echo "ibase=F;obase=10;C0" | bc C0 $ echo "ibase=16;obase=A;C0" | bc 192 What is the logic here? obase(A in my third example) needs to be in the same base as the value which is converted(C0 in my examples) and ibase(16 in my third example) has to be in the base where I am converting to?
What you actually want to say is: $ echo "ibase=16; C0" | bc 192 for hex-to-decimal, and: $ echo "obase=16; 192" | bc C0 for decimal-to-hex. You don't need to give both ibase and obase for any conversion involving decimal numbers, since these settings default to 10. You do need to give both for conversions such as binary-to-hex. In that case, I find it easiest to make sense of things if you give obase first: $ echo "obase=16; ibase=2; 11000000" | bc C0 If you give ibase first instead, it changes the interpretation of the following obase setting, so that the command has to be: $ echo "ibase=2; obase=10000; 11000000" | bc C0 This is because in this order, the obase value is interpreted as a binary number, so you need to give 10000₂=16 to get output in hex. That's clumsy. Now let’s work out why your three examples behave as they do. echo "ibase=F;obase=A;C0" | bc 180 That sets the input base to 15 and the output base to 10, since a single-digit value is interpreted in hex, according to POSIX. This asks bc to tell you what C0₁₅ is in base A₁₅=10, and it is correctly answering 180₁₀, though this is certainly not the question you meant to ask. echo "ibase=F;obase=10;C0" | bc C0 This is a null conversion in base 15. Why? First, because the single F digit is interpreted in hex, as I pointed out in the previous example. But now that you've set it to base 15, the following output base setting is interpreted that way, and 10₁₅=15, so you have a null conversion from C0₁₅ to C0₁₅. That's right, the output isn't in hex as you were assuming, it's in base 15! You can prove this to yourself by trying to convert F0 instead of C0. Since there is no F digit in base 15, bc clamps it to E0, and gives E0 as the output. echo "ibase=16; obase=A; C0" 192 This is the only one of your three examples that likely has any practical use. It is changing the input base to hex first, so that you no longer need to dig into the POSIX spec to understand why A is interpreted as hex, 10 in this case. 
The only problem with it is that it is redundant to set the output base to A₁₆=10, since that's its default value.
Understand "ibase" and "obase" in case of conversions with bc?
1,524,491,145,000
Sometimes I need to divide one number by another. It would be great if I could just define a bash function for this. So far, I am forced to use expressions like echo 'scale=25;65320/670' | bc but it would be great if I could define a .bashrc function that looked like divide () { bc -d $1 / $2 }
I have a handy bash function called calc: calc () { bc -l <<< "$@" } Example usage: $ calc 65320/670 97.49253731343283582089 $ calc 65320*670 43764400 You can change this to suit yourself. For example: divide() { bc -l <<< "$1/$2" } Note: <<< is a here string which is fed into the stdin of bc. You don't need to invoke echo.
Doing simple math on the command line using bash functions: $1 divided by $2 (using bc perhaps)
1,524,491,145,000
I am evaluating the expression 6^6^6 using python and bc separately. The content of the python file is print 6**6**6. When I execute time python test.py, I get the output as real 0m0.067s user 0m0.050s sys 0m0.011s And then, I ran the command time echo 6^6^6 | bc which gave me the following output real 0m0.205s user 0m0.197s sys 0m0.005s From these results it is clear that the sys time taken by python and bc was 11ms and 5ms respectively. The bc command outperformed python at sys time level but when it comes to user and real time python was almost 4 times faster than bc. What might have gone there. I haven't given any priority to the processes as such. I am trying to understand this situation.
Python imports a large number of files at startup: % python -c 'import sys; print len(sys.modules)' 39 Each of these requires an even greater number of attempts at opening a Python file, because there are many ways to define a module: % python -vv -c 'pass' # installing zipimport hook import zipimport # builtin # installed zipimport hook # trying site.so # trying sitemodule.so # trying site.py # trying site.pyc # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.so # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sitemodule.so # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py # /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py import site # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc # trying os.so # trying osmodule.so # trying os.py # trying os.pyc # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.so # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/osmodule.so # trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py # /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py import os # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc ... Each "trying", except for those which are builtin, requires an os-level/system calls, and each "import" seems to trigger about 8 "trying" messages. (There was ways to reduce this using zipimport, and each path in your PYTHONPATH may require another call.) 
This means there are almost 200 stat system calls before Python starts on my machine, and "time" assigns that to "sys" rather than "user", because the user program is waiting on the system to do things. By comparison, and like terdon said, "bc" doesn't have that high a startup cost. Looking at the dtruss output (I have a Mac; use "strace" on a Linux-based OS), I see that bc doesn't make any open() or stat() system calls of its own, except for loading a few shared libraries at the start, which of course Python does as well. In addition, Python has more files to read before it's ready to process anything. Waiting for disk is slow. You can get a sense of Python's startup cost by doing:

time python -c pass

It's 0.032s on my machine, while 'print 6**6**6' is 0.072s, so startup cost is about half of the overall time and the calculation + conversion to decimal is the other half. While:

time echo 1 | bc

takes 0.005s, and "6^6^6" takes 0.184s, so bc's exponentiation is over 4x slower than Python's even though it's 7x faster to get started.
python vs bc in evaluating 6^6^6
1,524,491,145,000
I'm trying to do a hex calculation directly with bc, I already specified the scale. echo 'scale=16;c06b1000-c06a5e78' | bc But I still get a zero. What could be wrong?
echo 'ibase=16;C06D1000-C06A5E78' | bc
176520

Note that only UPPER CASE hex digits are supported, as lower case ones would conflict with function and variable names; that is why you got 0 in your example (it was evaluated as var1 - var2). If you need the answer in hex too, just set the obase variable:

echo 'obase=16;ibase=16;C06D1000-C06A5E78' | bc
2B188

PS: scale isn't for base conversion at all. From man bc:

scale defines how some operations use digits after the decimal point. The default value of scale is 0.
Does bc support hex calculations?
1,524,491,145,000
I've been using bc to convert numbers between binary, hex, octal and decimal. In the following examples, I was trying to convert base 16 (hex) numbers to binary, octal and decimal. I don't have any problem with the first 2 attempts. $ echo 'ibase=16; obase=2; FF' | bc 11111111 $ echo 'ibase=16; obase=8; FF' | bc 377 But when I tried to convert a base 16 (hex) number to base 10 (decimal), I got the wrong answer. The answer should be 255. $ echo 'ibase=16; obase=10; FF' | bc FF
Once ibase=16 is done, further input numbers are in hexadecimal, including 10 in obase=10 which represents the decimal value 16. So either set obase before, or set it after, using the new input base (now hexadecimal): $ echo 'obase=10; ibase=16; FF' | bc 255 $ echo 'ibase=16; obase=A; FF' | bc 255
bc: Why does `ibase=16; obase=10; FF` returns FF and not 255? [duplicate]
1,524,491,145,000
echo "scale=3;1/8" | bc shows .125 on the screen. How can I show 0.125 when the result is less than one?
bc cannot output a zero before the decimal point, but you can add it with printf:

$ printf '%.3f\n' "$(echo "scale=3;1/8" | bc)"
0.125
How to show zero before decimal point in bc?
1,524,491,145,000
The only calculator I know is bc. I want to add 1 to a variable and store the result in another variable. I got the nextnum variable by counting a string in a file: nextnum=`grep -o stringtocount file.tpl.php | wc -w` Let's say the nextnum value is 1. When 1 is added, it should become 2. To calculate, I run: rownum=`$nextnum+1 | bc` but I get an error: 1+1: command not found Only the calculation part fails. I've tried changing the backticks but it still does not work. I have no idea how to do arithmetic on a variable and store the output in another variable.
The text inside the ` ` must be a valid command itself: rownum=`echo $nextnum+1 | bc` But it is preferable to use $( ) instead of ` `: rownum=$(echo $nextnum+1 | bc) There is actually no need for bc here, since the shell can do integer arithmetic itself: rownum=$((nextnum+1)) Or even simpler in bash and ksh: ((rownum=nextnum+1))
Calculate variable, and output it to another variable
1,524,491,145,000
#!/bin/bash q=$(bc <<< "scale=2;$p*100") head -n$q numbers.txt > secondcoordinate.txt That's just part of the script, but I think it's enough to clarify my intentions. p is a variable with just two decimals, so q should be an integer... Nevertheless, bc shows, for example, 10.00 instead of 10. How can I solve this?
You can't do this with the obvious scale=0 because of the way that the scale is determined. The documentation indirectly explains that dividing by one is sufficient to reset the output to match the value of scale, which defaults to zero: expr1 / expr2 The result of the expression is the quotient of the two expressions. The scale of the result is the value of the variable scale. p=12.34; echo "($p*100)" | bc 1234.00 p=12.34; echo "($p*100)/1" | bc 1234 If your version of bc does not handle this, pipe it through sed instead: p=12.34; echo "($p*100)" | bc | sed -E -e 's!(\.[0-9]*[1-9])0*$!\1!' -e 's!(\.0*)$!!' 1234 This pair of REs will strip trailing zeros from the decimal part of a number. So 3.00 will reduce to 3, and 3.10 will reduce to 3.1, but 300 will remain unchanged. Alternatively, use perl and dispense with bc in the first place: p=12.34; perl -e '$p = shift; print $p * 100, "\n"' "$p"
how to make bc to show me 10 and not 10.00
1,524,491,145,000
bash has a handy file .bash_history in which it saves the history of commands and on the next execution of bash the history is populated with saved commands. Is it possible to save bc command history to a file in the same way and then load it on startup so that bc history is preserved? I tried reading GNU bc manual and it mentions readline and libedit. From ldd /usr/bin/bc I see mine uses readline and readline has write_history and read_history functions. Is this functionality implemented in bc or to do it I'll need to patch bc?
If you aren't happy with the command line editing features that are built into a program, you can run it through rlwrap. This is a wrapper around a command line processor (a REPL) that lets you edit each line before it's sent. Rlwrap uses the readline library and saves history separately for each command. Running rlwrap bc won't do anything for you because rlwrap detects that your bc wants to do its own command line editing, so rlwrap turns itself off. Since you do want rlwrap's command line editing features and not the underlying command's, run rlwrap -a bc The command history will be saved in ~/.bc_history. The main downside of relying on rlwrap rather than using the program's own readline integration is that rlwrap can't do any context-sensitive completion. For example, the python toplevel completes known variables and fields, but rlwrap python cannot do that. Since bc doesn't appear to have any custom completion, rlwrap -a bc doesn't lose functionality over bc.
Is it possible to save `bc` command line history?
1,524,491,145,000
Hi, I need to test my arbitrary-precision calculator, and bc seems like a nice yardstick to compare against. However, bc truncates the result of each multiplication to what seems to be the maximum scale of the operands involved. Is there a quick way to turn this off, or to automatically set the scale of each multiplication to the sum of the scales of the factors, so that it doesn't lose any precision? If you have a more elegant solution involving something other than bc, I would appreciate your sharing it. Example: $ bc <<< '1.5 * 1.5' 2.2 The real answer is 2.25.
You can control the scale that bc outputs with the scale=<#> argument. $ echo "scale=10; 5.1234 * 5.5678" | bc 28.52606652 $ echo "scale=5; 5.1234 * 5.5678" | bc 28.52606 Using your example: $ bc <<< 'scale=2; 1.5 * 1.5' 2.25 You can also use the -l switch (thanks to @manatwork) which will initialize the scale to 20 instead of the default of 0. For example: $ bc -l <<< '1.5 * 1.5' 2.25 $ bc -l <<< '1.52 * 1.52' 2.3104 You can read more about scale in the bc man page.
BC—automatic full precision multiplication [duplicate]
1,524,491,145,000
I've always found bc kind of mysterious and intriguing. It was one of the original Unix programs. And it's a programming language unto itself. So I gladly take any chance I can find to use it. Since bc doesn't seem to include a factorial function, I want to define one like so: define fact(x) { if (x>1) { return (x * fact(x-1)) } return (1) } But … I can't then reuse that, can I? I'd want to be able to do something like me@home$ bc <<< "1/fact(937)"
Save your function definitions in a file like factorial.bc, and then run bc factorial.bc <<< '1/fact(937)' If you want the factorial function to always load when you run bc, I'd suggest wrapping the bc binary with a shell script or function (whether a script or function is best depends on how you want to use it). Script (bc, to put in ~/bin) #!/bin/sh /usr/bin/bc ~/factorial.bc << EOF $* EOF Function (to put in shell rc file) bc () { command bc ~/factorial.bc << EOF $* EOF } From the bc POSIX specifications: It shall take input from any files given, then read from the standard input.
How to define a `bc` function for later use?
1,524,491,145,000
In Ubuntu 14.04.1 LTS 64-bit bash I am declaring floating point variables by multiplying floating point bash variables in bc with scale set to 3; however, I cannot reduce the number of digits after the decimal point and get rid of the zero to the left of the decimal point. How can I transform, say, 0.005000000 into .005? This is necessary due to my file naming convention. Thanks for your recommendations.

UPDATE: Can I use it on already defined shell variables, redefining them? The following code gives me an error.

~/Desktop/MEEP$ printf "%.3f\n" $w
bash: printf: 0.005000: invalid number
0,000

The output of locale:

@vesnog:~$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=tr_TR.UTF-8
LC_TIME=tr_TR.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=tr_TR.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=tr_TR.UTF-8
LC_NAME=tr_TR.UTF-8
LC_ADDRESS=tr_TR.UTF-8
LC_TELEPHONE=tr_TR.UTF-8
LC_MEASUREMENT=tr_TR.UTF-8
LC_IDENTIFICATION=tr_TR.UTF-8
LC_ALL=

The output of echo $w:

@vesnog:~$ echo $w
0.005000
A simple way is to use printf: $ printf "%.3f\n" 0.005000000000 0.005 To remove the leading 0, just parse it out with sed: $ printf "%.3f\n" 0.005000000000 | sed 's/^0//' .005
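About the "invalid number" error in the question's update: printf parses the argument according to LC_NUMERIC, and with LC_NUMERIC=tr_TR.UTF-8 it expects a comma as the decimal mark (an inference from the locale output shown). Forcing the C locale for the one command sidesteps that (a sketch):

```shell
w=0.005000
LC_ALL=C printf '%.3f\n' "$w" | sed 's/^0//'   # .005
```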
Bash limiting precision of floating point variables
1,524,491,145,000
I'm trying to calculate the average entropy of files contained in a folder using: { echo '('; find . -type f -exec entropy {} \; | \ grep -Eo '[0-9.]+$' | \ sed -r 's/$/+/g'; echo '0)/'; find . -type f | wc -l; } | \ tr -d '\n' | bc -l entropy being an executable which calculates the Shannon entropy of a file, giving outputs of the form: $ entropy foo foo: 5.13232 The aforementioned command errors out with: (standard_in) 1: syntax error However, the generated output seems to have no problems: $ { echo '('; find . -type f -exec entropy {} \; | \ grep -Eo '[0-9.]+$' | \ sed -r 's/$/+/g'; echo '0)/'; \ find . -type f | wc -l; } | \ tr -d '\n' (5.13232+2.479+1.4311+0)/3 And this works too: $ echo '(2.1+2.1)/2' | bc -l 2.1 What is wrong with the mentioned command?
And this works too: echo '(2.1+2.1)/2' | bc -l Ah, but did you try: echo '(2.1+2.1)/2' | tr -d '\n' | bc -l (standard_in) 1: syntax error Using echo -n will accomplish the same thing -- there's no terminating newline, and that's your problem.
Cannot sum numbers received from stdin using bc
1,524,491,145,000
I have a file with the following: 37 * 60 + 55.52 34 * 60 + 51.75 36 * 60 + 2.88 36 * 60 + 14.94 36 * 60 + 18.82 36 * 60 + 8.37 37 * 60 + 48.71 36 * 60 + 34.17 37 * 60 + 42.52 37 * 60 + 51.55 35 * 60 + 34.76 34 * 60 + 18.90 33 * 60 + 49.63 34 * 60 + 37.73 36 * 60 + 4.49 I need to write a shell command or Bash script that, for each line in this file, evaluates the equation and prints the result. For example, for line one I expect to see 2275.52 printed. Each result should print once per line. I've tried cat math.txt | xargs -n1 expr, but this doesn't work. It also seems like awk might be able to do this, but I'm unfamiliar with that command's syntax, so I don't know what it would be.
This awk seems to do the trick: while IFS= read i; do awk "BEGIN { print ($i) }" done < math.txt From here Note that we're using ($i) instead of $i to avoid problems with arithmetic expressions like 1 > 2 (print 1 > 2 would print 1 into a file called 2, while print (1 > 2) prints 0, the result of that arithmetic expression). Note that since the expansion of the $i shell variable ends up being interpreted as code by awk, that's essentially a code injection vulnerability. If you can't guarantee the file only contains valid arithmetic expressions, you'd want to put some input validation in place. For instance, if the file had a system("rm -rf ~") line, that could have dramatic consequences.
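Since each line of the file is injected into awk as code, a whitelist check before evaluation is prudent. A sketch (the character class is an assumption about what counts as valid input here: digits, whitespace, and basic arithmetic characters):

```shell
while IFS= read -r i; do
    # reject any line containing characters outside the whitelist
    printf '%s\n' "$i" | grep -Eq '^[0-9 .+*/()-]+$' || {
        printf 'skipping suspicious line: %s\n' "$i" >&2
        continue
    }
    awk "BEGIN { print ($i) }"
done < math.txt
```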
How can I evaluate a math equation, one per line in a file?
1,524,491,145,000
If you load the bc math library you get the trig functions s() and c() and a() which are sine, cosine, and arctangent respectively. Why these three functions? I know why it's those three from the mathematical perspective: it's because those are the three you need to translate directly between Cartesian and polar coordinates. I'm a math teacher, and this is unfortunately the only place I've seen sine/cosine/arctangent established as the set of primitive trigonometric functions, so I was hoping someone could tell me why in a more historical context. Idk I mostly need ammo when talking with math educators about why it's not a blasphemous idea to introduce arctangent to students before tangent.
Not a full answer, but perhaps somewhat useful. More a list of examples of the use of trig functions in early implementations, plus a look into the UNIX world.

ALGOL

An interesting paper concerning the history: The History of the ALGOL Effort, by HT de Beer. ALGOL was developed back in the 1950's, in a joint effort between European and American computer scientists; a meeting in 1958 produced the Preliminary Report on the International Algorithmic Language, aka the Zurich Report. At the time, the work was to unify the notation and the way one writes algorithms for computers. An excerpt from the '58 report shows some of the discussion in that regard:

"Identifiers designating functions, just as in the case of variables, may be chosen according to taste. However, certain identifiers should be reserved for the standard functions of analysis. This reserved list should contain:
abs (E) for the modulus (absolute value) of the value of the expression E
sign (E) for the sign of the value of E
entire (E) for the largest integer not greater than the value of E
sqrt (E) for the square root of the value of E
sin (E) for the sine of the value of E
and so on according to common mathematical notation."

From ALGOL 58 came ALGOL 60, where one can say the work got more concrete about what to have as a base (in regard to trig functions): Report on the algorithmic language ALGOL 60. In short, it recommends sin, cos and arctan as standard functions.

ALGO

If one looks at installations performing math in the early digital era, one early machine was the Bendix G-15 computer (late 1950's). It used ALGO, which was influenced by ALGOL 58. It has a library which is not part of the ALGO system.
The routines in the library are as follows, SIN, COS, ARCTN:

Manual for ALGO – Operating instructions
Programmers reference manual (G15D - Side note: has some interesting sections explaining various aspects, for example how bits, bytes and words are grouped, and the use of the magnetic drum as RAM)
Programs and subroutines - Has, for example, routines for calculating arcsine and arccosine by use of arctan. (The routine cards are dated 1957, so I am not sure if they were part of some preliminary experimenting?)

These routines were loaded by using code words:

SIN 0101000
COS 0168000
ARCTN 0164000

Loaded as, for example:

LIBRAry SIN{0101000}

As it states, "Machine language routines may be added to the library.", but these three were the ones included in the library. (It also uses sexadecimal for hex - but that is not on point here, just fun.)

UNIX

Version 1 of UNIX included bas, a dialect of BASIC (owned by Thompson). It included the following builtin functions: arg, exp, log, sin, cos, atn, rnd, expr and int. Version 2 also had bas, and in addition one finds a list of subroutines including among others: atan, hypot, log, sin (sine / cosine). It was also bundled with dc. There is also bc, but that was for compiling B programs. Also worth mentioning: ttt (tick-tack-toe), bj (black-jack), moo (the game of MOO). Version 5: if one wants to look at the source code for sin/cos, atan etc., one can for example look at this code:

Subroutines: usr/source/s3/{atan.s,sin.s}
BASIC builtins: usr/source/s1/bas4.s

NB! Archives in for example 1972-stuff (s2) have absolute paths! The mathlib found in V7 was expanded to include tan etc. It also includes Fortran77.

BC

BC saw the light of day back in 1975 and, as noted in the question, also includes these three basic functions. It was developed by Robert Morris and Lorinda Cherry. From /usr/doc/bc/bc in the V6 release (1975): 3.
There is a library of math functions which may be obtained by typing at command level

bc -l

This command will load a set of library functions which, at the time of writing, consists of sine (named `s'), cosine (`c'), arctangent (`a'), natural logarithm (`l'), exponential (`e') and Bessel functions of integer order (`j(n,x)'). Doubtless more functions will be added in time. The library sets the scale to 20. You can reset it to something else if you like. The design of these mathematical library routines is discussed elsewhere [4].

[4] Robert Morris, A Library of Reference Standard Mathematical Subroutines

That paper, however, looks to be rather hard to find. So from the listings it looks like the basic trig functions were part of the system as early as V1, and bc made use of them via the load routine.

Notes from the Unix Heritage Wiki (cc):

Robert Morris

Life with Unix says: Wrote dc and bc with Lorinda Cherry.
A Research Unix Reader says: Bob (Robert) Morris stepped in wherever mathematics was involved, whether it was numerical analysis or number theory. Bob invented the distinctively original utilities typo and dc-bc (with Lorinda Cherry), wrote most of the math library, and wrote primes and factor (with Thompson). His series of crypt programs fostered the Center's continuing interest in cryptography.

Lorinda Cherry

Life with Unix says: Writer of the Writer's Workbench (diction, style, etc.), bc, and dc. Wrote eqn with bwk.
A Research Unix Reader says: Lorinda L. Cherry collaborated with Morris on dc-bc and typo. Always fascinated by text processing, Lorinda initiated eqn and invented parts, an approximate parser that was exploited in the celebrated Writer's Workbench®, ww6(v8).

Elliott 803

This is not to say that other systems did not implement more functions, or did not treat these as core functions. But that is history ... :P The Elliott 803 additions: arccos, arcsin, tan - which are additions to sin, cos, arctan.
FORTRAN

FORTRAN 77, 1977: sin, cos, tan, asin, acos, atan, ...
FORTRAN II, 1958: SIN, COS, ATAN, TANH as Library Tape Functions.

BASIC

BASIC, born 1964, has SIN, COS, TAN and ATN. BASIC Manual (1964). As per comment by @roaima: Most dialects of BASIC used on home computers (circa 1975 onward) also had SIN, COS, TAN, ATN (arctan). No other inverses. I assume TAN was included to minimize the error bound when otherwise using SIN/COS, because all these trig functions were generated via a rather small lookup table.

APOLLO 11

The source code for the APOLLO 11 command and lunar modules shows they had at least a subroutine for ARCTAN:

SIN, COS - Used for example here
ARCTAN
Approximation of TAN(-20) by SIN(-20) (deg)

You can argue they managed to land on the moon without a subroutine for TAN ;)

CORDIC

CORDIC (Volder's algorithm) is a noteworthy mention when it comes to trig implementation.

Statistics

An interesting addition by @Stephen Kitt, from comments: Another interesting paper is Statistics on the use of mathematical subroutines from a computer center library, published in 1973, which indicates that, at Purdue in early 1973, sin / cos / atan were the most commonly used trig functions, quite far ahead of tan / asin / acos / tanh:

sin / cos 39,462
atan 27,248
tan 4,707
asin / acos 4,139
tanh 2,546

Dive

Not a deep-dive, but at least a little more on the subject. The paper on ALGOL is perhaps the most on the mark. As for BC, it was - without my finding a direct quote - a decision by Morris / Cherry to include these specific basic functions, loaded from the library with the -l option. In short, it is not that functions like tan were unwanted; the history shows which trig functions were chosen to implement as a base, in the light of resources and use.
Who decided the bc math library will define sine cosine and arctangent?
I want to compare two numbers with bc. Per this highly rated answer on StackOverflow, I can do it in a manner as such: printf '%s\n' '1.2 > 0.4' | bc bc sends 1 to STDOUT, indicating that the statement is true (it would have returned 0 if the statement was false). Per the POSIX page for bc: Unlike all other operators, the relational operators ( '<', '>', "<=", ">=", "==", "!=" ) shall be only valid as the object of an if, while, or inside a for statement. Perhaps I am misinterpreting, but this language seems to disallow the syntax used in the above example. Is standalone use of relational operators in bc a violation of POSIX? If so, how should I rewrite my example?
Perhaps I am misinterpreting, but this language seems to disallow the syntax used in the above example. That example assumes GNU bc, which adds its own extensions to the bc language. As documented in its manual, you should use the -s switch to make it process the exact POSIX bc language, or the -w switch if you want it to warn about extensions: $ echo '1.2 > 0.4' | bc -s (standard_in) 2: Error: comparison in expression $ echo '1.2 > 0.4' | bc -w (standard_in) 2: (Warning) comparison in expression 1 $ echo '1.2 > 0.4' | bc 1 If so, how should I rewrite my example? $ printf 'if(%s > %s){a=1};a\n' 1.2 0.4 | bc -s 1 thanks @icarus for the shorter, easier on the eyes version.
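If the goal is a reusable comparison in a script, the if form can be wrapped so the shell gets an exit status instead of printed output. A sketch using the same braced-if trick (num_gt is a name I made up):

```shell
# true (exit 0) when the first number is strictly greater than the second
num_gt() {
    [ "$(printf 'if (%s > %s) { a = 1 }\na\n' "$1" "$2" | bc)" -eq 1 ]
}
num_gt 1.2 0.4 && echo "1.2 is greater"
```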
bc: does POSIX prohibit standalone use of relational operators?
I want to store the value of 2^500 in the variable DELTA. I'm doing export DELTA=$(echo "scale=2; 2^500" | bc) but this does not set DELTA to 3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376. Instead, it sets it to 32733906078961418700131896968275991522166420460430647894832913680961\ 33796404674554883270092325904157150886684127560071009217256545885393\ 053328527589376 I tried the answers in this question (3 years old), using export DELTA=$(echo "scale=2; 2^500" | bc | tr '\n' ' ') or export DELTA=$(echo "scale=2; print 2^500" | bc | tr '\n' ' ') but none of them work for setting the variable, only to echo it. Any idea?
echo "scale=2; 2^500" | bc | tr -d '\n\\' Output: 3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376
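With GNU bc (an assumption; this needs a reasonably recent version) you can instead disable the wrapping at the source via the BC_LINE_LENGTH environment variable, so there is nothing to strip afterwards:

```shell
# BC_LINE_LENGTH=0 asks GNU bc not to wrap long output lines at all
DELTA=$(echo '2^500' | BC_LINE_LENGTH=0 bc)
echo "$DELTA"
```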
Store 2^500 in a variable in bash
I defined the cbrt function to return a cube root. I need to get an integer back (even if it is close to the cube root, it will be acceptable in my case). However, when I put the scale as 0 to get an integer, I get numbers which are grossly incorrect. What is going on and how do I get an integer cube root out? bc -l define cbrt(x) { return e(l(x)/3) } scale=1;cbrt(1000000) 99.4 scale=0;cbrt(1000000) 54
Setting scale before calling cbrt() has the effect of setting the scale whilst cbrt() is evaluated. The way around it is to stash and restore the scale inside the function:

define cbrt(x) { auto z, var; z=scale; scale=5; var = e(l(x)/3); scale=z; return (var); }

which when called gives:

scale=0; cbrt(1000000)
99.99998

This seems to ignore scale=0. That is because evaluating cbrt() temporarily sets the scale to 5 and calculates your cube root (to 5 decimal places), assigns that to var and returns that value (which implicitly carries a scale of 5). You can then apply the outer scale=0 by simply dividing by 1:

scale=0; cbrt(1000000)/1
99
BC not handling scale = 0 correctly
Is there a simple way to separate a very large number into groups of thousands with printf, awk or sed? So 10000000000000 becomes 10 000 000 000 000. Thanks
A simple combination of sed and rev can be employed:

echo "I have 10000013984 oranges" | rev | sed "s/[0-9][0-9][0-9]/& /g" | rev

The first rev is required so the grouping is applied from right to left, and the second one brings back the original order.
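One edge case: when the digit count is a multiple of three, the sed adds a space after the last group, which rev turns into a leading space. Stripping the trailing space before reversing back avoids that (a sketch; sep is a made-up helper name):

```shell
# group digits in thousands with spaces, without a stray leading space
sep() { printf '%s\n' "$1" | rev | sed 's/[0-9]\{3\}/& /g; s/ $//' | rev; }
sep 10000000000000   # 10 000 000 000 000
sep 123456           # 123 456
```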
printf, awk ... How to format a number with space to the thousands
I am using shell scripting and I am using the following expression: A=`echo "(( (($a / $b) ^ 0.3) -1 ))" |bc -l` I want to have a real number as an exponent. I noticed that if I use 0.3, it rounds off to an integer and takes the power of zero. Similarly, if I use 5.5 or 5.9 in place of 0.3 in the above expression, I get the same answer. How do I calculate the power of a number when the exponent is a real number and not an integer?
Why can't you use awk or perl one-liner to handle it? echo "$a $b" | awk '{ print ((($1/$2)^0.3) -1); }'
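Passing the shell variables in with -v keeps the quoting simple (a sketch of the same idea; the values 8 and 2 are illustrative):

```shell
a=8 b=2
# awk's ^ accepts fractional exponents, unlike plain bc
awk -v a="$a" -v b="$b" 'BEGIN { printf "%.4f\n", (a/b)^0.3 - 1 }'
```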
Shell Scripting: calculate power of a number with a real number as an exponent
I need to pass a certain variable to bc to get the output in floating point: var1=$((<some operation>)) var2=$((<some operation>)) #Needs var1 var3=$((<some operation>)) #Needs var2 bc -l <<< $var3 #Need output in floating point Output: (standard_in) 1: illegal character: $ Is there any way to overcome this? Update: diff=$(($epoc2-$epoc1)) var1=$(($diff / 60)) var2=$(($var1 / 57)) var3=`bc <<< 'scale=2; $var2'`
Simple quotes don't expand $ variable. You have to use double quotes: var3=`bc <<< "scale=2; $var2"` On the other hand, $var1 and $var2 won't store float (bash doesn't manage them), so you bc instead. diff=$(($epoc2-$epoc1)) var1=$(bc <<< "scale=3 ; $diff / 60") var2=$(bc <<< "scale=3 ; $var1 / 57") var3=$(bc <<< "scale=2; $var2")
Anyway to pass a variable to bc, having a command to be executed?
How do I get bc to start decimal fractions with a leading zero? $ bc <<< 'scale=4; 1/3' .3333 I want 0.3333.
bc has no native way to print the leading zero. A workaround is:

echo 'scale=4; 1/3' | bc -l | awk '{printf "%.4f\n", $0}'
0.3333

\n – terminate the output with a newline.
%f – floating point.
%.4f – the number of digits to show; this specifies 4 digits after the decimal point.
How do I get bc to start decimal fractions with a leading zero
How can I add a command line calculator to my bash? I have found some, but all of them use the full stop as decimal mark, but I want to have it to use the comma as decimal mark as most of the world does, see picture: (source wikipedia) blue: Full stop/Period (.) green: Comma (,) red: Momayyez (٫) gray: Data unavailable The ones that I have found (all with full stop as decimal mark) are the following, where these lines have to be put into your ~/.bashrc file: Using bc, which has the advantage, that you can calculate ridiculously large numbers: calc () { bc -l <<< "$@" } With awk, where you have mnemonic names for trigonometric and other functions and you can use fractional exponents and you can give the exponent by the two chars ** instead of the, on some keyboards difficult to type ^: calc () { awk "BEGIN { print $* ; }" }
I have found a solution. calc () { awk ' function asin(x) { return atan2(x, sqrt(1-x*x)) } function acos(x) { return atan2(sqrt(1-x*x), x) } function atan(x) { return atan2(x,1) } function tan(x) { return sin(x)/cos(x) } BEGIN { pi=atan(1)*4; print '"$(echo "$@" | tr , .)}" | tr . , }

This one:
accepts numbers as 5,2 or 5.2 (i.e. both full stop and comma as decimal mark)
uses comma as decimal mark for the output/solution
removes spaces and tabs from the input, i.e. you can enter easily readable calculations
defines the number pi via 4*atan(1)
defines some common trigonometric functions
how to add command line calculator to bash that uses comma as decimal mark?
I understand bash and some other interpreters only perform arithmetic for integers. In the following for loop, how can I accomplish this? I've read that bc can be used but am not sure how to use bc in this situation. total=0 for number in `cat /path/to/file`; do total=$(($total+$number)) done average=$(($total/10)) echo Average is $average file: 1.143362 1.193994 1.210489 1.210540 1.227611 1.243496 1.260872 1.276752 1.294121 1.427371
You may not want to use bc for this. Perhaps awk would work better: awk '{sum+=$1};END{print sum/NR}' /path/to/file
Perform floating point arithmetic in shell script variable definitions [duplicate]
I read topics about how to get bc to print the first zero, but that is not exactly what I want. I want more... I want a function that returns floating point numbers with eight decimal digits. I am open to any solutions, using awk or whatever to be fair. An example will illustrate what I mean: hypothenuse () { local a=${1} local b=${2} echo "This is a ${a} and b ${b}" local x=`echo "scale=8; $a^2" | bc -l` local y=`echo "scale=8; $b^2" | bc -l` echo "This is x ${x} and y ${y}" # local sum=`awk -v k=$x -v k=$y 'BEGIN {print (k + l)}'` # echo "This is sum ${sum}" local c=`echo "scale=8; sqrt($a^2 + $b^2)" | bc -l` echo "This is c ${c}" } Sometime, a and b are 0.00000000, and I need to keep all these 0s when c is returned. Currently, when this happens, this code give back the following output: This is a 0.00000000 and b 0.00000000 This is x 0 and y 0 This is c 0 And I would like it to print This is a 0.00000000 and b 0.00000000 This is x 0.00000000 and y 0.00000000 This is c 0.00000000 Help will be much appreciated!
You can externalize formatting this way, using printf: printf "%0.8f" ${x} Example: x=3 printf "%0.8f\n" ${x} 3.00000000 Note: printf output depends on your locale settings.
How to get bc to print trailing zeros?
This came out of one of my comments to this question regarding the use of bc in shell scripting. bc puts line breaks in large numbers, e.g.: > num=$(echo 6^6^3 | bc) > echo $num 12041208676482351082020900568572834033367326934574532243581212211450\ 20555710636789704085475234591191603986789604949502079328192358826561\ 895781636115334656050057189523456 But notice they aren't really line breaks in the variable -- or at least there are not if it is used unquoted. For example, in fooling around with more pipe in the assignment, e.g.: num=$(echo 6^6^3 | bc | perl -pne 's/\\\n//g') I realized that while there really is an \n in the bc output, checking echo $num > tmp.txt with hexdump shows the \n (ASCII 10) has definitely become a space (ASCII 32) in the variable assignment. Or at least, in the output of unquoted $num >. Why is that? As fedorqui points out, if you use quotes: echo "$num", you get newlines again. This is evident by examining the difference between echo $num > tmp.1 and echo "$num" > tmp.2 with hexdump; the former contains \ (backslash space) whereas the later contains \\n (backslash newline).
echo puts a space between each two arguments. The shell considers the newline in $num just a word separator (just like space). lines="a b c" set -x echo $lines # several arguments to echo echo "$lines" # one argument to echo See this answer (by the OP himself) for a more detailed explanation.
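A two-line check makes the difference visible (a sketch):

```shell
lines='a
b'
unquoted=$(echo $lines)    # the newline becomes a word separator
quoted=$(echo "$lines")    # the newline is preserved
[ "$unquoted" = "a b" ]     && echo "unquoted: split into words"
[ "$quoted" = "$lines" ]    && echo "quoted: newline kept"
```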
Automatic transformation of newlines in shell variable assignment
I read how to calculate using the command line calculator and a HERE-document, but nevertheless I do not get what I expected and can not find my error, what I did in the shell was: bc << HERE >ibase=2 >obase=16 >1001 >HERE 100 I expected to get 9 as result since binary 1001 is hexadecimal 9, but I got 100.
Because you set ibase=2 first, you need to use obase=10000: $ echo 'ibase=2; obase=10000; 1001' | bc 9
Difficulty converting from binary to hexadecimal using bc
1,524,491,145,000
When I'm performing float operations in shell using bc, the result is not the same if I use a regular calculator. Am I doing something wrong? For example, I need to find a volume of a sphere. User inputs the radius value. pi = 3.14 volume=$(echo "scale = 2; (4 / 3) * $pi * ($r ^ 3)" | bc) echo "Volume is $volume" If radius = 3, unix returns 112.59, and the calculator 113.1.
You need to understand the meaning of the scale of an expression in bc. bc can do arbitrary precision (which doesn't necessarily mean infinite precision) while your calculator will probably have the precision of the float or double data type of your processor. In bc. The scale is the number of decimal after the dot, so related to the precision. The scale of an expression is determined based on rules that depend on which operator is involved and the scale variable (that variable is the one that gives the arbitrary dimension of the precision of bc that is, that can make its precision as big as you want). For instance, the scale of the result of a division is scale. So 4/3 when scale is 2 is 1.33, so a very rough approximation of 4/3. The scale of x * y will be min(a+b,max(scale,a,b)) (where a is the scale of x and b the scale of y), so here 2. so 1.33 * 3.14 will be 4.17. For the rules, you can check the POSIX specification for bc. If you want a greater precision, increase scale. You can increase it indefinitely. With bc -l, scale is automatically set to 20. 
$ pi='(a(1)*4)' r=3 $ $ echo "(4 / 3) * $pi * ($r ^ 3)" | bc -l 113.09733552923255658339 $ echo "scale=1000; (4 / 3) * $pi * ($r ^ 3)" | bc -l 113.0973355292325565846551617980621038310980983775038095550980053230\ 81390626303523950609253712316214447357331114478163039295378405943820\ 96034211293869262532022821022769726978675980014720642616237749375071\ 94371951239736040606251233364163241939497632687292433484092445725499\ 76355759335682169861368969085854085132237827361174295734753154661853\ 14730175311724413325296040789909975753679476982929026989441793959006\ 17331673453103113187002257495740245517842677306806456786589844246678\ 87098096084205774588430168674012241047863639151096770218070228090538\ 86527847499397329973941181834655436308584829346483609858475202045257\ 72294881898002877683392804259302509384339728638724440983234852757850\ 73357828522068813321247512718420036644790591105239053753290671891767\ 15857867345960859999994142720979823815034238137946746942088054039248\ 86988951308030971204086612694295227741563601129621951039171511955017\ 31142218396089302929537125655435196874321744263099764736353375070480\ 1468800991581641650380680694035580030527317911271523 $ echo "scale=1; (4 / 3) * $pi * ($r ^ 3)" | bc -l 97.2 You can also do all your calculations with a high scale, and reduce it in the end for display: $ echo "scale=10; (4 / 3) * $pi * ($r ^ 3)" | bc -l 113.0973355107 $ echo "scale=100; x = (4 / 3) * $pi * ($r ^ 3); scale = 10; x / 1" | bc -l 113.0973355292
Float operations with bc not precise?
1,524,491,145,000
I was trying to think of a quick and illustrative way to generate a non-successful exit status and thought dividing by zero with the bc would be a good idea. I was suprised to discover that although it does generate a runtime error, the exit status is still 0: $ echo 41 + 1 | bc 42 $ echo $? 0 $ echo 42/0 | bc Runtime error (func=(main), adr=6): Divide by zero $ echo $? 0 Why does the bc utility not fail with a non-zero exit status? Note: For a quick non-zero exit status I'm using return 1 Also, from shell-tips: $ expr 1 / 0 expr: division by zero $ echo $? 2
bc implementations differ a bit in their return status, but the general idea is that if you supply valid input then bc exits with the status 0. 42/0 is valid input: there's no read error, and it's even a syntactically valid expression, so bc returns 0. If you passed a second line with another operation, bc would perform it. This is different from expr whose purpose is to evaluate a single arithmetic expression; here the outcome of that single expression determines the return status. The most straightforward way to generate an exit status that indicates failure is to call false. Things like expr 1 / 0 only have their place in obfuscated programming contests.
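To see this concretely (with GNU bc; as noted above, other implementations may differ):

```shell
st=$(echo '42/0' | bc >/dev/null 2>&1; echo $?)
echo "bc:    $st"    # 0 -- the input was valid, the runtime error notwithstanding
false || st=$?
echo "false: $st"    # 1 -- the straightforward way to get a failing status
```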
Why does bc exit 0 when dividing by 0?
1,524,491,145,000
I have an expression "5+50*3/20 + (19*2)/7" I need to round it up to 3 decimal places. The answer to this is 17.92857142857143. When I use the script below it is giving me 17.928. The answer should be 17.929. read exp echo "scale=3; $exp" |bc -l And one more question is how to use printf to do the same task
Python seems to have your preferred behaviour: $ echo 'print(round(' "5+50*3/20 + (19*2)/7" ', 3))' | python3 17.929
Evaluation an expression and rounding up to three decimals
1,524,491,145,000
Following code calculates the Binomial Probability of a success event k out of n trials: n=144 prob=$(echo "0.0139" | bc) echo -e "Enter no.:" read passedno k=$passedno nCk2() { num=1 den=1 for((i = 1; i <= $2; ++i)); do ((num *= $1 + 1 - i)) && ((den *= i)) done echo $((num / den)) } binomcoef=$(nCk2 $n $k) binprobab=$(echo "scale=8; $binomcoef*($prob^$k)*((1-$prob)^($n-$k))" | bc) echo $binprobab When for $passedno (=k) "5" is entered, then the result is shown as 0 (instead of "0.03566482") whereas with "4" passed I get ".07261898". How can I print the output with given precision of 8 decimal digits without getting the rounded value of the output?
FWIW, prob=$(echo "0.0139" | bc) is unnecessary - you can just do prob=0.0139 Eg, $ prob=0.0139; echo "scale=5;1/$prob" | bc 71.94244 There's another problem with your code, apart from the underflow issue. Bash arithmetic may not be adequate to handle the large numbers in your nCk2 function. Eg, on a 32 bit system passing 10 to that function returns a negative number, -133461297271. To handle the underflow issue you need to calculate at a larger scale, as mentioned in the other answers. For the parameters given in the OP a scale of 25 to 30 is adequate. I've re-written your code to do all the arithmetic in bc. Rather than just piping commands into bc via echo, I've written a full bc script as a here document inside a Bash script, since that makes it easy to pass parameters from Bash to bc. #!/usr/bin/env bash # Binomial probability calculations using bc # Written by PM 2Ring 2015.07.30 n=144 p='1/72' m=16 scale=30 bc << EOF define ncr(n, r) { auto v,i v = 1 for(i=1; i<=r; i++) { v *= n-- v /= i } return v } define binprob(p, n, r) { auto v v = ncr(n, r) v *= (1 - p) ^ (n - r) v *= p ^ r return v } sc = $scale scale = sc outscale = 8 n = $n p = $p m = $m for(i=0; i<=m; i++) { v = binprob(p, n, i) scale = outscale print i,": ", v/1, "\n" scale = sc } EOF output 0: .13345127 1: .27066174 2: .27256781 3: .18171187 4: .09021610 5: .03557818 6: .01160884 7: .00322338 8: .00077747 9: .00016547 10: .00003146 11: .00000539 12: .00000084 13: .00000012 14: .00000001 15: 0 16: 0
bc scale: How to avoid rounding? (Calculate small binomial probability)
1,524,491,145,000
Since the following command using bc does not work for numbers in scientific notation, I was wondering about an alternative, e.g. using awk? sum=$( IFS="+"; bc <<< "${arrValues[*]}" )
sum=$( awk 'BEGIN {t=0; for (i in ARGV) t+=ARGV[i]; print t}' "${arrValues[@]}" ) With zsh (in case you don't have to use bash), since it supports floating point numbers internally: sum=$((${(j[+])arrValues})) With ksh93, whose shell arithmetic is also floating point and understands scientific notation, a plain loop works: sum=0; for i in "${arrValues[@]}"; do ((sum += i)); done If you need the kind of precision that bc provides, you could pre-process the numbers so that 12e23 is changed to (12*10^23): sum=$( IFS=+ sed 's/\([0-9.]*\)[eE]\([-+]*[0-9]*\)/(\1*10^\2)/g' <<< "${arrValues[*]}" | bc -l )
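A testable variant of the awk approach, with an explicit index loop (for (i in ARGV) also visits ARGV[0], the program name, which harmlessly adds 0):

```shell
arrValues=(1e3 2.5 12e-1)
sum=$(awk 'BEGIN {t = 0; for (i = 1; i < ARGC; i++) t += ARGV[i]; print t}' "${arrValues[@]}")
echo "$sum"   # 1003.7 -- awk understands the scientific notation natively
```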
How to sum a bash array of numbers (some in scientific notation)?
1,524,491,145,000
I run this command in the terminal: grep "bla bla blah" blah* | echo "Blah: $(wc -l) / $(ls | wc -l) * 100" And I get this output: Blah: 44 / 89 * 100 What I expect to see: 49.4 Is there a way to obtain the desired output using just bash commands? I'd prefer not to use a script, as I am planning to pipe the output.
Your code says to print a string. It doesn't say anywhere that this string is in fact an arithmetic expression that you want evaluated. So you can't reasonably expect your expression to be evaluated. Your code is suboptimal. $(wc -l) will count the number of matches returned by grep, but there's a simpler way: run grep -c instead. $(ls | wc -l) is an unreliable way of counting the non-dot files in the current directory, because the output of ls isn't reliable; $(set -- *; echo $#) is a reliable way of doing this (assuming there is at least one matching file; if that assumption might not hold, use $(set -- *; if [ -e "$1" ]; then echo $#; else echo 0; fi), but note that this will result in a division by zero below which you should treat as an error condition one way or another). So you can write your code this way: matches=$(grep -c "bla bla blah" blah*) files=$(set -- *; echo $#) echo "Blah: $matches / $files * 100" or you can inline the computation of the two intermediate values: echo "Blah: $(grep -c "bla bla blah" blah*) / $(set -- *; echo $#) * 100" Now, to perform the arithmetic, you can use the shell's built-in arithmetic expansion, but it's limited to integer operations, so the / operator will round down to the nearest integer. echo "Blah: $(($matches * 100 / $files))" In ksh93, zsh and yash, but not in other shells, you get floating-point arithmetic if there's something in the expression to force floating-point, such as a floating-point constant. This feature is not present in the Bourne shell, ksh88, pdksh, bash, ash. echo "Blah: $(($matches * 100.0 / $files))" The bc utility performs operations on decimal numbers with arbitrary precision. echo "Blah: $(echo "scale=2; $matches * 100 / $files" | bc)" Another standard utility that can perform floating-point computation (with fewer mathematical functions available) is awk. echo "$matches" "$files" | awk '{print "Blah:", $1 * 100 / $2}'
How to calculate values in a shell script?
1,524,491,145,000
This is a bc output, e.g.: Input: echo "scale=10; BLA-BLA-HERE-NOT-IMPORTANT" | bc Output: .3708446283953709207058828124021300754352578903651372655882743141882\ 77124645102027246581819139527644919407424570060822470537797066353573\ 96635.8038454068 days Two Questions: can the output be rounded to something like "0.3708..."? can I remove the "\n"-s from the end? I can't find any max width option in bc.
You can try something like this code: echo "scale = 4; 3.5678/3" | bc | tr '\n' ' ' Setting scale controls the number of decimal places bc keeps (note that bc truncates rather than rounds). You can substitute the division part with your desired command. The output of bc is again piped to tr, which converts the newline (\n) to white space. For the above command I get the following output: 1.1892
BC - no "\\n" at the end + start with zeros?
1,524,491,145,000
I'm trying to do an interface to bc so it can be used intuitively and without the annoyance of getting "stuck" in it. I haven't got around to test it that much, because I got stuck on another detail, namely how to present the result (which is, I think, a string). Rounding or truncating does not matter, either one is fine. Take a look below, and you'll understand immediately. I use zsh but an external tool will be just fine as I won't use this in any time or otherwise critical context, it's just a desktop tool. calc () { result=`bc <<EOF scale=3; $@ EOF` echo ${result//%0/} # doesn't work; will only remove one zero # also, if there are only zeroes, should # remove dot as well - what about .333, etc.? } Edit I'm very impressed by the below solution, especially how the noglob gets away with the quotes! But, the use of a dot to force floating point calculation is something I'll never remember (you don't use a normal calculator like that). And it is even a bit risky, especially for calculations when it's not obvious that floating point would yield an altogether different result (most likely the one you wanted). Also, the calculations below show some un-pretty output (the too long real, and the trailing dot). Perhaps I should combine this (some of it) with the output formatting of @Gille's answer below? When I get it to work perfectly, I'll post the result here. (Edit: The accepted answer works great. Be sure to read the comments to that answer, as well.) calc () { echo $(($*)); } alias calc='noglob calc' calc 1./3 0.33333333333333331 calc 7.5 - 2.5 5.
Using zsh's own arithmetic, you could do: calc() printf '%.6g\n' $(($*)) alias 'calc=noglob calc' But that would mean you'd need to enter numbers as 123. for them to be taken as floating point and trigger a floating point calculation. You could work around that by appending . to any sequence of decimal digits that is not otherwise part of a hex number (or number in another base) or of a variable name or 12e-20 type numbers like: setopt extendedglob calc() printf '%.6g\n' $((${*//(#bm)(([0-9.]##[eE][-+][0-9]##|[[:alnum:]_#]#[.#_[:alpha:]][[:alnum:]_#]#)|([0-9]##))/$MATCH${match[3]:+.}})) alias 'calc=noglob calc' By which time you may think it easier to use bc and trim the trailing 0s. See also awk: calc() awk "BEGIN{print $*}" which supports fewer operators and math functions but might be enough for you.
Round/truncate digit in string in zsh (or with external tool)
1,524,491,145,000
I'm debugging code which contains quite a few bit shift operations, and I'm using bc a lot to look at what's happening on the bit level. Here's what I use: $ echo 'obase=2;598980975283696640' | bc 100001010000000000100000011000000011000000000111010000000000 Is there a simple way to get the output as whitespace-separated nibbles? E.g. 1000 0101 0000 0000 0010 0000 0110 0000 0011 0000 0000 0111 0100 0000 0000 Thanks in advance for your answers! Edit: Thanks for the replies! However, I tried it on another number, 262148. It should be: 100 0000 0000 0000 0100 But it is: 1000 0000 0000 0000 100 I guess the script has to search backwards through the string to get it right?
I would use this simple function: nibbles () { echo "obase=2; $1" | bc | rev | while read -n4 a; do echo -n "$a ";done | rev ; echo; } $ nibbles 598980975283696640 1000 0101 0000 0000 0010 0000 0110 0000 0011 0000 0000 0111 0100 0000 0000
bc output binary as nibbles separated by whitespace
1,524,491,145,000
System: Ubuntu 22.04.3 LTS GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu) bc 1.07.1 Observation: Both ibase and obase are unset. echo "A0" | bc 90 echo "B0" | bc 90 echo "X0" | bc 90 Question: Why does bc interpret alpha characters as 9s by default? Why wouldn't an error message be preferable here?
From man bc on a GNU system (with GNU bc 1.07 or newer): A simple expression is just a constant. bc converts constants into internal decimal numbers using the current input base, specified by the variable ibase. (There is an exception in functions.) The legal values of ibase are 2 through 36. (Bases greater than 16 are an extension.) Assigning a value outside this range to ibase will result in a value of 2 or 36. Input numbers may contain the characters 0-9 and A-Z. (Note: They must be capitals. Lower case letters are variable names.) Single digit numbers always have the value of the digit regardless of the value of ibase. (i.e. A = 10.) For multi-digit numbers, bc changes all input digits greater or equal to ibase to the value of ibase-1. This makes the number ZZZ always be the largest 3 digit number of the input base. (Emphasis mine.)
By default, bc interprets any alpha character as a 9
1,524,491,145,000
I want to perform some mathematical operations in the shell. For example: 5+50*3/20 + (19*2)/7 I tried: #!/bin/bash read equ echo "scale=3; $equ" | bc -l Expected output: 17.929 My output: 17.928
bc is truncating, try this instead: printf "%.3f\n" $(echo "$equ" | bc -l)
How can I do basic calculations in a shell script?
1,524,491,145,000
I want to calculate an expression in shell. I use the following code: pi=$(echo "scale=10; 4*a(1)" | bc -l) i=3 d=`expr (1+c($pi*($i/10)+$pi))/2 | bc -l` But it says bad pattern: (1+c(3.1415926532*(3/10)+3.1415926532))/2 Why?
Because you're using expr in your last command where you probably should be using echo. P.S. I advise you to use the $(…) form in both bc commands (rather than `…`).
A bc problem about long expression
1,524,491,145,000
I would like to create an alias for bc that runs bc -l and specifies that pi=4*a(1). This way, I can start each session with pi already defined. What alias will do this?
I will answer your question, but you should consider following the link provided by Julie Pelletier. Assuming you are using bash: alias bc-l-with-pi='bc -l <(echo "pi=4*a(1)")' Explanation: we (ab)use bash's process substitution to give bc a temporary file with the content "pi=4*a(1)". After that, bc goes into interactive mode.
Start bc with pi defined
1,416,436,772,000
I have a CSV file generated by a script of mine. It gets CPU time used per user, but in seconds; I need it in hours, so I need to divide each value by 3600. Example input file: USER,TOTAL_CPU,AVERAGE_CPU user1,1234552.0,1234.3 user2,9999999.0,82772.6 user3,7777776227.9,282882,0 I can easily get what I want if I do it one column at a time with: for i in `awk -F , 'NR!=1{print $2}' myfile.out`; do bc -l <<< "scale=3; ($i/3600)"; done That gives me the output for one column at a time. I want both at once; there has to be a better way, instead of working out one column, then the next, and merging the two together. The output should look exactly the same as the input, but in hours instead of seconds. Example output file: USER,TOTAL_CPU,AVERAGE_CPU user1,342.931,0.342 user2,2777.777,22.992 etc.....
Awk solution: awk 'BEGIN{ FS=OFS="," } NR > 1{ $2 = sprintf("%.3f", $2/3600); $3 = sprintf("%.3f", $3/3600) }1' file The output: USER,TOTAL_CPU,AVERAGE_CPU user1,342.931,0.343 user2,2777.778,22.992 user3,2160493.397,78.578,0
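Reproducible as a pipeline (first two rows shown; the header passes through untouched):

```shell
printf '%s\n' 'USER,TOTAL_CPU,AVERAGE_CPU' 'user1,1234552.0,1234.3' 'user2,9999999.0,82772.6' |
  awk 'BEGIN{ FS=OFS="," } NR > 1{ $2 = sprintf("%.3f", $2/3600); $3 = sprintf("%.3f", $3/3600) }1'
```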
divide two columns, not with each other
1,416,436,772,000
I have a big number: 2923174917395723957 that would be: 2,923*10^18 Are there any parameters in bc that will give this output? e.g.: $ echo '2923174917395723956 + 1' | bc 2,923*10^18 $ Or something similar... the point is that the output must be short. Thank you!
Try the printf command: $ printf "%e\n" 2923174917395723957 2.923175e+18 In your locale, it should use , instead of ., of course. You can also control the format more precisely such as: $ printf "%.3e\n" 2923174917395723957 2.923e+18 Some shells like bash have a built-in called printf which may be different from any printf command that comes with the system, but, in general, you shouldn't notice any difference with simple commands like above. You can also use printf from Perl which will be pretty consistent. $ perl -e 'printf "%.3e\n", 2923174917395723957' 2.923e+18
BC - output normal form?
1,416,436,772,000
I am trying to divide two values in a loop using bc, and I have set that value as a variable. My problem is that I want that value to have 2 decimal places, but I am having trouble getting scale=2 to work while defined inside a variable. Here is my test file: cat file.txt Sc0000000_hap1 0 1200 32939 Sc0000000_hap1 1199 2388 28521 Sc0000001_hap1 0 1200 540 Here is the loop I am running: while read name start stop sum; do divisor=`expr ${stop} - ${start}` avg=`scale=2; expr $sum / $divisor | bc ` #I want 2 decimal points here echo ${name} ${start} ${stop} ${avg} >> ${outfile} done < file.txt Here is the output I am getting: Sc0000000_hap1 0 1200 27 Sc0000000_hap1 1199 2388 23 Sc0000001_hap1 0 1200 0 Here is the output I want: Sc0000000_hap1 0 1200 27.45 Sc0000000_hap1 1199 2388 23.99 Sc0000001_hap1 0 1200 0.43 I have tried a few variations on my syntax but I can't seem to get it to work. Can someone show me how to code this correctly? Thanks in advance.
avg=`scale=2; expr $sum / $divisor | bc ` You are setting a shell variable scale to 2, calculating the integer division using expr, and passing that value to bc (read man expr). bc does not perform any calculations here; it just outputs the number that was fed into it. Let bc do the work: avg=$(echo "scale=2; $sum / ($stop - $start)" | bc) Now bc gets to do the whole calculation, and you set the bc scale variable. Braces are not the same as double quotes. Use: echo "${name} ${start} ${stop} ${avg}" >> ${outfile} Use $(...) instead of `...`
Set scale for bc inside a variable
1,416,436,772,000
I am trying to translate a simple program to the command line using unix utilities. For example, if I have a frequency list (after piping through uniq and sort) 5 x 4 y 1 z I want to print out, instead of the frequencies, the fraction of the times they occur: 0.5 x 0.4 y 0.1 z (I have a python program that does this, but I wanted to know if this could be done through the command line itself.) So far, I have tried to compute the sum <...>| awk -F" " '{print $1}' | tr '\n' +; echo 0 | bc but this is just giving me the output 5+1+4+0 without computing it. EDIT: I got the sum. I modified the above command to <...>| awk -F" " '{print $1}' | echo $(tr '\n' +; echo 0) | bc > sum and the correct result is stored in sum. Now I just want to divide the original list by sum and display it.
awk '{ f[$2] = $1; SUM += $1} END { for (i in f) { print f[i]/SUM, i } }' </tmp/data
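Usage example; note the output order of for (i in f) is unspecified, so pipe through sort if you need a stable order:

```shell
printf '%s\n' '5 x' '4 y' '1 z' |
  awk '{ f[$2] = $1; SUM += $1} END { for (i in f) { print f[i]/SUM, i } }' |
  sort
```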
How to divide a list of values by a number in command line?
1,416,436,772,000
I have written an .awk file which executes some operation on a .tr file and writes the output to a file. The END section of the .awk file prints this: printf("%15.2f\n%15.5f\n%15.2f\n%15.2f\n%15.2f\n%10.2f\n%10.2f\n%10.5f\n", rThroughput, rAverageDelay, nSentPackets, nReceivedPackets, nDropPackets, rPacketDeliveryRatio, rPacketDropRatio,rTime) ; printf("%15.5f\n%15.5f\n%15.5f\n%15.5f\n%15.0f\n%15.9f\n", total_energy_consumption, avg_energy_per_bit, avg_energy_per_byte, avg_energy_per_packet, total_retransmit,rEnergyEfficiency); I call this .awk file from a .sh file. After executing the command which runs the .awk file, I iterate over the values generated by the .awk file. awk -f Wireless_udp.awk 802_11.tr > "TEMP" while read val do l=$(($l + 1)) if [ "$l" == "1" ]; then thr=$(echo "scale=5; $thr+$val/$iteration_float" | bc) # echo -ne "throughput: $val " elif [ "$l" == "2" ]; then del=$(echo "scale=5; $del+$val/$iteration_float" | bc) # echo -ne "delay: $val " elif [ "$l" == "3" ]; then s_packet=$(echo "scale=5; $s_packet+$val/$iteration_float" | bc) # echo -ne "send packet: $val " elif [ "$l" == "4" ]; then r_packet=$(echo "scale=5; $r_packet+$val/$iteration_float" | bc) # echo -ne "received packet: $val " elif [ "$l" == "5" ]; then d_packet=$(echo "scale=5; $d_packet+$val/$iteration_float" | bc) # echo -ne ;"drop packet: $val " elif [ "$l" == "6" ]; then del_ratio=$(echo "scale=5; $del_ratio+$val/$iteration_float" | bc) # echo -ne "delivery ratio: $val " elif [ "$l" == "7" ]; then dr_ratio=$(echo "scale=5; $dr_ratio+$val/$iteration_float" | bc) # echo -ne "drop ratio: $val " elif [ "$l" == "8" ]; then time=$(echo "scale=5; $time+$val/$iteration_float" | bc) # echo -ne "time: $val " elif [ "$l" == "9" ]; then t_energy=$(echo "scale=5; $t_energy+$val/$iteration_float" | bc) # echo -ne "total_energy: $val " elif [ "$l" == "10" ]; then energy_bit=$(echo "scale=5; $energy_bit+$val/$iteration_float" | bc) # echo -ne "energy_bit: $val " elif [ "$l" == "11" ]; then
energy_byte=$(echo "scale=5; $energy_byte+$val/$iteration_float" | bc) # echo -ne "energy_byte: $val " elif [ "$l" == "12" ]; then energy_packet=$(echo "scale=5; $energy_packet+$val/$iteration_float" | bc) # echo -ne "energy_packet: $val " elif [ "$l" == "13" ]; then total_retransmit=$(echo "scale=5; $total_retransmit+$val/$iteration_float" | bc) # echo -ne "total_retrnsmit: $val \n" elif [ "$l" == "14" ]; then energy_efficiency=$(echo "scale=9; $energy_efficiency+$val/$iteration_float" | bc) # echo -ne "energy_efficiency: " fi # echo "$val" done < "TEMP" Everything was running fine, but when I added the last if-else condition and executed the script, it gave a (standard_in) 1: syntax error Specifically, I'm talking about this segment of code: elif [ "$l" == "14" ]; then energy_efficiency=$(echo "scale=9; $energy_efficiency+$val/$iteration_float" | bc) # echo -ne "energy_efficiency: " TEMP file contains: 197645.74 0.32776 25000.00 7350.00 17348.00 29.40 69.39 24.99826 720.13300 0.00015 0.00117 0.09798 0 0.001166018 I'm having difficulty understanding why it's giving an error. Link to full code: .tcl file which generates the .tr file .awk file: .sh file: bash --version gives: GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> OS: Ubuntu 16.04 LTS
Your script doesn't appear to initialize the variable energy_efficiency, so the first time around the loop echo "scale=9; $energy_efficiency+$val/$iteration_float" produces scale=9; +197645.74/1.0 which is a syntax error (bc apparently allows unary -, but not unary +) $ echo "scale=9; +197645.74/1.0" | bc (standard_in) 1: syntax error
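Initializing the variable (or anything else that puts a term before the +) fixes it:

```shell
energy_efficiency=0    # without this, bc would see "+197645.74/1.0" -- a syntax error
val=197645.74 iteration_float=1.0
echo "scale=9; $energy_efficiency+$val/$iteration_float" | bc   # 197645.740000000
```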
(standard_in) 1: syntax error when using bc
1,416,436,772,000
I cannot find a way to close a while statement. The following is my bash script code: bc << EOF a=0; while(a<10) a++; print a; EOF The output is not as expected; it prints all the a values other than the last one. Please help me.
The result of an operation is always printed unless it's an assignment. So, let's turn a++ into the assignment a=a+1. bc <<END_BC a = 0 while (a < 10) a = a + 1 print a, "\n" END_BC Alternatively, but slightly more cryptic (using an empty while loop): bc <<END_BC a = 0 while (++a < 10) print a, "\n" END_BC
how to close a while statement in bc script
1,416,436,772,000
On Ubuntu 14.04.1 64-bit LTS, I am writing a shell script, and if I define the start of the sequence used in the for loop with a variable instead of a constant, I get really weird behavior and there are lots of errors from the bc calculator. You can run the following code snippet to reproduce the errors: #!/bin/bash S=0.030 F=0.150 N=30 DIFF=`echo $F - $S | bc -l` dw=`echo $DIFF / $N | bc -l` is=`echo $S / $dw | bc -l` if=`echo $F / $dw | bc -l` for i in `seq $is $if` do w=`echo "scale=3; $i * $dw" | bc -l` done If I change the start of the sequence to a constant it works perfectly, but when it is a variable there are problems; the output is as follows: (standard_in) 1: syntax error ... ... (standard_in) 1: syntax error What may be the reason behind this behaviour? I would like to receive your suggestions and comments.
The problem turned out to be that the decimal point separator in my Ubuntu installation was set to , (comma) instead of . (dot). I changed it with the following command: sudo update-locale LC_NUMERIC="en_GB.UTF-8" And the problem was resolved.
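An alternative that avoids changing the system-wide locale is to force the C numeric locale just for the command that generates the numbers (a sketch; seq shown as the typical culprit):

```shell
# In a comma-decimal locale, plain seq would emit "0,5"-style numbers, which bc rejects.
# LC_ALL=C forces the dot separator for this one command only:
LC_ALL=C seq 1 0.5 2
```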
Problematic bc calculation in shell script
1,416,436,772,000
I have object of class class X { private DateTime dt; "constructor, set/get" } I have one instance of this object serialized in file.bin. I want to show content of `file.bin on the Linux console in human readable way.
You can display the contents of a file with cat, but with binary files that will often result in "garbage". For binary files you can use od -x (or xxd): od -x file.bin That renders every byte of the file as readable hex words for any file (understanding what they mean is more difficult and depends on the program that wrote the file, but fortunately that is not what you asked for).
Convert serialized Java object to human readable
1,416,436,772,000
I have 3 variables and values: totalLines=14 outsideLines=6 multiplied=600 totalLines represents the total number of lines (100%), while outsideLines represents number of lines with the timestamp values outside certain limit. My purpose is to calculate the percentage of those outside lines. If I do this: percentage=$(( multiplied / totalLines)) and echo $percentage, I get result: 42 However, I would like to produce percentage in floating point number, like this: 42.85 My attempt to implement this: percentage=$( bc <<< 'scale=2; $multiplied / $totalLines' ) failed with the following errors: (standard_in) 1: illegal character: $ (standard_in) 1: illegal character: $ (standard_in) 1: illegal character: L (standard_in) 1: syntax error How should I properly use bc in order to get percentage in floating point number?
You can do this: percentage=$(echo "scale=2; $multiplied / $totalLines" | bc)
How to properly use bc to convert the value of percentage to floating point format?
1,416,436,772,000
I've written this if, that is obviously not working, and I still can't manage to get over it: #LASTEFFECTIVEHASH if (( $(echo "$LASTEFFECTIVEHASHMINVAL < $LASTEFFECTIVEHASH < $LASTEFFECTIVEHASHMAXVAL" | $BC -l) )); then echo "$DATESTAMP - LASTEFFECTIVEHASH=$LASTEFFECTIVEHASH is between $LASTEFFECTIVEHASHMINVAL and $LASTEFFECTIVEHASHMAXVAL"|tee -a $LOGFILE else echo "$DATESTAMP - LASTEFFECTIVEHASH=$LASTEFFECTIVEHASH is not between $LASTEFFECTIVEHASHMINVAL and $LASTEFFECTIVEHASHMAXVAL"|tee -a $MSGFILE $LOGFILE fi But, when value are out of ranges, it results in this: 20170810003646 - LASTEFFECTIVEHASH=139.2 is between 104.9 and 136.9 I'm following the maths syntax: if x > 104.9 and < 136.9, in maths you write it 104.9 < x < 136.9. But I suspect that bash/bc are behaving differently from my math's teacher. It would be great if bc won't fail to count up to 137 ;)
Unlike for example python, bc does not support chained comparisons: a < b < c To perform both comparisons and require both to be true, use logical-and (requires GNU bc): (a < b) && (b < c) For example: $ a=104.9; b=136; c=136.9; if echo "($a < $b) && ($b < $c)" | bc -l | grep -q 1; then echo True; else echo False; fi True $ a=104.9; b=137; c=136.9; if echo "($a < $b) && ($b < $c)" | bc -l | grep -q 1; then echo True; else echo False; fi False POSIX bc If you don't have GNU bc, you can replace logical-and with multiplication: $ a=104.9; b=136; c=136.9; if echo "($a < $b)*($b < $c)" | bc -l | grep -q 1; then echo True; else echo False; fi True $ a=104.9; b=137; c=136.9; if echo "($a < $b)*($b < $c)" | bc -l | grep -q 1; then echo True; else echo False; fi False
Checking that a decimal number is in a range in bc
1,416,436,772,000
When you run 'bc' on a GNU system, it prints out the following text: ~$ bc bc 1.07.1 Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. In contrast to several other GNU utilities: ~$ gcc --version gcc (Debian 8.3.0-6) 8.3.0 Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. and ~$ grep --version grep (GNU grep) 3.3 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. and ~$ ls --version ls (GNU coreutils) 8.30 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Is there any reason why bc lists out all of the copyright dates in such a way instead of just using "Copyright 2017 Free Software Foundation, Inc."?
bc follows the recommended GNU practices for copyright notices, which involves listing every single publication year, although the copyright notice here lists more years than saw bc releases (even including dc releases). The other tools only list the year of last publication, using gnulib’s version_etc function which only prints the year as last updated in gnulib. See also Copyright notice must be regularly updated while the project is active?
Why does GNU 'bc' have such a long copyright string?
1,416,436,772,000
I can pipe echo into bc, but I cannot do the same with printf: it gives a syntax error.

    ❯ echo "100-5" | bc
    95
    ❯ printf "%s" "100-5" | bc
    (standard_in) 1: syntax error
bc needs each input line to be terminated by a newline. echo appends one by default, but printf "%s" does not, so bc sees an unterminated line and reports a syntax error. Just add a newline:

    printf '%s\n' "100-5" | bc
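The two byte streams can be compared side by side; bc only evaluates complete lines, so the version with \n succeeds (a minimal demonstration, assuming GNU bc is installed):

    # Without a trailing newline bc never sees a complete expression:
    printf '%s' "100-5" | bc 2>/dev/null     # syntax error on stderr, nothing on stdout

    # With the newline the line is complete and bc evaluates it:
    printf '%s\n' "100-5" | bc               # prints 95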
Why I can pipe echo into bc, but I can't do the same with printf?
1,416,436,772,000
    for i in {0..9} do
    T=$(bc<<<"8+$i*0.5")
    echo $T
    done

I get:

    syntax error near unexpected token `T=$(bc<<<"8+$i*0.5")'

I believe the problem is the $i. What am I doing wrong?
The problem is not $i; the problem is in your for construct syntax. You need a newline or ; before do (when used right after the for declaration):

    for i in {0..9}; do
        T=$(bc <<<"8+$i*0.5")
        echo "$T"
    done

Or

    for i in {0..9}
    do
        T=$(bc <<<"8+$i*0.5")
        echo "$T"
    done

For clarity, it's better to use whitespace before the here-string operator (<<<) and similar. Although not strictly necessary in this case, you should also quote your variable expansions.
use loop variable for calculation bash
1,416,436,772,000
I've found that there is an upper bound on the subscripts/indices of an array in GNU bc. Running interactively and asking for arr[100000000]=42 returns an error:

    Runtime error (func=(main), adr=17): Array arr subscript out of bounds.

This array size limit isn't listed among bc's limits, and it doesn't appear that the "variable names" limit of 32767 affects it, since bc accepts arr[100000]=42 without complaint. What is the exact bound on bc's array subscripts/indices? Is there a way to change this bound?
You can see the bc limits:

    $ echo 'limits' | bc
    BC_BASE_MAX   = 2147483647
    BC_DIM_MAX    = 16777215
    BC_SCALE_MAX  = 2147483647
    BC_STRING_MAX = 2147483647
    MAX Exponent  = 9223372036854775807
    Number of vars= 32767

And in man bc (1p) we see:

    Arrays are singly dimensioned and can contain up to {BC_DIM_MAX} elements. Indexing shall begin at zero so an array is indexed from 0 to {BC_DIM_MAX}−1.

Looking into the 1.07.1 sources, it is defined in the file const.h:

    /* Definitions for arrays. */
    #define BC_DIM_MAX 16777215     /* this should be NODE_SIZE^NODE_DEPTH-1 */

    #define   NODE_SIZE        64   /* Must be a power of 2. */
    #define   NODE_MASK      0x3f   /* Must be NODE_SIZE-1. */
    #define   NODE_SHIFT        6   /* Number of 1 bits in NODE_MASK. */
    #define   NODE_DEPTH        4
What's the upper-bound for an array index/subscript in GNU bc?
1,416,436,772,000
Is there a more elegant way than using xargs -Ix for the following?

    echo "283" | xargs -Ix bc -l -e "scale=2; l( x )/l(10)"
I don't really see a reason for xargs here:

    printf 'scale=2; l(%s)/l(10)\n' "283" | bc -l

Alternatives for when the number is read from a file:

    awk '{ printf "l(%s)/l(10)\n", $1 }' file | bc -l -e 'scale=2'

(that's assuming a bc that has -e), or without bc at all:

    awk '{ printf "%.2f\n", log($1)/log(10) }' file
How to pipe a number into bc elegantly?
1,416,436,772,000
I'm trying to compare two floats in bash and something is going wrong. Here is the code sample, based on the solution here:

    num1=0.502E-01
    num2=0.01
    echo $num1'>'$num2 | bc -l
    echo $num2'>'$num1 | bc -l

I expect the output of 1 for the first echo and 0 for the second, but instead I get 0 for the first and 1 for the second. What is wrong with this input? How do I get a consistent comparison of these floats?
The problem is that bc does not understand scientific notation, so 0.502E-01 is not parsed as the number 0.0502. awk, on the other hand, handles E notation and can certainly do float comparisons if called from your shell script:

    num1=0.502E-01
    num2=0.01
    awk -v a="$num1" -v b="$num2" 'BEGIN{print(a>b)}'
    1
    awk -v a="$num1" -v b="$num2" 'BEGIN{print(b>a)}'
    0
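Wrapped as a function (the name float_gt is made up for illustration), the awk comparison can stand in for bc anywhere scientific notation may appear:

    # Hypothetical helper: prints 1 if the first argument is numerically
    # greater than the second, 0 otherwise. awk parses E notation natively.
    float_gt() {
        awk -v a="$1" -v b="$2" 'BEGIN { if (a > b) print 1; else print 0 }'
    }

    float_gt 0.502E-01 0.01   # prints 1 (0.0502 > 0.01)
    float_gt 0.01 0.502E-01   # prints 0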
Wrong output in comparing floats
1,416,436,772,000
I want to raise a fraction (calculated in the first loop) to a decimal power (second loop); however, I always get 1 as a result. I also want to store the output of the second loop in an array. Any ideas how to work around this? Thank you!

    # vector of vertical pressure levels
    levs=($(seq 200.0 50.0 900.0))
    printf "%s\n" "${levs[@]}"

    # exponent for dry air
    rho=$(bc -l <<<'e(l(0.0819)*0.5)')
    echo $rho

    # calculate fraction of P_surf/P_i from Poisson equation for each vertical pressure level
    val3=()
    for i in "${levs[@]}"
    do
        echo $i
        val3+=($(bc -l <<<"1000.0/$i"))
        echo "$val3"
    done
    printf "%s\n" "${val3[@]}"

    # raise fraction of P_surf/P_i to the rho power for dry air (#bc <<< "2 ^ 3")
    pow=()
    for j in "${val3[@]}"
    do
        echo $j
        echo $rho
        pow+=($(bc <<<"$j^rho"))
        #echo $((i*rho))
        echo "$pow"
    done
There are a couple of problems with your attempt. The immediate one is using rho as a literal string instead of expanding the variable when raising to the power in bc:

    bc <<<"$j ^ $rho"

Even with that, the code does not work: bc does not accept fractional numbers as exponents of the ^ operator, and you get a "non-zero scale in exponent" error. You can use awk (tested on the GNU variant) instead, applying the same precision formatting:

    awk -v base="$j" -v xp="$rho" 'BEGIN{ printf "%.20f", base ** xp }'
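If you'd rather stay within bc, a common workaround (not part of the answer above) is to rewrite x^y as e(y*l(x)) using the math-library functions loaded by -l; note this only works for positive bases:

    # x^y == e(y * l(x)) for x > 0; bc -l provides e() and l().
    # fpow is a made-up name for this sketch.
    fpow() {
        echo "e($2 * l($1))" | bc -l
    }

    fpow 5 0.5   # square root of 5, approximately 2.2360679...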
Raise each element of an array to a power and store output to a new array in bash
1,416,436,772,000
I am trying to calculate the percentage of successful queries in an Apache log. I have two commands:

    cat access_log | cut -d' ' -f10 | grep "2.." | wc -l

and

    cat access_log | cut -d' ' -f10 | wc -l

They return the number of successful queries and the total number of queries. I want to calculate the percentage of successful requests using bash, and if possible it should be a one-line script. It should output just the percentage, like 50 or 12, without any additional info. I tried to use bc for this but failed because of a lack of knowledge. Can somebody help me?
Try this:

    echo $(( 100 * $(cut -d' ' -f10 access_log | grep "2.." | wc -l) / $(cut -d' ' -f10 access_log | wc -l) ))

Bash can only handle integers.
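Alternatively, awk can do the count and the division in one pass over the log; this sketch reads on stdin so it can be tried without a file (the field position and the 2xx test follow the question, and percent_ok is a made-up name):

    # Hypothetical one-pass version: counts lines whose 10th field starts
    # with 2 and prints the success percentage, rounded down to an integer.
    percent_ok() {
        awk '{ total++; if ($10 ~ /^2/) ok++ }
             END { printf "%d\n", (total ? 100 * ok / total : 0) }'
    }

    printf 'a b c d e f g h i 200\na b c d e f g h i 404\n' | percent_ok   # prints 50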
How to calculate % of successful queries
1,416,436,772,000
I recently read about bc and found that it supports obase up to 999. Can anyone point me to the numeral set bc uses for bases greater than 16?
Yes, bc can process numbers with bases up to 999. As an example:

    $ echo "ibase=10;obase=40;3*40^2+7" | bc
    03 00 07

That is, "307" in base 40: 3*40^2 + 0*40^1 + 7*40^0, or 4807 in decimal:

    $ echo "ibase=10;obase=10;3*40^2+7" | bc
    4807

So, for bases greater than 16 each value (digit) is printed as a decimal number, with a space as separator. Another example:

    $ echo "ibase=10;obase=530;371*530^9+222*530^3+127" | bc
    371 000 000 000 000 000 222 000 000 127

Or, maybe (in bash), the same number:

    $ bc <<<"obase=530;1224212292558591376050694127"
    371 000 000 000 000 000 222 000 000 127
What is the numeral for base greater than 16 in bc?
1,416,436,772,000
I am trying to multiply array values with values derived from the multiplication of a loop index, using bc.

    #!/bin/bash
    n=10.0
    bw=(1e-3 2.5e-4 1.11e-4 6.25e-5 4.0e-5 2.78e-5 2.04e-5 1.56e-5 1.29e-5 1.23e-5 1.0e-5)
    for k in {1..11};do
        a=$(echo "$n * $k" | bc)
        echo "A is $a"
        arn=${bw[k-1]}
        echo "Arn is $arn"
        b=$(echo "$arn * $a" | bc -l)
        echo "b is $b"
        #echo $a $b
    done

I am able to echo the array values by assigning them to a new variable within the loop, but when I use that to multiply using bc, I get (standard_in) 1: syntax error. I searched for clues and tried some but none helped. The expected output is as follows.

    10 1.00E-02
    20 5.00E-03
    30 3.33E-03
    40 2.50E-03
    50 2.00E-03
    60 1.67E-03
    70 1.43E-03
    80 1.25E-03
    90 1.16E-03
    100 1.23E-03
    110 1.10E-03

All help is greatly appreciated.
bc doesn't support the scientific format. Use something that does, for example Perl:

    a=$(perl -e "print $n * $k")
    arn=${bw[k-1]}
    b=$(perl -e "printf '%.2E', $arn * $a")
    echo $a $b

Output:

    10 1.00E-02
    20 5.00E-03
    30 3.33E-03
    40 2.50E-03
    50 2.00E-03
    60 1.67E-03
    70 1.43E-03
    80 1.25E-03
    90 1.16E-03
    100 1.23E-03
    110 1.10E-03
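awk also parses scientific notation natively, so the whole loop can be collapsed into a single call; this sketch shortens the array to three entries for illustration:

    # awk understands 1e-3 etc. natively; %.2E reproduces the desired format.
    awk 'BEGIN {
        n = split("1e-3 2.5e-4 1.11e-4", bw, " ")
        for (k = 1; k <= n; k++)
            printf "%d %.2E\n", 10 * k, bw[k] * 10 * k
    }'

which prints the first three expected lines (10 1.00E-02, 20 5.00E-03, 30 3.33E-03).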
bash array multiplication using bc
1,416,436,772,000
I ran into an issue where bc does not have boolean expressions on an AIX system. I'm wondering if there is a replacement so I don't have to make my code any longer. This is in a bash script. Here is what I had:

    percent=-0.17
    max=0.20
    if [[ $(bc <<< "$percent <= $max && $percent >= -$max") -ge 1 ]]; then
        echo "Under the $max acceptable buffer: File ACCEPTED"
    else
        echo "Over the $max acceptable buffer: File REJECTED"
        exit 1
    fi

This is my output:

    ++ bc
    syntax error on line 1 stdin
    + [[ '' -ge 1 ]]
bc as specified by POSIX has no boolean operators (&&, ||) and only allows relational expressions inside if, while, and for conditions; AIX's bc follows the spec, so it does not support your expression. You have to break out the test like this:

    percent=-0.17
    max=0.20
    if [[ $(bc <<< "if ($percent <= $max) if ($percent >= -$max) 1") -eq 1 ]]; then
        echo "Under the $max acceptable buffer: File ACCEPTED"
    else
        echo "Over the $max acceptable buffer: File REJECTED"
        exit 1
    fi

Re-formatted, the bc script looks like this:

    if ($percent <= $max)
        if ($percent >= -$max)
            1

Only if the $percent value is within both bounds does the expression 1 get executed, which prints 1 to stdout.
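If you can depend on awk instead (it is available on AIX), the range test needs no bc at all; this sketch encodes the result in the exit status (in_buffer is a made-up name):

    # Hypothetical helper: succeeds (exit 0) when -max <= percent <= max.
    in_buffer() {
        awk -v p="$1" -v m="$2" 'BEGIN { exit !(p <= m && p >= -m) }'
    }

    if in_buffer -0.17 0.20; then
        echo "ACCEPTED"      # these values are inside the buffer
    else
        echo "REJECTED"
    fi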
AIX Does not support bc boolean expression [duplicate]
1,416,436,772,000
I have used this to write a function that calculates with floating-point numbers:

    mt (){
        echo "$1" | bc -l | awk '{printf "%f", $0}'
        echo ' '
    }

This works great, but I was wondering if there is a way to omit the function call entirely, taking advantage of the error message that is returned when an operation on floats is attempted:

    $ 45.0+1.2
    -bash: 45.0+1.2: command not found

Is this doable? If so, how?

EDIT: I guess the downvotes mean I didn't think this through, although a clarifying comment would be helpful. I was using the mt function for calculations, but I often forget the mt when doing many of them in a short period. An initial and unique function call would do, and for that purpose I could simply use python and call it a day. Sorry for my ignorance.
Add this somewhere where it will be loaded into your bash environment (~/.bashrc is one option). It is a bad idea and won't work for division without spaces — see the man page excerpt below for why. The exit is needed only for some non-GAWK AWK versions.

    command_not_found_handle() {
        # AWK version, security risk
        awk "BEGIN { print $*; exit; }"   # Use AWK as a calculator

        # If you want to keep what you did previously:
        # BC version, possibly less of a security risk, but an extra process is involved
        # echo "$*" | bc -l | awk '{printf "%f\n", $0}'

        echo "$0: $1: command not found" 1>&2   # Send error to STDERR
        exit 127                                # Keep same exit status as otherwise
    }

From the bash man page (this function gets invoked when bash runs out of other options):

    If the name is neither a shell function nor a builtin, and contains no slashes, bash searches each element of the PATH for a directory containing an executable file by that name. Bash uses a hash table to remember the full pathnames of executable files (see hash under SHELL BUILTIN COMMANDS below). A full search of the directories in PATH is performed only if the command is not found in the hash table. If the search is unsuccessful, the shell searches for a defined shell function named command_not_found_handle. If that function exists, it is invoked with the original command and the original command's arguments as its arguments, and the function's exit status becomes the exit status of the shell. If that function is not defined, the shell prints an error message and returns an exit status of 127.
Bash as float calculator