| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,378,499,802,000 |
I have the bash line:
expr substr $SUPERBLOCK 64 8
which returns the string:
00080000
I know that this is actually 0x00080000 in little-endian. Is there a way to create an integer variable from it in bash, in big-endian form, like 0x80000?
|
There is probably a better way to do this, but I've come up with this solution, which converts the number to decimal and then back to hex (and manually adds the 0x):
printf '0x%x\n' "$((16#00080000))"
Which you could write as:
printf '0x%x\n' "$((16#$(expr substr "$SUPERBLOCK" 64 8)))"
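As a quick check of the `16#` base prefix, a minimal sketch (the field value is taken from the question; bash is assumed):

```shell
#!/usr/bin/env bash
# The 16# prefix makes bash arithmetic read the string as base-16;
# leading zeros are dropped because the result is an integer.
field="00080000"
printf '0x%x\n' "$((16#$field))"   # prints 0x80000
```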
| How to read string as hex number in bash? |
1,378,499,802,000 |
I am wondering whether there is a name for such a simple function that returns the rank order of numbers in an array.
I would really love to do this ranking in a minimalist way with basic Unix commands, but I cannot think of anything other than a basic find-and-loop, which is not so elegant.
Assume you have an array of nu... |
If that list was in a file, one per line, I'd do something like:
sort -nu file |
awk 'NR == FNR {rank[$0] = NR; next}
{print rank[$0]}' - file
If it was in a zsh $array:
sorted=(${(nou)array})
for i ($array) echo $sorted[(i)$i]
That's the same principle as for the awk version above, the rank is the index NR/... | How to rank numbers in array by Unix? |
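The awk version from this answer can be exercised on a throwaway file (the file name and numbers are illustrative):

```shell
# Rank each number by its position in the sorted unique list.
printf '%s\n' 10 30 20 30 > /tmp/rank_demo.txt
sort -nu /tmp/rank_demo.txt |
awk 'NR == FNR {rank[$0] = NR; next}
     {print rank[$0]}' - /tmp/rank_demo.txt
rm -f /tmp/rank_demo.txt
```

For this input the ranks printed are 1, 3, 2, 3: equal values share a rank because the sorted list is deduplicated first.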
1,378,499,802,000 |
Here is my code; I want to test $COUNTER against multiples of 5.
if [ "$COUNTER" = "5" ]; then
That works, but I want it to trigger dynamically at multiples like 5, 10, 15, 20, etc.
|
Conclusion of the various comments seems to be that the simplest answer to the original question is
if ! (( $COUNTER % 5 )) ; then
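A minimal, runnable sketch of that test (the counter values are illustrative; bash is assumed):

```shell
#!/usr/bin/env bash
# (( COUNTER % 5 )) is true when the remainder is non-zero,
# so the negated test fires exactly at multiples of 5.
for COUNTER in 4 5 10 12 15; do
  if ! (( COUNTER % 5 )); then
    echo "$COUNTER is a multiple of 5"
  fi
done
```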
| Compare bash variable to see if divisible by 5 |
1,378,499,802,000 |
I am trying to sort a data file in descending order. The data file consists of three columns delimited by tabs; I want to order the rows in descending order of the third column, which is given in scientific notation:
cat eII_surf.txt | sort -gr -k3
Somehow, this worked on a previ... |
2.3e-12 would be understood as 2 in a locale where the decimal radix character is , (as it is in most of the non-English speaking world including your de_DE.utf8) where the number would need to be written 2,3e-12.
You could do:
LC_ALL=C sort -grk2 < your-file
to force the numbers to be interpreted in the English style.
I... | "sort -g" does not work as expected on data in scientific notation |
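A self-contained illustration of the locale override (assuming GNU sort for -g; the values are made up):

```shell
# With the C locale, "." is the decimal radix, so exponents sort correctly.
printf '%s\n' 2.3e-12 1.0e-15 4.5e-3 | LC_ALL=C sort -g
```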
1,378,499,802,000 |
I've reviewed the "Similar questions", and none seem to solve my problem:
I have a large CSV input file; each line in the file is an x,y data point. Here are a few lines for illustration, but please note that in general the data are not monotonic:
1.904E-10,2.1501E+00
3.904E-10,2.1827E+00
5.904E-10,2.1106E+00
7.... |
Here’s a generic variant:
BEGIN { OFS = FS = "," }
{
for (i = 1; i <= NF; i++) sum[i] += $i
count++
}
count % 3 == 0 {
for (i = 1; i <= NF; i++) $i = sum[i] / count
delete sum
count = 0
if ($NF >= 1.1 * last || $NF <= 0.9 * last) {
print
last = $NF
}
}
END {
if (coun... | Can `awk` sum a column over a specified number of lines |
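The core of the accepted approach, reduced to a single-column sketch (the block size of 3 and the data are illustrative):

```shell
# Print the mean of every 3 input lines, resetting the accumulators each block.
printf '%s\n' 1 2 3 10 20 30 |
awk '{ sum += $1; count++ }
     count % 3 == 0 { print sum / count; sum = 0; count = 0 }'
```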
1,378,499,802,000 |
I am working on a file where I have columns of very large values.
(like 40 digits : 646512354651316486246729519751795724672467596754627.06843
and so on ...)
I would like to have those numbers in scientific notation but with only 3 or 4 numbers after the dot. Is there a way to use sed or something to do that on every ... |
If you want to convert only a specific column, awk helps:
$ cat ip.txt
foo 64651235465131648624672951975 123
bar 3452356235235235 xyz
baz 234325236452352352345234532 ijkls
$ # change only second column
$ awk '{$2 = sprintf("%.3e", $2)} 1' ip.txt
foo 6.465e+28 123
bar 3.452e+15 xyz
baz 2.343e+26 ijkls
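If the whole line is a single number, the same sprintf format applies directly to $1 (a minimal sketch; the number is taken from the question):

```shell
# awk converts the long digit string to a double, then formats it.
printf '%s\n' 646512354651316486246729519751795724672467596754627.06843 |
awk '{ printf "%.4e\n", $1 }'
```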
| How to convert big numbers into scientific notation |
1,378,499,802,000 |
The following command achieves my goal of grabbing the BTC price from a specific exchange.
curl -sS https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT | jq -r '.price'
The output at the moment will be 7222.25000000, but I would like to get 7222.25.
|
Pass the price through tonumber:
curl -sS 'https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT' |
jq -r '.price | tonumber'
This would convert the price from a string to a number, removing the trailing zeros. See the manual for jq.
| Trim trailing zeroes off a number extracted by jq |
1,378,499,802,000 |
I have been trying to define a thousands separator in printf for a while now and I have discovered that zsh has some problems with it.
In bash I can use something like:
$ printf "%'d\n" 1234567890
1,234,567,890
but in zsh it won't work :
$ printf "%'d\n" 1234567890
printf: %': invalid directive
I have just found ... |
Update: as of zsh v. 5.1, the printf builtin supports grouping of thousands via ' just like bash/coreutils printf (see also the discussion here).
The thousands separator is a GNU extension that zsh doesn't support, and it has its own printf builtin that you end up using instead. As mentioned in the linked post, you c... | Thousands separator in printf in zsh |
1,378,499,802,000 |
I have a file which has variables from G1 to G229. I want to replace them with G230 to G469; how can I do that? I tried this bash script, but it didn't work:
#!/bin/bash
for num in {1..229}
do
echo G$num
N=$(($num+229))
echo G$N
sed -i -e 's/$G$num/$G$N/g' file
done
|
A sed solution. Maybe it is too tricky and suboptimal, but it works, as an experiment :).
It does all replacements in one sed call by executing one big command sequence, generated using printf and paste. I wanted to split this command over multiple lines for readability, but couldn't; it stops working then. So - the on...
1,378,499,802,000 |
I have a command that outputs a number to a log file, and I don't like the way it looks when the number changes the number of decimal places because it ruins the alignment and makes everything look messy. How do I force the output to have the same number of decimal places each time?
ex:
531.125
531.4561
531.3518
531.2... |
In bash printf is able to use the %f format
#!/bin/bash
for a in 531.125 531.4561 531.3518 531.2; do
printf "%8.4f\n" "$a"
done
Executed gives:
531.1250
531.4561
531.3518
531.2000
| Formatting numerical output in bash to have exactly 4 decimal places |
1,378,499,802,000 |
I have a program that sums a column in a file:
awk -v col=2 '{sum+=$col}END{print sum}' input-file
However, it has a problem: If you give it a file that doesn't have numeric data, (or if one number is missing) it will interpret it as zero.
I want it to produce an error if one of the fields cannot be parsed as a numbe... |
I ended up with this:
awk -v col=$col '
typeof($col) != "strnum" {
print "Error on line " NR ": " $col " is not numeric"
noprint=1
exit 1
}
{
sum+=$col
}
END {
if(!noprint)
print sum
}' $file
This uses typeof, which is a GNU awk extension. typeof($col) returns 'strnum' if $col is a valid n... | Make awk produce error on non-numeric |
1,378,499,802,000 |
I have a log file. For every line with a specific number, I want to sum the last number of those lines. Grepping and cutting is no problem, but I don't know how to sum the numbers. I tried some solutions from StackExchange but didn't get them to work in my case.
This is what I have so far:
grep "30201" logfile.txt | cut -f6... |
You can use paste to serialize the numbers into a format suitable for bc to do the addition:
% grep "30201" logfile.txt | cut -f6 -d "|"
650
1389
945
% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+
650+1389+945
% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+ | bc
2984
If you have grep wit... | How to grep and cut numbers from a file and sum them |
1,378,499,802,000 |
This example works fine:
awk -v num1=5999 -v num2=5999 'BEGIN{ print (num2==num1) ? "equal" : "not equal" }'
equal
This example does not work well:
awk -v num1=59558711052462309110012 -v num2=59558711052462309110011 'BEGIN{ print (num2==num1) ? "equal" : "not equal" }'
equal
In the second example compared numbers a... |
You're reaching the limit of the precision of awk numbers.
You could force the comparison to be a string comparison with:
awk -v num1=59558711052462309110012 -v num2=59558711052462309110011 '
BEGIN{ print (num2""==num1) ? "equal" : "not equal" }'
(Here the concatenation with the empty string forces them to be consi... | How to compare two numbers in awk? |
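To see the difference side by side (numbers from the question; the parentheses around each concatenation are added for clarity):

```shell
awk -v num1=59558711052462309110012 -v num2=59558711052462309110011 'BEGIN {
  print (num2 == num1)           ? "equal" : "not equal"   # as doubles: precision lost
  print ((num2 "") == (num1 "")) ? "equal" : "not equal"   # as strings: exact
}'
```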
1,378,499,802,000 |
I have table data like the following:
abc 1 1 1
bcd 2 2 4
bcd 12 23 3
cde 3 5 5
cde 3 4 5
cde 14 2 25
I want the sum of the values in each column, grouped by the variable in the first column; the desired result is like below:
abc 1 1 1
bcd 14 25 7
cde 20 11 35
I used an awk command like this:
awk -F"\t" '{for(n=... |
You were fairly close.
You see what you were doing wrong, don't you?
You were keeping one total for each column 1 value,
when you should have been keeping three.
This is similar to Inian's answer,
but trivially extendable to handle any number of columns:
awk -F"\t" '{for(n=2;n<=NF; ++n) a[$1][n]+=$n}
END {fo... | How to get sum of values in column based on variables in other column separately? [duplicate] |
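A portable variant of the same idea, using SUBSEP keys instead of gawk's arrays of arrays (input is the first three rows of the question's sample; the trailing sort is only there to make the unordered for-in output deterministic):

```shell
printf 'abc\t1\t1\t1\nbcd\t2\t2\t4\nbcd\t12\t23\t3\n' |
awk -F'\t' '
  { keys[$1] = 1
    for (n = 2; n <= NF; n++) sum[$1 SUBSEP n] += $n }
  END {
    for (k in keys) {
      printf "%s", k
      for (n = 2; n <= NF; n++) printf "\t%d", sum[k SUBSEP n]
      print ""
    }
  }' | sort
```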
1,378,499,802,000 |
I'm trying to sort a file based on a particular position, but that does not work; here are the data and output.
~/scratch$ cat id_researchers_2018_sample
id - 884209 , researchers - 1
id - 896781 , researchers - 4
id - 901026 , researchers - 15
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 908660 , ... |
You are intending to sort by column 7 numerically.
This can be done with either
$ sort -n -k 7 file
id - 884209 , researchers - 1
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 916197 , researchers - 1
id - 896781 , researchers - 4
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 9104... | Sort numerical column |
1,378,499,802,000 |
According to $ man gawk, the strtonum() function can convert a string into a number:
strtonum(str) Examine str, and return its numeric value. If
str begins with a leading 0, treat it as an
octal number. If str begins with a leading 0x
... |
This is related to the generalised strnum handling in version 4.2 of GAWK.
Input values which look like numbers are treated as strnum values, represented internally as having both string and number types. “0123” qualifies as looking like a number, so it is handled as a strnum. strtonum is designed to handle both strin... | Why does gawk treat `0123` as a decimal number when coming from the input data? |
1,378,499,802,000 |
I have a need to convert a list of decimal values in a text file into hex format, so for example test.txt might contain:
131072
196608
262144
327680
393216
...
the output should be list of hex values (hex 8 digit, with leading zeroes):
00020000
00030000
00040000
...
the output is printed into the text file. How to... |
You can do this using printf and bash:
printf '%08x\n' $(< test.txt)
Or using printf and bc...just...because?
printf '%08s\n' $(bc <<<"obase=16; $(< test.txt)")
In order to print the output to a text file just use the shell redirect > like:
printf '%08x\n' $(< test.txt) > output.txt
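Run against the question's sample values (a throwaway file; bash's $(< file) expansion is assumed):

```shell
#!/usr/bin/env bash
printf '%s\n' 131072 196608 262144 > /tmp/dec_demo.txt
printf '%08x\n' $(< /tmp/dec_demo.txt)   # one zero-padded hex value per line
rm -f /tmp/dec_demo.txt
```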
| Convert a list of decimal values in a text file into hex format |
1,378,499,802,000 |
I am trying to extract numbers out of some text. Currently I am using the following:
echo "2.5 test. test -50.8" | tr '\n' ' ' | sed -e 's/[^0-9.]/ /g' -e 's/^ *//g' -e 's/ *$//g' | tr -s ' '
This would give me 2.5, "." and 50.8. How should I modify the first sed so it would detect float numbers, both positive and ne... |
grep works well for this:
$ echo "2.5 test. test -50.8" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?'
2.5
-50.8
How it works
-E
Use extended regex.
-o
Return only the matches, not the context
[+-]?[0-9]+([.][0-9]+)?
Match numbers which are identified as:
[+-]?
An optional leading sign
[0-9]+
One or more numbers
([.][0-9]+)... | Extracting positive/negative floating-point numbers from a string |
1,378,499,802,000 |
I found that floating point multiplication in MIT Scheme is not accurate; for example,
1 ]=> (* 1991.0 0.1)
will produce
;Value: 199.10000000000002
Could you please help explain the appearance of the weird trailing digit "2"?
|
This quote is from memory and so probably not quite right but it conveys the essence of the problem: "Operating on floating point numbers is like moving piles of sand: every time you do it, you lose a little sand and you get a bit of dirt" (from Kernighan and Plauger's "Elements of programming style" IIRC). Every prog... | Inaccurate multiplication in mit-scheme [closed] |
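The same artifact is reproducible outside Scheme, since it comes from IEEE 754 doubles rather than from MIT Scheme (a sketch using awk):

```shell
# 0.1 has no exact binary representation, so the product differs
# from 199.1 in the 14th decimal place.
awk 'BEGIN { printf "%.17g\n", 1991.0 * 0.1 }'
```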
1,378,499,802,000 |
File contents:
RANDOM TEXT num1=400 num2=15 RANDOM TEXT
RANDOM TEXT num1=300 num2=10 RANDOM TEXT
RANDOM TEXT num1=200 num2=5 RANDOM TEXT
I would like to subtract 5 for each num2 per line like so:
RANDOM TEXT num1=400 num2=10 RANDOM TEXT
RANDOM TEXT num1=300 num2=5 RANDOM T... |
Using awk:
awk '{ for (i=1;i<=NF;i++) { if ($i ~ /num2=/) {sub(/num2=/, "", $i); $i="num2="$i-5; print} } }' file
This will loop through each column of each line looking for the column that contains num2=. When it finds that column it will:
Remove num2= - sub(/num2=/, "", $i)
Redefine that column as num2={oldnum-5}... | How can I adjust nth number in a line? |
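Condensed to a one-line check on a single sample line (the surrounding text is illustrative):

```shell
# Find the num2= field, strip the prefix, subtract 5, and put the prefix back.
printf 'RANDOM TEXT num1=400 num2=15 RANDOM TEXT\n' |
awk '{ for (i = 1; i <= NF; i++)
         if ($i ~ /^num2=/) { sub(/num2=/, "", $i); $i = "num2=" ($i - 5) } } 1'
```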
1,378,499,802,000 |
This question is about generating random numbers between a range, which is fine, but it doesn't fit my case.
I'll explain in SQL terms because it seems to me that's easier to understand, though the question is about bash. My idea is with the results of the bash code build a SQL script.
I have two MySQL tables, one of ... |
If you want to do this only using commonly-available tools (at least on Linux distributions), the most efficient way is probably to ask shuf:
shuf -i 1-139 -n 1519 -r
This produces 1519 numbers randomly chosen between 1 and 139.
To ensure that each place gets one person, shuffle 139 numbers first without repeating:
s... | Random number output between two ranges linked together |
1,378,499,802,000 |
I want to get the exact number when I try to find the average of a column of values.
For example, this is the column of input values:
1426044
1425486
1439480
1423677
1383676
1360088
1390745
1435123
1422970
1394461
1325896
1251248
1206005
1217057
1168298
1153022
1199310
1250162
1247917
1206836
When I use the followin... |
... | awk '{ sum+=$1} END { print sum/NR}'
By default, (GNU) awk prints numbers with up to 6 significant digits (plus the exponent part). This comes from the default value of the OFMT variable. It doesn't say that in the docs, but this only applies to non-integer valued numbers.
You could change OFMT to affect all pr... | Number formatting and rounding issue with awk |
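OFMT's effect is easy to demonstrate (a sketch; the %.10g format is an arbitrary choice):

```shell
# print uses OFMT when converting non-integer numbers to strings;
# the default is "%.6g".
awk 'BEGIN { print 1/3; OFMT = "%.10g"; print 1/3 }'
```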
1,378,499,802,000 |
When the hex number is relative small, I can use
echo 0xFF| mawk '{ printf "%d\n", $1}'
to convert hex to dec.
When the hex number is huge, mawk does not work any more, e.g.
echo 0x8110D248 | mawk '{ printf "%d\n", $1 }'
outputs 2147483647 (which is wrong; 2147483647 is equivalent to 0x7FFFFFFF).
How can I convert ... |
A better command to use for arbitrarily large numbers is bc. Here's a function to perform the conversion
hextodec() {
local hex="${1#0x}"
printf "ibase=16; %s\n" "${hex^^}" | bc
}
hextodec 0x8110D248
2165363272
I'm using a couple of strange-looking features here that manipulate the value of the variables as ... | Converting HEX to DEC is out of range by using `mawk` |
1,378,499,802,000 |
There seem to be a number of neat and simple methods for rounding all numbers in a column to 1 decimal place, using awk's printf or even bash's printf. However, I can't find an equally simple method for just truncating all numbers in a column to 1 decimal place (not rounding). The simplest method for sorting this at...
Easy enough with grep:
$ cat ip.txt
123.434
1456.8123
2536.577
345.95553
23643.1454
$ grep -o '^[0-9]*\.[0-9]' ip.txt
123.4
1456.8
2536.5
345.9
23643.1
^ start of line
[0-9]* zero or more digits
\. match literal dot character
[0-9] match a digit
since -o option of grep is used, only matched portion is printed,... | Round down/truncate decimal places within column |
1,378,499,802,000 |
The following code calculates the binomial probability of k successes out of n trials:
n=144
prob=$(echo "0.0139" | bc)
echo -e "Enter no.:"
read passedno
k=$passedno
nCk2() {
num=1
den=1
for((i = 1; i <= $2; ++i)); do
((num *= $1 + 1 - i)) && ((den *= i))
done
echo $((num / den))
}
b... |
FWIW,
prob=$(echo "0.0139" | bc)
is unnecessary - you can just do
prob=0.0139
Eg,
$ prob=0.0139; echo "scale=5;1/$prob" | bc
71.94244
There's another problem with your code, apart from the underflow issue. Bash arithmetic may not be adequate to handle the large numbers in your nCk2 function. Eg, on a 32 bit syst... | bc scale: How to avoid rounding? (Calculate small binomial probability) |
1,378,499,802,000 |
I'm trying to numerically sort every column individually in a very large file. I need the command to be fast, so I'm trying to do it in an awk command.
Example Input:
1,4,2,7,4
9,2,1,1,1
3,9,9,2,2
5,7,7,8,8
Example Output:
1,2,1,1,1
3,4,2,2,2
5,7,7,7,4
9,9,9,8,8
I made something that will do the job (but its not... |
You can do it in a single GNU awk:
gawk -F ',' '
{
for(i=1;i<=NF;i++){matrix[i][NR]=$i}
}
END{
for(i=1;i<=NF;i++){asort(matrix[i])}
for(j=1;j<=NR;j++){
for(i=1;i<NF;i++){
printf "%s,",matrix[i][j]
}
print matrix[i][j]
}
... | Numerical sorting of every column in a file individually using awk |
1,378,499,802,000 |
For example I have some test file and want to sort it by the column values. After
awk -F'[:,]' '{print $16}' test.json
I get next output for the column 16:
123
457
68
11
939
11
345
9
199
13745
Now, when I want to sort it numeric, I use
awk -F'[:,]' '{print $16}' test.json | sort -nk16
but I just get back... |
The output of awk contains only one column, so no 16th column.
So sort sees all identical empty sort keys and what you observe is the result of the last resort sort (lexical sort on the whole line) which you can disable with the -s option in some implementations.
Here you want:
awk -F'[:,]' '{print $16}' test.json | s... | Numeric sort by the column values |
1,575,676,821,000 |
I have a large data file dataset.csv with 7 numeric columns. I have read that AWK would be the fastest/efficient way to calculate the mean and variance for each column. I need an AWK command that goes through the CSV file and outputs the results into a summary CSV. A sample dataset:
1 1 12 1 0 0 426530
1 ... |
you will need a loop that goes over all columns
{ for(i=1;i<=NF;i++) ...
and arrays
... total[i]+=$i ; sq[i]+=$i*$i ; }
this results in a command line like the following (for the average)
awk '{ for(i=1;i<=NF;i++) total[i]+=$i ; }
END { for(i=1;i<=NF;i++) printf "%f ",total[i]/NR ;}'
full program
I use this awk to compute mean and... | Using AWK to calculate mean and variance of columns |
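Putting both accumulators together, a minimal sketch (population variance, computed as E[x²] − E[x]²; the input values are illustrative):

```shell
printf '%s\n' 1 2 3 4 |
awk '{ for (i = 1; i <= NF; i++) { total[i] += $i; sq[i] += $i * $i } }
     END { for (i = 1; i <= NF; i++)
             printf "mean=%g variance=%g\n", total[i]/NR, sq[i]/NR - (total[i]/NR)^2 }'
```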
1,575,676,821,000 |
I have a file that has 1000 text lines. I want to sort on the 4th column in 20-line intervals and print the output to another file. Can anybody help me sort them with awk or sed?
Here is an example of the data structure input
1 1.1350 1092.42 0.0000
2 1.4645 846.58 0.0008
3 1... |
With GNU sort and GNU split, you can do
split -l 20 file.txt --filter "sort -nk 4|tail -n 1"
The file gets split into packets of 20 lines, then the --filter option runs each packet through the given commands, so each packet gets sorted numerically on the 4th key and only the last line (highest value) is extracted by tail.
| How to sort each 20 lines in a 1000 line file and save only the sorted line with highest value in each interval to another file? |
1,575,676,821,000 |
awk newbie here.
Suppose I have two columns of data, and I want to calculate the rate of increase, given by delta(y)/delta(x). How would I do this in an awk script? What I've learnt so far only deals with line by line manipulation, and I'm not sure how I'd work with multiple lines.
Note: Suppose I have N data points, ... |
Use variables to store data that you need to remember from one line to the next.
Line N+1 in the output is calculated from lines N and N+1 in the input, so you need variables to store the content of the previous line. There are two fields per line, so use one variable for each.
Lines 1 and 2 get special treatment (tit... | Calculating rates / 'derivative' with awk |
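The description above can be sketched as follows (the column data are invented):

```shell
# Print delta(y)/delta(x) between consecutive lines; the first line
# only seeds the "previous" variables px and py.
printf '0 0\n1 2\n2 6\n' |
awk 'NR > 1 { print ($2 - py) / ($1 - px) } { px = $1; py = $2 }'
```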
1,575,676,821,000 |
Why does the following command print numerical values?
$ iostat | sed -n '/[:digit:]/!p'
1.56 1.38 0.31 0.34 0.03 96.38
|
The POSIX character class you are trying to use must be placed inside a regular bracket expression, so [[:digit:]], not [:digit:]. You're also not limited to using just the one character class in the bracket expression, so e.g. [[:digit:][:punct:]] or [^[:digit:]] can be used.
Your command actually means "print all... | sed: !p command strange behavior |
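Comparing with the corrected form, a minimal sketch (the input lines are made up):

```shell
# With the class inside a bracket expression, !p prints only the lines
# containing no digit at all.
printf 'abc\n123\nx9y\n' | sed -n '/[[:digit:]]/!p'
```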
1,575,676,821,000 |
I have a text file containing flight times, hours and minutes separated by colon sign, one per line:
00:50
00:41
00:43
00:50
01:24
I am currently using Apple's Numbers application with a simple formula to calculate the total time (result being 4:28 for the example data above).
However, I was wondering if there is an... |
As you note, there are many possibilities. The following versions with awk are roughly equivalent to the perl you included with your question:
(with GNU awk):
awk -F : '{acch+=$1;accm+=$2;} ENDFILE { \
print acch+int(accm/60) ":" accm%60; }' [inputfile]
(with "POSIX" awk):
awk -F : '{acch+=$1;accm+=$2;print acch+i... | How to sum times in a text file using command-line? |
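The core accumulation, reduced to a sketch over the question's first two lines (END is used instead of ENDFILE for portability beyond GNU awk):

```shell
# Sum hours and minutes separately, then carry the excess minutes into hours.
printf '00:50\n00:41\n' |
awk -F : '{ acch += $1; accm += $2 }
          END { printf "%d:%02d\n", acch + int(accm/60), accm % 60 }'
```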
1,575,676,821,000 |
I am currently looking for an alternative to the following code that works a little less 'wonky'.
printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$vTICKET"); printf "$vTICKET\n";
If I input 072 as the input, this is what I see
Please enter the ticket number: 072
0058
I am wondering i... |
The leading zeros on the input value are causing the shell to interpret it as an octal number.
You can force decimal conversion using 10# e.g.
$ printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$((10#$vTICKET))" ); printf "$vTICKET\n";
Please enter the ticket number: 072
0072
Note that... | Add leading zeroes to a user's input but is being transformed with printf |
1,575,676,821,000 |
I'm currently writing a shell script that separates values from their identifiers (retrieved from grep).
For example, if I grep a certain file I will retrieve the following information:
value1 = 1
value2 = 74
value3 = 27
I'm wondering what UNIX command I can use to take in the information and convert it to this format... |
You can use awk like this:
grep "pattern" file.txt | awk '{printf "%s ", $3}'
Depending on what you do with grep, you should consider using awk for the grepping itself:
awk '/pattern/{printf "%s ", $3}' file.txt
Another way, taking advantage of bash word-splitting:
echo $(awk '/pattern/{print $3}' file.txt)
Edi... | How to separate numerical values from identifiers |
1,575,676,821,000 |
What is the difference between the two sort options -n and -g?
It's a bit confusing to have too much detail but not enough adequate documentation.
|
TL;DR
While -n will sort simple floats such as 1.234, the -g option handles a much wider range of numerical formats but is slower.
Also -g is a GNU extension to the POSIX specification.
From man sort, the relevant parts are:
-g, --general-numeric-sort, --sort=general-numeric
Sort by general numerica... | What's the difference between "sort -n" and "sort -g"? |
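The difference shows up as soon as exponents are involved (assuming GNU sort; the C locale is forced for a stable decimal radix):

```shell
printf '%s\n' 1e3 20 | LC_ALL=C sort -n   # -n stops parsing at "1", so 1e3 sorts first
printf '%s\n' 1e3 20 | LC_ALL=C sort -g   # -g reads 1e3 as 1000
```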
1,575,676,821,000 |
I am trying to find the maximum value of a column of data using gawk:
gawk 'BEGIN{max=0} {if($1>0+max) max=$1} END {print max}' dataset.dat
where dataset.dat looks like this:
2.0
2.0e-318
The output of the command is
2.0e-318
which is clearly smaller than 2.
Where is my mistake?
Edit
Interestingly enough, if ... |
The 0+ needs to be prefixed to each $1 to force a numeric conversion. max does not need 0+ -- it is already cast to numeric when it is stored.
Paul--) AWK='
> BEGIN { max = 0; }
> 0+$1 > max { max = 0 + $1; }
> END { print max; }
> '
Paul--) awk "${AWK}" <<[][]
> 2.0
> 2.0e-318
> [][]
2
Paul--) awk "${AWK}" <<[][]
> 2... | Why does gawk (sometimes?) think 2.0e-318 > 2.0? |
1,575,676,821,000 |
I use this command
awk 'NR%2{t=$1;next}{print $1-t,$2}'
to get the distance between two consecutive Y points in a file. But I would like all the values to be positive. How do I get that? Something like a modulus (absolute value).
1577 -46.1492
1577.57 47
1578 -47.6528
1578.87 49
1579 -49.2106
1580 -50.7742
1580.15 51
|
Command: awk '$2 !~ /^-/{print $0}' file
output
1577.57 47
1578.87 49
1580.15 51
| How to obtain only positive value in second column [closed] |
1,575,676,821,000 |
Task:
Print the load average from top in decimal form (e.g. 0.23) to stdout.
Details:
I need to put this script into Chef's InSpec and check whether its result is bigger/smaller than some value, for example:
describe command("top -b -n 1 | awk '/load average/ { sub(/,/,\".\",$10); printf \"%f\n\",$10}'") do
its("stdout") { sho... |
Task:
Print the load average from top in decimal form (e.g. 0.23) to stdout.
Solution:
top -b -n 1 | perl -lane 'print "$1.$2" if /load average: (\d+)[,.](\d+)/'
Notes:
This retrieves the 1m load average. It looks like this is what you want, but you should state it clearly. If needed, the code can be easily modified to ret... | Converting awk printf string to decimal |
1,575,676,821,000 |
I'm currently working on a project called 'dmCLI', which means 'download manager command line interface'. I'm using curl to download a file in multiple parts, and I'm having trouble with my code: I can't convert a string to an int.
Here's my full code. I also uploaded my code into Github Repo. Here it is:
dmCLI GitHub Repository
#... |
The regex below will extract the number of bytes, only the number:
contentlength=$(
LC_ALL=C sed '
/^[Cc]ontent-[Ll]ength:[[:blank:]]*0*\([0-9]\{1,\}\).*$/!d
s//\1/; q' headers
)
After the above change, the contentlength variable will consist only of decimal digits (with leading 0s removed so the shell does...
1,575,676,821,000 |
I have a .log file where each entry is on the form
2018-09-28T10:53:48,006 [Jetty-6152 ] INFO [correlationId] my.package.service:570 - Inbound request: 1.2.3.4 - GET - 12342ms - 200 - /json/some/resource
2018-09-28T11:53:48,006 [Jetty-6152 ] INFO [correlationId] my.package.service:570 - Inbound request: 1.2.3.4 - ... |
I think this should do it:
grep -E '([5-9][0-9]{3}|[0-9]{5,})ms' | grep -v 5000ms
How does it work?
It uses -E so the regex is of the "modern" format (also called extended). It just makes the typing easier in our case as we can save some \ chars.
The (...|...)ms searches for two alternatives followed by the string ... | How do I grep for occurences of "[numberLargerThan5000]ms"? |
1,575,676,821,000 |
In my netstat output, I want to extract the port range between 32000 and 64000. I have tried egrep "^[3,4,5,6]", but I need to start from 32000. Should I use awk or some kind of script?
Linux# netstat -nau
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address ... |
awk solution:
netstat -nau | awk -F'[[:space:]]+|:' 'NR>2 && $5>=32000 && $5<=64000'
The output in your case would be as:
udp 0 0 10.0.0.20:55238 0.0.0.0:*
udp 0 0 10.0.0.20:55240 0.0.0.0:*
udp 0 0 10.0.0.20:55244 0.0.0.0:*
udp 0 0 10.0.0.20:32246 0.... | grep regex port range from netstat |
1,575,676,821,000 |
For example, there are temperature data in these folders from different times.
temps.txt contains the temperature reading. So how can I use a bash script to find the maximum temperature? (The result should only show the date, time and temperature, e.g. ./2011.10.20/00:00/temps.txt 27C.)
$ ls
2011.10.20 2012.01.2... |
You can use a combination of the find, grep and awk commands to get the desired result. Below is a one-liner which will print the file that has the maximum temperature recorded.
find . -mindepth 3 -exec echo -n "{} " \; -exec grep "PROCESSOR_ZONE" {} \; |
awk '{
split($4,val,"/");
gsub("C","",val[1]);
if (ma... | How to find out the biggest number in many documents that contains different numbers |
1,575,676,821,000 |
I have a file of about 13 million lines like:
Lat Long air_temp sst wind_speed wave_height wave_period
65.3 7.3 4.3 8.8 7.7 4 8
61.6 1.3 -9.99 8.8 9.8 4 7
61.2 1.1 -9.99 8.8 7.7 3 7
61.1 1 -9.99 8.8 8.7 3.5 7
61 1.7 -9.99 8.8 10.8 4 7
60.6 1.7 -9.99 8.8 8.2 4 10
60.6 ... |
This can be done with Python like:
Code:
#!/usr/bin/python
import re
import sys
SPACES = re.compile('\s+')
data_by_lat_long = {}
with open(sys.argv[1]) as f:
# get and print header
line = next(f)
print(line.strip())
for line in f:
data = SPACES.split(line.strip())
data_by_lat_long.s... | How can I calculate the average of multiple columns with the same value for the first two columns? |
1,575,676,821,000 |
I am trying to generate a comma-separated, unordered list of ints between 1 and 10. I have tried the following, but it results in an ordered list:
seq -s "," 10 | shuf
|
You can use paste -s to join lines:
shuf -i1-10 | paste -sd, -
This uses -i option of shuf to specify a range of positive integers.
The output of seq can be piped to shuf:
seq 10 | shuf | paste -sd, -
Or -e to shuffle arguments:
shuf -e {1..10} | paste -sd, -
| How to generate a comma separated list of random ints |
1,575,676,821,000 |
Alright, it seems I cannot find a way to do what I need.
Let's say I have a text file A like this
(freqBiasL2[27])
(SatBiasL1[27])
(defSatBiasL2_L1[27])
(defSatBiasSlope[27])
(defSatBiasSigma[27])
(freqBiasL2[28])
(SatBiasL1[28])
(defSatBiasL2_L1[28])
(defSatBiasSlope[28])
(defSatBiasSi... |
Use perl:
perl -pe 's/(?<=\[)(\d+)(?=\])/$1+1/ge' prova.txt
Explanation:
-p means loop over every line and print the result after every line
-e defines the expression to execute on every line
s/from/to/ does a simple substition
s/(\d+)/$1+1/ge matches one or more digits, captures it into $1, and then the e modifier ... | Increment index in file |
1,575,676,821,000 |
I need a command that generates random numbers with 11 digits. How can this be done?
|
Try this:
od -An -N8 -d /dev/random | sed -e 's| ||g' -e 's|\(.\{11\}\).*|\1|'
| Generates random numbers with 11 digits |
1,575,676,821,000 |
I would like to convert a float to a ratio or fraction.
Do we have the option to convert a float 1.778 to ratio as 16:9 or 16/9 in bash, in a fashion similar to Python's fractions module (Fraction(1.778).limit_denominator(100)).
|
Pedantic or not, if our man is only looking at 3 decimals of precision....
Breaking out the good old awk hammer for the equally good old-fashioned lowest denominator: rather than a high-falutin' algorithm, just find the lowest error and denominator
echo "1.778" | awk 'BEGIN{en=1; de=101; er=1}{
for (d=10... | How can I convert a float to ratio? |
1,575,676,821,000 |
I have the following CSV format. There are values from the whole month, but I've truncated the sample:
2415.02,2203.35,00:17,25:May:2017,
3465.02,2203.35,01:17,25:May:2017,
2465.02,2203.35,12:17,26:May:2017,
465.02,2203.35,13:17,26:May:2017,
245.02,2203.35,14:17,26:May:2017,
2465.02,2203.35,05:17,26:May:2017,
2865.02,2203.35,06:1... |
Check this out:
awk -F, '{date1[$4]+=$1;++date2[$4]}END{for (key in date1) print "Average of",key,"is",date1[key]/date2[key]}' file
Average of 27:May:2017 is 2677.57
Average of 26:May:2017 is 1410.02
Average of 25:May:2017 is 2940.02
Explanation:
-F, : Defines the delimiter. Alternatively this could be awk 'BEGIN{FS=","}...
1,575,676,821,000 |
Here's my tab-delimited file t.tsv:
$ cat t.tsv
2022/05/05 -258.03
2022/05/07 -18.10
2022/05/09 -10.74
2022/05/09 -132.60
2022/05/12 -18.56
2022/05/12 -20.20
2022/05/17 -11.00
2022/05/17 -112.91
2022/05/17 -51.43
2022/05/17 -64.78
2022/05/18 -13.96
2022/05/18 -13.96
2022/05/18 -7.51
2022/05/19 -17.08
202... |
I don't know of a sort implementation that understands \t or other such character representations, you need to use ANSI-C quoting instead:
sort --field-separator=$'\t' --key=1,1 --key=2,2n t.tsv
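As a quick sanity check of the ANSI-C-quoted tab separator, here is a minimal reproduction on two of the sample rows (using the equivalent short -t/-k options; the tab is built with printf so the demo also works in shells without $'\t'):

```shell
# a literal tab character, portable to plain sh
tab=$(printf '\t')
printf '2022/05/09\t-10.74\n2022/05/05\t-258.03\n2022/05/05\t-300.00\n' |
  sort -t"$tab" -k1,1 -k2,2n
# the 2022/05/05 rows sort together, with -300.00 before -258.03
```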
Also, according to this macOS man page, "The Apple man page for sort includes GNU long options for all the above, but these... | How do I use sort on multiple columns with different data types |
1,575,676,821,000 |
I have a German locale and need to sort US formatted numbers with commas as the thousands separator. Seems I don't override the locale properly?
sort --version
sort (GNU coreutils) 8.30
Example:
echo "-4.00\n40.00\n4,000.00"|LC_ALL=en_US.utf8 sort -h
-4.00
4,000.00
40.00
I actually don't expect it to change the orde... |
One possible explanation is that that en_US.utf8 locale is not available on your system.
You can use locale -a to get the list of available locales, locale -a | grep en_US for the list of US English ones.
If that locale was installed, LC_ALL=en_US.utf8 locale -k LC_NUMERIC would output something like:
decimal_point=".... | How to sort comma-thousand separated numbers while on other locale |
1,575,676,821,000 |
With the following example data, both columns are numeric, but the second has different numbers of digits.
2 9
1 1000
1 50
3 0
I want to sort based on both columns. Specifying them separately with the numeric flag, -n, produces the result I want.
sort -n -k1,1 -k2,2 num.data.txt
gives
1 50
1 1000
2 9
3 0
which is w... |
A -k1,2 key specification specifies one key that starts at the start of the first column (includes the leading blanks as the default column separator is the transition from a non-blank to a blank) and ends at the end of the second column.
It's important to realise it's only one key. If you need two keys, you need two... | Bash numeric sort gives different results when columns are selected simultaneously vs. together |
1,575,676,821,000 |
I am trying to write a script to change the following set of numbers
2.659980
3.256998
4.589778
2.120150
2.223365
2.325566
2.121112
3.020111
4.065112
0.221544
1.236665
1.395958
to the following form (essentially making a matrix out of a list of numbers which are separated by an empty line)
2.659980 2.223365 ... |
With *BSD's rs(1), assuming the input file is well-formed:
rs -C -t $( awk '/^$/ { print NR-1; exit }' file ) <file
| Rearranging list of numbers to make a matrix |
1,575,676,821,000 |
That should be easy, just use [[ "$var" =~ '^[1-9][0-9]*$' ]]. But I don't get the behavior I expect excepted with zsh. I don't control the machine where the script will be run, so portability along reasonable shells (Solaris legacy Bourne shell is not reasonable) is an issue. Here are some tests:
% zsh --version
zs... |
Testing if a string is a number
You don't need regular expressions for that. Use a case statement to match the string against wildcard patterns: they're less powerful than regex, but sufficient here. See Why does my regular expression work in X but not in Y? if you need a summary of how wildcard patterns (globs) diffe... | Testing if a string is a number |
1,575,676,821,000 |
Given this:
<p>Currencies fluctuate every day. The rate shown is effective for transactions submitted to Visa on <strong>February 5, 2017</strong>, with a bank foreign transaction fee of <st <span><strong>1</strong> Euro = <strong>1.079992</strong> United States Dolla <p>The 'currency... |
Using curl to fetch, lynx to parse, and awk to extract
Please don't parse XML/HTML with sed, grep, etc. HTML is context-free, but sed and friends are only regular.1
url='https://usa.visa.com/support/consumer/travel-support/exchange-rate-calculator.html/?fromCurr=USD&toCurr=EUR&fee=0&exchangedate=02/05/2017'
user_agen... | Need to extract a number from HTML |
1,575,676,821,000 |
I am trying to write a script to get a random, even hex number. I have found the the openssl command has a convenient option for creating random hex numbers. Unfortunately, I need it to be even and my script has a type casting error somewhere. Bash thinks that my newly generated hex number is a string, so when I try t... |
Using bash
To generate an even random number in hex:
$ printf '%x\n' $((2*$RANDOM))
d056
Or:
$ hexVal=$(printf '%x\n' $((2*$RANDOM)))
$ echo $hexVal
f58a
To limit the output to smaller numbers, use modulo, %:
$ printf '%x\n' $(( 2*$RANDOM % 256 ))
4a
Using openssl
If you really want to use a looping solution wi... | Proper type casting in a shell script for use with while loop and modulus |
1,575,676,821,000 |
I am running Ubuntu 14.04.1 LTS 64-bit with Bash 4.3.11(1)-release I have a program called harminv producing output as follows:
$ h5totxt hsli0.126.h5 | harminv -vt 0.1 -w 2-3 -a 0.9 -f 200
# harminv: 1902 inputs, dt = 0.1
frequency, decay constant, Q, amplitude, phase, error
# searching frequency range 0.31831 - 0.4... |
Using sed
This will print only lines that start with a positive number:
sed -n 's/^\([[:digit:]][^ ,]*\).*/\1/p'
Combined with one of your pipelines, it would look like:
h5totxt hsli0.126.h5 | harminv -vt 0.1 -w 2-3 -a 0.9 -f 200 | sed -n 's/^\([[:digit:]][^ ,]*\).*/\1/p'
How it works
-n
This tells sed not to print... | How to extract the positive numbers in the first column from an output as in the question? |
1,575,676,821,000 |
How can I use awk to convert a time value to a decimal value.
I have been using this command for the other way round (-> from):
awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.5
prints: 1:30
how would I calculate this value 1:30 back to the decimal value 1.5?
|
How about this?
awk -F: '{printf "%.1f", ($1*60+$2)/60}' <<< 1:30
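A round trip through both formulas (this answer's, then the asker's original one from the question) shows they are inverses:

```shell
# 1:30 -> 1.5 using this answer's conversion
dec=$(echo 1:30 | awk -F: '{printf "%.1f", ($1*60+$2)/60}')
# 1.5 -> 1:30 using the asker's original formula
back=$(echo "$dec" | awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}')
echo "$dec $back"   # prints: 1.5 1:30
```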
| Awk - convert time value to decimal value |
1,575,676,821,000 |
I have a text file (created by a script) which contains only numbers in a single line like "5 17 42 2 87 33". I want to check every number with 50 (example), and if any of those numbers is greater than 50, I want to run another shell script.
I'm using a vumeter program, my purpose is to run a sound recognition progra... |
Using awk:
awk '{ for(i = 1; i <= NF; i++) if($i > 50) { system("bash /other/shell/script.sh") } }' file.txt
Using a bash script:
#!/bin/bash
file=/path/to/file.txt
for num in $(<"$file"); do
if ((num>50)); then
bash /other/shell/script.sh
fi
done
This will loop through each number in file.txt and ... | Compare the numbers in text file, if meets the condition run a shell script |
1,575,676,821,000 |
Background:
(1) Here is a screen capture of a part of my ascii file (over 600Mb):
(1.1) Here is a part of the code:
0, 0, 0, 0, 0, 0, 0, 0, 3.043678e-05, 3.661498e-05, 2.070347e-05,
2.47175e-05, 1.49877e-05, 3.031176e-05, 2.12128e-05, 2.817522e-05,
1.802658e-05, 7.192285e-06, 8.467806e-06, 2.047874e-05, 9.... |
The following should work:
perl -pe 's/([0-9.e-]+)/$1 == 0 ? $1 : .001 + $1/ge' < input.txt > output.txt
-p process the file line by line
s/patern/replacement/ is a substitution.
[0-9.e-]+ matches one or more of the given characters, i.e. the numbers
() remembers each number in $1
/g applies the substitution global... | Add a number in a huge ASCII file |
1,575,676,821,000 |
Sample Input
file name
0.00 -1.0000 number1
0.00 -0.8000 number2
0.00 -0.6000 number3
0.00 -0.4000 number4
0.00 -0.2000 number5
0.00 0.0000 number6
0.00 0.2000 number7
0.00 0.4000 number8
0.00 0.6000 number9
0.00 0.8000 number10
0.00 1.0000 number11
0.02 -1.0000 number1... |
You tagged this with bash, but mentioned awk, so I hope it's okay to use that:
$ awk -vn=0 'NR == 1 {print; next}
$0 != "" { k = $1; a[n] = $2; b[n] = $3; n++ }
$0 == "" { for (i = 0; i < n ; i++) {
printf "%s %s %s\n", k, a[i], b[n-i-1]; }
... | How to reflect data points across their median line? |
1,575,676,821,000 |
I have the below script:
#!/bin/bash
# This shell script is to tabulate and search for SR
n=0 # Initial value for Sl.No
next_n=$[$n+1]
read -p "Enter your SR number : " SR
echo -e "$next_n\t$SR\t$(date)" >> /tmp/cases.txt
When I run the script for the first time, I will enter SR = 123.
The output will be:
1 123 ... |
You can read the value in the first column of the last line of the file like this:
#!/bin/bash
# This shell script is to tabulate and search for SR
next_n=$(($(tail -n1 /tmp/cases.txt 2>/dev/null | cut -f1) + 1))
read -p "Enter your SR number : " SR
echo -e "$next_n\t$SR\t$(date)" >> /tmp/cases.txt
cut -f1 selects th... | Increment a column every time a script is executed |
1,575,676,821,000 |
I have a data file abc.txt in the following format:
BALT 1
54.500 -161.070
3.95863757
0.01691576
BARM 2
-9.200 67.120
4.07529868
0.01951653
BKSR 3
43.830 142.520
4.08919819
0.00587340
I need to convert it in the format:
BALT 1
54.5000000 -161.070000
0.3958637E+01
0.1691576E-01
BARM 2
-9.20000000 67.12000... |
version 3 use an awk file such as
function tenth(x) {
u = x ; if ( u < 0 ) u = -x ;
b=10 ;
a=b-2 ;
if ( u >= 10 ) {
d=int(log(u)/log(10)) ;
a=b-d-1 ;
}
printf "%*.*f",b,a,x ;
}
length($1) == 4 { print ; next ;}
NF == 1 { d=int(log($1)/log(10)) ;if (d> -1) d++ ; printf " %.7fE%+03d\n",$1/(10^d),d ;}
NF... | Formatting from Decimal to Exponential |
1,657,123,964,000 |
[EDITED to reflect answers below]
I am looking for a way to create blocks of folders / directories from the command line or a script that will generate a top level folder titled "200000-209999" and then inside of that folder, sub-folders named thusly:
200000-200499
200500-200999
201000-201499
... etc ...
... etc ...
2... |
The seq utility is one way to generate numbers:
for start in $(seq 200000 500 209000); do mkdir "${start}"-"$((start + 499))"; done
The syntax is seq start increment end.
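A dry run of the same loop, printing the directory names instead of creating them, makes the start/increment/end arithmetic easy to inspect (shortened here to three ranges):

```shell
for start in $(seq 200000 500 201000); do
  printf '%s\n' "${start}-$((start + 499))"
done
# 200000-200499
# 200500-200999
# 201000-201499
```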
| Creating numerous ranges or blocks of folders/directories? |
1,657,123,964,000 |
we have this CLI syntax
hdp-select | grep hadoop-client
hadoop-client - 2.6.4.0-91
the final goal is to get the number as example:
2640
we capture the last number & remove the - and remove the .
so I did
hdp-select | grep hadoop-client | awk '{print $3}' | sed s'/-/ /g' | awk '{print $1}' | sed s'/\.//g'
2640
bu... |
With sed
hdp-select | sed '/^hadoop-client - /!d;s///;s/-.*//;s/\.//g'
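The sed script can be checked without hdp-select by feeding it the sample line directly; the empty s/// reuses the preceding address regex to strip the prefix:

```shell
echo 'hadoop-client - 2.6.4.0-91' |
  sed '/^hadoop-client - /!d;s///;s/-.*//;s/\.//g'
# prints: 2640
```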
| Parsing a Hadoop version string with Bash |
1,657,123,964,000 |
I have a file with following type of expression in every line "Age=22 years and Height=6 feet", I want to extract Age and height numbers only.
I have tried
grep -oP '(?<=Age=)[^years]+' $f | awk '{ printf "%d \n",$1; }
and get age correctly. How can I get both Age and height. When I try nested pattern match i get he... |
This is not doing what you think it does; it works only by accident:
[^years]+
It means, match any character except y, e, a, r and s at least once.
Also, instead of Look-behind assertion, I would use keep-out. It has the benefit that it can be of variable length, then you can easily match both Age and Height.
(Age|He... | Extracting numbers between two string patterns |
1,657,123,964,000 |
I am writing a parser, and have to do some fancy stuff. I am trying not to use python, but I might have to at this point.
Given an STDOUT that looks like this:
1
0
2
3
0
0
1
0
0
2
0
3
0
4
0
5
0
2
.
.
.
For 100,000 lines. What I need to do is add up every 5, like so:
1 - start
0 |
2 | - 6
3 |
0 - end
0 - start
1 |
0 ... |
cat numbers.txt | awk '{sum += $1; if (NR % 5 == 0) {print sum; sum=0}} END {if (NR % 5 != 0) print sum}'
sum starts as 0 in awk. Every fifth line it prints the current sum of numbers out, then resets the sum to zero and goes over the next five lines. The END at the end handles the edge case of the number of lines in... | Adding up every 5 lines of integers |
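A small stand-in stream makes the grouping visible: seq 1 12 gives two full groups of five (1..5 and 6..10) plus a two-line remainder handled by the END block:

```shell
seq 1 12 |
  awk '{sum += $1; if (NR % 5 == 0) {print sum; sum=0}} END {if (NR % 5 != 0) print sum}'
# 15
# 40
# 23
```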
1,657,123,964,000 |
Dear all I have a big data file lets say file.dat, it contains two columns
e.g file.dat (showing few rows)
0.0000 -23.4334
0.0289 -23.4760
0.0578 -23.5187
0.0867 -23.5616
0.1157 -23.6045
0.1446 -23.6473
0.1735 -23.6900
0.2024 -23.7324
0.2313 -23.7745
0.2602 -23.8162
... |
The issue in your code,
awk 'BEGIN{min=9}{for(i=2;i<=2;i++){min=(min<$i)?min:$i}print min;exit}' file.dat
... is that you immediately exit after processing the first line of input. Your middle block there need to be triggered for every line. Then, in an END block, you can print the values that you have found. You ... | how to extract maximum and minimum value from column 1 and column 2 |
1,657,123,964,000 |
I have this file content:
63 41,3,11,12
1 31,60,72,96
7 41,3,31,14,15,68,59,60
7 60,72,96
7 60
1 41,3,31,31,14,15,68,59,60
60 41,3,115,12,13,66,96
1 41,3,11,12,13,66,96
I need to grep the '7' before the '60' (where the '60' is not followed by '72,96').
|
Modified sample based on comments
$ cat ip.txt
7 60,72,96
7 60
3 601
2 60,72,962
5 60,3
43 60
3 52360
$ grep -oP '^\h*\K\d+(?=\h+60\h*$)' ip.txt
7
43
-oP print only matching portion, uses PCRE
^\h*\K ignore starting blank characters of line
\d+ the number to be printed
(?=\h+60\h*$) only if it is followe... | Difficult grep. How can I isolate this number? |
1,657,123,964,000 |
I need to find the max and min of each entry in my file:
#subset of my file
NEUTRON 20.900103
PION- 0.215176
PION- 22.716532
NEUTRON 8.043279
PION+ 1.374297
PION- 0.313350
PION+ 0.167848
How could I loop through the file and find the min and max for each Name when there are multiple entry... |
awk '{
count[$1]++
min[$1]=(!($1 in min) || $2<min[$1]) ? $2 : min[$1]
max[$1]=(!($1 in max) || $2>max[$1]) ? $2 : max[$1]
}
END {
print "Name","Count","Minimum","Maximum"
print "----","-----","-------","-------"
for(i in count) print i,count[i],min[i],max[i]
}' file | column -t
The logic of the minimum a... | How to calculate max and min for each unique entry |
1,657,123,964,000 |
I am doing some timezone calculations in bash. I'm getting some unexpected values when converting the timezone offset hour output to an integer to do some additional calculations.
Partial script:
offset=$(date +%z)
echo "$offset"
hours=$(( offset ))
echo "$hours"
Output
-0400
-256
Desired Output (I accidentally omit... |
Using sed and bc:
date +%z | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2 + \3\/60)/' | bc
This will give you 2.00 back in the timezone I'm in (+0200).
With strange/unusual timezones:
$ echo '+0245' | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2 + \3\/60)/' | bc
2.75
$ echo '-0245' | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2... | Convert timezone offset to integer |
1,657,123,964,000 |
Here I have a data which I would to plot with line using Gnuplot.
Using the code
pl 'Sphere_ISOTEST_data.txt' w p
I get below figure
but, using
pl 'Sphere_ISOTEST_data.txt' w l
I get the following:
Can anyone suggest as to how to sort the data such that I can plot w l and get only the circumference of the circle.
|
This could be solved by converting the Cartesian coordinates into polar coordinates and sorting by the angle.
We can compute the angle as atan2(y,x).
We may sort the original data using this computed number by applying a Schwartzian transform where the angle is used as the temporary sorting key:
awk -v OFS='\t' '{ pri... | Sort the data to plot a circle |
1,657,123,964,000 |
sed command to delete leading 91 if number is 12 digit
My file is
919876543210
917894561230
9194561230
Need output
9876543210
7894561230
9194561230
|
sed -e '/^91[0-9]\{10\}$/s/^91//' < input > output
(or use filename if you prefer)
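Running it on the sample data confirms that only the 12-digit numbers lose their 91 prefix, while the 10-digit one is left alone:

```shell
printf '919876543210\n917894561230\n9194561230\n' |
  sed -e '/^91[0-9]\{10\}$/s/^91//'
# 9876543210
# 7894561230
# 9194561230
```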
| sed command to delete leading 91 if number is 12 digit |
1,657,123,964,000 |
Consider the following foo.dat file with 11 columns:
893 1 754 946 193 96 96 293.164 293.164 109.115 70.8852
894 1 755 946 192 95 96 291.892 292.219 108.994 70.821
895 1 755 947 193 95 97 290.947 291.606 109.058 70.5709
896 1 75... |
I suspect you have a locale using a comma as decimal separator. This should fix that issue:
awk -F' ' '{printf "%-12s%-12s\n", $11, $9}' foo.dat | LC_ALL=C sort -g
| Sorting float number and awk |
1,657,123,964,000 |
I have a tabular file in which the first column has IDs and the second one has numeric values. I need to generate a file that contains only the line with the largest score for each ID.
So, I want to take this:
ES.001 2.33
ES.001 1.39
ES.001 119.55
ES.001 14.55
ES.073 0.35
ES.073 17.95
ES.140 ... |
Depending on the type of data, sorting may take a long time.
We can get the result without sorting (but using more memory) like this:
awk 'a[$1]<$2{a[$1]=$2}END{for(i in a){print(i,a[i])}}' infile
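On a subset of the sample data (piped through sort, since awk's "for (i in a)" iteration order is unspecified):

```shell
printf 'ES.001 2.33\nES.001 119.55\nES.001 14.55\nES.073 0.35\nES.073 17.95\n' |
  awk 'a[$1]<$2{a[$1]=$2}END{for(i in a){print(i,a[i])}}' | sort
# ES.001 119.55
# ES.073 17.95
```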
| Filtering the line with the largest value for a given ID |
1,657,123,964,000 |
In Unix, I am trying to find a command that would find the maximum value in Column3 and print the corresponding values from Column2 and Column1 (but not from Column3) in a new file.
Column1 Column2 Column3
A 1 25
B 2 6
C 3 2
D 4 ... |
I would use awk. Assuming that the data is formatted exactly as per your sample data, the following will produce the desired output:
awk -v MAX=0 '{ if(NR>1 && $3>MAX){WANT1=$1; WANT2=$2; MAX=$3}} END{print WANT1, WANT2}' infile > outfile
| Find the maximum value of Column3 and print the values of Column1 and 2 only |
1,657,123,964,000 |
im working on sun10 Solaris os,i have a process that returns table as by using this command dmh -q 12 the below:
*PROFILE PRIORITY COMM_TYPE QID # OF MSGS ATTRIBUTES/VALUES*
13 999 DC 24 3 32 1865
13 999 DC 94 1 32 1665
... |
Using awk:
dmh -q 12 | awk 'NR > 1 { sum += $5 } END {print sum}'
This will sum all the values in column 5 and then print the total.
To store this in a variable use command substitution:
var=$(dmh -q 12 | awk 'NR > 1 { sum += $5 } END {print sum}')
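Without a Solaris box to run dmh on, the awk part can still be exercised with a stand-in for the question's output (header row plus the first two data rows, whose column 5 values are 3 and 1):

```shell
printf 'PROFILE PRIORITY COMM_TYPE QID MSGS\n13 999 DC 24 3\n13 999 DC 94 1\n' |
  awk 'NR > 1 { sum += $5 } END {print sum}'
# prints: 4
```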
| get count with grep |
1,657,123,964,000 |
I have the following command:
find . -mtime -5 -type f -exec ls -ltr {} \; | awk '{print "cp -p "$9" "$7}'
the output is like:
cp -p ./18587_96xxdata.txt 10
cp -p ./16947_96xxdata.txt 8
cp -p ./32721_96xxdata.txt 9
cp -p ./32343_96xxdata.txt 9
cp -p ./32984_96xxdata.txt 10
But I want the last part of the output to b... |
See https://mywiki.wooledge.org/ParsingLs and https://mywiki.wooledge.org/Quotes and then do this instead:
$ find . -mtime -5 -type f -printf "cp -p '%p' '%Ad'\n"
cp -p './bbbb.csv' '09'
cp -p './cccc.csv' '10'
cp -p './out1.txt' '09'
cp -p './out2.txt' '05'
| How to always print an output with certain number of digits using AWK |
1,657,123,964,000 |
I want to sort my input file on the basis of the 2nd column in descending order. I have used the following command for this:
sort -k2,2nr input.txt > output.txt
However, after running the command i am getting the this output:
ENSG00000273451 2.46335345019054e-05
ENSG00000181374 1.05269640687115e-05
ENSG00000182150 ... |
sort -k2,2gr input.txt > output.txt
| How to sort the second column in descending order? [duplicate] |
1,657,123,964,000 |
I have two large files consisting mainly of numbers in matrix form, and I'd like to use diff (or similar command) to compare these files and determine which numbers are different.
Unfortunately, quite a lot of these numbers differ only by sign, and I'm not interested in those differences. I only care when two numbers... |
Given your shell provided "process substitution" (likes recent bashes), try
diff <(tr '-' ' ' <file1) <(tr '-' ' '<file2)
1,2c1,2
< 21 0.0081318 0.0000000 0.0000000 0.0000000 0.0138079
< 22 0.0000000 0.0000000 0.0000000 0.1156119 0.0000000
---
> 21 0.0081318 0.0000000 0.0000000 0.0000000... | How to ignore differences between negative signs of numbers in the diff command? |
1,657,123,964,000 |
I have a file file1 with the amount of times a user shows up in the files, something like this:
4 userC
2 userA
1 userB
and I have another file file2 with users and other info like:
userC, degree2
userA, degree1
userB, degree2
and I want an output where it shows the amount of times user shows up, for each degree:
5 ... |
Pure awk:
$ awk -F'[, ]' 'NR==FNR{n[$2]=$1;next}{m[$3]+=n[$1]}
END{for(i in m){print i " " m[i]}}' \
file1 file2
degree1 2
degree2 5
Or you can put it into a script like this:
#!/usr/bin/awk -f
BEGIN {
FS="[, ]"
}
{
if (NR == FNR) {
n[$2] = $1;
next;
} else {
m[$3] += n[$... | Summing by common strings in different files |
1,657,123,964,000 |
I want to process with a bash script a text file that looks like this:
ADD $05 $05 $05
SUBI $06 $06 00011
MUL $07 $07 $07
JSR 011101
taking the binary numbers (that are always longer than 4 bits) and converting them into their decimal representation.
For the previous example, this is the file I want ... |
There's probably a more elegant way using Perl's pack and unpack, but using a combination of string manipulation and oct:
$ perl -pe 's/\b[01]+\b/oct "0b" . $&/ge' file
ADD $05 $05 $05
SUBI $06 $06 3
MUL $07 $07 $07
JSR 29
See Converting Binary, Octal, and Hexadecimal Numbers
| Convert binary strings into decimal |
1,657,123,964,000 |
Hi I need to get the sum of each and every column in a file, needs to be flexible to as many columns as are in any given file
currently I use:
awk '{for (i=1;i<=NF;i++) sum[i]+=$i;}; END{for (i in sum) print sum[i];}'
This however, only gives me the sum of the first column, which i could obviously loop, but i would p... |
It does give you the sum of every column, but in one column (provided that the data is whitespace-separated):
$ cat data.in
1 2
3 4
5 6
$ awk '{ for (i=1;i<=NF;i++) sum[i]+=$i } END { for (i in sum) print sum[i] }' data.in
12
9
So it's a matter of not outputting a newline between each sum.
$ awk '{ for (i=1;i<=NF;... | Sum of each column in a file, needs to be flexible to as many columns as are in the file |
1,657,123,964,000 |
I have semicolon delimited data, where third column is decimal number:
A;B;1234.56
C;D;789.23
I need to make 2 changes to the format of the number:
remove numbers after the decimal point
add "thousand separator" to the number
so 1234.56 would become 1'234
I was able to do the first part, but don't know how to add t... |
Using the LC_NUMERIC='en_US.UTF-8' locale (that has supporting the comma as the thousands separator formatting for the numbers) and using the sprintf's %\047d (%\047d is just another type of writing single quote %'d using the Octal Escape Sequences, or you can write it %'\''d too) format modifier to force convert the ... | Reformat number in specific column |
1,657,123,964,000 |
I have a below input file which I need to split into multiple files based on the date in 3rd column. Basically all the same dated transactions should be splitted into particular dated file. Post splitting I need to create a header and Trailer.
Trailer should contain the count of the records and sum of the amounts in 4... |
Without your big number issue taken into account, I would write the awk program something like this:
BEGIN {
FS = "\\|~\\^"
OFS= "|~^"
}
$1 == "H" {
header = $0
}
$1 == "R" {
name = $3
sub("T.*", "", name)
sum[name] += $4
cnt[name] += 1
if (cnt[name] ... | Sum of large numbers and print the result with all decimal points for the stated question when using awk arrays |
1,657,123,964,000 |
I have the following tcpdump stream:
Current:
07:36:03.848461 IP 172.17.3.41.33101 > 172.17.3.43.17408: UDP, length 44
07:36:03.848463 IP 172.17.3.42.33101 > 172.17.3.43.17409: UDP, length 44
07:36:03.848467 IP SYSTEM-A.33101 > 172.17.3.43.17418: UDP, length 45
07:36:03.848467 IP SYSTEM-B.33101 > 172.17.3.43.17419: UD... |
If field numbers are constant - as in your question fields 3 and 5 - try
awk '
function CHX(FLD) {n = split ($FLD, T, ".")
sub (T[n] "$", sprintf ("%X", T[n]), $FLD)
}
{CHX(3)
CHX(5)
}
1
' file
07:36:03.848461 IP 172.17.3.41.814D > 172.17.3.43.4400 UDP, length 44... | sed/awk: replace numbers in a line after last occurance of '.' |
1,657,123,964,000 |
I would like to write a shell script program that gives the sum of even and odd numbers in a given file's odd and even lines.
I would like to use:
sed -n 2~2p
and
sed -n 1~2p
but I am not even sure where and how to start solving it.
Could you please guide me in the right direction?
Input file example:
20 15 14 17
20... |
using Miller (https://github.com/johnkerl/miller) and running
mlr --n2c put 'for (key, value in $*) {
if ((value % 2 ==0) && (NR % 2 ==0)) {
$even +=value;
} elif ((value % 2 !=0) && (NR % 2 !=0)) {
$odd +=value;
}
}
' then unsparsify then stats1 -a sum -f odd,even input.csv
you will have
od... | Sum of even and odd numbers of odd and even lines |
1,657,123,964,000 |
I have the following file, beginning with
male 9
male 11
male 9
male 1
female 4
female 13
male 14
If I use
sort -u -k1,1 -k2,2n
this returns
female 13
female 4
male 1
male 11
male 14
male 9
male 9
How can I make the single-digit numbers show as 01, 02, etc. so they will sort correctly?
Update:
The commenter who tol... |
You haven't defined "sort correctly" anywhere, so I'm going to assume that you want to group by the first column and order by ascending numerical value of the second, with duplicate values removed. This solution isn't what you've actually asked for, but it seems to be what you want.
sort -k1,1 -k2,2n -u datafile
femal... | Replace single field numbers with double field numbers (1->01) |
1,657,123,964,000 |
Trying to use this
#!/bin/bash
SIZE=$(redis-cli info | grep used_memory: | awk -F':' '{print $2}')
MAX=19000000000
if [ "$SIZE" -gt "$MAX" ]; then
echo 123
fi
But always getting: "Ganzzahliger Ausdruck erwartet"
When I echo SIZE I get a value like 2384934 - I dont have to / can convert a value or can / do I?
O... |
The numbers you use seems to be very big for bash. You can try something like:
#!/bin/bash
SIZE=$(redis-cli info | awk -F':' '$1=="used_memory" {print int($2/1000)}')
MAX=19000000
if [ "$SIZE" -gt "$MAX" ]; then
echo 123
fi
| Error when compare big numbers |
1,657,123,964,000 |
I extracting a column from a file with different values, some of them are 11 character to 13, but whenever the value is 11 I need to add a 0 in front.
awk -F, '{print $1 }' $FILE | \
awk '{printf("%04d%s\n", NR, $0)}' | \
awk '{printf("%-12s\n", $0) }'
82544990078
82544990757
899188001738
9337402002723
93374020026... |
You can use awk for this:
$ awk 'length() == 11 { $0 = "0" $0 } 1' < input
082544990078
082544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
089000118359
| Add 0 when ever the value is 12 character |
1,657,123,964,000 |
Trying to re-format a column, but I need to add decimals to the price and need to be align to the right. It also needs leading white spaces at end in order to work with other column.
awk -F, '{print $6}' $FILE | awk '{printf("%-7s\n", $0) }' > $TEMP/unit_price
current output:
99
121.5
108
67.5
This is how I need i... |
A single awk is enough for all your processing.
%s basically converts your number to a string; use another format conversion like %f for a float in this case:
awk -F ',' '{printf("%3.2f\n", $6)}' ${FILE} > ${TEMP}/unit_price
| Add the decimals and align to the right |
1,657,123,964,000 |
I am working on a project to calculate my overtime at work with a shell script. I have two inputs and want to find if my number is over 800 = 8 hours;
if it is bigger, then it has to print out the result to my text file.
It has to print out my difference.
if [ $var1 -gt 800 ]; then
`expr $var1-800`
echo Overtime: "... |
Try this using modern bash (don't use backticks or expr) :
if ((var1 > 800)); then
    overtime=$((var1 - 800)) # variable assignment with the arithmetic
echo "Overtime: $overtime"
fi
Or simply :
if ((var1 > 800)); then
overtime="Overtime: $((var1 - 800))" # concatenation of string + arithmetic
fi
Check ba... | Echo calculation to text file [duplicate] |
1,657,123,964,000 |
i want to read the values in the third column and find the difference from the other value in the same column.
i tried this
#!/usr/bin/awk -f
NR==1 {prev=$3;next; }
dif=prev - $3;
{printf "%x",dif}
{print $3, dif > "diff"}
But since the values are hexadecimal im getting a zero as the difference.
|
The trick is that, on input, awk does not automatically interpret hex numbers. You have to ask it to do that explicitly using the strtonum function. Thus, when you need the number in your code, replace $3 with strtonum($3).
Example
Let's take this as the test file:
$ cat file
0x7f7488c4e6d7: R 0x7f7488b169ce
0x7f748... | read file containing Hex values and process it |
1,490,442,686,000 |
So I made a decimal to binary converter but currently it doesn't chop off the zeros in the beginning. If I entered 64 for $1 it would start with 13 zeros which is rather unsightly but I don't know how to chop them off. Any help?
#!/bin/bash
cat /dev/null > ~/Documents/.tobinary
touch ~/Documents/.tobinary
toBin=$1
c... |
Is there a specific reason for which you used this particular algorithm?
I'd rather construct the binary in shell variables than in a file. In such case you can strip the leading zeroes by adding a zero to a number, such as
expr 00001111 + 0
1111
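If spawning expr feels heavy, a pure-shell sketch with POSIX parameter expansion strips the zeros the same way (the :-0 fallback keeps an all-zero string from collapsing to an empty result):

```shell
v=00001111
trimmed=${v#"${v%%[!0]*}"}   # remove the run of leading zeros
echo "${trimmed:-0}"
# prints: 1111
```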
Also, if you have to use a file, I would suggest using /tmp instead o... | How can I remove x number of zeros from the beginning of a file? |
1,490,442,686,000 |
How can I make a POSIX script that will print all the prime numbers from 1 to 1000?
|
factor {2..1000} | awk 'NF==2{print $2}'
| Simple bash script to print prime numbers from 1 to 1000 [closed] |
1,490,442,686,000 |
I need to extract some values from a file in bash, on my CentOS system. In myfile.txt I have a list of objects called Info_region in which each object is identified with a code (eg. BARD1_region_005 or BIRC2_region_002 etc.) Moreover there are some others columns in which are reported some numerical variable. The sam... |
Assuming the data is in the file called file and that it is sorted on the first column, the GNU datamash utility could do this in one go on the data file alone:
datamash -H -W -g 1 max 2-7 <file
This instructs the utility to use whitespace separated columns (-W; remove this if your column are truly tab-delimited), th... | Extract maximum values for each objects from a file [duplicate] |
1,490,442,686,000 |
I'm trying to build a script to trigger an action/alert for a linux appliance when load average reaches a specific threshold.
Script looks like this:
!/bin/bash
load=`echo $(cat /proc/loadavg | awk '{print $2}')`
if [ "$load" -gt 5 ]; then
echo "foo alert!"
fi
echo "System Load $(cat /proc/loadavg)"
Cred... |
set -- $(cat /proc/loadavg)
load=${2/./}
if [ "$load" -ge 500 ]; then
echo alert
fi
Get the load average from /proc, and set the positional parameters based on those fields. Grab the 2nd field and strip out the period. If that (now numeric) value is greater than or equal to 500, alert. This assumes (the current) be... | Error comparing decimal point integers in bash script |
1,490,442,686,000 |
In a shell script, I'm processing, some addition process will print an output. If it is a single-digit one, then it has to add zero as a prefix.
Here is my current script:
c_year=2020
for i in {01..11}
do
n_year=2020
echo 'Next Year:'$n_year
if [[ $i == 11 ]]
t... |
I think this is a possibility, simplifying:
for i in {1..11}; do
n_month=$(($i + 1))
[[ $n_month =~ ^[0-9]$ ]] && n_month="0$n_month"
echo "$n_month"
done
Output
02
03
04
05
06
07
08
09
10
11
12
| shell script set the default numbers as double digit zero as prefix |
1,490,442,686,000 |
I have a file whose contents look like this.
2,0,-1.8433679676403103,0.001474487996447893
3,1,0.873903837905657,0.6927701848899038
1,1,-1.700947426133768,1.5546514434152598
CSV with four columns whose third and last columns are floats.
I want to get rid of the whole part of the numbers (including the sign) and keep o... |
Using awk:
awk -F'[,.]' '{print $1","$2","substr($4,1,3)","substr($6,1,3)}' file
Where -F is used to set FS to a character class matching either comma , or dot . (quoted so the shell does not treat [,.] as a glob)
substr will print the 3 digits required after the dot.
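On the first sample line, splitting on both , and . yields six fields, so $4 and $6 are the decimal parts (the sign is dropped along with the whole part, as required):

```shell
echo '2,0,-1.8433679676403103,0.001474487996447893' |
  awk -F'[,.]' '{print $1","$2","substr($4,1,3)","substr($6,1,3)}'
# prints: 2,0,843,001
```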
| Keep only a few digits of decimal part |
1,490,442,686,000 |
Can someone show me a way to find the largest positive number using awk,sed or some sort of unix shell program? I'm using Linux RHEL with KSH. The input may have 8 columns worth of data with N/A as one of the possible values.
SAMPLE INPUT
1252.2 1251.8 N/A N/A
-31.9 -33.2 ... |
A bash-style solution using tr, sort and head
echo $(< file) | tr ' ' '\n' | sort -rn | head -1
echo $(< file) reads the file and echoes the result. But as we don't have quotes around it (echo "$(< file)" does the same as cat file) the content is one long string with newlines removed and whitespace squeezed
| tr ' '... | Find the largest positive number in my given data? |
1,490,442,686,000 |
I have a csv file with many columns. I have computed average of a column using the command
awk -F',' '{sum+=$3} END {print sum/NR}' cpu.csv
But it does include the first row that has text fields like Serial number, Name, value etc. I want to exclude this first row while doing the averaging.
Any ideas on how to ach... |
Exclude the first row:
awk -F',' 'NR > 1 {sum+=$3} END {print sum / (NR - 1)}' cpu.csv
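Note that the divisor is NR - 1 because the header line still counts toward NR in the END block. A quick check with a made-up header and three values averaging 20:

```shell
printf 'Serial,Name,value\na,x,10\nb,y,20\nc,z,30\n' |
  awk -F',' 'NR > 1 {sum+=$3} END {print sum / (NR - 1)}'
# prints: 20
```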
| Finding average excluding the first row |