1,391,717,026,000
I have two files, with approximately 12900 and 4400 entries respectively, that I want to join. The files contain location information for all land-based weather observing stations around the globe. The larger file is updated biweekly, and the smaller one about once a year. The original files can be found at http://www.wmo.int/pages/prog/www/ois/volume-a/vola-home.htm and http://weather.rap.ucar.edu/surface/stations.txt. The files I have were already manipulated by me with a mix of awk, sed, and bash scripts. I use the files to visualize data with the GEMPAK package, which is freely available from Unidata. The larger file works with GEMPAK, but not to its full capability; for that, a join is needed.

File 1 contains location information for weather observing stations, where the first 6 digits are the unique station identifier. The parameters (station number, station name, country code, latitude, longitude, and station elevation) are defined only by their position in the line, i.e. no tabs:

    060090 AKRABERG FYR DN 6138 -666 101
    060100 VAGA FLOGHAVN DN 6205 -728 88
    060110 TORSHAVN DN 6201 -675 55
    060120 KIRKJA DN 6231 -631 55
    060130 KLAKSVIK HELIPORT DN 6221 -656 75
    060160 HORNS REV A DN 5550 786 21
    060170 HORNS REV B DN 5558 761 10
    060190 SILSTRUP DN 5691 863 0
    060210 HANSTHOLM DN 5711 858 0
    060220 TYRA OEST DN 5571 480 43
    060240 THISTED LUFTHAVN DN 5706 870 8
    060290 GROENLANDSHAVNEN DN 5703 1005 0
    060300 FLYVESTATION AALBORG DN 5708 985 13
    060310 TYLSTRUP DN 5718 995 0
    060320 STENHOEJ DN 5736 1033 56
    060330 HIRTSHALS DN 5758 995 0
    060340 SINDAL FLYVEPLADS DN 5750 1021 28

File 2 contains the unique identifier from File 1 and a second, 4-character identifier (an ICAO locator):

    060100 EKVG
    060220 EKGF
    060240 EKTS
    060300 EKYT
    060340 EKSN
    060480 EKHS
    060540 EKHO
    060600 EKKA
    060620 EKSV
    060660 EKVJ
    060700 EKAH
    060780 EKAT

I want to join the two files, so that the resulting file will have the 4-character identifier in the first 4 positions of the line, i.e. the identifier should replace the 4 spaces:

    060090 AKRABERG FYR DN 6138 -666 101
    EKVG 060100 VAGA FLOGHAVN DN 6205 -728 88
    060110 TORSHAVN DN 6201 -675 55
    060120 KIRKJA DN 6231 -631 55
    060130 KLAKSVIK HELIPORT DN 6221 -656 75
    060160 HORNS REV A DN 5550 786 21
    060170 HORNS REV B DN 5558 761 10
    060190 SILSTRUP DN 5691 863 0
    060210 HANSTHOLM DN 5711 858 0
    EKGF 060220 TYRA OEST DN 5571 480 43
    EKTS 060240 THISTED LUFTHAVN DN 5706 870 8
    060290 GROENLANDSHAVNEN DN 5703 1005 0
    EKYT 060300 FLYVESTATION AALBORG DN 5708 985 13
    060310 TYLSTRUP DN 5718 995 0
    060320 STENHOEJ DN 5736 1033 56
    060330 HIRTSHALS DN 5758 995 0
    EKSN 060340 SINDAL FLYVEPLADS DN 5750 1021 28

Is it possible to accomplish this task with some bash and/or awk script?
awk 'BEGIN { while(getline < "file2" ) { codes[$1] = $2 } } { printf "%4s%s\n", codes[$1], substr($0, 5) }' file1
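A runnable sketch of this lookup-table approach, using invented toy data rather than the real WMO files. It assumes each file1 line reserves its first few columns, left blank, for the ICAO code, which is what "replace the 4 spaces" and the answer's `substr($0, 5)` imply:

```shell
# Work in a scratch directory so the sample files don't clobber anything.
cd "$(mktemp -d)"

# Toy stand-ins for the two files (illustrative only). Each file1 line
# starts with blank columns reserved for the ICAO code.
printf '%s\n' \
  '     060100 VAGA FLOGHAVN DN 6205 -728 88' \
  '     060110 TORSHAVN DN 6201 -675 55' > file1
printf '%s\n' '060100 EKVG' '060220 EKGF' > file2

# Load file2 into a lookup table, then overwrite the first 4 columns of
# each file1 line with the ICAO code (blank-padded when there is no match).
awk 'BEGIN { while ((getline < "file2") > 0) codes[$1] = $2 }
     { printf "%4s%s\n", codes[$1], substr($0, 5) }' file1
```

Unmatched stations keep their leading blanks, so the fixed-width layout survives.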
Joining two files with unique identifier
I have two files, each having a primary key value as its first field and corresponding value(s) as the remaining fields. Some primary key values are missing in one file but present in the other, and vice versa:

    $ cat jointest1.txt jointest2.txt
    a 1
    b 2
    d 4
    e 5
    a 10
    b 11
    c 12
    d 13

I'd expect an output that merges those files according to the primary key, either substituting the missing values or not, like:

    $ joinmerge jointest1.txt jointest2.txt
    a 1 10
    b 2 11
    c - 12
    d 4 13
    e 5 -

The ability to replace missing values with dashes or something is optional. I tried join, but it says my files are not properly sorted:

    $ join jointest1.txt jointest2.txt
    a 1 10
    b 2 11
    join: file 2 is not in sorted order
    d 4 13

What command should I use instead?
What implementation of join are you using? With join (GNU coreutils) 5.97, I can use [0 1021] ~/temp/jointest % join -a1 -a2 jointest1.txt jointest2.txt a 1 10 b 2 11 c 12 d 4 13 e 5 and the "plain" join works, too (but omits c and e). There is an -e option which supposedly lets you choose the marker for empty fields, but it appears to be broken in my version and only fills case e, not case c.
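A self-contained way to reproduce this on a current system (file names follow the question):

```shell
# Scratch directory so the sample files are disposable.
cd "$(mktemp -d)"

printf '%s\n' 'a 1' 'b 2' 'd 4' 'e 5' > jointest1.txt
printf '%s\n' 'a 10' 'b 11' 'c 12' 'd 13' > jointest2.txt

# -a1 -a2 also prints unpairable lines from each file (a full outer
# join); unmatched keys simply produce lines with fewer fields.
join -a1 -a2 jointest1.txt jointest2.txt
```

Note that `c 12` and `e 5` appear but without any placeholder marking which column is missing; that is what `-e` (with an explicit `-o` format) addresses.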
Join, filling in missing key values
I want to write a shell script that takes two files A and B and produces a result like this:

File A:

    user_a tel_a addr_a
    user_b tel_b addr_b

File B:

    process_1 user_a
    process_2 user_a
    process_3 user_b

And the result:

    user_a process_1 tel_a addr_a
    user_a process_2 tel_a addr_a
    user_b process_3 tel_b addr_b

How can I do this? With awk, or something else?
join ...

    join -1 2 -2 1 FileB FileA

Output:

    user_a process_1 tel_a addr_a
    user_a process_2 tel_a addr_a
    user_b process_3 tel_b addr_b

The input files need to be sorted by the key field ... Your example files are already sorted, so there was no need, but otherwise you could incorporate the sort as follows:

    join -1 2 -2 1 <(sort -k2 FileB) <(sort FileA)
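A runnable sketch with the question's data, showing how the join key is printed first, followed by the remaining fields of each line:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'user_a tel_a addr_a' 'user_b tel_b addr_b' > FileA
printf '%s\n' 'process_1 user_a' 'process_2 user_a' 'process_3 user_b' > FileB

# Join field 2 of FileB against field 1 of FileA. The key comes first
# in the output, then the rest of the FileB line, then the rest of FileA.
join -1 2 -2 1 FileB FileA
```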
A shell script for joining two files
I'm trying to match and merge two sets of sorted data, one set per file. Each file contains two columns: the key field and the associated value. The resulting output should contain three columns: the key field, the value (if any) from the first file, and the value (if any) from the second file. I need to include lines of data that are not matched. First file "john" apple,green cherry,red orange,orange Second file "jane" apple,red banana,yellow cherry,yellow kiwi,green Desired result apple,green,red banana,,yellow cherry,red,yellow kiwi,,green orange,orange, I thought initially that this was a trivial job for join LC_ALL=C join -j1 -a1 -a2 -t',' john jane But the result of the -a1 -a2 puts the unmatched value always in the second column: apple,green,red banana,yellow cherry,red,yellow kiwi,green orange,orange I need to be able to see from which source file the unmatched value originates, ideally by having those values in the appropriate second or third column of the result file, but I cannot work out a simple way of achieving this without descending into awk ... getline() type constructs. Any suggestions, please?
You want -o auto: join -t, -j 1 -a 1 -a 2 -o auto john jane From man join: -o FORMAT obey FORMAT while constructing output line      ︙ If FORMAT is the keyword 'auto', then the first line of each file determines the number of fields output for each line. Or better explained from GNU Coreutils: join invocation (follow the link into General options in join): ‘-o auto’ If the keyword ‘auto’ is specified, infer the output format from the first line in each file. This is the same as the default output format but also ensures the same number of fields are output for each line. Missing fields are replaced with the -e option and extra fields are discarded. % cat john apple,green cherry,red orange,orange % cat jane apple,red banana,yellow cherry,yellow kiwi,green % join -t, -j 1 -a 1 -a 2 -o auto john jane apple,green,red banana,,yellow cherry,red,yellow kiwi,,green orange,orange,
Join two files each with two columns, including non-matching lines
How can I take two files A and B and output a result like this?

File A:

    001 Apple, CA
    020 Banana, CN
    023 Apple, LA
    045 Orange, TT
    101 Orange, OS
    200 Kiwi, AA

File B:

    01-Dec-2013 01.664 001 AAA CAC 1083
    01-Dec-2013 01.664 020 AAA CAC 0513
    01-Dec-2013 01.668 023 AAA CAC 1091
    01-Dec-2013 01.668 101 AAA CAC 0183
    01-Dec-2013 01.674 200 AAA CAC 0918
    01-Dec-2013 01.674 045 AAA CAC 0918
    01-Dec-2013 01.664 001 AAA CAC 2573
    01-Dec-2013 01.668 101 AAA CAC 1091
    01-Dec-2013 01.668 020 AAA CAC 6571
    01-Dec-2013 01.668 023 AAA CAC 2148
    01-Dec-2013 01.674 200 AAA CAC 0918
    01-Dec-2013 01.668 045 AAA CAC 5135

Result:

    01-Dec-2013 01.664 001 AAA CAC 1083 Apple, CA
    01-Dec-2013 01.664 020 AAA CAC 0513 Banana, CN
    01-Dec-2013 01.668 023 AAA CAC 1091 Apple, LA
    01-Dec-2013 01.668 101 AAA CAC 0183 Orange, OS
    01-Dec-2013 01.674 200 AAA CAC 0918 Kiwi, AA
    01-Dec-2013 01.674 045 AAA CAC 0918 Orange, TT
    01-Dec-2013 01.664 001 AAA CAC 2573 Apple, CA
    01-Dec-2013 01.668 101 AAA CAC 1091 Orange, OS
    01-Dec-2013 01.668 020 AAA CAC 6571 Banana, CN
    01-Dec-2013 01.668 023 AAA CAC 2148 Apple, LA
    01-Dec-2013 01.674 200 AAA CAC 0918 Kiwi, AA
    01-Dec-2013 01.668 045 AAA CAC 5135 Orange, TT

(The number in file A should match the middle number in file B.) Is there any way of doing this?
A simple solution with awk: awk -v FILE_A="file-A" -v OFS="\t" 'BEGIN { while ( ( getline < FILE_A ) > 0 ) { VAL = $0 ; sub( /^[^ ]+ /, "", VAL ) ; DICT[ $1 ] = VAL } } { print $0, DICT[ $3 ] }' file-B Here is a commented version: awk -v FILE_A="file-A" -v OFS="\t" ' BEGIN { # Loop on the content of file-A # to put the values in a table while ( ( getline < FILE_A ) > 0 ){ # Remove the index from the value VAL = $0 sub( /^[^ ]+ /, "", VAL ) # Fill the table DICT[ $1 ] = VAL } } { # Print the line followed by the # corresponding value print $0, DICT[ $3 ] }' file-B
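A reproducible run of this dictionary technique, with a two-line subset of the question's data:

```shell
cd "$(mktemp -d)"

printf '%s\n' '001 Apple, CA' '020 Banana, CN' > file-A
printf '%s\n' \
  '01-Dec-2013 01.664 001 AAA CAC 1083' \
  '01-Dec-2013 01.664 020 AAA CAC 0513' > file-B

# Preload file-A as a dictionary keyed on its first field, then append
# the looked-up value (tab-separated, via OFS) to every file-B line.
awk -v FILE_A="file-A" -v OFS="\t" '
  BEGIN {
    while ((getline < FILE_A) > 0) {
      VAL = $0
      sub(/^[^ ]+ /, "", VAL)   # drop the key, keep the value
      DICT[$1] = VAL
    }
  }
  { print $0, DICT[$3] }' file-B
```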
Join two files, matching on a column, with repetitions
I have two files. File A contains sequence numbers; file B has many columns, the first of which is a sequence number. I want to get a file with all the lines of B whose sequence numbers appear in A. How can I achieve this? Thanks.

File A:

    1
    3
    8
    9
    20

File B:

    1 kfjk 3243424
    2 fkdkf 23543592
    3 iefjk 21493402
    7 dlafdl 23435231
    8 kfkdlkf 309834
You want join (1), I guess: For each pair of input lines with identical join fields, write a line to standard output. The default join field is the first, delimited by whitespace. When FILE1 or FILE2 (not both) is -, read standard input. [0 1075 12:50:10] ~/temp/sx % join A B 1 kfjk 3243424 3 iefjk 21493402 8 kfkdlkf 309834 join: file 1 is not in sorted order OK, so apparently you need to combine this with sort (1) to sort by alpha value (not numerical value, so 20 < 3) join <(sort A) <(sort B) works for me, but that looks weird and might be a zsh extension. There's no harm in doing sort A > A.tmp; sort B > B.tmp; join A.tmp B.tmp (As usual, check the man pages for pitfalls.)
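A portable version of the same steps, using explicit temporary files instead of process substitution (so it works in plain sh too):

```shell
cd "$(mktemp -d)"

printf '%s\n' 1 3 8 9 20 > A
printf '%s\n' '1 kfjk 3243424' '2 fkdkf 23543592' '3 iefjk 21493402' \
  '7 dlafdl 23435231' '8 kfkdlkf 309834' > B

# join needs lexically sorted input (so "20" sorts before "3").
sort A > A.tmp
sort B > B.tmp
join A.tmp B.tmp
```

Only the lines of B whose first field appears in A survive the join.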
intersection of two files according to the first column
I have long text files with space-delimited fields:

    cat file1.txt
    Id leng sal mon
    25671 34343 56565 5565
    44888 56565 45554 6868
    23343 23423 26226 6224
    77765 88688 87464 6848
    66776 23343 63463 4534

    cat file2.txt
    Id number
    25671 34343
    76767 34234
    23343 23423
    66776 23343

    cat output.txt
    Id leng sal mon
    44888 56565 45554 6868
    77765 88688 87464 6848

file1.txt has four columns and file2.txt has two. I want to compare the first column ($1) of both files and output the lines of file1.txt whose first column does not appear in file2.txt. I have tried

    join -v1 file1.txt file2.txt > output.txt

but the output has some errors. Any awk/sed command is appreciated.
In order to use join, you need to make sure that FILE1 and FILE2 are sorted on the join fields. The following command should do the trick: join -v1 <(sort file1.txt) <(sort file2.txt)
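A self-contained run with the question's data, sorting to temporary files so it works in any POSIX shell:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'Id leng sal mon' '25671 34343 56565 5565' \
  '44888 56565 45554 6868' '23343 23423 26226 6224' \
  '77765 88688 87464 6848' '66776 23343 63463 4534' > file1.txt
printf '%s\n' 'Id number' '25671 34343' '76767 34234' \
  '23343 23423' '66776 23343' > file2.txt

# -v1 prints only the file1 lines whose key never appears in file2;
# both inputs must first be sorted on the join field.
sort file1.txt > s1
sort file2.txt > s2
join -v1 s1 s2
```

The two unmatched data lines (keys 44888 and 77765) come out; the header pairs with file2's header and is dropped, so you may want to handle it separately.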
Compare two file columns
I have this file1.txt:

    deiauk 9
    kespaul 8
    luktol 7
    titkur 6

and another file2.txt:

    kespaul b
    deiauk a

I want to merge both files into one, keyed on the first value, so my result should be:

    deiauk 9 a
    kespaul 8 b
    luktol 7
    titkur 6
    sort file2.txt | join -a 1 file1.txt -

join requires sorted input. The - specifies that standard input will be used for the second file, which allows the output of sort to be used as input. The -a 1 specifies that non-matching lines from the first file will be included in the output.
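A runnable version with the question's files (file1.txt is already sorted, so only file2.txt needs sorting on the fly):

```shell
cd "$(mktemp -d)"

printf '%s\n' 'deiauk 9' 'kespaul 8' 'luktol 7' 'titkur 6' > file1.txt
printf '%s\n' 'kespaul b' 'deiauk a' > file2.txt

# Sort file2.txt and feed it to join on stdin ("-");
# -a 1 keeps file1 lines that have no match.
sort file2.txt | join -a 1 file1.txt -
```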
Merge two files
I was wondering if anyone would know the complexity of the Unix join command? I had assumed that it might be linear since both files need to be sorted. Someone insisted to me that it was logarithmic which I doubt. Or perhaps it depends on the files and can be logarithmic (or N*log(N)) when one of the files is small and approach linear when both are large?
The BSD join implementation is quite simple to follow and seems to be linear with regards to the number of lines in the files. This has gone mostly unchanged in all BSD systems since at least BSD 4.4 Lite2. The snippet below comes from a current OpenBSD system, for comparison, this is a link to the BSD 4.4 Lite2 code originally committed by Keith Bostic in 1991 (replacing an earlier version of the utility): /* * We try to let the files have the same field value, advancing * whoever falls behind and always advancing the file(s) we output * from. */ while (F1->setcnt && F2->setcnt) { cval = cmp(F1->set, F1->joinf, F2->set, F2->joinf); if (cval == 0) { /* Oh joy, oh rapture, oh beauty divine! */ if (joinout) joinlines(F1, F2); slurp(F1); slurp(F2); } else { if (F1->unpair && (cval < 0 || F2->set->cfieldc == F2->setusedc -1)) { joinlines(F1, NULL); slurp(F1); } else if (cval < 0) /* File 1 takes the lead... */ slurp(F1); if (F2->unpair && (cval > 0 || F1->set->cfieldc == F1->setusedc -1)) { joinlines(F2, NULL); slurp(F2); } else if (cval > 0) /* File 2 takes the lead... */ slurp(F2); } } I looked at the code for join in GNU coreutils, but GNU code has so much going on in it that I really only can guess, based on the comments in the code, that it more or less also implements the same sort of algorithm: /* Keep reading lines from file1 as long as they continue to match the current line from file2. */ [...] /* Keep reading lines from file2 as long as they continue to match the current line from file1. */ [...] If you take the sorting into account, and assume an N*log(N) sorting algorithm, then the complete time complexity would be N*(1 + log(N)), or N*log(N) for large N values. That is, the JOIN operation is faster than the sorting. You can't do better than linear for a JOIN operation, because you can't skip lines (unless you have a precalculated index of some description and don't include the indexing in the time complexity). 
The best case scenario is that none of the lines join, in which case you need to read all lines from one of the two files and compare these to first line of the other file. The worst case scenario is that all lines join, in which case you need to read both files and do pairwise comparisons between both sets of lines (a linear operation on sorted files). If the user requests to see unpaired lines, then you are forced to read both files completely. If you manage to do worse than linear for the JOIN alone, then you're doing something wrong.
Unix join command complexity
Given the files:

1.txt:

    1, abc, 123, 456, 789
    2, lmn, 123, 456, 789
    3, pqr, 123, 456, 789

2.txt:

    1, abc, 123, 000, 000
    3, lmn, 123, 000, 000
    9, opq, 123, 000, 000

OUTPUT.txt:

    ID, NAME, X, 1A, 1B, 2A, 2B
    1, abc, 123, 456, 789, 000, 000
    2, lmn, 123, 456, 789, MISSING, MISSING
    3, pqr, 123, 456, 789, 000, 000
    9, opq, 123, MISSING, MISSING, 000, 000

I've used this for reference. I tried the following:

    join -t , -a1 -a2 -1 1 -2 1 -o 0 -o 1.2 -o 1.3 -o 1.4 -o 1.5 -o 2.4 -o 2.5 -e "MISSING" 1.txt 2.txt

which produces:

    1, abc, 123, 456, 789, 000, 000
    2, lmn, 123, 456, 789,MISSING,MISSING
    3, pqr, 123, 456, 789, 000, 000
    9,MISSING,MISSING,MISSING,MISSING, 000, 000

Any help?
I don't think you can do it with join alone. You could do: join -t, -a1 -a2 -o0,1.2,1.3,1.4,1.5,2.2,2.3,2.4,2.5 -e MISSING 1.txt 2.txt | perl -F, -lape '@F[1..2]=@F[5..6] if $F[1] eq "MISSING"; $_=join",",@F[0..4],@F[7..8]' -p: use a line by line reading loop like in sed/awk -a, -F,: like awk, split the lines into fields (into the @F array). -l: works on the content of lines (works like awk where the input is split on RS ($/) (but RS not included in $0) and ORS ($\) is appended before printing). -e ...: perl [e]xpression to evaluate for each line. Then it reads almost like English: fields 1 to 2 are set to fields 5 to 6 if field 1 (the second field as indexes start at 0) is "MISSING". Then set the content of the current record ($_ is like $0 in awk) to the fields 0 to 4 and 7 to 8. Actually, writing the same in awk is not more complicated: awk -F, -vOFS=, '$2 == "MISSING"{$2=$6;$3=$7} {print $1,$2,$3,$4,$5,$8,$9}'
Join: Two files - but only append last two columns
I have a file1.txt:

    USA Joe 123.123.123
    Russia Marry 458.786.892
    Canada Greg 151.844.165
    Latvia Grace 125.895.688

and file2.txt:

    1 123.123.123
    2 151.844.165
    3 465.879.515

I want to create a new file, result.txt, containing only those lines whose addresses (xxx.xxx.xxx) appear in both file1 and file2, so my result should be:

    USA Joe 123.123.123
    Canada Greg 151.844.165

I need to use awk, but how do I use it on both files?
You can try: awk 'FNR==NR{a[$2];next};$NF in a' file2.txt file1.txt > result.txt
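A runnable sketch with the question's data. While awk reads file2.txt (the `FNR==NR` pass), each address becomes an array key; during the second pass, a file1.txt line is printed when its last field is in that set:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'USA Joe 123.123.123' 'Russia Marry 458.786.892' \
  'Canada Greg 151.844.165' 'Latvia Grace 125.895.688' > file1.txt
printf '%s\n' '1 123.123.123' '2 151.844.165' '3 465.879.515' > file2.txt

# Pass 1: collect addresses from file2.txt as array keys.
# Pass 2: print file1.txt lines whose last field ($NF) is in the set.
awk 'FNR==NR{a[$2];next};$NF in a' file2.txt file1.txt > result.txt
cat result.txt
```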
Awk: compare two files
I have 4 tsv (tab-separated) files that look like this:

file_1:

    abc 1
    def 2
    ghi 3

file_2:

    abc 2
    ghi 3

file_3:

    def 1
    ghi 2
    jkl 4

file_4:

    ghi 3
    jkl 4

I want to join those files to get one tsv file like this:

    dataset file_1 file_2 file_3 file_4
    abc 1 2
    def 2 4
    ghi 3 3 2 3
    jkl 4 4

I have tried using awk:

    $ awk '
    BEGIN{OFS=FS="\t"}
    FNR==1{f = f "\t" FILENAME}
    NR==FNR{a[$1] = $2}
    NR!=FNR{a[$1] = a[$1] "\t" $2}
    END{printf "dataset%s\n", f; for(i in a) print i, a[i]}
    ' file_{1..4}

This command works, but I get shifted values. Say the first and second columns should be empty and the third and fourth columns should both hold the value 4; in the output I get from this command, the first and second columns hold 4 and the third and fourth columns are empty. So I tried joining my tsv files separately using the awk above: first file_1 and file_2 to get output_1, then file_3 and file_4 to get output_2. After that I used

    $ join output_1 output_2

to merge output_1 and output_2, but then I only get keys that exist in all 4 files; I lose data that exists in only one file. I'd appreciate any advice. Thank you.
$ cat tst.awk BEGIN { FS=OFS="\t" } { datasets[$1]; fnames[FILENAME]; vals[$1,FILENAME] = $2 } END { printf "%s", "dataset" for (fname in fnames) { printf "%s%s", OFS, fname } print "" for (dataset in datasets) { printf "%s", dataset for (fname in fnames) { printf "%s%s", OFS, vals[dataset,fname] } print "" } } $ tail -n +1 file? ==> file1 <== a 1 b 2 c 3 ==> file2 <== a 2 c 3 $ awk -f tst.awk file1 file2 dataset file1 file2 a 1 2 b 2 c 3 3 Add as many files to the list as you like.
Joining multiple column from different file using awk
I need to merge below 2 files: file1: TABLES REF-IO HEAD-IO DIFF-IO test 200 500 -300 exam 2 3 -1 final 2 1 1 mail 4 2 2 TOTAL 208 506 -298 file2: TABLES REF-SELECT HEAD-SELECT DIFF-SELECT test 5 7 -2 game 3 3 0 exam 0 7 -7 final 12 6 6 TOTAL 20 23 -3 merged file should be as shown below: TABLES REF-IO HEAD-IO DIFF-IO REF-SELECT HEAD-SELECT DIFF-SELECT test 200 500 -300 5 7 -2 exam 2 3 -1 0 7 -7 final 2 1 1 12 6 6 mail 4 2 2 0 0 0 TOTAL 208 506 -298 20 23 -3
awk ' NR==FNR {vals[$1] = $2 " " $3 " " $4; next} !($1 in vals) {vals[$1] = "0 0 0"} {$(NF+1) = vals[$1]; print} ' file2 file1 TABLES REF-IO HEAD-IO DIFF-IO REF-SELECT HEAD-SELECT DIFF-SELECT test 200 500 -300 5 7 -2 exam 2 3 -1 0 7 -7 final 2 1 1 12 6 6 mail 4 2 2 0 0 0 TOTAL 208 506 -298 20 23 -3
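A self-contained reproduction of this two-pass approach with the question's tables:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'TABLES REF-IO HEAD-IO DIFF-IO' 'test 200 500 -300' \
  'exam 2 3 -1' 'final 2 1 1' 'mail 4 2 2' 'TOTAL 208 506 -298' > file1
printf '%s\n' 'TABLES REF-SELECT HEAD-SELECT DIFF-SELECT' 'test 5 7 -2' \
  'game 3 3 0' 'exam 0 7 -7' 'final 12 6 6' 'TOTAL 20 23 -3' > file2

# Pass 1 (file2): remember columns 2-4 per table name.
# Pass 2 (file1): append the remembered columns, or "0 0 0" if absent.
awk '
  NR==FNR {vals[$1] = $2 " " $3 " " $4; next}
  !($1 in vals) {vals[$1] = "0 0 0"}
  {$(NF+1) = vals[$1]; print}
' file2 file1
```

Note that rows present only in file2 (here, "game") are dropped, since the second pass is driven by file1's lines.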
Merge 2 files based on all values of the first column of the first file
file1:

    abc|123|check
    def|456|map
    ijk|789|globe
    lmn|101112|equator

file2:

    abc|123|check
    def|456|map
    ijk|789|equator
    lmn|101112|globe

EXPECTED OUTPUT:

    ijk|789|equator
    lmn|101112|globe

Current awk script:

    awk 'BEGIN{OFS=FS="|"} NR==FNR{a[$3]=$3;next}!($3 in a)' file1 file2

This compares based on the array contents. How do I compare the files line by line and print only those results?
If I understand you correctly, you want to print a line from file2 if the 3rd field is different to the corresponding entry in file1. If so, this should do it: awk 'BEGIN{FS="|"} NR==FNR{a[$1,$2]=$3;next}(a[$1,$2]!=$3)' file1 file2 Yours wasn't working because you were taking $3 as the key for array a and $3 is not unique (both equator and globe are present in both files). I agree with @drewbenn that both grep and join are simpler for this particular case, but here's a Perl way of doing the same thing: perl -laF'\|' -ne '($k{$F[0].",".$F[1]}||=$F[2]) eq $F[2]||print;' file1 file2
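A runnable sketch of the compound-key version with the question's data. Keying on fields 1 and 2 together is what prevents the "globe"/"equator" collision:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'abc|123|check' 'def|456|map' 'ijk|789|globe' \
  'lmn|101112|equator' > file1
printf '%s\n' 'abc|123|check' 'def|456|map' 'ijk|789|equator' \
  'lmn|101112|globe' > file2

# Store field 3 of each file1 line under the compound key ($1,$2);
# print the file2 lines whose field 3 differs from the stored value.
awk 'BEGIN{FS="|"} NR==FNR{a[$1,$2]=$3;next}(a[$1,$2]!=$3)' file1 file2
```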
Comparing files line by line in awk with delimiter
I have two files with tab-separated values that look like this: file1: A 1 B 3 C 1 D 4 file2: E 1 B 3 C 2 A 9 I would like to find rows between files 1 and 2 where the string in column 1 is the same, then get the corresponding values. The desired output is a single file that looks like this: B 3 3 C 1 2 A 1 9 Can this be done with a Unix one-liner?
GNU coreutils includes the command join that does exactly what you want if line sorting in the result is irrelevant: join <(sort file1) <(sort file2) A 1 9 B 3 3 C 1 2 If you want the tabs back, do: join <(sort file1) <(sort file2) | tr ' ' '\t' A 1 9 B 3 3 C 1 2 Or use the t option to join. (<() aka process substitution, requires ksh93 (where the feature originated in), bash or zsh)
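A portable reproduction using temporary files instead of process substitution (so it runs in plain sh as well):

```shell
cd "$(mktemp -d)"

printf 'A\t1\nB\t3\nC\t1\nD\t4\n' > file1
printf 'E\t1\nB\t3\nC\t2\nA\t9\n' > file2

# Sort each file, join on column 1; join emits space-separated output
# by default, so translate the spaces back to tabs afterwards.
sort file1 > s1
sort file2 > s2
join s1 s2 | tr ' ' '\t'
```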
Find common elements in a given column from two files and output the column values from each file
I have 2 files with the following contents:

File1.txt:

    A 1
    B 2
    C 5
    Z 3

File2.txt:

    A 4
    B 7
    C 10
    D 11

What I would like to do is create something like:

    A 1 4
    B 2 7
    C 5 10
    D - 11
    Z 3 -

Is there a utility that does this? If not, how can it be done, using find and awk or something?
join -a1 -a2 -o 0,1.2,2.2 -e - file1.txt file2.txt
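A runnable sketch with the question's files: `-o 0,1.2,2.2` fixes the output columns (key, value1, value2), `-e -` fills missing values with a dash, and `-a1 -a2` keeps unpaired lines from both files:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'A 1' 'B 2' 'C 5' 'Z 3' > file1.txt
printf '%s\n' 'A 4' 'B 7' 'C 10' 'D 11' > file2.txt

# Full outer join with dash placeholders; both files are already
# sorted on their first field, as join requires.
join -a1 -a2 -o 0,1.2,2.2 -e - file1.txt file2.txt
```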
Joining two files
I have 2 files, one having 2 columns and the other having 1 column. The second file is sorted using sort -u. Now the task is to join its column with the first column of the first file, which is not sorted. What would the syntax be? Will

    join -j 1 file2.txt sort -s -n -k 1 file1.txt

work? The output I want is actually the 2nd column of file 1 after joining, keeping only the unique entries in it.

File 2:

    1
    2
    3

File 1:

    2 500
    1 5000
    1 300
    3 3000
    3 300
    4 450

Output:

    5000
    300
    500
    3000
No need to use non-standard process substitution (<(...)) here: sort file1 | join -o1.2 - file2 | uniq
How to sort and join at the same time?
I have two files in these formats:

file1:

    air
    smell
    hand
    dude
    road
    cat

file 2:

    air,4,21,01,13,3,2
    smell,21,4,2,5,6
    dude,1,31,42,1
    road,1,4,2,1,4
    cat,1,5,6,3,1
    hand,1,4,2,1,6
    mouse,1,3,5,6,2

What I want to do is print the entire row of file 2 if the string in its first column is found in file 1, keeping the order of file 1.

Expected output:

    air,4,21,01,13,3,2
    smell,21,4,2,5,6
    hand,1,4,2,1,6
    dude,1,31,42,1
    road,1,4,2,1,4
    cat,1,5,6,3,1
This should do it:

    awk -F, 'FNR==NR {a[$1]; next}; $1 in a' file1 file2

Edit: I interpreted the wrong file for the ordering. A new attempt (requires gawk, if that is acceptable):

    gawk -F, '
    FNR==NR {a[NR]=$1; next};
    {b[$1]=$0}
    END{for (i in a) if (a[i] in b) print b[a[i]]}
    ' file1 file2

Edit 2: with normal awk, and swapping the files around:

    awk -F, 'FNR==NR {a[$1]=$0; next}; $1 in a {print a[$1]}' file2 file1
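A runnable sketch of the "edit 2" variant with the question's data. Reading file2 first lets us store each full row under its key, then walking file1 in order reproduces file1's ordering in the output:

```shell
cd "$(mktemp -d)"

printf '%s\n' air smell hand dude road cat > file1
printf '%s\n' 'air,4,21,01,13,3,2' 'smell,21,4,2,5,6' 'dude,1,31,42,1' \
  'road,1,4,2,1,4' 'cat,1,5,6,3,1' 'hand,1,4,2,1,6' \
  'mouse,1,3,5,6,2' > file2

# Pass 1 (file2): store each full row keyed on its first field.
# Pass 2 (file1): print the stored row for each key, in file1 order.
awk -F, 'FNR==NR {a[$1]=$0; next}; $1 in a {print a[$1]}' file2 file1
```

"mouse" never appears because it is not listed in file1.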
comparing the first column of two files and printing the entire row of the second file if the first columns match
Objective: Merge the contents of two files using common key present in the files file1.txt ========= key1 11 key2 12 key3 13 file2.txt ========= key2 22 key3 23 key4 24 key5 25 Expected Output : ================== key1 11 key2 12 22 key3 13 23 key4 24 key5 25 Approaches tried: join command: join -a 1 -a 2 file1.txt file2.txt ## full outer join awk: awk 'FNR==NR{a[$1]=$2;next;}{ print $0, a[$1]}' 2.txt 1.txt Approach 2 is resulting in a right outer join and NOT a full outer join: key1 11 key2 12 22 key3 13 23 What needs to be modified in approach 2 to result in a full outer join?
With awk, try:

    awk '{a[$1]=($1 in a)?a[$1]" "$2:$2};END{for(i in a)print i,a[i]}' file1 file2

For huge files you should use join instead of this awk approach, since awk stores the whole content of the files in memory before printing anything.
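A runnable sketch with the question's data. Note that `for (i in a)` iterates in an unspecified order, so the output is piped through sort here to make it deterministic:

```shell
cd "$(mktemp -d)"

printf '%s\n' 'key1 11' 'key2 12' 'key3 13' > file1.txt
printf '%s\n' 'key2 22' 'key3 23' 'key4 24' 'key5 25' > file2.txt

# Accumulate every value seen for a key, from either file, then dump
# the table; sorting fixes the unspecified for-in iteration order.
awk '{a[$1] = ($1 in a) ? a[$1] " " $2 : $2}
     END {for (i in a) print i, a[i]}' file1.txt file2.txt | sort
```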
Why isn't this awk command doing a full outer join?
I have multiple .txt files that I would like to merge on their (numeric) first column, filling in "NULL" where data is missing.

File_1:

    1 a
    2 b
    3 c

File_2:

    3 c
    4 d
    5 e

File_3:

    4 d
    5 e
    6 f

Expected_Output:

    1 a NULL NULL
    2 b NULL NULL
    3 c c NULL
    4 NULL d d
    5 NULL e e
    6 NULL NULL f

I tried:

    join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL -o 0,1.2,2.2 file_1 file_2 | join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL - file_3 > expected_output

This command gives me the correct output for columns 1, 2 and 3; however, "NULL" is missing in column 4. Any idea how to fix it? Also, is there a better way to merge multiple files instead of writing a super long pipeline?
You are almost there. Using your command, we get: $ join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL -o 0,1.2,2.2 file_1 file_2 | join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL - file_3 1 a NULL 2 b NULL 3 c c 4 NULL d d 5 NULL e e 6 f Lines just don't have the same number of columns because we are not setting a format for the right-hand join in the pipeline. If we add it as -o 0,1.2,1.3,2.2 (the join field + the second and third columns from the first join + the second column of file_3): $ join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL -o 0,1.2,2.2 file_1 file_2 | join -t $'\t' -a 1 -a 2 -1 1 -2 1 -e NULL -o 0,1.2,1.3,2.2 - file_3 1 a NULL NULL 2 b NULL NULL 3 c c NULL 4 NULL d d 5 NULL e e 6 NULL NULL f Finally, if we can assume the GNU implementation of join, we can let it do the job of inferring the right format and use -o auto instead of -o 0,1.2,2.2 and -o 0,1.2,1.3,2.2, provided that, for each file, all lines have at most the same number of fields as the first one. Quoting info join: -o auto If the keyword auto is specified, infer the output format from the first line in each file. This is the same as the default output format but also ensures the same number of fields are output for each line. Missing fields are replaced with the -e option and extra fields are discarded.
Join multiple files
I find join pretty useful. It lets you join file1 against file2 on key fields. Is it possible to do so dynamically against the results of a command, like: join -1 1 -2 1 file1 'curl http://example.com?code=$1&fmt=csv' Maybe using xargs or named pipes? Ideally it would do one "lookup" per record/line in file1
Yes, if your shell supports process substitution (bash and ksh93 does) you may do like this: $ join file1 <( yourcommand ) This runs the join command with file1 and a file descriptor in /dev/fd connected to the standard output of yourcommand (which would be your curl thingy). Note that join expects all input to be sorted. It requires sorted input streams to be able to parse them only once. In particular, the input needs to be sorted with sort -b (ignoring leading blanks). If that's not the case, you may make it so: $ join <( sort -b file1 ) <( yourcommand | sort -b )
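Process substitution aside, join can always read one of its inputs from standard input ("-"), so a plain pipe gives a portable sketch of the same idea (the second printf stands in for the curl lookup command):

```shell
cd "$(mktemp -d)"

printf '%s\n' '1 a' '2 b' > file1

# Stand-in for the dynamic command: anything that emits "key value"
# lines works. sort -b makes its output acceptable to join.
printf '%s\n' '1 x' '2 y' | sort -b | join file1 -
```

This only covers one dynamic input; feeding two commands at once is where `<( ... )` becomes necessary.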
Can I `join` against a command?
I have two files. The first file has the following format: 10D0325 2465 0 0 -9 -9 10D0598 2567 0 0 -9 -9 10D0562 2673 0 0 -9 -9 10D0175 2457 0 0 -9 -9 10D0241 2209 0 0 -9 -9 10D0954 2312 0 0 -9 -9 10D0446 2489 0 0 -9 -9 The second file has this format: 10D0325 1 10D0598 1 10D0175 2 10D0954 1 10D0446 2 What I want to do is add the second column of the second file to the first file, based on the ID variable. As you can see the first column can be used as an identifier variable to match the first dataset with the second. However, the first file contains some lines/ID's that are not present in the second file. Therefore I cannot simply order both files and paste this column into the first file. There must be a fairly easy way to do this, unfortunately my Linux skills are limited. P.S. For the sake of clarity, this is what I would like the resulting file to look like (any other symbol may be used to indicate missings instead of blank spaces): 10D0325 2465 0 0 -9 -9 1 10D0598 2567 0 0 -9 -9 1 10D0562 2673 0 0 -9 -9 10D0175 2457 0 0 -9 -9 2 10D0241 2209 0 0 -9 -9 10D0954 2312 0 0 -9 -9 1 10D0446 2489 0 0 -9 -9 2
That's pretty easy with awk: awk 'NR==FNR{a[$1]=$2;next}{print $0,a[$1]}' file2 file1 First (when file2 is being read) we create an array a which stores second column from file2, indexed with first column. And then we print file1 adding value from an array.
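A runnable sketch with a subset of the question's data, showing that IDs absent from file2 simply get an empty appended value:

```shell
cd "$(mktemp -d)"

printf '%s\n' '10D0325 2465 0 0 -9 -9' '10D0598 2567 0 0 -9 -9' \
  '10D0562 2673 0 0 -9 -9' '10D0175 2457 0 0 -9 -9' > file1
printf '%s\n' '10D0325 1' '10D0598 1' '10D0175 2' > file2

# Pass 1 (file2): map ID -> value.  Pass 2 (file1): print each line
# plus the mapped value (empty when the ID is absent from file2).
awk 'NR==FNR{a[$1]=$2;next}{print $0,a[$1]}' file2 file1
```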
Adding column based on matching of second column
I have a directory ballgown, in which there are around 1000 subdirectories as sample names. Each subdirectory has a file t_data.ctab. The filename is the same in all subdirectories. ballgown |_______TCGA-A2-A0T3-01A |___________ t_data.ctab |_______TCGA-A7-A4SA-01A |___________ t_data.ctab |_______TCGA-A7-A6VW-01A |___________ t_data.ctab Like above ballgown has 1000 subdirectories. The t_data.ctab file in all those 1000 subdirectories looks like below with columns: t_id chr strand start end t_name num_exons length gene_id gene_name cov FPKM 1 1 - 10060 10614 MSTRG.1.1 1 555 MSTRG.1 . 0.000000 0.000000 2 1 + 11140 30023 MSTRG.10.1 12 3981 MSTRG.10 . 2.052715 0.284182 3 1 - 11694 29342 MSTRG.11.1 8 6356 MSTRG.11 . 0.557588 0.077194 4 1 + 11869 14409 ENST00000456328.2 3 1657 MSTRG.10 DDX11L1 0.000000 0.000000 5 1 + 11937 29347 MSTRG.10.3 12 3544 MSTRG.10 . 0.000000 0.000000 6 1 - 11959 30203 MSTRG.11.2 11 4547 MSTRG.11 . 0.369929 0.051214 7 1 + 12010 13670 ENST00000450305.2 6 632 MSTRG.10 DDX11L1 0.000000 0.000000 8 1 + 12108 26994 MSTRG.10.5 10 5569 MSTRG.10 . 0.057091 0.007904 9 1 + 12804 199997 MSTRG.10.6 12 3567 MSTRG.10 . 0.000000 0.000000 10 1 + 13010 31097 MSTRG.10.7 12 4375 MSTRG.10 . 0.000000 0.000000 11 1 - 13068 26832 MSTRG.11.3 9 5457 MSTRG.11 . 0.995280 0.137788 From all the t_data.ctab files I want to extract only t_name and FPKM column and create a new file. In the new file the FPKM column should be the sample name. It should look like below: t_name TCGA-A2-A0T3-01A TCGA-A7-A4SA-01A TCGA-A7-A6VW-01A MSTRG.1.1 0 0.028181 0 MSTRG.10.1 0.284182 0.002072 0.046302 MSTRG.11.1 0.077194 0.685535 0.105849 ENST00000456328.2 0 0.307315 0.038961 MSTRG.10.3 0 0.446015 0.009946 MSTRG.11.2 0.051214 0.053577 0.036081 ENST00000450305.2 0 0.110438 0.040319 MSTRG.10.5 0.007904 0 1.430825 MSTRG.10.6 0 0 0.221105 MSTRG.10.7 0 0.199354 0 MSTRG.11.3 0.137788 0.004792 0 If it is two or three files I can use cut -f6,12 on each file and then join them. 
But I have around 1000 files now.
Try this simple way: first do: awk 'FNR==1 { print substr(FILENAME,1,16) >substr(FILENAME,1,16)".tmp" } FNR >1 { print $12 > substr(FILENAME,1,16)".tmp" } NR==FNR{ print $6 >"first_column.tmp" }' TCGA-A*/t_data.ctab then paste them together with comma delimited file (remove -d, if you want to have Tab instead): paste -d, *.tmp t_name,TCGA-A2-A0T3-01A,TCGA-A7-A4SA-01A,TCGA-A7-A6VW-01A MSTRG.1.1,0.000000,0.00000,0.0000 MSTRG.10.1,0.284182,0.28418,0.2841 MSTRG.11.1,0.077194,0.07719,0.0771 ENST00000456328.2,0.000000,0.00000,0.0000 MSTRG.10.3,0.000000,0.00000,0.0000 MSTRG.11.2,0.051214,0.05121,0.0512 ENST00000450305.2,0.000000,0.00000,0.0000 MSTRG.10.5,0.007904,0.00790,0.0079 MSTRG.10.6,0.000000,0.00000,0.0000 MSTRG.10.7,0.000000,0.00000,0.0000 MSTRG.11.3,0.137788,0.13778,0.1377
How to create a new file with required columns from different multiple files in linux?
I'm using join command under linux, but the results vary between different machines. I have two simple files: cat 1.txt a aaa,0.2 b bbb,0.3 c ccc,0.5 cat 2.txt a aaa,0.2 b bbb,0.3 c ccc,0.6 I'm running the following command join -a 1 -1 1 -2 1 -t "," -o 1.1' '1.2' '2.2 <(cat 1.txt| sort -t ",") <(cat 2.txt| sort -t ",") Result on machine 1: ,0.2a,0.2 ,0.3b,0.3 ,0.6c,0.5 join --version join (GNU coreutils) 8.13 locale LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL=en_US.UTF-8 Result on machine 2: a aaa,0.2,0.2 b bbb,0.3,0.3 c ccc,0.5,0.6 join --version join (GNU coreutils) 5.97 locale LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= Clearly, the result on the first machine is wrong. It's been truncated. I've tried to use different locale settings but had no success.
Fix your files with dos2unix, or if that's not installed: sed -i 's/\r$//' {1,2}.txt
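A minimal sketch of the symptom and the fix, using throwaway files with the question's sample values (`sed -i` as shown assumes GNU sed, like the answer above):

```shell
# Work in a scratch directory (hypothetical throwaway files).
cd "$(mktemp -d)"

# 1.txt has DOS (CRLF) line endings, 2.txt has Unix (LF) endings.
printf 'a aaa,0.2\r\nb bbb,0.3\r\n' > 1.txt
printf 'a aaa,0.2\nb bbb,0.3\n' > 2.txt

# Strip the trailing carriage returns -- this is what dos2unix does.
sed -i 's/\r$//' 1.txt

# With clean line endings, join pairs the lines as expected.
out=$(join -t, 1.txt 2.txt)
printf '%s\n' "$out"
```

The hidden `\r` at the end of each field is what made the joined lines look truncated and overwritten on the first machine.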
Truncated result returned by JOIN
1,391,717,026,000
I would like to merge a variable from one file to another in linux. The first variable contains the name I want to merge files on. I have sorted both files using both -f and -k: sort -f -k 1,1 SCZ.N.tmp> SCZ.N.tmp.sorted and sort -f -k 1,1 1kg.tmp > 1kG.ref_file.sorted However, when I join both files with this command: join -1 1 -2 1 SCZ.N.tmp.sorted 1kG.ref_file.sorted> SCZ.freq.joined I keep getting the error 'join: SCZ.N.tmp.sorted:112855: is not sorted: chr1_100002155_D D I6 0.995112 0.0184 0.7897 87016' Nevertheless, the join continues and the majority is merged. However, I am not sure whether I am losing a small proportion of cases because of mismatch between the files, or because something goes wrong with sorting these files. Does anybody know what I am doing wrong? And what i can do to not get this error? Thank you! I have also tried: LANG=en_EN sort -f -k 1,1 SCZ.N.tmp> SCZ.N.tmp.sorted2 and LANG=en_EN sort -f -k 1,1 1kg.tmp > 1kg.tmp.sorted2, with then joining using: LANG=en_EN join -1 1 -2 1 SCZ.N.tmp.sorted2 1kg.tmp.sorted2> SCZ.freq.joined. But that did not solve it.
You are sorting the files with the -f option, i.e. with case-folded keys. However, join expects the keys in normal sorted sequence. You should add the -i option to the join command line, so that it also ignores case differences. Alternatively, omit the -f option from both sorts. Edit: another possibility is that the field separators are not identical for the sort and the join — they need to be. The defaults for both sort and join are whitespace, but a mismatch here may be the next hurdle.
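A quick sketch of the consistent combination — case-folded sort (-f) paired with case-insensitive join (-i) — using made-up keys, with LC_ALL=C pinned so collation does not vary between machines:

```shell
cd "$(mktemp -d)"
export LC_ALL=C

# Keys differ only in case between the two files (invented example data).
printf 'Apple 1\nbanana 2\n' > f1
printf 'apple x\nBanana y\n' > f2

# Sort case-insensitively (-f), then join case-insensitively (-i);
# mixing -f sorting with a plain case-sensitive join is what breaks.
sort -f -k 1,1 f1 > f1.s
sort -f -k 1,1 f2 > f2.s
out=$(join -i f1.s f2.s)
printf '%s\n' "$out"
```

The printed join field comes from the first file's line, so the original case from f1 is preserved in the output.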
Joining two sorted files gives error: join: <file>:112855: is not sorted:
1,391,717,026,000
I am looking to compare two files, printing only records with a matching ID number and without duplicate records. I have two files: file1.txt contains: Simons 0987768798980 West 09809867678 Vickers 768774564650 Simons 76867790987 Peterson 24346576865 Simons 76867790987 Holister 87879655456 Peterson 87686765766 And, file2.txt contains: 768774564650 Harry 76867790987 Steve 0987768798980 Mary 0987768798980 Mary 76856009097 Ali 87879655456 Rick 87686765766 Martin The desired outcome is: Harry Vickers 768774564650 Steve Simons 76867790987 Mary Simons 0987768798980 Rick Holister 87879655456 Martin Peterson 87686765766 This is what I have tried: ARGV[1]==FILENAME{id2lastname[$2]=$1;id2id[$2]=$2} ARGV[2]==FILENAME{id2firstname[$1]=$2} $1 in id2id{print id2firstname[$1],id2lastname[$1],id2id[$1],id2firstname[$1]="",id2id[$1]="",id2lastname[$1]=""} Which produces the following output: Harry Vickers 768774564650 Steve Simons 76867790987 Mary Simons 0987768798980 Mary Rick Holister 87879655456 Martin Peterson 87686765766 I would be pleased to know why this removed the last name and ID number of the duplicate record, but left the first name. Apologies if the technique is odd or unconventional. I have not been learning for long. If my attempt cannot be fixed or you feel there is a better way, I am happy for you to produce the desired result in a different way, but please: use GAWK (as I want to progress with it), try to keep it as simple as possible, and, explain how it works so I can learn something.
The reason the partial line is printed is that your code does not delete the entries you want to remove from the array, but replaces their values with empty strings. An array index assigned an empty string still exists, so the check $1 in id2id { ... } still evaluates to true for those entries. The solution is to replace id2id[$1]="" with delete id2id[$1], and then it should work as expected. Here is a slightly simplified version of the code (note the print order $2, a[$1], $1, which puts the first name first as in your desired output):

awk '
NR == FNR { a[$2] = $1; next }
$1 in a   { print $2, a[$1], $1; delete a[$1] }
' file1.txt file2.txt

As a one-liner:

awk 'NR==FNR{a[$2]=$1;next} $1 in a{print $2,a[$1],$1; delete a[$1]}' file1.txt file2.txt

Using awk instead of join has the advantage of being simple and easily customizable. The disadvantage is that it stores the first file in RAM before merging, so it will not handle huge files efficiently.
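A runnable check against the question's data. The print order is arranged as $2, a[$1], $1 (first name, last name, ID) to match the desired listing, and delete a[$1] keeps the duplicate Mary line from matching twice:

```shell
cd "$(mktemp -d)"

cat > file1.txt <<'EOF'
Simons 0987768798980
West 09809867678
Vickers 768774564650
Simons 76867790987
Peterson 24346576865
Simons 76867790987
Holister 87879655456
Peterson 87686765766
EOF

cat > file2.txt <<'EOF'
768774564650 Harry
76867790987 Steve
0987768798980 Mary
0987768798980 Mary
76856009097 Ali
87879655456 Rick
87686765766 Martin
EOF

# Pass 1 maps ID -> last name; pass 2 prints matches once, then
# deletes the key so a repeated ID (Mary) is not printed again.
out=$(awk 'NR==FNR{a[$2]=$1;next} $1 in a{print $2, a[$1], $1; delete a[$1]}' file1.txt file2.txt)
printf '%s\n' "$out"
```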
Print Arrays Without Duplicates in AWK
1,391,717,026,000
Is there some workaround to join multiple files at once based on the first column? Usually, I would do: join File1 File2 > File1+File2 and File1+File2 File3 > final_output Example files: File1: 1 test1 2 test3 3 test4 4 test5 7 test7 File2: 1 example1 2 example2 3 example3 4 example4 8 example8 File3: 1 foo1 2 foo2 3 foo3 4 foo4 10 foo10 Considering that f.e. fifth line may differs in each file, and there is n number of files. Edit: Example output: 1 test1 example1 foo1 2 test2 example2 foo2 3 test3 example3 foo3 4 test4 example4 foo4 On the other hand, I am not sure how lines that don't match in column1 will be processed (fifth line) Thanks
Basically like this for your 3-file example:

$ join file2 file3 | join file1 -
1 test1 example1 foo1
2 test3 example2 foo2
3 test4 example3 foo3
4 test5 example4 foo4

Important: all your input files must already be sorted (sort -k 1b,1 — a numerically sorted file like your example may not work!). So the example above, sorted on the fly, could be written in bash like this:

join <(sort -k 1b,1 file2) <(sort -k 1b,1 file3) | join <(sort -k 1b,1 file1) - | sort -k 1n,1

And finally the generic case for n files, using a recursive function (tested in bash):

xjoin() {
    local f
    local srt="sort -k 1b,1"
    if [ "$#" -lt 2 ]; then
        echo "xjoin: need at least 2 files" >&2
        return 1
    elif [ "$#" -lt 3 ]; then
        join <($srt "$1") <($srt "$2")
    else
        f=$1
        shift
        join <($srt "$f") <(xjoin "$@")
    fi
}

xjoin file1 file2 file3 | sort -k 1n,1

If you know what you are doing you may omit the sort pipes, but in my experience join without an explicit sort is very often a source of trouble.
Merge multiple files with join
1,391,717,026,000
I want to merge two files. Files A.txt 001;abc;def;ghi;jkl;pqr 002;abc;def;ghi;jkl;pqr 003;abc;def;ghi;jkl;pqr 004;abc;def;ghi;jkl;pqr 005;abc;def;ghi;jkl;pqr . Second File B.txt 001;mno 002;mno 003;mno 004;mno 005;mno to have a text file C.txt 001;abc;def;ghi;jkl;mno;pqr I am able to merge these two files but I don't know how to insert the output from file B mno before pqr.
join will print each line of the sorted input files that share the same first (by default) field. So, setting the field delimiter (-t) to ; you get:

$ join -t\; A.txt B.txt
001;abc;def;ghi;jkl;pqr;mno
002;abc;def;ghi;jkl;pqr;mno
003;abc;def;ghi;jkl;pqr;mno
004;abc;def;ghi;jkl;pqr;mno
005;abc;def;ghi;jkl;pqr;mno

Combining that with awk to swap the last two fields around:

$ join -t\; A.txt B.txt | awk -F';' -v OFS=';' '{k=$NF; $NF=$(NF-1); $(NF-1)=k; print;}'
001;abc;def;ghi;jkl;mno;pqr
002;abc;def;ghi;jkl;mno;pqr
003;abc;def;ghi;jkl;mno;pqr
004;abc;def;ghi;jkl;mno;pqr
005;abc;def;ghi;jkl;mno;pqr
Merging Two Files columns in order
1,560,507,233,000
I'm trying to join two csv files, sorted, and with tab delimiter. I'm new to the join command, so I'm not too sure about how to use it, but it seems to be replacing every tab in the files with spaces (messing up the alignment). The command I'm using is: join -1 5 -2 2 -t $'\t' -o $order --header file1.csv file2.csv | column -t > result.csv In the first file, the data is sorted according to the 5th column, and the 2nd column in the second file. The variable $order is a simple string containing the different columns '1.1 1.2 1.3' etc. (28 of them). The delimiter I'm using comes from SE. Do you know where this comes from?
It is because of column -t: in table mode, column automatically determines the column widths and creates readable table output, delimiting the output with spaces rather than tabs. To keep tabs in the output, set column's output delimiter explicitly with -o (note that -o takes its argument verbatim, so use $'\t' for an actual tab, not '\t'):

join -1 5 -2 2 -t $'\t' -o $order --header file1.csv file2.csv | column -t -o $'\t' > result.csv
join replaces tabs by spaces
1,560,507,233,000
I have two files t1 and t2. root@localhost:~# root@localhost:~# cat t1 udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=479,fd=7)) 479 tcp LISTEN 0 128 127.0.0.1:6060 0.0.0.0:* users:(("gggg-ruit",pid=24968,fd=5)) 24968 root@localhost:~# root@localhost:~# cat t2 root 88 0.0 0.0 0 0 ? I< Jan06 0:00 [scsi_tmf_0] root 96 0.0 0.0 0 0 ? I< Jan06 0:00 [ipv6_addrconf] root 24965 0.0 0.2 11592 3004 ? S Jan12 0:00 bash /root/restart_gggg.sh root 24968 0.7 5.2 112488 53472 ? Sl Jan12 30:52 /usr/local/bin/gggg-ruit -singleInstance :44444 I want to join them on the 8th column of t1 and the 2nd column of t2. I already have them in sorted order. Let's prove it. root@localhost:~# awk '{print $8}' t1 479 24968 root@localhost:~# awk '{print $2}' t2 88 96 24965 24968 Now when I join them, I got the following error. root@localhost:~# join -1 8 -2 2 -o 2.1,2.2,1.1,1.2,1.5,1.6,2.11 t1 t2 join: t2:3: is not sorted: root 24965 0.0 0.2 11592 3004 ? S Jan12 0:00 bash /root/restart_gggg.sh root@localhost:~# Why it tells me t2 is not sorted on row 3? As you can see, it's been already sorted on the join column.
They’re sorted numerically, but join requires them to be sorted lexicographically: 24968, then 479; and 24965, 24968, 88, then 96.
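A small sketch of the fix: re-sort the join keys in byte (lexicographic) order before joining. For brevity this uses only the key columns from the question, written to hypothetical files k1 and k2:

```shell
cd "$(mktemp -d)"

# Just the join keys from t1 (field 8) and t2 (field 2).
printf '%s\n' 479 24968 > k1
printf '%s\n' 88 96 24965 24968 > k2

# Lexicographic (byte-order) sort: 24968 < 479, and 24965 < 24968 < 88 < 96.
LC_ALL=C sort k1 > k1.s
LC_ALL=C sort k2 > k2.s

# join now accepts the input and finds the one common key.
out=$(LC_ALL=C join k1.s k2.s)
printf '%s\n' "$out"
```

In the real command you would sort t1 on field 8 (sort -k 8b,8) and t2 on field 2 (sort -k 2b,2) the same way, then run the original join.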
The "join" utility reports: file is not sorted, but in fact it is sorted
1,560,507,233,000
I have two files which are pipe-delimited and may have column 1+column2 matches in both, or one file may have the entry while the other does not. Assume my match-key I am going off of equals $1"-"$2 using a pipe '|' as the FS. file1 1111|AAA|foo|50 1111|BBB|foo|30 2222|BBB|foo|10 file2 1111|AAA|bar|10 1111|CCC|bar|20 3333|AAA|bar|40 The desired output would be the following for the first entry (I have this working) 1111|AAA|50|10 For the second entry file1 (If there is no matching column1+column2 in both files, replace the entry which is missing for foo as 0. And the other way around) 1111|BBB|30|0 And for an entry key (column1+column2) in file2, but not in file1 (This is entry 3 of file 2 expected output) 3333|AAA|0|40 So, desired output overall format is listing ALL unique keys which are represented by column1+column2 in BOTH files. With the 3rd column entries being those values from file1 column 4 (or 0 if value doesn't exist in file1) and the 4th column in output as those values in column 4 of file 2 (or 0 if value doesn't exist in file2). I have done a lot of research and tried many things but I have values not outputting if the column1+column2 pair exists in file2 but not file1 by using the following: join -t"|" -e0 -a1 -a2 -o 1.2,1.3,1.5,2.5 <(<file1 awk -F"|" '{print $1"-"$2"|"$0}' | sort -k1,1) <(<file2 awk -F"|" '{print $1"-"$2"|"$0}' | sort -k1,1) The above case gives me expected output if there is a column1+column2 match in file1 but not file2, and appends a 0 for the match not existing... How can I get this to work for ALL scenarios? Above command will do some process substitution by adding a key in column 1 in both files which is column1+column2, and then join based off of that new key. -e0 will add a 0 if this key exists in file1 but not file2. How can I get it to cover the case of: New key (column1-column2) exists in file 2 but NOT file 1?
With your approach you have to use join twice (or change your approach to do it with a single join invocation):

print the common lines and the unpairable lines from file1 with

join -t'|' -e0 -a1 -o 1.2,1.3,1.5,2.5 \
  <(<file1 awk -F'|' '{print $1"-"$2"|"$0}' | sort -t'|' -k1,1) \
  <(<file2 awk -F'|' '{print $1"-"$2"|"$0}' | sort -t'|' -k1,1)

print the unpairable lines from file2 with

join -t'|' -e0 -v2 -o 2.2,2.3,1.5,2.5 \
  <(<file1 awk -F'|' '{print $1"-"$2"|"$0}' | sort -t'|' -k1,1) \
  <(<file2 awk -F'|' '{print $1"-"$2"|"$0}' | sort -t'|' -k1,1)

You can do the same with a single awk invocation, storing $4 in two arrays indexed by e.g. $1|$2 and then, in the END block, iterating over the array indices, comparing them and printing accordingly:

awk -F'|' '
NR==FNR { z[$1"|"$2]=$4; next }
        { x[$1"|"$2]=$4 }
END {
    for (j in x) { if (!(j in z)) print j, "0", x[j] }
    for (i in z) {
        if (i in x) print i, z[i], x[i]
        else        print i, z[i], "0"
    }
}' OFS="|" file1 file2
Joining entries based off of column using awk/join
1,560,507,233,000
Is it possible to copy the whole rows of File1 in a new File3 following the instruction given by File2 by using a simple bash script using sed or awk? File1: /*two or more columns*/ AC 456324 DC 689712 GH 123677 KL 236587 File2: /*one column*/ AC DC File3: AC 456324 DC 689712 I'm actually performing this using Python dictionaries and I wondered if you knew a simple way out.
With grep grep -Ff File2 File1 With awk awk 'NR==FNR {a[$1]++;next} a[$1]' File2 File1
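One caveat worth noting: grep -Ff matches the patterns anywhere in the line, so a key like AC would also hit a hypothetical line "ACME 999999". The awk version compares field 1 exactly and avoids that; here is a sketch with one such extra (invented) line added to File1:

```shell
cd "$(mktemp -d)"

cat > File1 <<'EOF'
AC 456324
DC 689712
GH 123677
ACME 999999
EOF
printf 'AC\nDC\n' > File2

# grep -Ff File2 File1 would also print the ACME line (substring match);
# awk tests field 1 for exact equality against the keys from File2.
out=$(awk 'NR==FNR {a[$1]++;next} a[$1]' File2 File1)
printf '%s\n' "$out"
```

With grep you can get the same strictness by adding -w (match whole words) or by anchoring the patterns.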
Merging files by rows
1,560,507,233,000
My question is similar to this one: Merge multiple columns based on the first column values I have multiple files (10+) that I want to merge/join into one output file, for example: file 1 2000 0.0202094 2001 0.0225532 2002 0.02553 2003 0.0261099 2006 0.028843 file 2 2000 0.0343179 2003 0.039579 2004 0.0412106 2006 0.041264 file 3 2001 0.03 2004 0.068689 2006 0.0645474 All files have the same two columns and are of unequal length. Where there is no entry for the columns (missing in one or several files), I would want a 1. If there is no entry in any file (like 2005) I don't want any output. The desired output would be: file1 file2 file3 2000 0.0202094 0.0343179 1 2001 0.0225532 1 0.03 2002 0.02553 1 1 2003 0.0261099 0.0395799 1 2004 1 0.0412106 0.0686893 2006 0.028843 0.041264 0.0645474 I have tried to modify the awk code provided by the answer of this other question, but I feel like it could not be possible with that solution.
Use join:

join -a1 -a2 -e 1 -o auto <(join -a1 -a2 -e 1 -o auto file1 file2) file3

See in the man page:

-a FILENUM   also print unpairable lines from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2
-e EMPTY     replace missing input fields with EMPTY
-o FORMAT    obey FORMAT while constructing output line

If FORMAT is the keyword 'auto', then the first line of each file determines the number of fields output for each line.

Note: join requires sorted input, so if the files are not sorted (which they are in the given samples), sort them first, like:

join -a1 -a2 -e 1 -o auto \
  <(join -a1 -a2 -e 1 -o auto <(sort file1) <(sort file2)) \
  <(sort file3)

To apply this to multiple files, join the first two files and save the output to a third file, say join.tmp:

join -a1 -a2 -e 1 -o auto file1 file2 >join.tmp

then loop over the rest of the files, updating join.tmp on every run:

for file in rest_files*; do
    join -a1 -a2 -e 1 -o auto join.tmp "$file" >join.tmp.1
    mv join.tmp.1 join.tmp
done

At the end, join.tmp holds your final joined result.

To print with a header:

hdr() { awk 'FNR==1{ print "\0", FILENAME }1' "$1"; }
join -a1 -a2 -e 1 -o auto \
  <(join -a1 -a2 -e 1 -o auto <(hdr file1) <(hdr file2)) \
  <(hdr file3) | tr -d '\0'

and for the multiple-files version:

hdr() { awk 'FNR==1{ print "\0", FILENAME }1' "$1"; }
join -a1 -a2 -e 1 -o auto <(hdr file1) <(hdr file2) >join.tmp
for file in rest_files*; do
    join -a1 -a2 -e 1 -o auto join.tmp <(hdr "$file") >join.tmp.1
    mv join.tmp.1 join.tmp
done
tr -d '\0' <join.tmp >final.file
Merge multiple files by first column
1,560,507,233,000
I have two text files. File 2 has logs over 1,000,000. File 1 has IP addresses line by line. I want to read file 2 lines and search these lines in file 1, I mean: file 1: 34.123.21.32 45.231.43.21 21.34.67.98 file 2 : 34.123.21.32 0.326 - [30/Oct/2013:06:00:06 +0200] 45.231.43.21 6.334 - [30/Oct/2013:06:00:06 +0200] 45.231.43.21 3.673 - [30/Oct/2013:06:00:06 +0200] 34.123.21.32 4.754 - [30/Oct/2013:06:00:06 +0200] 21.34.67.98 1.765 - [30/Oct/2013:06:00:06 +0200] ... I want to search for the IP from file 1 line by line in file 2 and print time arguments (example: 0.326) to a new file. How can I do this?
Join + sort

If you're trying to find IPs that are present in both, you can use the join command, but you'll need to use sort to pre-sort the files prior to joining them:

$ join -o 2.2 <(sort file1) <(sort file2)
1.765
0.326
4.754
3.673
6.334

Another example, with file1a:

$ cat file1a
34.123.21.32
45.231.43.21
21.34.67.98
1.2.3.4
5.6.7.8
9.10.11.12

and file2a:

$ cat file2a
34.123.21.32 0.326 - [30/Oct/2013:06:00:06 +0200]
45.231.43.21 6.334 - [30/Oct/2013:06:00:06 +0200]
45.231.43.21 3.673 - [30/Oct/2013:06:00:06 +0200]
34.123.21.32 4.754 - [30/Oct/2013:06:00:06 +0200]
21.34.67.98 1.765 - [30/Oct/2013:06:00:06 +0200]
1.2.3.4 1.234 - [30/Oct/2013:06:00:06 +0200]
4.3.2.1 4.321 - [30/Oct/2013:06:00:06 +0200]

Running the join command:

$ join -o 2.2 <(sort file1a) <(sort file2a)
1.234
1.765
0.326
4.754
3.673
6.334

NOTE: The original order of file2 is lost with this method, because we sorted it first. In return, this method only needs to scan file2 a single time.

grep

You can use grep to search file2 for matches against the lines of file1, but this method isn't as efficient as the first one: it scans file2 once for each line of file1.

$ grep -f file1 file2 | awk '{print $2}'
0.326
6.334
3.673
4.754
1.765
1.234

Improving grep's performance

You can speed up grep by using this form:

$ LC_ALL=C grep -f file1 file2 | awk '{print $2}'

You can also tell grep that the strings in file1 are fixed strings (-F), which also helps performance:

$ LC_ALL=C grep -Ff file1 file2 | awk '{print $2}'

Generally in software you try to avoid this approach, since it's basically a loop-within-a-loop solution, but there are times when it's the best that can be achieved with a computer + software.

References: Linux tools to treat files as sets and perform set operations on them
compare files line by line and create new one bash programming
1,560,507,233,000
Because the input to join must be sorted, often the command is called similarly to: join <(sort file1) <(sort file2) This is not portable as it uses process substitution, which is not specified by POSIX. join can also use the standard input by specifying - as one of the file arguments. However, this only allows for sorting one of the files through a pipeline: sort file1 | join - <(sort file2) It seems there should be a simple way to accomplish sorting of both files and then joining the results using POSIX-specified features only. Perhaps something using redirection to a third file descriptor, or perhaps it will require created a FIFO. However, I'm having trouble visualizing it. How can join be used POSIXly on unsorted files?
You can do it with two named pipes (or of course you could use one named pipe and stdin):

mkfifo a b
sort file1 > a &
sort file2 > b &
join a b

Process substitution works essentially by setting up those fifos (using /dev/fd/ instead of named pipes where available). For example, in bash:

$ echo join <(sort file1) <(sort file2)
join /dev/fd/63 /dev/fd/62

Note how bash has substituted each process with a file name in /dev/fd. (Without /dev/fd/, new enough versions of zsh, bash, and ksh93 will use named pipes.) Bash leaves those descriptors open when invoking join, so when join opens them, it reads from the two sorts. You can see them passed with some lsof trickery:

$ sh -c 'lsof -a -d 0-999 -p $$; exit' <(sort file1) <(sort file2)
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sh 1894 anthony 0u CHR 136,5 0t0 8 /dev/pts/5
sh 1894 anthony 1u CHR 136,5 0t0 8 /dev/pts/5
sh 1894 anthony 2u CHR 136,5 0t0 8 /dev/pts/5
sh 1894 anthony 62r FIFO 0,10 0t0 237085 pipe
sh 1894 anthony 63r FIFO 0,10 0t0 237083 pipe

(The exit is to prevent a common optimization where the shell doesn't fork when there is only one command to run.)
Join two unsorted files with POSIX? [duplicate]
1,560,507,233,000
I have N files like so: file1.txt Header1,Header2,Header3,Header4,Header5 A,B,RANDOM,1,2 C,D,RANDOM,3,4 fileN.txt Header1,Header2,Header3,Header4,Header5 A,B,RANDOM,1,2 C,D,RANDOM,3,4 They all have the same headers. I would like to sum all of Header4 and Header5 based on Header1 and Header2. So all items with the A,B fields should sum Header4,Header5. To print something like A,B,2,4 C,D,6,8
Assuming ordering of the output is not a requirement...

awk '
BEGIN { FS=OFS=SUBSEP="," }
FNR>1 { s4[$1,$2]+=$4 ; s5[$1,$2]+=$5 }
END   { for (k in s4) print k, s4[k], s5[k] }
' file1 ... fileN

The FNR>1 guard skips each file's header line, so "Header1,Header2" does not turn up as a key in the sums.
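A runnable check with two identical copies of the sample file (the FNR>1 guard skips each file's header line so it is not summed as a key):

```shell
cd "$(mktemp -d)"

# Two input files with the same headers and rows, as in the question.
for f in f1 f2; do
  cat > "$f" <<'EOF'
Header1,Header2,Header3,Header4,Header5
A,B,RANDOM,1,2
C,D,RANDOM,3,4
EOF
done

# The order of "for (k in s4)" is unspecified, so sort the result.
out=$(awk 'BEGIN{FS=OFS=SUBSEP=","}
           FNR>1{s4[$1,$2]+=$4; s5[$1,$2]+=$5}
           END{for (k in s4) print k, s4[k], s5[k]}' f1 f2 | sort)
printf '%s\n' "$out"
```

Setting SUBSEP to "," makes the stored key $1,$2 print back out as "A,B" directly, which is why no re-splitting is needed in the END block.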
Awk sum csv columns based on fields
1,560,507,233,000
I need to match two unequal files using $1 of file 1 and $2 of file 2 and print $1 of file 2 on file 1. Input file 1 101 2 101 5 101 7 103 2 103 3 103 4 105 3 105 2 Input file 2 24 101 23 103 26 105 Desired output 101 2 24 101 5 24 101 7 24 103 2 23 103 3 23 103 4 23 105 3 26 105 2 26 I have tried the following code but it gave me incorrect output. awk 'FNR==NR{a[$2]=$0;next};{print a[$2]}' file2 file1
A classic job for join: join -1 1 -2 2 file1 file2 -1 1 specifies the field in the first file. -2 2 specifies the field in the second file.
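With the question's data this works directly, since both files already happen to be sorted on their join fields (file1 on column 1; file2's second column runs 101, 103, 105). join prints the join field first, then the remaining fields of each file, which matches the desired layout:

```shell
cd "$(mktemp -d)"

cat > file1 <<'EOF'
101 2
101 5
101 7
103 2
103 3
103 4
105 3
105 2
EOF

cat > file2 <<'EOF'
24 101
23 103
26 105
EOF

# Join on field 1 of file1 and field 2 of file2; each file1 row is
# paired with the single matching file2 row.
out=$(join -1 1 -2 2 file1 file2)
printf '%s\n' "$out"
```

If the real files are not sorted on those fields, pre-sort them (e.g. sort -k1,1 file1 and sort -k2,2 file2) first.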
How can I match a field in two different files and append together in output file?
1,560,507,233,000
I have two files. I want to compare the content of one file with the other one. If there is a matching line between both files then print the line and its line number in each file. Example: File 1: ABC PQR MNO XYZ File 2: qqqq wewe ABC acdd abcc nop MNO Expected output: ABC 1 3 MNO 3 7 ..
With awk you could process the first file, store the lines ($0) and their corresponding line numbers (NR) (as indices/values) in an associative array (l[$0]), then process the second file and, if a line is present as an array index, print it along with the value of l[$0] and the current line number (FNR):

awk 'FNR==NR{l[$0]=NR; next}; $0 in l{print $0, l[$0], FNR}' file1 file2
Print common lines between two files along with their line numbers in those files
1,560,507,233,000
I have: File 1 like: sting_of_printable_characters*sting_of_printable_characters*sting_of_printable_characters*ALPHANUMERIC_PATTERN File 2 like: sting_of_printable_characters*ALPHANUMERIC_PATTERN where * is a field separator and the alphanumeric pattern is always the last field in the line. I am completely stumped on how to achieve the following and would appreciate some assistance. I need to essentially "join" (I've tried the join command and it doesn't seem to work with alphanumeric keys) these two files based on "ALPHANUMERIC_PATTERN", and only print where both files contain the same ALPHANUMERIC_PATTERN. I would prefer to use awk due to it's processing efficiency but anything would be very helpful. (These files are large.) The catch is that I need to see the output similar to the below: ALPHANUMERIC_PATTERN*stuff_from_file_1*stuff_from_file_2
With join you could try it like this:

join -t\* \
  <(sed 's/\(.*\)\(\*\)\(.*\)/\3\2\1/' file1 | sort -t\* -k1,1) \
  <(sed 's/\(.*\)\(\*\)\(.*\)/\3\2\1/' file2 | sort -t\* -k1,1)

The two seds move the last field to the beginning of the line, e.g.

field1*field2*...field(N-1)*field(N)

becomes

field(N)*field1*field2*...*field(N-1)

The results are then sorted on the 1st field and joined (always on the 1st field). This will print lines like:

field(N)*fields(1)to(N-1)*from*file1*fields(1)to(N-1)*from*file2

If you prefer working with temporary files and saving the join result to e.g. outfile:

sed 's/\(.*\)\(\*\)\(.*\)/\3\2\1/' file1 | sort -t\* -k1,1 > sorted_1
sed 's/\(.*\)\(\*\)\(.*\)/\3\2\1/' file2 | sort -t\* -k1,1 > sorted_2
join -t\* sorted_{1,2} > outfile
rm -f sorted_{1,2}
Join (large) files on alphanumeric pattern
1,560,507,233,000
bash-3.2$ cat sample.log sample.log.1 sample.log.2 ID COL1 COL2 COL4 1 col1 col2 col4 2 c1 c2 c4 3 co1 co2 co4 ID COL3 COL1 1 col3 col1 2 c3 c1 3 co3 co1 ID COL1 COL2 COL3 1 col1 col2 col3 2 c1 c2 c3 3 co1 co2 co3 I need to write an awk script such that it gives me the values of the columns for a particular id like a select query on multiple tables in db. give me col1 col2 and col3 fields for id 1 and should not duplicate result. meaning the result should be like The result should be ID COL1 COL2 COL3 1 col1 col2 col3 but not The result should be ID COL1 COL2 COL3 COL3 1 col1 col2 col3 col3 Even a suggestion is also good. awk ' BEGIN { while ( (getline line < "sample.log") > 0 ) {ids[substr(line,1,index(line," ")-1)];} } { // get the column values here based on the stored id's .. } ' sample.log sample.log.1 sample.log.2 I am trying to do something like that mentioned above. I am not sure if it is a good idea.
You can use the join command to perform this task: join -1 1 -2 1 sample.log sample.log.1 -o 1.1,1.2,1.3,2.2 The output will be 'single space' separated, but you can use awk to reformat it to be column aligned. Note that the join input files must be sorted.
merging files and getting column values based on id field
1,560,507,233,000
I have file1 (sample): 60108903374 60172485121 60108919381 60174213128 60108919951 60108919970 601112020106 601112020107 601112020108 601112020113 601112020114 60175472940 And file2: 60179970001,A 60172681920,A 60174202041,A 60172514180,A 60174314679,A 60174325306,A 60175472940,A 60174213128,A 60175328984,A 60175349857,A 60172796759,A 60172798922,A 60179195129,A 60172485121,B 60173483126,A 60172683175,A 60174521828,A 60142536314,B 60175347909,B 60175183031,B I want to merge file1 and file2 with output matching based on the first column as well as shows the second column from file2. Desired output: 60172485121,B 60174213128,A file1 has ~80k lines and file2 has 500k lines. Tried using: join -1 1 -2 1 -o 1.1,2.2 file1 file2
join -t, <(sort file1) <(sort -t, file2) The above does the job.
How to merge two files of different lines and column and output matching lines with colums?
1,560,507,233,000
I have a file name "conf1" containing variables like: name='john' last='' custom='1000' and another file name conf2 like this: name='john' last='star' I want to merge between them to one file but in a way that the merged file contain the variable in the same order I source them. for example if I source conf1 and then conf2 the variable from conf 2 will override conf1. but I will also have the variable that I don't have in conf2. I want to merge and create 1 file from both of them with only the variables that are unique and was sources last. required output:conf3 name='john' last='star' custom='1000' is this possible?
You could do: $ awk -F= '{l[$1]=$0};END{for (i in l) print l[i]}' conf1 conf2 custom='1000' last='star' name='john' Note that the order of the lines in the output is not guaranteed (based on how awk stores the array internally in a hash table), but settings in conf2 will override those in conf1. where awk -F= ... conf1 conf2 call awk with = as separator on conf files. {l[$1]=$0} store definition of each var, newest overriding oldest END{ ... } in the end, (after all files processed) for (i in l) loop for all var, print l[i] and print it.
source multiple files and output one file
1,560,507,233,000
I want to perform what some data analysis software call an anti-join: remove from one list those lines matching lines in another list. Here is some toy data and the expected output: $ echo -e "a\nb\nc\nd" > list1 $ echo -e "c\nd\ne\nf" > list2 $ antijoincommand list1 list2 a b
I wouldn't use join for this because join requires input to be sorted, which is an unnecessary complication for such a simple job. You could instead use grep: $ grep -vxFf list2 list1 a b Or awk: $ awk 'NR==FNR{++a[$0]} !a[$0]' list2 list1 a b If the files are already sorted, an alternative to join -v 1 would be comm -23 $ comm -23 list1 list2 a b
How to do anti-join or inverse join in bash
1,560,507,233,000
I have a tab separated file that looks like this: 123 some text 123 some different text 334 some other text 341 more text and I want to do two things. One is to order everything numerically (this is easy to do) and the other is to remove a line if it's number is already present. I.e. the output would look like this: 123 some text 334 some other text 341 more text I tried getting a file of just the unique numbers, i.e. 123 334 341 and joining it with the original file with: join -j 1 justNumbers.txt original.txt but this gave me the original file back. Any ideas?
If you want to sort on, and test uniqueness of, the first field specifically, and your system has the GNU coreutils version of sort, then I think you could just use sort -nu file:

$ sort -nu file
123 some text
334 some other text
341 more text

From info coreutils 'sort invocation':

The commands sort -u and sort | uniq are equivalent, but this equivalence does not extend to arbitrary sort options. For example, sort -n -u inspects only the value of the initial numeric string when checking for uniqueness, whereas sort -n | uniq inspects the entire line.
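A sketch of the behavior (GNU sort assumed). When two lines share the same numeric key, which of them survives -u is not guaranteed, so the check below looks only at the surviving keys:

```shell
cd "$(mktemp -d)"

# Tab-separated sample, with a duplicated key 123.
printf '123\tsome text\n123\tsome different text\n334\tsome other text\n341\tmore text\n' > file

# -n compares the leading numeric string, -u keeps one line per key.
# Exactly which "123" line is kept is unspecified, so inspect keys only.
out=$(sort -nu file | cut -f1)
printf '%s\n' "$out"
```

This is why the join-against-unique-numbers attempt in the question returned the original file: join pairs every matching line, it does not deduplicate.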
Removing lines with a single common field
1,560,507,233,000
I have two files. file_1.txt looks like this: R1 C1 C2 C3 C4 C5 R2 C1 C2 C3 C4 C5 R3 C1 C2 C3 C4 C5 R4 C1 C2 C3 C4 C5 R5 C1 C2 C3 C4 C5 R6 C1 C2 C3 C4 C5 R7 C1 C2 C3 C4 C5 R8 C1 C2 C3 C4 C5 R9 C1 C2 C3 C4 C5 R10 C1 C2 C3 C4 C5 file_2.txt looks like this: R4 C4 C5 R6 C4 C5 R7 C4 C5 R9 C4 C5 I would like to replace the C4 and C5 values in file_1.txt by those that correspond to them from file_2.txt, while keeping the C1, C2, and C3 values in file_1.txt unchanged. So the resulting file_3.txt should look like this: R1 C1 C2 C3 C4 C5 R2 C1 C2 C3 C4 C5 R3 C1 C2 C3 C4 C5 R4 C1 C2 C3 C4_new C5_new R5 C1 C2 C3 C4 C5 R6 C1 C2 C3 C4_new C5_new R7 C1 C2 C3 C4_new C5_new R8 C1 C2 C3 C4 C5 R9 C1 C2 C3 C4_new C5_new R10 C1 C2 C3 C4 C5 All values are numbers. The first columns in file_1.txt and file_2.txt are key fields and are sorted in ascending numerical order. Is this something that join alone can do?
This problem makes for a typical application of awk awk 'NR == FNR{a1[$1]=$2; a2[$1]=$3; next}; $1 in a1{$5=a1[$1]; $6=a2[$1]};{print}' file_2.txt file_1.txt You may have to set the output field separator explicitly to tab, in which case awk -v OFS='\t' 'NR == FNR{a1[$1]=$2; a2[$1]=$3; next}; $1 in a1{$5=a1[$1]; $6=a2[$1]};{print}' file_2.txt file_1.txt
Join two files with different number of columns and rows
1,560,507,233,000
Table 1 (tab separated): NC_000001.11 1243 A T 0.14 NC_000005.11 1432 G C 0.0006 NC_000012.12 1284 A T 0.93428 NC_000021.9 9824 T C 0.9 Lookup table (tab separated) - this is in fact huge, around 6G gzipped: NC_000001.11 1243 rs73647921 A T NC_000005.11 1432 rs75444 G C NC_000012.12 1284 rs754723 A T NC_000021.9 9824 rs865545 T C I would like the output to match on the first 4 columns of table 1 which correspond to columns 1/2/4/5 of the lookup table; taking table 1 and transforming it into: MarkerName P-Value rs73647921 0.14 rs75444 0.0006 rs754723 0.93428 rs865545 0.9 I think I need to use join along the lines of: join -t, -a 1 -a 2 -o0,1.5,2.3 -e ' -' file1 file2 But this doesn't seem to work. How could I also use a gzipped file?
With awk (and bash) you can write awk ' BEGIN {FS = OFS = "\t"} NR == FNR {pvalue[$1,$2,$3,$4] = $5; next} FNR == 1 {print "MarkerName", "P-Value"} { key = $1 SUBSEP $2 SUBSEP $4 SUBSEP $5 sub(/\r$/, "", key) } key in pvalue {print $3, pvalue[key]} ' table1.tsv <(zcat lookup.tsv.gz) awk uses the SUBSEP variable to join comma-separated indices for arrays. That last bit of syntax around the zcat is a bash Process Substitution For multi-field join conditions, join can be painful to work with. It also complains if the files are not sorted.
Looking up values in one table and outputting it into another using join/awk
1,560,507,233,000
I have seven (or eight and so on) files with same number of lines. file1 1.001 1.002 1.003 1.004 file2 2.001 2.002 2.003 2.004 file3 3.001 3.002 3.003 3.004 etc. Desired output: 1.001;2.001;3.001;4.001;5.001;6.001;7.001 1.002;2.002;3.002;4.002;5.002;6.002;7.002 1.003;2.003;3.003;4.003;5.003;6.003;7.003 1.004;2.004;3.004;4.004;5.004;6.004;7.004 How to do it with short script in awk?
As steeldriver said, the reasonable way to do this is with paste: $ paste -d';' file* 1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.001 1.002;2.002;3.002;4.002;5.002;6.002;7.002;8.002 1.003;2.003;3.003;4.003;5.003;6.003;7.003;8.003 1.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 But, if you must use awk: $ awk '{a[FNR]=a[FNR](FNR==NR?"":";")$0} END{for (i=1;i<=FNR;i++) print a[i]}' file* 1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.001 1.002;2.002;3.002;4.002;5.002;6.002;7.002;8.002 1.003;2.003;3.003;4.003;5.003;6.003;7.003;8.003 1.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 The awk script keeps all the data in memory. If the files are large, this could be a problem. But, for this task, paste is better and simpler anyway. How it works In this script a is an array with a[i] being the output for line i. As we read through each of the subsequent files, we append the new information for line i to the end of a[i]. After we have finished reading the files, we print out the values in a. In more detail: a[FNR]=a[FNR](FNR==NR?"":";")$0 FNR is the line number of the current file we are reading and $0 are the contents of that line. This code adds $0 on to the end of a[FNR]. Except if we are still reading the first file, we put in a semicolon before $0. This is done using the complex looking ternary statement: (FNR==NR?"":";"). This is really just a if-then-else command. If we are reading the first file, that is if FNR==NR, then it returns an empty string "". If not, it returns a semicolon, ;. END{for (i=1;i<=FNR;i++) print a[i]} After we have finished reading all the files, this prints out the data that we have accumulated in array a.
Join seven files with awk line-by-line
1,560,507,233,000
I have two different files and I'd like to merge their information using the first column. File1.txt A,info1,info2 234,info3,info4 CD,info5,info6 File2.txt 234,ccc,bb CD,aaa,dd Expected output.csv A,info1,info2,, 234,info3,info4,ccc,bb CD,info5,info6,aaa,dd I tried awk (not my script), join, and grep, but I didn't obtain the desired result. awk -F "," 'FNR==NR {h[$1] = $2;next} BEGIN{ OFS = "\t"} {print $0,$2?h[$1]:"0"}' file1.txt prova2.txt and join -a 1 <(sort file1.txt) <( sort file2.txt) > output.csv Could someone help me, please?
If the number of fields in both files is the same, then you can use -o auto to fill up the number of fields in each line based on the first line of each file (by default it fills the missing fields with the value of the -e option, which defaults to the empty string, but you can change it to any string you want): $ join -t, -a1 -o auto <(sort file1) <(sort file2) 234,info3,info4,ccc,bb A,info1,info2,, CD,info5,info6,aaa,dd We also added -t, to specify the field separator for the input & output files. If you want to add lines which exist only in file2, add -a2 to the command. The above command is the shortened version of the command below, where we say explicitly which fields to output: join -t, -a1 -o0,1.2,1.3,2.2,2.3 <(sort file1) <(sort file2) -o #.k prints the kth field from file number #. -o 0 outputs the join field itself (needed so un-pairable lines keep their key). Or using awk: awk 'BEGIN{ FS=OFS=","; na="" } { key=$1; sub(/[^,]*,/, "") } NR==FNR { file1[key]=$0; next } (key in file1){ print key, file1[key], $0; delete file1[key] } END{ for(key in file1) print key, file1[key], na, na }' file1 file2 In the { key=$1; sub(/[^,]*,/, "") } action, we save the first column, then with sub() we remove that column by stripping the line up to the first comma character, so the remaining content will be the value for that key for later use.
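A reproducible sketch of the -o auto variant (GNU join required; it sorts into temp files instead of using process substitution and forces the C locale so the sort order is predictable):

```shell
cd "$(mktemp -d)"
export LC_ALL=C
printf 'A,info1,info2\n234,info3,info4\nCD,info5,info6\n' > file1
printf '234,ccc,bb\nCD,aaa,dd\n' > file2
sort file1 > file1.sorted
sort file2 > file2.sorted
# -o auto pads unpaired lines out to the field count of the first line of each file
out=$(join -t, -a1 -o auto file1.sorted file2.sorted)
printf '%s\n' "$out"
```

The unpaired A line comes out as A,info1,info2,, since the default fill value is the empty string.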
Merge two files using first column
1,560,507,233,000
I have multiple files which need to be merged based on the first column of each file. File1: foo 12 jhdfeg 25 kjfdgkl 37 File 2: foo 23 jhdfeg 45 File 3: foo 35 djhf 37 The output should look like this file1 file2 file3 foo 12 23 35 jhdfeg 25 45 0 kjfdgkl 37 0 0 djhf 0 0 37
perl -F'\s+' -lane ' $. == 1 and @ARGC = ($ARGV, @ARGV); # initialize the @ARGC array exists $h{$F[0]} or $h[keys %h] = $F[0]; # needed to remember order $h{$F[0]}->[@ARGC-@ARGV-1] = $F[1]; # populate hash END { $, = "\t"; # set the OFS to TAB print q//, @ARGC; # print the first line with filenames for my $key (@h) { # print remaining lines with data print $key, map { $h{$key}->[$_] // 0 } 0 .. $#ARGC; } } ' file1 file2 file3 # ... you can give as many files here Output file1 file2 file3 foo 12 23 35 jhdfeg 25 45 0 kjfdgkl 37 0 0 djhf 0 0 37
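If perl isn't to hand, the same idea can be sketched in awk. This is an assumption-laden sketch, not the accepted answer: f1/f2/f3 stand in for the question's files, and the header column names simply echo the file names:

```shell
cd "$(mktemp -d)"
printf 'foo 12\njhdfeg 25\nkjfdgkl 37\n' > f1
printf 'foo 23\njhdfeg 45\n' > f2
printf 'foo 35\ndjhf 37\n' > f3
out=$(awk '
  FNR == 1 { nf++; header = header "\t" FILENAME }       # one output column per file
  !($1 in seen) { seen[$1] = 1; order[++n] = $1 }        # remember first-seen key order
  { val[$1 "," nf] = $2 }                                # value of this key in file nf
  END {
    print header
    for (i = 1; i <= n; i++) {
      line = order[i]
      for (f = 1; f <= nf; f++)
        line = line "\t" (((order[i] "," f) in val) ? val[order[i] "," f] : 0)
      print line
    }
  }' f1 f2 f3)
printf '%s\n' "$out"
```

Missing key/file combinations are filled with 0, matching the desired output.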
Paste multiple files based on the first column into one single file
1,560,507,233,000
I need to combine two files into a single file with all columns from both files. I am providing my example files. File 1 chr loc T1 C1 chr1 100 2 3 chr1 200 3 4 chr2 100 1 4 chr2 400 3 1 File 2 chr loc T2 C2 chr1 100 1 2 chr1 300 4 1 chr2 100 7 5 chr2 500 1 9 and the output file should look like this output file chr loc T1 C1 T2 C2 chr1 100 2 3 1 2 chr1 200 3 4 0 0 chr1 300 0 0 4 1 chr2 100 1 4 7 5 chr2 400 3 1 0 0 chr2 500 0 0 1 9
join -a1 -a2 -e 0 -o 0,1.2,1.3,2.2,2.3 \ <(sed 's/ \+/_/' file1 | sort) \ <(sed 's/ \+/_/' file2 | sort) | sed 's/_/ /' | column -t | sort chr loc T1 C1 T2 C2 chr1 100 2 3 1 2 chr1 200 3 4 0 0 chr1 300 0 0 4 1 chr2 100 1 4 7 5 chr2 400 3 1 0 0 chr2 500 0 0 1 9 The trickiest part here is the reason for the sed calls -- join will only join on a single field, and here the join criterion is the first 2 fields. So, we have to combine those fields into a single word: I replace the first sequence of whitespace with an underscore so join will see chr1_100, chr1_200, etc. join requires its input files to be sorted. I use process substitution so that join can work with the sed|sort pipelines like files. Then another sed call to undo the combined field, and then column to make it pretty. By default, join uses the first field of each file as the key field. By default, join does an inner join: only keys present in both files are printed. The -a1 and -a2 options enable the full outer join we want. The -e option provides the default value for null fields, and we need the -o option to specify that we want all the fields. We can also use awk: awk ' {key = $1 OFS $2} NR == FNR {f1[key] = $3; f2[key] = $4; next} !(key in f1) {print $1, $2, 0, 0, $3, $4; next} {print key, f1[key], f2[key], $3, $4; delete f1[key]} END {for (key in f1) print key, f1[key], f2[key], 0, 0} ' file1 file2 | sort chr loc T1 C1 T2 C2 chr1 100 2 3 1 2 chr1 200 3 4 0 0 chr1 300 0 0 4 1 chr2 100 1 4 7 5 chr2 400 3 1 0 0 chr2 500 0 0 1 9
combine two files to single file with combined columns
1,560,507,233,000
I have four different files: file1, file2, file3, file4. Each file has 2 columns separated by a tab. I want to match the first column of file1 (as the reference) against the first column of the second, third, and fourth files, and print the matching first column together with the second column of every file in which that first column matches. The files look like: file 1 Bm1_00085|Bm1_22625 0.263974289 Bm1_00087|Bm1_22620 0.663443490 file 2 Bm1_00085|Bm1_22625 0 Bm1_57630|Bm1_52870 0 file 3 Bm1_57630|Bm1_54855 0 Bm1_00085|Bm1_22625 4 file 4 Bm1_57630|Bm1_52870 0 Bm1_00085|Bm1_22625 1 output: Bm1_00085|Bm1_22625 0.263974289 0 4 1
In perl, the tool for this job is a hash. A hash is a set of key-value pairs which makes this sort of cross referencing quite easy. Note - this will ONLY work if the first field is unique: #!/usr/bin/env perl use strict; use warnings; my %data; my $num_files = @ARGV; while (<>) { my ( $key, $value ) = split; push( @{ $data{$key} }, $value ); } foreach my $key ( sort keys %data ) { if ( @{ $data{$key} } >= $num_files ) { print join( "\t", $key, @{ $data{$key} } ), "\n"; } } Invoke as myscript.pl file1 file2 file3 file4. It: reads a list of files from the command line via <>, and opens them for processing. Iterates one line at a time, splitting the line into $key and $value. Stores $value in a hash of arrays. Runs through each key in the hash; if a key has at least as many elements as there were command line arguments (i.e. the number of files), it prints that line. (The file count is captured in $num_files up front, because reading via <> consumes @ARGV as it goes.) Output from this is: Bm1_00085|Bm1_22625 0.263974289 0 4 1 Note: Assumes unique 'keys' within all the files.
compare multiple files(more than two) with two different columns
1,560,507,233,000
I have 2 CSV files, one for each date (csv_2014_4_15 and csv_2014_4_16), with the basic structure and a couple of unique columns as below. id,name,created_at,updated_at,other columns 12, joe, 2013-1-1 18:30, 2014-2-1 12:00 56, bob, datetime, datetime I want to merge the 2 csv files based on these conditions. My pseudocode so far is as below. if (csv_date_x.id == csv_date_x+1.id) { if (csv_date_x.updated_at < csv_date_x+1.updated_at) add csv_date_x+1 row into out.csv } else { if (csv_date_x+1.created_at == TODAY (yyyy-mm-dd)) add csv_date_x+1 row into out.csv }
Try this: $ awk -F',' -v t="$(date +"%Y-%-m-%-d")" ' FNR == NR { u[$1] = $4; next; } $4 > u[$1] { print; next; } $3 ~ t ' file_1 file_2 Explanation We get today's date and save it in variable t. While reading file_1 (FNR == NR), we save the updated time of each id in the associative array u; the key is the id, the value is the updated time. While reading file_2: If the update date of the id ($4) is greater than the corresponding updated time saved in array u ($4 > u[$1]), we print the line and skip to the next line. If the above condition is false, we check whether the created date of the current line contains today's date, $3 ~ t, i.e. "2014-7-11 12:00" ~ /2014-7-11/; if it matches, print the line. (Note the operand order: the created_at field is the string being tested, and t is the pattern.)
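Here is a reproducible sketch of that approach, with a fixed date instead of $(date ...) and made-up rows so the result doesn't depend on when it is run; the last pattern is written $3 ~ t so the created_at string is tested against the date:

```shell
cd "$(mktemp -d)"
printf 'id,name,created_at,updated_at\n12,joe,2014-4-14 9:00,2014-4-14 9:00\n56,bob,2014-4-10 8:00,2014-4-10 8:00\n' > csv_2014_4_15
printf 'id,name,created_at,updated_at\n12,joe,2014-4-14 9:00,2014-4-16 7:30\n56,bob,2014-4-10 8:00,2014-4-10 8:00\n77,amy,2014-4-16 10:00,2014-4-16 10:00\n' > csv_2014_4_16
out=$(awk -F',' -v t="2014-4-16" '
  FNR == NR  { u[$1] = $4; next }   # remember updated_at per id from day 1
  $4 > u[$1] { print; next }        # updated since day 1: keep
  $3 ~ t                            # or created on date t: keep
' csv_2014_4_15 csv_2014_4_16)
printf '%s\n' "$out"
```

Note that a brand-new id (77 here) also passes the first test, since its saved time is the empty string.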
Merge csv files with conditions
1,560,507,233,000
I have a script equijoin2: #! /bin/bash # default args delim="," # CSV by default outer="" outerfile="" # Parse flagged arguments: while getopts "o:td:" flag do case $flag in d) delim=$OPTARG;; t) delim="\t";; o) outer="-a $OPTARG";; ?) exit;; esac done # Delete the flagged arguments: shift $(($OPTIND -1)) # two input files f1="$1" f2="$2" # cols from the input files col1="$3" col2="$4" join "$outer" -t "$delim" -1 "$col1" -2 "$col2" <(sort "$f1") <(sort "$f2") and two files $ cat file1 c c1 b b1 $ cat file2 a a2 c c2 b b2 Why does the last command fail? Thanks. $ equijoin2 -o 2 -d " " file1 file2 1 1 a a2 b b1 b2 c c1 c2 $ equijoin2 -o 1 -d " " file1 file2 1 1 b b1 b2 c c1 c2 $ equijoin2 -d " " file1 file2 1 1 join: extra operand '/dev/fd/62'
"$outer" is a quoted scalar variable so it always expands to one argument. If empty or unset, that still expands to one empty argument to join (and when you call your script with -o2, that's one -a 2 argument instead of the two arguments -a and 2). Your join is probably GNU join in that it accepts options after non-option arguments. That "$outer" is a non-option argument when empty as it doesn't start with - so is treated as a file name and join complains about the third file name provided which it doesn't expect. If you want a variable with a variable number of arguments, use an array: outer=() ... (o) outer=(-a "$OPTARG");; ... join "${outer[@]}" Though here you could also do: outer= ... (o) outer="-a$OPTARG";; ... join ${outer:+"$outer"} ... <(sort < "$f1") <(sort < "$f2") Or: unset -v outer ... (o) outer="$OPTARG";; ... join ${outer+-a "$outer"} ... (that one doesn't work in zsh except in sh/ksh emulation). Some other notes: join -t '\t' doesn't work. You'd need delim=$'\t' to store a literal TAB in $delim Remember to use -- when passing arbitrary arguments to commands (or use redirections where possible). So sort -- "$f1" or better sort < "$f1" instead of sort "$f1". arithmetic expansions are also subject to split+glob so should also be quoted (shift "$((OPTIND - 1))") (here not a problem though as you're using bash which doesn't inherit $IFS from the environment and you're not modifying IFS earlier in the script, but still good practice).
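The central point, that a quoted empty scalar still becomes one (empty) argument while an empty array expands to no arguments at all, is easy to check directly in bash:

```shell
count_args() { echo "$#"; }

empty_scalar=""
scalar_count=$(count_args "$empty_scalar")      # one empty argument

empty_array=()
array_count=$(count_args "${empty_array[@]}")   # zero arguments

outer=(-a 2)
outer_count=$(count_args "${outer[@]}")         # -a and 2 stay separate arguments
```

This is why the array form passes join exactly the options it needs and nothing extra.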
Why do I have "join: extra operand '/dev/fd/62'" error?
1,560,507,233,000
I have two files with columns separated by tabs, and I want to merge them. file a01 a= b= c= d= e= f= g= h= i= j= k= l= m= n= 0= file b01 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 desired output a=1 b=2 c=3 d=4 e=5 f=6 g=7 h=8 i=9 j=10 k=11 l=12 m=13 n=14 0=15 But when I run the command join a01 b01 it returns nothing. I don't know what I am doing wrong. Thanks in advance.
That's simply not how the join command works - it joins lines based on a common (matching) field, which your input files don't have. You could do something like this using paste and awk: paste a01 b01 | awk '{n=NF; for (i=n/2;i>0;i--) {$i = $i""$(i+n/2); NF--}} 1' a=1 b=2 c=3 d=4 e=5 f=6 g=7 h=8 i=9 j=10 k=11 l=12 m=13 n=14 0=15
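A reduced, runnable version of that pipeline (three tab-separated columns per file instead of fifteen; note the NF-- truncation relies on a modern awk such as gawk or mawk):

```shell
cd "$(mktemp -d)"
printf 'a=\tb=\tc=\n' > a01
printf '1\t2\t3\n' > b01
# paste glues the two lines together; awk then merges field i with field i+n/2
out=$(paste a01 b01 | awk '{n=NF; for (i=n/2;i>0;i--) {$i = $i""$(i+n/2); NF--}} 1')
printf '%s\n' "$out"
```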
join returns nothing
1,560,507,233,000
I asked a similar question yesterday (Merging two tables including multiple ocurrence of column identifiers) but ran into a problem with unique lines. I would like to merge two tables based on column 1: File 1: 1 today 1 green 2 tomorrow 3 red File 2: 1 a lot 1 sometimes 2 at work 2 at home 2 sometimes 3 new 4 a lot 5 sometimes 6 at work Desired output (file 3): 1 today a lot 1 today sometimes 1 green a lot 1 green sometimes 2 tomorrow at work 2 tomorrow at home 2 tomorrow sometimes 3 red new I came up with the following: awk -F '[\t]' -v OFS='\t' '{i=$1;$1=x} NR==FNR{A[i]=$0;next} A[i]{print i,$0A[i]}' file2 file1 > file3 However, it gives me only: 1 today sometimes 2 tomorrow sometimes 3 red new Please note that the solutions in the previous thread (join and awk) would give me a combination of the 2 files including all lines. I would like to have only the lines of file 1 (column 1 as the identifier) but report all matching occurrences in file 2. Edit: columns are tab separated Real File 1: fig|395961.4.peg.2627 Bacteria Cyanobacteria unknown unknown 1795 (Column1: fig... Column2: Bacteria... Column3 1795) Real File 2: fig|1000561.3.peg.1838 Cysteine desulfurase (EC 2.8.1.7) Test - Thiamin Cofactors, Vitamins, Prosthetic Groups, Pigments (Column1: fig... Column2: Cysteine... Column3 Test...)
I would do this in Perl: #!/usr/bin/env perl use strict; my (%file1,%file2); ## Open the 1st file open(A,"file1"); while(<A>){ ## Remove trailing newlines chomp; ## Split the current line on tabs into the @F array. my @F=split(/\t/); ## This is the tricky part. It adds fields 2-last ## to the hash $file1. The value of this hash is an array ## and the keys are the 1st fields. This will result in a list ## of all 1st fields and all their associated columns. push @{$file1{$F[0]}},@F[1..$#F]; } ## Open the 2nd file open(B,"file2"); while(<B>){ ## Remove trailing newlines chomp; ## Split the current line on tabs into the @F array. my @F=split(/\t/); ## If the current 1st field was found in file1 if (defined($file1{$F[0]})) { ## For each of the columns associated with ## this 1st field in the 1st file. foreach my $col (@{$file1{$F[0]}}) { print "$F[0]\t$col\t@F[1..$#F]\n"; } } } You could golf it into a (long) one-liner: $ perl -lane 'BEGIN{open(A,"file1"); while(<A>){chomp; @F=split(/\t/); push @{$k{$F[0]}},@F[1..$#F];} } $k{$F[0]} && print "$F[0]\t@{$k{$F[0]}}\t@F[1..$#F]"' file2 1 today green a lot 1 today green sometimes 2 tomorrow at work 2 tomorrow at home 2 tomorrow sometimes 3 red new If you're working with huge files, let it run a while.
Merging two tables including multiple occurrences of column identifiers and unique lines
1,560,507,233,000
I have 2 sets of files. File one contains IDs, e.g.: 1111 2222 6666 3333 4444 File two contains IDs and usernames: 1873 Neil 1111 Roger 7632 Tim 3333 Oscar 8723 Greg 4444 Roy 6666 Patrick I want to extract the ID and username, but only the ones that have the same ID as in file 1. I did the normal grep -f file1 file2 on two test files I made, with a few IDs such as the ones I just posted. However, when I apply this to the two proper files, where file1 contains 3500 IDs and file2 contains 12000 IDs + usernames, instead of extracting the 3500 lines that occur in both files, it extracts 12000 lines. With the 2 test files and a few dummy IDs, though, it will only extract the correct IDs and leave the others. Any tip on what's wrong?
Try doing this using join instead of grep; it will be more suitable: $ join <(sort file1) <(sort file2) 1111 Roger 3333 Oscar 4444 Roy 6666 Patrick If your shell lacks process substitutions <( ), you can do: sort file1 > new_file1 sort file2 > new_file2 join new_file1 new_file2 The doc says: join writes to standard output a line for each pair of input lines that have identical join fields. See http://www.gnu.org/software/coreutils/manual/html_node/join-invocation.html Notes: The files need to be sorted on the join key for join to work properly; that's why we use file descriptors in the background via process substitutions. See http://mywiki.wooledge.org/ProcessSubstitution , or http://mywiki.wooledge.org/BashFAQ/024 for a common use.
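The question doesn't show the real failing data, but a plausible reason for grep -f over-matching is that each ID is treated as an unanchored pattern, so 111 also hits 11123; this sketch (with invented sample lines) shows the difference, and that join sidesteps it:

```shell
cd "$(mktemp -d)"
export LC_ALL=C
printf '111\n444\n' > ids
printf '11123 Neil\n111 Roger\n444 Roy\n' > users

loose=$(grep -cf ids users)    # 3 lines: pattern "111" also matches inside "11123"
strict=$(grep -cwFf ids users) # 2 lines: -w anchors on word boundaries, -F takes IDs literally
joined=$(join <(sort ids) <(sort users))
printf '%s\n' "$joined"
```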
Matching two files for similar first line
1,560,507,233,000
I want to compare two CSV files with the following format. They do not have headers. I want to compare them by a specific column (in this case, the 2nd one). The source CSV files are around 4-5 GB, so loading them into memory won't work. If there's no matching column in old.csv, then the new line is written into out.csv. This 2nd column will be an HTML link; for the sake of simplicity it's one word only here. My question: is it possible to achieve the same result with sed, awk, join, or grep? old.csv "person"|"john"|"smith" "person"|"anne"|"frank" "person"|"bob"|"macdonald" "fruit"|"orange"|"banana" "fruit"|"strawberry"|"fields" "fruit"|"ringring"|"banana" new.csv "person"|"john"|"smith" "person"|"anne"|"frank" "person"|"bob"|"macdonald" "fruit"|"orange"|"banana" "fruit"|"strawberry"|"fields" "glider"|"person"|"airport" "fruit"|"ringring"|"banana" "glider"|"person2"|"airport" diff.py
#!/usr/bin/env python3
"""
Source: https://gist.github.com/davidrleonard/4dbeebf749248a956e44
Usage:
  $ ./csv-difference.py -d new.csv -s old.csv -o out.csv -c 1
"""

import sys
import argparse
import csv

def main():
    parser = argparse.ArgumentParser(description='Output difference in CSVs.')
    parser.add_argument('-d', '--dataset', help='A CSV file of the full dataset', required=True)
    parser.add_argument('-s', '--subset', help='A CSV file that is a subset of the full dataset', required=True)
    parser.add_argument('-o', '--output', help='The CSV file we should write to (will be overwritten if it exists', required=True)
    parser.add_argument('-c', '--column', help='A number of the column to be compared (0 is column 1, 1 is column 2, etc.)', required=True, type=int)
    args = parser.parse_args()
    dataset_file = args.dataset
    subset_file = args.subset
    output_file = args.output
    column_num = args.column
    with open(dataset_file, 'r') as datafile, open(subset_file, 'r') as subsetfile, open(output_file, 'w') as outputfile:
        data = {row[column_num]: row for row in csv.reader(datafile, delimiter='|', quotechar='"')}
        subset = {row[column_num]: row for row in csv.reader(subsetfile, delimiter='|', quotechar='"')}
        data_keys = set(data.keys())
        subset_keys = set(subset.keys())
        output_keys = data_keys - subset_keys
        output = [data[key] for key in output_keys]
        output_csv = csv.writer(outputfile, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
        for row in output:
            output_csv.writerow(row)

if __name__ == '__main__':
    main()
    sys.stdout.flush()
Which is generating out.csv "glider"|"person"|"airport" "glider"|"person2"|"airport"
Super simple with awk: $ awk -F'|' 'NR == FNR {old[$2]; next} !($2 in old)' old.csv new.csv "glider"|"person"|"airport" "glider"|"person2"|"airport" That stores the 2nd field of the old.csv file in the array named "old", and then for the new.csv file, it will print records where the 2nd field is not in the "old" array. It is true that this will not respect any pipe character within quotes. For that, I like ruby's csv module: ruby -rcsv -e ' old_col2 = [] old_data = CSV.foreach("./old.csv", :col_sep => "|") do |row| old_col2 << row[1] end CSV.foreach("./new.csv", :col_sep => "|") do |row| if not old_col2.include?(row[1]) puts CSV.generate_line(row, :col_sep => "|", :force_quotes => true) end end '
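A runnable sketch of the awk anti-join, on a trimmed version of the question's files:

```shell
cd "$(mktemp -d)"
printf '"person"|"john"|"smith"\n"fruit"|"orange"|"banana"\n' > old.csv
printf '"person"|"john"|"smith"\n"glider"|"person"|"airport"\n"fruit"|"orange"|"banana"\n"glider"|"person2"|"airport"\n' > new.csv
# print lines of new.csv whose 2nd field never appeared as a 2nd field in old.csv
out=$(awk -F'|' 'NR == FNR {old[$2]; next} !($2 in old)' old.csv new.csv)
printf '%s\n' "$out"
```

Only the two glider records survive, since their 2nd fields are absent from old.csv.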
Merging two CSV compared by a specific column only
1,560,507,233,000
I've been trying to use awk to merge two files when they have the same first column. Here are my example files: FileA.txt A2M 1 A4GALT 11 AAAS 35 AAGAB 7 FileB.txt A4GALT 2 AAAS 17 AAGAB 7 As you can see, the second file is missing the entry for A2M. If I am missing an entry, then I want the entry to read 0 in the final output. Like so: A2M 1 0 A4GALT 11 2 AAAS 35 17 AAGAB 7 7 My lab mate suggested that I use awk since join isn't working properly for me. With some help, I've come up with this awk command: awk -F "\t" 'FNR==NR {h[$1] = $2;next} BEGIN{ OFS = "\t"} {print $0,$2?h[$1]:"0"}' FileB.txt FileA.txt However, the output does not contain the 0 when there isn't a match in FileB.txt; instead it prints nothing there. Any idea about what's going wrong?
If you join two files, it's a job for join: join -1 1 -2 1 -a 1 -o 1.1 -o 1.2 -o 2.2 -e "0" FileA.txt FileB.txt Where: -1 1 -2 1 defines the fields to join (in both files the 1st) -a 1 to force join to print the unpairable lines from FileA.txt -o 1.1 1.2 2.2 is the output format and -e "0" defines the value to store in an empty field The output: A2M 1 0 A4GALT 11 2 AAAS 35 17 AAGAB 7 7
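That join command can be replayed as-is (GNU join; tab-separated input files, both already in sorted order, which join requires):

```shell
cd "$(mktemp -d)"
export LC_ALL=C
printf 'A2M\t1\nA4GALT\t11\nAAAS\t35\nAAGAB\t7\n' > FileA.txt
printf 'A4GALT\t2\nAAAS\t17\nAAGAB\t7\n' > FileB.txt
# -a 1 keeps unpairable FileA lines; -e "0" fills the missing 2.2 field with 0
out=$(join -1 1 -2 1 -a 1 -o 1.1 -o 1.2 -o 2.2 -e "0" FileA.txt FileB.txt)
printf '%s\n' "$out"
```

The output fields are separated by single spaces, join's default when no -t is given.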
Merge lines with matching first column
1,560,507,233,000
I have a requirement to compare the first column between 2 pipe-delimited files, and if they match, replace the 3rd column in File 1 with the 4th column of File 2. File 1: 111|xyz|23345 222|abc|123 333|xyz|45667 444|xyz|5432 555|xyz|8976 File 2: 111|xyz|344|rtms 222|abc|222|xyzw 666|xyz|ggg|abde 888|xyz|ff|nnnn 333|xyz|dd|abde 444|xyz|vv|nnnn 555|xyz|bbb|uuyytt Output File: 111|xyz|rtms 222|abc|xyzw 333|xyz|abde 444|xyz|nnnn 555|xyz|uuyytt
One-liner without the need for awk and a temporary file: join -t '|' -j1 -o 1.1 1.2 2.4 <(sort -t'|' -k1,1 file1) <(sort -t '|' -k1,1 file2) Using both join and awk: First, sort file2 based on the 1st field and save it in file2.sort sort -k 1 file2 > file2.sort Now, using "|" as the delimiter, join file1 and file2.sort. Then again using "|" as the delimiter, extract the necessary column using awk. join -t '|' file1 file2.sort | awk -F "|" ' {print $1"|"$2"|"$6}' The output will be: ron@ron:~$ join -t '|' file1 file2.sort | awk -F "|" ' {print $1"|"$2"|"$6}' 111|xyz|rtms 222|abc|xyzw 333|xyz|abde 444|xyz|nnnn 555|xyz|uuyytt
Compare 1st Column in 2 Files and Replace 3rd Column of File 1 with 4th Column of File 2
1,350,877,526,000
I am having some issues with a script that is using join to join two files. Example input files contain lines like this. Here are the input files and the output of the join command: D:\work\BuildScripts\3C>cat D:\temp\aaa.txt hzapplications\adn\adn4\adn4density\adn4_idd_module.cpp,83 hzapplications\adn\adn4\adn4density\adn4dencalmodule.cpp,73 hzapplications\adn\adn4\adn4density\adn4denimagemodulerm.cpp,111 hzapplications\adn\adn4\adn4density\adn4denimagemodulert.cpp,202 hzapplications\adn\adn4\adn4density\adn4densityanqmodules.cpp,445 hzapplications\adn\adn4\adn4density\adn4densityappl.cpp,378 hzapplications\adn\adn4\adn4density\adn4densityappl.h,50 hzapplications\adn\adn4\adn4density\adn4densityevrmodules.cpp,272 hzapplications\adn\adn4\adn4density\adn4densitykernel.cpp,490 hzapplications\adn\adn4\adn4density\adn4densitykernel.h,65 hzapplications\adn\adn4\adn4density\adn4densitysecimgmodule.cpp,209 hzapplications\adn\adn4\adn4density\adn4densitysecimgmodule.h,70 hzapplications\adn\adn4\adn4density\adn4densitysecmodule.cpp,218 hzapplications\adn\adn4\adn4density\adn4densitysecmodule.h,70 hzapplications\adn\adn4\adn4density\adn4dphimodules.cpp,610 hzapplications\adn\adn4\adn4density\adn4dphimodulesrt.cpp,115 hzapplications\adn\adn4\adn4density\adn4rhomodulesrt.cpp,102 D:\work\BuildScripts\3C>cat D:\temp\bbb.txt hzapplications\activect\ptc\ictsx01\ictsx01_bootuptask.cpp,1 hzapplications\activeps\iola\acquisition\iola_acqmodule.cpp,4 hzapplications\activeps\iola\simulation\iola_simmodule.cpp,3 hzapplications\activeps\iolr\simulation\iolr_simmodule.cpp,1 hzapplications\activeps\iolr\task\iolr_poweron200vhitask.cpp,1 hzapplications\activeps\iolr\task\iolr_poweron200vlowtask.cpp,1 hzapplications\activeps\iolr\task\iolr_poweronnrlvtask.cpp,1 hzapplications\activeps\iolr\task\iolrtaskcommon.cpp,2 hzapplications\adn\adn4\adn4density\adn4densitykernel.cpp,1 hzapplications\adn\adn4\adn4equipment\adn4adseelem.cpp,1 hzapplications\adn\adn4\adn4equipment\adn4collar.cpp,1
hzapplications\adn\adn4\adn4equipment\adn4tool.cpp,2 hzapplications\adn\adn6c\adn6cequipment\adn6ccollar.cpp,1 hzapplications\adn\adn8\adn8equipment\adn8tool.cpp,1 hzapplications\adn\adn8\adn8neutron\adn8neutronkernel.cpp,1 hzapplications\adn\adn8d\adn8ddensity\adn8ddensitykernel.cpp,1 hzapplications\adn\adn8d\adn8dequipment\adn8dtool.cpp,1 D:\work\BuildScripts\3C>join --ignore-case -1 1 -2 1 -t"," -o "1.1,1.2,2.2" -e "0" -a 1 D:\temp\aaa.txt D:\temp\bbb.txt hzapplications\adn\adn4\adn4density\adn4_idd_module.cpp,83,0 hzapplications\adn\adn4\adn4density\adn4dencalmodule.cpp,73,0 hzapplications\adn\adn4\adn4density\adn4denimagemodulerm.cpp,111,0 hzapplications\adn\adn4\adn4density\adn4denimagemodulert.cpp,202,0 hzapplications\adn\adn4\adn4density\adn4densityanqmodules.cpp,445,0 hzapplications\adn\adn4\adn4density\adn4densityappl.cpp,378,0 hzapplications\adn\adn4\adn4density\adn4densityappl.h,50,0 hzapplications\adn\adn4\adn4density\adn4densityevrmodules.cpp,272,0 hzapplications\adn\adn4\adn4density\adn4densitykernel.cpp,490,0 hzapplications\adn\adn4\adn4density\adn4densitykernel.h,65,0 hzapplications\adn\adn4\adn4density\adn4densitysecimgmodule.cpp,209,0 hzapplications\adn\adn4\adn4density\adn4densitysecimgmodule.h,70,0 hzapplications\adn\adn4\adn4density\adn4densitysecmodule.cpp,218,0 hzapplications\adn\adn4\adn4density\adn4densitysecmodule.h,70,0 hzapplications\adn\adn4\adn4density\adn4dphimodules.cpp,610,0 hzapplications\adn\adn4\adn4density\adn4dphimodulesrt.cpp,115,0 hzapplications\adn\adn4\adn4density\adn4rhomodulesrt.cpp,102,0 D:\work\BuildScripts\3C> The expected output is that this particular line is joined like so: hzapplications\adn\adn4\adn4density\adn4densitykernel.cpp,490,1 Any suggestions are most welcome. I am using the unxutils package on Windows; this is the exact version: D:\work\BuildScripts\3C>join --version join (GNU textutils) 2.0 Written by Mike Haertel. Copyright (C) 1999 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
It turns out that --ignore-case is the problem. It has an effect even when there are no uppercase letters because it treats all lowercase letters as uppercase, causing them to jump to the other side of the characters that are between the uppercase and lowercase in ASCII order: [\]^_ In normal sorted order, iolrt comes after iolr_ but in --ignore-case order they are reversed. The sort command needs the -f option to produce the correct order (in addition to -t and -k1,1).
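The reordering described above is easy to demonstrate: in the C locale, plain sort puts iolr_ before iolrt (underscore is 0x5F, t is 0x74), while case-folding sort -f compares IOLR_ against IOLRT and reverses them (T is 0x54):

```shell
plain=$(printf 'iolrt\niolr_\n' | LC_ALL=C sort | head -n 1)
folded=$(printf 'iolrt\niolr_\n' | LC_ALL=C sort -f | head -n 1)
# plain order:  iolr_  then  iolrt
# folded order: iolrt  then  iolr_
```

This is why input fed to join --ignore-case needs to be sorted with sort -f.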
Using join with two files fails on larger file sizes
1,350,877,526,000
My files: file1.txt ========= key1 key1 key1 key1 key2 key2 key3 key3 key3 key4 key4 file2.txt ========= key1 22 key2 23 key3 24 Expected Output: ================== key1 22 key1 22 key1 22 key1 22 key2 23 key2 23 key3 24 key3 24 key3 24 All solutions that I found do not duplicate matched strings. awk '{a[$1]=a[$1]" "$2} END{for(i in a)print i, a[i]}' join -a 1 What needs to be modified in these approaches to result in a left outer join?
Awk solution: awk 'NR==FNR{ a[$1]=$2; next }$1 in a{ $2=a[$1]; print }' file2.txt file1.txt The output: key1 22 key1 22 key1 22 key1 22 key2 23 key2 23 key3 24 key3 24 key3 24 Or simply with join command: join -o1.1,2.2 file1.txt file2.txt
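Since join emits one output line per pair of matching input lines, the duplicated keys in file1.txt come out duplicated, which is the behaviour asked for; a small runnable check (fewer duplicates than the question, for brevity):

```shell
cd "$(mktemp -d)"
printf 'key1\nkey1\nkey2\nkey3\n' > file1.txt
printf 'key1 22\nkey2 23\nkey3 24\n' > file2.txt
out=$(join -o1.1,2.2 file1.txt file2.txt)
printf '%s\n' "$out"
```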
LEFT (OUTER) JOIN [duplicate]
1,350,877,526,000
I have many files like following in a directory "results" 58052 results/TB1.genes.results 198003 results/TB1.isoforms.results 58052 results/TB2.genes.results 198003 results/TB2.isoforms.results 58052 results/TB3.genes.results 198003 results/TB3.isoforms.results 58052 results/TB4.genes.results 198003 results/TB4.isoforms.results For eg: TB1.genes.results file looks like following: gene_id transcript_id(s) length effective_length expected_count TPM FPKM ENSG00000000003 ENST00000373020,ENST00000494424,ENST00000496771,ENST00000612152,ENST00000614008 2206.00 1997.20 1.00 0.00 0.01 ENSG00000000005 ENST00000373031,ENST00000485971 940.50 731.73 0.00 0.00 0.00 ENSG00000000419 ENST00000371582,ENST00000371584,ENST00000371588,ENST00000413082,ENST00000466152,ENST00000494752 977.15 768.35 1865.00 14.27 37.82 ENSG00000000457 ENST00000367770,ENST00000367771,ENST00000367772,ENST00000423670,ENST00000470238 3779.11 3570.31 1521.00 2.50 6.64 ENSG00000000460 ENST00000286031,ENST00000359326,ENST00000413811,ENST00000459772,ENST00000466580,ENST00000472795,ENST00000481744,ENST00000496973,ENST00000498289 1936.74 1727.94 1860.00 6.33 16.77 ENSG00000000938 ENST00000374003,ENST00000374004,ENST00000374005,ENST00000399173,ENST00000457296,ENST00000468038,ENST00000475472 2020.10 1811.30 6846.00 22.22 58.90 ENSG00000000971 ENST00000359637,ENST00000367429,ENST00000466229,ENST00000470918,ENST00000496761,ENST00000630130 2587.83 2379.04 0.00 0.00 0.00 ENSG00000001036 ENST00000002165,ENST00000367585,ENST00000451668 1912.64 1703.85 1358.00 4.69 12.42 ENSG00000001084 ENST00000229416,ENST00000504353,ENST00000504525,ENST00000505197,ENST00000505294,ENST00000509541,ENST00000510837,ENST00000513939,ENST00000514004,ENST00000514373,ENST00000514933,ENST00000515580,ENST00000616923 2333.50 2124.73 1178.00 3.26 8.64 Other files also has the same columns. To join all "genes.results" with "gene_id" and "expected_count" columns into one text file I gave the following command. 
paste results/*.genes.results | tail -n+2 | cut -f1,5,12,19,26 > final.genes.rsem.txt [-f1 (gene_id), 5 (expected_count column from TB1.genes.results), 12 (expected_count column from TB2.genes.results), 19 (expected_count column from TB3.genes.results), 26 (expected_count column from TB4.genes.results)] "final.genes.rsem.txt" has the selected gene_id and expected_count columns from every file. ENSG00000000003 1.00 0.00 3.00 2.00 ENSG00000000005 0.00 0.00 0.00 0.00 ENSG00000000419 1865.00 1951.00 5909.00 8163.00 ENSG00000000457 1521.00 1488.00 849.00 1400.00 ENSG00000000460 1860.00 1616.00 2577.00 2715.00 ENSG00000000938 6846.00 5298.00 1.00 2.00 ENSG00000000971 0.00 0.00 6159.00 7069.00 ENSG00000001036 1358.00 1186.00 6196.00 7009.00 ENSG00000001084 1178.00 1186.00 631.00 1293.00 My question is: as I have only a few samples, I gave the column numbers in the command [like this in "cut": -f1,5,12,19,26]. What should I do if I have more than 100 samples? How can I join them with the required columns?
GNU awk is used. I put this command in a bash script, which is more convenient. Usage: ./join_files.sh or, for pretty printing: ./join_files.sh | column -t. #!/bin/bash gawk ' NR == 1 { PROCINFO["sorted_in"] = "@ind_num_asc"; header = $1; } FNR == 1 { file = gensub(/.*\/([^.]*)\..*/, "\\1", "g", FILENAME); header = header OFS file; } FNR > 1 { arr[$1] = arr[$1] OFS $5; } END { print header; for(i in arr) { print i arr[i]; } }' results/*.genes.results Output (I created three files with the same content for testing) $ ./join_files.sh | column -t gene_id TB1 TB2 TB3 ENSG00000000003 1.00 1.00 1.00 ENSG00000000005 0.00 0.00 0.00 ENSG00000000419 1865.00 1865.00 1865.00 ENSG00000000457 1521.00 1521.00 1521.00 ENSG00000000460 1860.00 1860.00 1860.00 ENSG00000000938 6846.00 6846.00 6846.00 ENSG00000000971 0.00 0.00 0.00 ENSG00000001036 1358.00 1358.00 1358.00 ENSG00000001084 1178.00 1178.00 1178.00
FNR == 1 { # remove from the filename all unneeded parts by the "gensub" function # was - results/TB1.genes.results # become - TB1 file = gensub(/.*\/([^.]*)\..*/, "\\1", "g", FILENAME); # and add it to the header variable, concatenating it with the # previous content of the header, using OFS as delimiter. # OFS - the output field separator, a space by default. header = header OFS file; } # some trick is used here. # $1 - the first column value - "gene_id" # $5 - the fifth column value - "expected_count" FNR > 1 { # create array with "gene_id" indexes: arr["ENSG00000000003"], arr["ENSG00000000419"], so on. # and add "expected_count" values to it, separated by OFS. # each time, when the $1 equals to the specific "gene_id", the $5 value will be # added into this array item. # Example: # arr["ENSG00000000003"] = 1.00 # arr["ENSG00000000003"] = 1.00 2.00 # arr["ENSG00000000003"] = 1.00 2.00 3.00 arr[$1] = arr[$1] OFS $5; } END { print header; for(i in arr) { print i arr[i]; } }' results/*.genes.results
How to join files with required columns in linux?
1,350,877,526,000
I have two very large text files with space-delimited fields: File1 527858 51 2 27.92464882 8.63E-07 570289 82 2 30.12532071 2.87E-07 571034 90 2 29.26089611 4.43E-07 571033 90 2 28.56723908 6.26E-07 452403 104 2 28.27577506 7.24E-07 351390 100 2 28.16226794 7.67E-07 527858 50 2 27.92464882 8.63E-07 File2 527858 rs435 570289 rs564 571034 rs654 571033 rs345 452403 rs665 351390 rs787 527858 rs435 output: rs435 51 2 27.92464882 8.63E-07 rs564 82 2 30.12532071 2.87E-07 rs654 90 2 29.26089611 4.43E-07 rs345 90 2 28.56723908 6.26E-07 rs665 104 2 28.27577506 7.24E-07 rs787 100 2 28.16226794 7.67E-07 rs435 50 2 27.92464882 8.63E-07 Compare the first column of file1 and file2 and replace the first column of file1 with names in 2nd column of file2.
Here's the same basic idea as the awk in Archemar's answer, implemented in Perl: $ perl -lane '$#F>1?print"$l{$F[0]} @F[1..$#F]":($l{$F[0]}=$F[1])' file2 file1 rs435 51 2 27.92464882 8.63E-07 rs564 82 2 30.12532071 2.87E-07 rs654 90 2 29.26089611 4.43E-07 rs345 90 2 28.56723908 6.26E-07 rs665 104 2 28.27577506 7.24E-07 rs787 100 2 28.16226794 7.67E-07 rs435 50 2 27.92464882 8.63E-07 Explanation -lane: the -l adds a newline to each print call and removes trailing newlines from each line of input. The -a makes perl act like awk: it will automatically split each input line into the array @F. So, the 1st field will be $F[0], the second $F[1] etc. The -n tells perl to read its input file(s) line by line and apply the script given with -e to each of them. $#F>1? ... : ... : this is a C-style conditional operator. The general format is condition ? foo : bar which means "if condition is true, do foo and if it isn't, do bar. The $#F is the number of array indices in the array @F. Since arrays begin at 0, a value of 1 means an array with two elements. So, this will execute the first block (print ..., see below) if there are more than 2 elements in the array, which will only be true for file1. ($l{$F[0]}=$F[1]) : this is executed for each line of file2, for each line with less than 3 fields. It populates the hash %l, whose keys are the numerical first fields of file2 and whose values are the associated rsIDs. print"$l{$F[0]} @F[1..$#F]" : print the rsID saved in the hash %l for this first field ($l{$F[0]}), a space, and then the rest of the fields in this line ($F[1..$#F]). Personally, I would probably use the awk solution or, at worst, the perl one I provided since they don't need the files to be sorted. 
However, since you tagged with join, here's how to do it using that tool: $ join -o 2.2 1.2 1.3 1.4 1.5 <(sort file1) <(sort file2) rs787 100 2 28.16226794 7.67E-07 rs665 104 2 28.27577506 7.24E-07 rs435 50 2 27.92464882 8.63E-07 rs435 50 2 27.92464882 8.63E-07 rs435 51 2 27.92464882 8.63E-07 rs435 51 2 27.92464882 8.63E-07 rs564 82 2 30.12532071 2.87E-07 rs345 90 2 28.56723908 6.26E-07 rs654 90 2 29.26089611 4.43E-07
match columns and replace
I have an invnentory file with a group of servers defined [prod] prod1 ansible_host=10.10.2.30 prod2 ansible_host=10.10.2.40 prod3 ansible_host=10.10.2.50 prod4 ansible_host=10.10.2.60 I want to define a variable that concatenates all servers in prod group delimited by comma. lets say variable is list, variable should contain values like list: "prod1_IP,prod2_IP,prod3_IP,prod4_IP" list: "10.10.2.30,10.10.2.40,10.10.2.50,10.10.2.60" one could easily put these manually, but catch is, list is supposed to expand in different scenrios. I guess one can use hostvars[item].ansible_host with for loop. but not sure how
The playbook below does the job - hosts: prod gather_facts: false tasks: - block: - set_fact: list1: "{{ ansible_play_hosts_all|product(['_IP'])|map('join')| join(',') }}" - debug: var: list1 - set_fact: list2: "{{ ansible_play_hosts_all|map('extract', hostvars, 'ansible_host')| join(',') }}" - debug: var: list2 run_once: true gives list1: prod1_IP,prod2_IP,prod3_IP,prod4_IP list2: 10.10.2.30,10.10.2.40,10.10.2.50,10.10.2.60
how to fetch ansible inventory & populate in an ansible variable?
I've been trying to merge two big files based on two key columns (Chromosome and Position) and I found out that the most efficient way seems to be awk. A sample of how my files look like is: file1.txt Gene_ID Chromosome Position Fst ENSG00000141424 18 33688658 0 ENSG00000141424 18 33688669 0 ENSG00000141424 18 33688681 0 ENSG00000141424 18 33688683 0.0111734 ENSG00000141424 18 33688720 0 ENSG00000141424 18 33688726 0 ENSG00000141424 18 33688743 0 ENSG00000141424 18 33688745 0 ENSG00000141424 18 33688763 0 And the other file: file2.txt Chromosome Start End Ref Alt RS_ID 1 10019 10020 TA T rs775809821 1 10020 10020 A - rs775809821 1 10055 10055 - A rs768019142 1 10055 10055 T TA rs768019142 1 10108 10108 C T rs62651026 1 10109 10109 A T rs376007522 1 10128 10128 A AC rs796688738 1 10128 10128 - C rs796688738 1 10139 10139 A T rs368469931 1 10144 10145 TA T rs144773400 I want to get a third file looking like this: Gene_ID Chromosome Position RS_ID Fst ENSG00000141424 18 33688658 rs1504554... 0 I've tried using awk and I think the syntax is OK but what I get is a file containing file1.txt and file2.txt concatenated. awk 'FS=" "; OFS=" ";NR=FNR{A[$1,$2]=$6;next}{$5=A[$2,$3];print}' file1.txt file2.txt > file3.txt Any ideas of what I might be doing wrong?
A few corrections to your code should fix it: awk 'NR==FNR{A[$1,$2]=$6;next}{$5=A[$2,$3];if($5!="")print}' file2.txt file1.txt NR==FNR (rather than NR=FNR, which is an assignment) is the condition while awk reads the first file, file2.txt. On the second pass, when NR!=FNR, we print the line only if a join key exists in A. The default field separator for awk is already whitespace, so there is no need to specify it here.
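Running the corrected command on a cut-down pair of files (gene and rs values invented) shows both the join and the filtering of unmatched lines:

```shell
cd "$(mktemp -d)"
# file1: Gene_ID Chromosome Position Fst
printf 'g1 18 33688658 0.5\ng2 1 99 0\n' > file1.txt
# file2: Chromosome Start End Ref Alt RS_ID
printf '1 10 10 A T rs1\n18 33688658 33688658 T C rs9\n' > file2.txt

# Pass 1 (file2): store RS_ID under the (Chromosome,Start) pair.
# Pass 2 (file1): put the looked-up RS_ID into field 5; print only matches.
awk 'NR==FNR{A[$1,$2]=$6;next}{$5=A[$2,$3];if($5!="")print}' file2.txt file1.txt
# -> g1 18 33688658 0.5 rs9   (g2 has no match and is dropped)
```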
Joining two files based on two key columns awk
I want to join two files according to number in the file, two number same file Toyota model1 BMW model2 Benz model3 BMW model4 BMW model5 Benz model6 Benz model7 and second file class C model1 class E model2 class A model3 class W model4 class W model5 class C model6 class A model7 I want to join the two file according to the number, like this, joining each model number with each other in one file. Toyota class C model1 BMW class E model2 Benz class A model3 BMW class W model4 BMW class W model5 Benz Class C model6 Benz Class A model7 after that delete the 'model' string My code, sort -V file1 > new_file1 sort -V file2 > new_file2 join newfile1 new_file2 > result.txt sed 's/model[(1-9)]//g' result.txt > result_1.txt the problem I'm getting an Error while joining the files join: file1.txt:10: is not sorted: 03Benz model 249 join: file2.txt:4: is not sorted: BMW model 2 what if I want to count frequency after that ( this pair happen in the text 2 times) after joined Toyota class C 1 BMW class E 1 Benz class A 2 BMW class W 2 Benz Class C 1
Use join. It looks like changing the first space character in the file to something other than space will be sufficient to convert the file whitespace-delimited records. Here's an implementation that replaces the first space with % and then joins on the second column of each file. $ cat file2 | sed -e 's/ /%/' | join -1 2 -2 2 - file1 or $ <file2 sed -e 's/ /%/' | join -1 2 -2 2 - file1 which produces model1 class%C Toyota model2 class%E BMW model3 class%A Benz model4 class%W BMW model5 class%W BMW model6 class%C Benz model7 class%A Benz If you need to convert it to a tab-delimited format, you can use tr. tr ' %' '\t '
How to Sort and join according to Number/Counter inside the file?
I'm trying to be a bit effective and need a script or command solution. Say I make a file with 2 columns, or two files with one column, whichever is easier: AA1 B2 ZZ1 YYY XX1 AA2 B2 ZZ2 YYY XX2 AA3 B3 ZZ3 YYY XX3 AA4 B4 ZZ4 YYY XX4 ZZ5 YYY XX5 ZZ6 YYY XX6 ZZ7 YYY XX7 There is an uneven number of entries. Now, I want to make a new file (preferably with some other input as well, though lets start here) that takes every value in the first column and combines that with every value in the second column (they will always have unequal number of lines) and outputs results as: AA1 B2 ZZ1 YYY XX1 AA1 B2 ZZ2 YYY XX2 [...] AA4 B4 ZZ1 YYY XX1 AA4 B4 ZZ2 YYY XX2 So, cycle through all values in column 1 in order, combining them with each value in column 2 in order.
Assuming you have the following files: $ cat file1 ZZ1 YYY XX1 ZZ2 YYY XX2 ZZ3 YYY XX3 ZZ4 YYY XX4 ZZ5 YYY XX5 ZZ6 YYY XX6 ZZ7 YYY XX7 $ cat file2 AA1 B2 AA2 B2 AA3 B3 AA4 B4 Then use this awk: awk 'FNR==NR{a[c++]=$0} FNR!=NR{for(i in a){print $0,a[i]}}' file1 file2 FNR==NR applies only to the first file file1 a[c++]=$0 fills an array called a with the contents of file1 FNR!=NR applies only to the second file file2 for(i in a) loops trough array a... print $0,a[i] ...and print the line from file2 and the contents of the array. The outout: AA1 B2 ZZ1 YYY XX1 AA1 B2 ZZ2 YYY XX2 AA1 B2 ZZ3 YYY XX3 AA1 B2 ZZ4 YYY XX4 AA1 B2 ZZ5 YYY XX5 AA1 B2 ZZ6 YYY XX6 AA1 B2 ZZ7 YYY XX7 AA2 B2 ZZ1 YYY XX1 AA2 B2 ZZ2 YYY XX2 AA2 B2 ZZ3 YYY XX3 AA2 B2 ZZ4 YYY XX4 AA2 B2 ZZ5 YYY XX5 AA2 B2 ZZ6 YYY XX6 AA2 B2 ZZ7 YYY XX7 AA3 B3 ZZ1 YYY XX1 AA3 B3 ZZ2 YYY XX2 AA3 B3 ZZ3 YYY XX3 AA3 B3 ZZ4 YYY XX4 AA3 B3 ZZ5 YYY XX5 AA3 B3 ZZ6 YYY XX6 AA3 B3 ZZ7 YYY XX7 AA4 B4 ZZ1 YYY XX1 AA4 B4 ZZ2 YYY XX2 AA4 B4 ZZ3 YYY XX3 AA4 B4 ZZ4 YYY XX4 AA4 B4 ZZ5 YYY XX5 AA4 B4 ZZ6 YYY XX6 AA4 B4 ZZ7 YYY XX7
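The cross-product behaviour is easy to verify on a 2x2 sample. Note that `for (i in a)` visits array indices in an unspecified order, so the grouping of file1 lines under each file2 line can differ between awk implementations; the output is piped through sort here only to make the check deterministic:

```shell
cd "$(mktemp -d)"
printf 'ZZ1 YYY XX1\nZZ2 YYY XX2\n' > file1
printf 'AA1 B2\nAA2 B2\n' > file2

# Every line of file2 is paired with every stored line of file1.
awk 'FNR==NR{a[c++]=$0} FNR!=NR{for(i in a){print $0,a[i]}}' file1 file2 | sort
```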
Combine columns using awk? (Or other suggestions)
I have 'FileA': 10 10011300 10011301 T C 10 10012494 10012495 G A 10 10028691 10028692 A T 10 10093496 10093497 G A 10 10102457 10102458 C T 10 10103252 10103253 G C 10 10122271 10122272 T C 10 10128778 10128779 T C 10 10130299 10130300 C A 10 10148307 10148308 G A and I have files 1-22: 1 10177 rs367896724 1 10235 rs540431307 1 10352 rs555500075 1 10505 rs548419688 1 10506 rs568405545 1 10511 rs534229142 1 10539 rs537182016 1 10542 rs572818783 1 10579 rs538322974 1 10616 rs376342519 As you might be able to tell, I am dealing with genetic data. What I want to do is every time columns 1 and 3 from FileA match columns 1 and 2 from files 1-22, columns 1, 2 and 3 from files 1-22 (or columns 1 and 3 from FileA and column 3 from files 1-22) are printed into a text file with the number shared between FileA and files 1-22 in column 1 as the text file's name. So far, I'm stuck on sorting the files to be able to invoke join. FileA keeps giving me an error, even though I am trying to sort it exactly the same way I was successfully able to sort files 1-22. Essentially, what I am hoping for is a tutorial to do exactly the thing I am trying to do here. I have checked with multiple different sources and have attempted this several times to no avail. Commands I have used: sort -k 1,1 FileA.txt join -j 1 File1.txt FileA.txt > output.txt EDIT: Here's a sample of the desired output 18 3320671 rs375411568 18 3320673 rs550898405 18 3320676 rs73366565 18 3320704 rs536519819 18 3320720 rs118037107 18 3320736 rs566910986 18 3320755 rs567626849 18 3320786 rs183777311 18 3320860 rs528977928 18 3320887 rs577743595 18 3320897 rs530122744 It looks very similar to the earlier example of files 1-22, and that's because it is essentially the same thing. 
What I want: whenever columns 1 and 3 of FileA match columns 1 and 2 of a file x (x being any one of the files numbered 1-22; in this example, 18), the corresponding line of file x is printed and saved in output.txt.
Okay, I actually forgot I asked this question but someone just upvoted this post and I have since figured it out so I'll go ahead and post the answer. For the first snippet of output I listed above, 10 10011300 10011301 T C 10 10012494 10012495 G A 10 10028691 10028692 A T 10 10093496 10093497 G A 10 10102457 10102458 C T 10 10103252 10103253 G C 10 10122271 10122272 T C 10 10128778 10128779 T C 10 10130299 10130300 C A 10 10148307 10148308 G A all I needed to do was use the following awk command, cat [inputfile.txt] | awk '{print $1"_"$3"\t"$4"\t"$5}' | sort -k1,1 > outputfileA.txt The output would look like this and all rows would be sorted by the first column: 10_10011301 T C 10_10012495 G A 10_10028692 A T 10_10093497 G A I would also do the same thing for the second snippet of code, 1 10177 rs367896724 1 10235 rs540431307 1 10352 rs555500075 1 10505 rs548419688 1 10506 rs568405545 and use a similar awk command to generate another file: cat [inputfile.txt] | awk '{print $1"_"$2"\t"$3}' | sort -k1,1 > outputfileB.txt Resulting in: 1_10177 rs367896724 1_10235 rs540431307 1_10352 rs555500075 1_10505 rs548419688 1_10506 rs568405545 Since both of these output files have one column in common, and those columns are sorted identically, we could then invoke the join command: join -1 1 -2 2 outputfileA.txt outputfileB.txt | tr ' ' '\t' > outputfileC.txt tr ' ' '\t' translates all whitespaces into tabs in the output. All lines with identical matches between the first column of the first file and the first column of the second file are written into the output file, which might look something like this: 1_101850899 A C rs138958619 1_101856556 T C rs191666242 1_101867058 C T rs188447998 1_101874381 A C rs143747209 1_101877269 G A rs186149522 1_101878704 C A rs192815769 1_101885657 G T rs150829467 1_101891797 T G rs141886478 1_101893793 T A rs182950692 1_101897192 T C rs189527356 I hope I explained that well. Let me know if I need to clarify anything.
How do I compare one text file against about two dozen other text files and print out certain columns of each line whenever there is a match?
I am trying join two files by one common column (1ª) between two file (File1 and file2), resulting in file 3, but the resultant file is not working. Can someone help me? I used this command: awk 'NR==FNR {h[$1] = $0; next} {print h[$1],$0}' file1 file2 > file3 File 1. 1 1767 0 1986 28061997 1 1 1 0 29031998 972 34176 9 1 9 9 55 97 42 1 0 2 1876 0 1986 25051995 1 1 1 0 22072000 952 30438 1 1 3 9 1009 95 25 4 0 3 5878 0 1986 16071996 1 1 1 0 22071997 963 30438 4 1 3 9 1009 96 40 4 0 File 2: 1 0 5 8 25031998 2 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 BX1001337 AX54651 BR909511 0 2 0 53 8 19021996 2 8 22031996 6 8 27051996 1 14 27051996 2 7 27051996 2 15 27051996 2 8 21021997 2 8 22031997 6 8 20041997 1 14 20041997 3 15 20041997 3 8 22071997 6 8 28091998 1 14 28091998 4 15 28091998 4 8 25061999 2 8 22091999 6 8 7032000 1 14 7032000 5 15 7032000 3 16 7032000 1 11 1071995 923 11 1101995 924 11 1011996 931 18 1011996 93 11 1041996 932 11 1071996 933 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 11 1041998 952 17 1041998 1005 11 1071998 953 17 1071998 1007 11 1101998 954 11 1011999 961 18 1011999 96 11 1041999 962 17 1041999 1009 11 1071999 963 17 1071999 1007 11 1101999 964 17 1101999 1003 11 1012000 971 18 1012000 97 17 1012000 1005 11 1042000 972 11 1072000 973 BR1001881 AX62116 BR872756 0 3 0 6 8 12041997 2 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 BR1003576 AX52602 BR830819 0 File3. It’s working wrong.Is putting the file 2 in 2ª line.. 
1 1767 0 1986 28061997 1 1 1 0 29031998 972 34176 9 1 9 9 55 97 42 1 0 1 1 0 5 8 25031998 2 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 BR1001337 AX54651 BR909511 0 2 1876 0 1986 25051995 1 1 1 0 22072000 952 30438 1 1 3 9 1009 95 25 4 0 2 4 0 53 8 19021996 2 8 22031996 6 8 27051996 1 14 27051996 2 7 27051996 2 15 27051996 2 8 21021997 2 8 22031997 6 8 20041997 1 14 20041997 3 15 20041997 3 8 22071997 6 8 28091998 1 14 28091998 4 15 28091998 4 8 25061999 2 8 22091999 6 8 7032000 1 14 7032000 5 15 7032000 3 16 7032000 1 11 1071995 923 11 1101995 924 11 1011996 931 18 1011996 93 11 1041996 932 11 1071996 933 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 11 1041998 952 17 1041998 1005 11 1071998 953 17 1071998 1007 11 1101998 954 11 1011999 961 18 1011999 96 11 1041999 962 17 1041999 1009 11 1071999 963 17 1071999 1007 11 1101999 964 17 1101999 1003 11 1012000 971 18 1012000 97 17 1012000 1005 11 1042000 972 11 1072000 973 BR1001881 AX62116 BR872756 0 3 5878 0 1986 16071996 1 1 1 0 22071997 963 30438 4 1 3 9 1009 96 40 4 0 3 4 0 6 8 12041997 2 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 BR1003576 AX52602 BR830819 0 File3. 
I would like that: 1 1767 0 1986 28061997 1 1 1 0 29031998 972 34176 9 1 9 9 55 97 42 1 0 0 5 8 25031998 2 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 BR1001337 AX54651 BR909511 0 2 1876 0 1986 25051995 1 1 1 0 22072000 952 30438 1 1 3 9 1009 95 25 4 0 0 53 8 19021996 2 8 22031996 6 8 27051996 1 14 27051996 2 7 27051996 2 15 27051996 2 8 21021997 2 8 22031997 6 8 20041997 1 14 20041997 3 15 20041997 3 8 22071997 6 8 28091998 1 14 28091998 4 15 28091998 4 8 25061999 2 8 22091999 6 8 7032000 1 14 7032000 5 15 7032000 3 16 7032000 1 11 1071995 923 11 1101995 924 11 1011996 931 18 1011996 93 11 1041996 932 11 1071996 933 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 11 1041998 952 17 1041998 1005 11 1071998 953 17 1071998 1007 11 1101998 954 11 1011999 961 18 1011999 96 11 1041999 962 17 1041999 1009 11 1071999 963 17 1071999 1007 11 1101999 964 17 1101999 1003 11 1012000 971 18 1012000 97 17 1012000 1005 11 1042000 972 11 1072000 973 BR1001881 AX62116 BR872756 0 3 5878 0 1986 16071996 1 1 1 0 22071997 963 30438 4 1 3 9 1009 96 40 4 0 0 6 8 12041997 2 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 BR1003576 AX52602 BR830819 0
Try this one liner!. awk 'FNR==NR{A[$1]=$0;next}{line="";for(i=2;i<=NF;i++)line=line $i" ";sub(" $","",line);if ($1 in A)print A[$1]" "line;}' file1.txt file2.txt > file3 Out 1 1767 0 1986 28061997 1 1 1 0 29031998 972 34176 9 1 9 9 55 97 42 1 0 0 5 8 25031998 2 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 BX1001337 AX54651 BR909511 0 2 1876 0 1986 25051995 1 1 1 0 22072000 952 30438 1 1 3 9 1009 95 25 4 0 0 53 8 19021996 2 8 22031996 6 8 27051996 1 14 27051996 2 7 27051996 2 15 27051996 2 8 21021997 2 8 22031997 6 8 20041997 1 14 20041997 3 15 20041997 3 8 22071997 6 8 28091998 1 14 28091998 4 15 28091998 4 8 25061999 2 8 22091999 6 8 7032000 1 14 7032000 5 15 7032000 3 16 7032000 1 11 1071995 923 11 1101995 924 11 1011996 931 18 1011996 93 11 1041996 932 11 1071996 933 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 11 1101997 944 11 1011998 951 18 1011998 95 11 1041998 952 17 1041998 1005 11 1071998 953 17 1071998 1007 11 1101998 954 11 1011999 961 18 1011999 96 11 1041999 962 17 1041999 1009 11 1071999 963 17 1071999 1007 11 1101999 964 17 1101999 1003 11 1012000 971 18 1012000 97 17 1012000 1005 11 1042000 972 11 1072000 973 BR1001881 AX62116 BR872756 0 3 5878 0 1986 16071996 1 1 1 0 22071997 963 30438 4 1 3 9 1009 96 40 4 0 0 6 8 12041997 2 11 1101996 934 11 1011997 941 18 1011997 94 11 1041997 942 11 1071997 943 BR1003576 AX52602 BR830819 0 Saves in A lines from file1 by first column, starts reading file2, skips first column and saves fields in line, if first column in A concatenates both lines in one line and prints.
How to join two files (File1 and File2) by one common column (the 1st)?
I have two unsorted files, each with two columns. For any line in file1 whose column1 value matches that of any line in file2, but whose column 2 values differ, I want to print the column 1 value and each column 2 value. If data from column1 in file 1 does not exist in file2 it can be discarded. I do not need to preserve the sort order in the output file. file1: 2222,b2 4444,d4 1111,a1 3333,c3 5555,e5 file2: 2222,8f 5555,e9 4444,7c 3333,c3 OUTPUT file: 2222,b2,8f 4444,d4,7c 5555,e5,e9
Read the 2nd file, save the content into an array (key = 1st field, value = 2nd field) then read the 1st file and check if 1st field is a common key and if the corresponding 2nd field is different. If the result is positive, print the key and the two values: awk 'BEGIN{FS=OFS=","}NR==FNR{z[$1]=$2;next} {if (z[$1] && (z[$1]!=$2)){print $0, z[$1]}}' file2 file1
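The same command on a three-line sample exercises all three cases: changed value (printed), identical value (skipped), and key absent from file2 (skipped). One caveat: the truthiness test z[$1] would also skip a stored value of "0"; ($1 in z) is the stricter existence check.

```shell
cd "$(mktemp -d)"
printf '2222,b2\n3333,c3\n4444,d4\n' > file1
printf '2222,8f\n3333,c3\n' > file2

# Pass 1: remember file2's value per key; pass 2: print mismatches only.
awk 'BEGIN{FS=OFS=","}NR==FNR{z[$1]=$2;next}
     {if (z[$1] && (z[$1]!=$2)){print $0, z[$1]}}' file2 file1
# -> 2222,b2,8f
```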
If column 2 of file2 is different than column 2 of file1, print column 1 from file1 and column 2 from both files
file1.txt (50 lines) TERYUFV00000010753 TERYUFV00000009526 file2.txt (500 lines) TERYUFV00000009526 refids_739_known_8/10_target TERYUFV00000018907 refids_12023_known_21/22_target TERYUFV00000010753 refids_11775_known_1/1_target Output.txt TERYUFV00000010753 refids_11775_known_1/1_target TERYUFV00000009526 refids_739_known_8/10_target Compare file1.txt (has 50 lines) with file2.txt (has 500 lines), get the list from file2.txt which are identical to file1.txt. I tried both join & fgrep command and it outputs empty file
fgrep -f file1.txt file2.txt Here we obtain the search patterns from file1.txt and search for them in file2.txt. As the text is fixed, we use fgrep (equivalent to grep -F) for a faster search.
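A small reproduction with shortened, invented IDs. Note that grep -f does substring matching, so anchoring the patterns (or adding -w) is safer when one ID can be a prefix of another; fgrep is the historical spelling of grep -F.

```shell
cd "$(mktemp -d)"
printf 'ID753\nID526\n' > file1.txt
printf 'ID526 refA\nID907 refB\nID753 refC\n' > file2.txt

# Each line of file1.txt is used as a fixed-string pattern against file2.txt
grep -F -f file1.txt file2.txt
```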
Compare two files and get the identical list
I have a CSV file that is like this name;address;phone;email John;123 La Sierra;555-121212;[email protected] Nick;456 La Bongaa;555-121232;[email protected] Carl;789 La Fountain;553-121212;[email protected] and I want to remove the last entry making it be like name;address;phone; John;123 La Sierra;555-121212; Nick;456 La Bongaa;555-121232; Carl;789 La Fountain;553-121212; The last ; has to be kept there but the last field removed. I have found this code on a question of mine and adapted to this case perl -000ne '@f=split(/;/); print join(";",@f[0..2]) , "\n"' myFile.csv I thought it would split by ; and then join just fields 0, 1 and 2 but it is not working. file command gives me this result about myFile.csv UTF-8 Unicode text, with CRLF line terminators The file contains accented characters that I think may interfere with this. Any ideas?
Since I'm not sure you want a perl code so much, here is a similar awk code: awk -F';' -v OFS=';' '{ $NF=""; print }' data.csv => This code empties the last field of each line ($NF=""). Input fields (-F\;) and output fields (OFS=';') are said to be separated with ";". The same with sed: sed 's/[^;]*$//' data.csv => This substitutes (s/.../.../) the longest sequence of characters that is not a ";" ([^;]*) at the end of the line ($) with nothing. The same with grep: grep -o '.*;' data.csv => grep regular expressions are greedy by default, that means that they match the longest sequence possible. Here .*; hence means "the longest sequence of characters that ends with a ";". The -o option outputs what is matched instead of the whole line. Finally, a perl equivalent would be (thanks to @steeldriver): perl -F';' -lpe '$F[-1]=""; $_ = join ";", @F' data.csv => It works similarly to awk, the joining being explicit here.
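Both variants can be checked on a single sample line (values invented). Both also discard the stray \r of a CRLF-terminated line, since it sits inside the final field:

```shell
line='John;123 La Sierra;555-121212;john@example.com'

# awk: blank the last field, keeping the trailing ";"
printf '%s\n' "$line" | awk -F';' -v OFS=';' '{ $NF=""; print }'
# sed: delete the trailing run of non-";" characters
printf '%s\n' "$line" | sed 's/[^;]*$//'
# both print: John;123 La Sierra;555-121212;
```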
Removing a field from a comma delimited text with accented chars
I have two files--file1 and file2 that I want to join but some fields are missing in second file for which I want to insert string 'null'. One requirement is that the keys must be in the same order as in file1. The input files and expected output result are as below: file1.txt file2.txt a 7 nah a anau b 0 blah c bau c 5 bah d cau d 1 gah e 0 hah Expected output result: a 7 nah anau b 0 blah null c 5 bah bau d 1 gah cau e 0 hah null
join + sort solution: join -o1.1,1.2,1.3,2.2 -a1 -e"null" <(sort file1.txt) <(sort file2.txt) The output: a 7 nah anau b 0 blah null c 5 bah bau d 1 gah cau e 0 hah null
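On pre-sorted copies of sample data the flags can be seen working together: -a1 keeps unmatched lines from file1, -e supplies the "null" filler, and -o fixes the column layout (the <(sort ...) wrappers are only needed when the inputs are not already in join order):

```shell
cd "$(mktemp -d)"
printf 'a 7 nah\nb 0 blah\nc 5 bah\n' > file1.txt   # already sorted
printf 'a anau\nc bau\n' > file2.txt

join -o 1.1,1.2,1.3,2.2 -a1 -e null file1.txt file2.txt
# a 7 nah anau
# b 0 blah null
# c 5 bah bau
```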
Join two files, keep key order of first file and fill missing values with string 'null'
I used the sort command to numerically sort file1.txt. ~]# sort -n -o file1.txt file1.txt 0 Barack 50 George 60 Ronald 100 Bill The sort check command states the file is not sorted. ~]# sort -c file1.txt sort: file1.txt:4: disorder: 100 Bill I manually modify the file and place "100 Bill" second. ~]# cat file1.txt 0 Barack 100 Bill 50 George 60 Ronald Now, the sort check command produces no stdout or stderr, meaning the file is properly sorted. Once properly sorted, I can now join the files. It seems that the first character is being used to know if the file is sorted instead of the entire integer. sort -c file1.txt Is there a way to sort the file so that "100 Bill" is on line 2? I cannot do this manually, as the real application of this issue deals with hundreds of lines of text processed by a daily job.
If you want to use a file with join, you need to make sure it’s sorted lexicographically on the join key, which means you shouldn’t use -n with sort. You can always sort the result of the join operation again numerically. When you run sort -c, you need to specify the same sorting parameters as you used to sort the file: so if you want to check whether a file is sorted numerically, run sort -c -n. Thus in your case, run sort -o file1.txt file1.txt to sort the file lexicographically for use with join (using the first field as the join field).
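The interaction can be demonstrated directly: a numerically sorted file passes sort -c -n but fails plain sort -c, while a lexicographically sorted copy passes the default check that join relies on:

```shell
cd "$(mktemp -d)"
printf '0 Barack\n100 Bill\n50 George\n' > f.txt
sort -n -o num.txt f.txt   # numeric order:       0, 50, 100
sort    -o lex.txt f.txt   # lexicographic order: 0, 100, 50

sort -c -n num.txt && echo 'num.txt ok numerically'
sort -c    lex.txt && echo 'lex.txt ok lexicographically'
sort -c    num.txt 2>/dev/null || echo 'num.txt fails the default check'
```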
Numeric sort fails to properly sort file
I have two CSV files and am trying to merge them based on the first column in the first file matching the third column in the second file. They lines are not sorted. file1.csv: android,1,2 osx,2,5 file2.csv: Converting,:Developer::|[E],android,Exact,,,,8,31 Converting,:Developer::|[E],osx,Exact,,,,8,31 Converting,:Developer::|[E],windows,Exact,,,,8,31 and would desire the following output.csv: Converting,:Developer::|[E],android,Exact,,,,8,31,1,2 Converting,:Developer::|[E],osx,Exact,,,,8,31,2,5 Converting,:Developer::|[E],windows,Exact,,,,8,31,, I have tried every example of awk -F',' 'FNR==NR..... that I could find on here but just cant seem to get it right.
You could use join for this join -1 1 -2 3 -t ',' -a 2 -o 2.{1..9} 1.{2..3} <(sort file1.csv) <(sort file2.csv) -1 and -2 specifies which field from the files to compare -t specifies the seperator to use for the fields -a 2 says to print lines in <file2> that do not match -o configures the output based on <file>.<field>
Compare columns from two CSVs and merge on matches
I have 2 files and want to inner join them using awk. This is written using sql server : SELECT [file1.column1],[file2.column2] FROM file1 INNER JOIN file2 on file2.column1 = file1.column5; This is the file i want to join: file1 : file2: so the key is column5 file1 and column1 file2. How to write them in awk language? Your help will be useful for me to learn this awk.
One way: join -t"|" -1 5 -2 1 -o 1.1 2.2 file1 file2 -1 5 - Use the 5th column of file1 -2 1 - Use the 1st column of file2 -o 1.1 2.2 - Print as output 1st column of 1st file, 2nd column of 2nd file
How to join 2 files based one key and selected some specific column?
I have several lists with two fields - first field contain an URL, 2nd field an email-address (an account). The 2nd field is the same for all entries in a list. I concatenate the lists to one list, and sort it by the 1st field. Most entries are unique, but some are duplicates or triplicates (ie. the URL was in the list for multiple accounts). Is there a command or script that I can use to join the duplicates, so the 2nd field became a list of accounts when required? For example: url1 acct2 url2 acct1 url3 acct1 url3 acct2 url4 acct2 url4 acct3 url4 acct5 ... Should become: url1 acct2 url2 acct1 url3 acct1 acct2 url4 acct2 acct3 acct5 ...
With sort + awk pipeline: sort -k1,1 file \ | awk 'url && $1 != url{ print url, acc } { acc = ($1 == url? acc FS:"") $2; url = $1 }END{ print url, acc }' OFS='\t' Sample output: url1 acct2 url2 acct1 url3 acct1 acct2 url4 acct2 acct3 acct5
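If pre-sorting is undesirable, a single awk pass can group the accounts under each URL while preserving first-seen order (same idea, different bookkeeping):

```shell
cd "$(mktemp -d)"
printf 'url3 acct2\nurl1 acct2\nurl3 acct1\nurl4 acct5\n' > file

awk '{ if ($1 in a) a[$1] = a[$1] " " $2      # key seen: append account
       else { order[++n] = $1; a[$1] = $2 }   # new key: record its position
     }
     END { for (i = 1; i <= n; i++) print order[i], a[order[i]] }' file
```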
List sorted on 1st field, how can I join 2nd field on lines where 1st field is the same?
I am a novice programmer. I am using unix's join command to self-join couple long files together. join -j30 test test2 col1 col2 ... col30 col1 col2 ... col30 A B ZZZ ^M A B ZZZ I am getting this ^M character in my output. Why is it there? and How would I remove it? EDIT: Below is a screenshot of my part of my output
The ^M means you are bringing over/editing file in Windows. Use the dos2unix command over the files to convert them to Unix text mode. DOS uses carriage return and line feed "\r\n" as a line ending, while Unix uses just line feed "\n". The ^M are a visual representation of the "extra" \r characters. To install the dos2unix command, do (on Debian-based distros): sudo apt-get install dos2unix or in a Mac (MacPorts): sudo port install dos2unix Alternatively, you can also do it with sed as in: sed 's/\r$//' dosfile.txt > unixfile.txt
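When dos2unix is not installed, tr is a portable way to drop the carriage returns before joining (the GNU sed form above does the same per line):

```shell
# Simulated DOS input: each line ends in \r\n and would show ^M in output
printf 'A B ZZZ\r\nC D YYY\r\n' | tr -d '\r'
# the \r bytes are gone, so join sees clean, comparable fields
```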
How to fix the Unix `join` command inserting ^M between join columns?
I'm struggling with this task: I have two files: file1 looks like: 102 13.342 103 7.456 105 6.453 107 3.567 108 4.210 file2 looks like: 0 098 0 0 0 -9 x 0 099 0 0 0 -9 x 0 100 0 0 0 -9 x 0 101 0 0 0 -9 x 0 102 0 0 0 -9 x 0 103 0 0 0 -9 x 0 104 0 0 0 -9 x 0 105 0 0 0 -9 x 0 106 0 0 0 -9 x 0 106 0 0 0 -9 x 0 107 0 0 0 -9 x 0 108 0 0 0 -9 x And I want a file3 that looks like 0 098 0 0 0 -9 x 0 099 0 0 0 -9 x 0 100 0 0 0 -9 x 0 101 0 0 0 -9 x 0 102 0 0 0 13.342 x 0 103 0 0 0 7.456 x 0 104 0 0 0 -9 x 0 105 0 0 0 6.453 x Basically, I want to join file1 and file2 by matching first and second fields in file1 and file2 respectively, keeping mismatches, and also, substituting the value of the sixth field in file2 with the value of the second field in file1 in each matching line... I know this task is related with the use of: join -a1 -a2 -o 1.2 whatsoever. But cannot figure it out how to continue... Also considering awk
When there are so many fields involved I tend to prefer awk: $ awk 'NR==FNR{a[$1]=$2; next}{if($2 in a){$6=a[$2]}}1;' file1 file2 0 098 0 0 0 -9 x 0 099 0 0 0 -9 x 0 100 0 0 0 -9 x 0 101 0 0 0 -9 x 0 102 0 0 0 13.342 x 0 103 0 0 0 7.456 x 0 104 0 0 0 -9 x 0 105 0 0 0 6.453 x 0 106 0 0 0 -9 x 0 106 0 0 0 -9 x 0 107 0 0 0 3.567 x 0 108 0 0 0 4.210 x Explanation NR==FNR{a[$1]=$2; next} : NR is the current line number and FNR is the current line number of the current file. When processing more than one file, the two will be equal only while the 1st file is being read. a[$1]=$2 uses the st field as a key to an array whose value is the 2nd field. Thenextskips to the next line. So, this will save all values fromfile1into the arraya`. if($2 in a){$6=a[$2]} : now we're reading the 2nd file. If the second field of this line is present in the array a, set the 6th field ($6) to be whatever was stored in a for the second field. 1; : this is shorthand for "print this line".
Joining two files matching two columns with mismatches and in each matching line, substitute second column from file 1 into 6th column in file 2
Imagine that we have two for example files. The first file is filled with unique names of employees created by combining the first two characters of the first name and the last 2 characters of the last name. Example : Peter Smith - Peht First file contains : Peht Mawo Stso Makr Bavo The second file contains recordings about them when they logged into the system. ( Obviously there are also employees that are not listed in file1. ) The second file: Mawo 21.4.2016 17:49 Peht 21.4.2016 17:58 Mawo 22.4.2016 7:58 Wato 22.4.2016 7:59 Stso 22.4.2016 8:02 Bavo 22.4.2016 8:15 Bane 22.4.2016 9:01 Bavo 23.4.2016 9:12 Mawo 23.4.2016 9:24 Dalo 23.4.2016 9:54 Peht 23.4.2016 9:58 Grma 24.4.2016 10:00 I need to find out how many times employes from file1 connected to the system ( file2 ). What is the best way of doing that? The only solution which came to my mind is to make some 2 loops and for each name from file1 loop the whole file2 then grep names, ask if the names match if yes then count++. Could anyone give me some elegant solution for this problem using for example awk if it's possible?
Something simple like: mapfile -t names < file1 for name in "${names[@]}" do echo "${name}" $(grep -c "^$name " file2) done Will provide output like: Peht 2 Mawo 3 Stso 1 Makr 0 Bavo 2 The grep string anchors the username at the beginning (^) of the line and requires a trailing space after the name.
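A mapfile-free variant of the same loop runs in any POSIX shell; the counting logic is unchanged (shortened sample data below):

```shell
cd "$(mktemp -d)"
printf 'Peht\nMawo\nMakr\n' > file1
printf 'Mawo 21.4.2016 17:49\nPeht 21.4.2016 17:58\nMawo 22.4.2016 7:58\n' > file2

while IFS= read -r name; do
  # anchor at line start and require the trailing space after the name
  printf '%s %s\n' "$name" "$(grep -c "^$name " file2)"
done < file1
# Peht 1
# Mawo 2
# Makr 0
```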
Elegant way of counting how many times patterns from a file occur in another file
I would like to create a simple bash function to use for my convenience. Following the answer given at: Joining bash arguments into single string with spaces I've been able to mash up this small piece of code: function gcm { msg="'$*'" eval "git commit -m ${msg}" } Now, this example is very convenient for commit messages like "Hello, it's me" (simple set of word characters that is), but when I wan't a commit message like: "[WIP] Halfway trough code.", I get an error message as follows: zsh: no matches found: [WIP] Would you please clarify for me what is happening in the background and why this snippet fails?
ZSH is delightfully free of the word-splitting behaviour seen in other shells (unless for some bizarre reason the SH_WORD_SPLIT option has been turned on), so there is no need to use strange double-quoting constructs.

% (){ print -l $* } a b c
a
b
c
% (){ print -l "$*" } a b c
a b c
% (){ local msg; msg="$*"; print -l $msg } a b c
a b c
%

Thus, the following should suffice:

function gcm {
    local msg
    msg="$*"
    git commit -m $msg
}

Globbing may be disabled by quoting strings like [WIP] as '[WIP]', or perhaps via a noglob alias:

% function blah { print -l "$*" }
% alias blah='noglob blah'
% blah [de] *blah*
[de] *blah*
%
ZSH, concatenate passed in arguments into a single string
1,350,877,526,000
I'm trying to join two files, removing the duplicate head row and keeping only one final row. For example:

File1.txt

head1
data1
data2
tail8

File2.txt

head1
data3
data4
tail9

Desired result in file3.txt:

head1
data1
data2
data3
data4
tail8 (or tail9, doesn't matter)

First I tried this to remove the duplicate head:

awk '!seen[$0]++' file1.txt file2.txt > file3.txt

The second command is:

awk 'NR > 1 { print prev } { prev = $0 }' file3.txt > file4.txt

but the result has the tail in the middle of file4.txt, not at the end:

head1
data1
data2
tail8
data3
data4

Any idea? Thank you in advance.
$ awk 'NR==FNR{ if (NR>1) print prev; prev=$0; next } FNR>1' file1 file2
head1
data1
data2
data3
data4
tail9
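For reference, a runnable reproduction of the one-liner with the sample files from the question (the file3.txt name mirrors the question):

```shell
# Recreate the two sample files.
printf 'head1\ndata1\ndata2\ntail8\n' > file1
printf 'head1\ndata3\ndata4\ntail9\n' > file2
# For file1 (NR==FNR), print each line one step late so its last
# line (tail8) is never printed; for file2, FNR>1 skips the header.
awk 'NR==FNR{ if (NR>1) print prev; prev=$0; next } FNR>1' file1 file2 > file3.txt
cat file3.txt
```

The header survives once (from file1) and exactly one tail row (file2's) ends up at the bottom.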
Merge two files, skipping the last row of one file, with awk
1,350,877,526,000
I have a text file with the format word @@@ type @@@ sentence on every line, sorted by 'word' in ascending order. Some lines however are not unique and they begin with the same word as the previous line, i.e. see word1 below:

...
word0 @@@ type2 @@@ sentence0
word1 @@@ type1 @@@ sentence1
word1 @@@ type1 @@@ sentence2
word1 @@@ type1 @@@ sentence3
word1 @@@ type2 @@@ sentence4
word2 @@@ type1 @@@ sentence5
...

I want to join the lines that have the same word and type combination into one, by appending the sentences, so the file results in:

...
word0 @@@ type2 @@@ sentence0
word1 @@@ type1 @@@ sentence1 ;;; sentence2 ;;; sentence3
word1 @@@ type2 @@@ sentence4
word2 @@@ type1 @@@ sentence5
...

The word and type fields have no whitespace.
Assuming your input is sorted on both word and type fields as it appears from your posted sample input:

$ cat tst.awk
BEGIN { FS=" @@@ "; ORS="" }
{ curr = $1 FS $2 }
curr != prev {
    printf "%s%s", ORS, $0
    prev = curr
    ORS = RS
    next
}
{ printf " ;;; %s", $NF }
END { print "" }

$ awk -f tst.awk file
word0 @@@ type2 @@@ sentence0
word1 @@@ type1 @@@ sentence1 ;;; sentence2 ;;; sentence3
word1 @@@ type2 @@@ sentence4
word2 @@@ type1 @@@ sentence5

The above will work using any awk in any shell on every UNIX box, only stores 1 line at a time in memory, and will produce output in the same order as the input.
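A self-contained check of the same logic, inlined and run against a trimmed copy of the sample input (the infile name is arbitrary):

```shell
cat > infile <<'EOF'
word1 @@@ type1 @@@ sentence1
word1 @@@ type1 @@@ sentence2
word1 @@@ type2 @@@ sentence4
word2 @@@ type1 @@@ sentence5
EOF
# Same logic as tst.awk: keep ORS empty until the first group has
# started, then emit " ;;; sentence" for every repeated word+type key.
awk 'BEGIN { FS=" @@@ "; ORS="" }
     { curr = $1 FS $2 }
     curr != prev { printf "%s%s", ORS, $0; prev = curr; ORS = RS; next }
     { printf " ;;; %s", $NF }
     END { print "" }' infile > out.txt
cat out.txt
```

The three sentences sharing word1/type1 collapse onto one line while the other keys pass through unchanged.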
Merge following portions of lines into current line in a 3 column file
1,350,877,526,000
I have a table A:

1 n m n ...
2 m n m ...
3 n m n ...
4 m n m ...
5 n m n ...

I have a table B:

1 A
3 B
5 C

I want to join column 2 of table B with table A by matching column 1 of both tables, without removing the unique lines in table A, to get the following (for no matches, write "NA"):

1 A n m n ...
2 NA m n m ...
3 B n m n ...
4 NA m n m ...
5 C n m n ...
From man join:

-a FILENUM
    also print unpairable lines from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2
-e EMPTY
    replace missing input fields with EMPTY

so

join -a1 -e 'NA' -o 0,2.2,1.2,1.3,1.4 A B
1 A n m n
2 NA m n m
3 B n m n
4 NA m n m
5 C n m n
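A reproduction of the above with four-column stand-ins for the "..." rows of table A (join needs both files sorted on the join field, which the single-digit keys here already are):

```shell
# Recreate trimmed versions of the two tables.
printf '1 n m n\n2 m n m\n3 n m n\n4 m n m\n5 n m n\n' > A
printf '1 A\n3 B\n5 C\n' > B
# -a1 keeps unpaired lines of A, -e NA fills the hole left by the
# missing B field, and -o pins the column order (0 is the join field).
join -a1 -e 'NA' -o 0,2.2,1.2,1.3,1.4 A B > out.txt
cat out.txt
```

Rows 2 and 4, which have no partner in B, come out with NA in the second column while still carrying all of their A columns.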
Join without removing unique lines in Linux
1,350,877,526,000
I have two files:

File1.txt

30 40 A T match1 string1
45 65 G R match2 string2
50 78 C Y match3 string3

File2.txt

match1 60 add1 50 add2
match2 15 add1 60 add2
match3 20 add1 45 add2

and I want to obtain an output that looks like so:

30 40 A T match1 string1 60 add1
45 65 G R match2 string2 15 add1
50 78 C Y match3 string3 20 add1

I want to append column 2 and column 3 from file2.txt to the end of file1.txt if there is a match in column 5 from file1.txt. I've tried to use this join command:

join -1 5 -2 1 -a 1 -o 1.1 -o 1.2 -o 1.3 -o 1.4 -o 1.5 -o 1.6 -o 2.2 -o 2.3 file1.txt fil2.txt

However, this only seems to print the columns from the first file. Is there any other solution besides join to tackle this problem?
I found a solution:

awk -F "\t" 'FNR==NR {a[$1] = $2 "\t" $3; next} $5 in a{print $0 "\t" a[$5]}' file2.txt file1.txt > outing.txt
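The one-liner can be verified with tab-separated copies of the sample data (the files are assumed to be tab-delimited, matching the -F "\t"):

```shell
# Trimmed, tab-separated versions of the sample files.
printf '30\t40\tA\tT\tmatch1\tstring1\n45\t65\tG\tR\tmatch2\tstring2\n' > file1.txt
printf 'match1\t60\tadd1\t50\tadd2\nmatch2\t15\tadd1\t60\tadd2\n' > file2.txt
# First pass (FNR==NR) indexes file2 columns 2-3 by column 1;
# second pass appends them to file1 lines whose column 5 matches.
awk -F "\t" 'FNR==NR {a[$1] = $2 "\t" $3; next}
             $5 in a {print $0 "\t" a[$5]}' file2.txt file1.txt > outing.txt
cat outing.txt
```

Each output line is the original six file1 columns plus file2's columns 2 and 3 as fields 7 and 8.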
Joining columns from files if they contain a match in another column
1,350,877,526,000
I have a file (file-1) that looks like this, DIP-10097N|refseq:NP_416170|uniprotkb:P30015 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10168N|refseq:NP_418766|uniprotkb:P15005 DIP-10199N|refseq:NP_415632|uniprotkb:P30958 DIP-10358N|refseq:NP_418659|uniprotkb:P28903 DIP-10440N|refseq:NP_289596|uniprotkb:P20082 DIP-10441N|refseq:NP_417502|uniprotkb:P20083 DIP-10441N|refseq:NP_417502|uniprotkb:P20083 DIP-10467N|refseq:NP_415423|uniprotkb:P09373 DIP-10469N|refseq:NP_418386|uniprotkb:P32674 DIP-10562N|refseq:NP_418370|uniprotkb:P17888 DIP-10582N|refseq:NP_414864|uniprotkb:P77743 DIP-10592N|refseq:NP_415819|uniprotkb:P37344 and another (file-2) which looks like this, DIP-10331N|refseq:NP_311078|uniprotkb:P12638 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10331N|refseq:NP_311078|uniprotkb:P12638 DIP-10840N|refseq:NP_414640|uniprotkb:P10408 DIP-1025N|refseq:NP_414574|uniprotkb:P00968 DIP-10097N|refseq:NP_416170|uniprotkb:P30015 DIP-10467N|refseq:NP_415423|uniprotkb:P09373 DIP-10097N|refseq:NP_416170|uniprotkb:P30015 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10750N|refseq:NP_289799|uniprotkb:P02410 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10757N|refseq:NP_288150|uniprotkb:P02421 In output I want to print the contents of file-1 plus the value in either column of file-2 that has the same value as that of file-1 in the other column. 
Like this, DIP-10097N|refseq:NP_416170|uniprotkb:P30015 DIP-1025N|refseq:NP_414574|uniprotkb:P00968 DIP-10097N|refseq:NP_416170|uniprotkb:P30015 DIP-10467N|refseq:NP_415423|uniprotkb:P09373 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10750N|refseq:NP_289799|uniprotkb:P02410 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10757N|refseq:NP_288150|uniprotkb:P02421 DIP-10117N|refseq:NP_414973|uniprotkb:P08177 DIP-10331N|refseq:NP_311078|uniprotkb:P12638 DIP-10467N|refseq:NP_415423|uniprotkb:P09373 DIP-10097N|refseq:NP_416170|uniprotkb:P30015 Is there a way I can do it using awk or grep. Any help would be highly appreciated.
This is a widely known operation in awk — collect an array from the key file, then use the array to operate on the second file's values:

awk '
FNR==NR{
    A[$2] = A[$2] " " $1
    next
}
$1 in A{
    for(i=1;i<=split(A[$1], B);i++)
        print $1 B[i]
}
' file2 file1

Or a little bit shorter:

awk '
FNR==NR{
    A[$2] = A[$2] $2 " " $1 "\n"
    next
}
$1 in A{
    printf "%s", A[$1]
}
' file2 file1

Another variant:

grep -f <(cat -E file1) file2 | sed 's/\(\S*\)\s*\(\S*\)/\2\t\1/' | sort

At last, the easiest (for me):

join -2 2 file1 <(sort -k2 file2)
Comparing columns in 2 files and printing the values that differ
1,350,877,526,000
I'm trying to join two simple files on Solaris 5.8 as below:

~/temp/s: cat 1
work1 a 8058 51
work2 b 15336 51
~/temp/s: cat 2
8058 77-11:29:32 /apps/sas
15336 100-12:23:49 /local/hotfix
~/temp/s: join -1 3 -2 1 1 2
8058 work1 a 51 77-11:29:32 /apps/sas

(The other line is missing from the output.) The output only contains one record where it should be two. I'm really not sure where it went wrong. Is there any way we can get all the records in the output?
I think this might be a bug with join. I just tried it on Fedora 14 using this version of join:

$ join --version
join (GNU coreutils) 8.5

Example

$ join -1 3 -2 1 1 2
8058 work1 a 51 77-11:29:32 /apps/sas
15336 work2 b 51 100-12:23:49 /local/hotfix

Alternative

You could use awk to do this:

$ awk 'NR==FNR{_[$3]=$3;next}$1 in _{print _[$1],$0}' 1 2
8058 8058 77-11:29:32 /apps/sas
15336 15336 100-12:23:49 /local/hotfix
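The awk workaround can be reproduced anywhere; the file names 1 and 2 follow the question, and the doubled key in the output comes from printing _[$1] in front of $0:

```shell
# Recreate the question's two files.
printf 'work1 a 8058 51\nwork2 b 15336 51\n' > 1
printf '8058 77-11:29:32 /apps/sas\n15336 100-12:23:49 /local/hotfix\n' > 2
# Index file 1 by its third field, then print the matching lines of file 2.
awk 'NR==FNR{_[$3]=$3;next} $1 in _{print _[$1],$0}' 1 2 > out.txt
cat out.txt
```

Both records appear, unlike the Solaris join run in the question.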
Join command gives incorrect output?
1,350,877,526,000
I have two files.

file1 contains:

2*J=0 EXP= 0.00000
2*J=4 EXP= 1.27911
2*J=8 EXP= 1.57613
2*J=12 EXP= 1.69134
2*J=10 EXP= 2.72705
2*J=16 EXP= 4.55689
2*J=20 EXP= 5.62138

file2 contains:

2*J=0 EXC= 0.00000
2*J=8 EXC= 1.21836
2*J=4 EXC= 1.59642
2*J=12 EXC= 1.78359
2*J=10 EXC= 2.69484
2*J=16 EXC= 7.24518
2*J=20 EXC= 7.32688

I want to join the two files such that the output will be:

2*J=0 EXP= 0.00000 EXC= 0.00000
2*J=4 EXP= 1.27911 EXC= 1.59642
2*J=8 EXP= 1.57613 EXC= 1.21836
2*J=12 EXP= 1.69134 EXC= 1.78359
2*J=10 EXP= 2.72705 EXC= 2.69484
2*J=16 EXP= 4.55689 EXC= 7.24518
2*J=20 EXP= 5.62138 EXC= 7.32688
with GNU awk, we can control how the output is sorted:

joiner.awk

#!/usr/bin/env -S gawk -f

FILENAME == ARGV[1] {
    f1[$1] = $2 OFS $3
    sort_key[$1] = $3
    next
}

{ f2[$1] = $2 OFS $3 }

function sorter(idx1, val1, idx2, val2) {
    return sort_key[idx1] - sort_key[idx2]
}

END {
    PROCINFO["sorted_in"] = "sorter"
    for (key in f1)
        print key, f1[key], f2[key]
}

Then

$ gawk -f joiner.awk file1 file2
2*J=0 EXP= 0.00000 EXC= 0.00000
2*J=4 EXP= 1.27911 EXC= 1.59642
2*J=8 EXP= 1.57613 EXC= 1.21836
2*J=12 EXP= 1.69134 EXC= 1.78359
2*J=10 EXP= 2.72705 EXC= 2.69484
2*J=16 EXP= 4.55689 EXC= 7.24518
2*J=20 EXP= 5.62138 EXC= 7.32688

The "PROCINFO" magic is documented: https://www.gnu.org/software/gawk/manual/html_node/Controlling-Array-Traversal.html
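If gawk is not available, a portable sketch: on data like this, where file1 is already ordered by its EXP value, reading file2 into an array and printing in file1's own line order gives the same result without PROCINFO (shortened sample data recreated inline):

```shell
cat > file1 <<'EOF'
2*J=0 EXP= 0.00000
2*J=4 EXP= 1.27911
2*J=8 EXP= 1.57613
EOF
cat > file2 <<'EOF'
2*J=0 EXC= 0.00000
2*J=8 EXC= 1.21836
2*J=4 EXC= 1.59642
EOF
# Index file2 by its 2*J=... key, then walk file1 in order and
# append the matching EXC pair to every line.
awk 'NR==FNR { v[$1] = $2 " " $3; next } { print $0, v[$1] }' file2 file1 > out.txt
cat out.txt
```

This works with any POSIX awk, at the cost of inheriting the ordering from file1 rather than sorting explicitly.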
Join two files ordered numerically according to second column
1,350,877,526,000
I am trying to compare the second columns of two text files and print the first columns of both files where they match. I have tried the awk commands below, but with no luck:

1) awk 'NR==FNR {a[$2]=$2; next} {print $1,a[$1]}' nid8.txt nid9.txt
2) awk 'NR==FNR {a[$2]=$2; next} {print $1, $1 in a}' nid8.txt nid9.txt

Example files:

nid8.txt:

1000 500
1001 501
1002 502
1003 503
1004 504
1005 505

nid9.txt:

2000 504
2001 502
2002 508
2003 505
2004 500
2005 501

Output:

1000 2004
1001 2005
1002 2001
1004 2000
1005 2003
You can use join here:

join -j 2 -o 1.1 2.1 <(sort -nk2,2 nid8.txt) <(sort -nk2,2 nid9.txt)

Use the second field (-j 2) of both files as the key, and output these fields: the first field from the first file (1.1) and the first field from the second file (2.1). join requires its input files to be sorted, so we sort them numerically on the second field, which will be used as the key (sort -nk2,2).

Alternatively, use awk, which does not require sorted inputs but loads the first input file into memory:

awk '!second_file{ my_array[$2]=$1; next } ($2 in my_array) { print $1, my_array[$2] }' nid8.txt second_file=1 nid9.txt

With my_array[$2]=$1 we save the first column of the first input file nid8.txt, keyed on the second column of that same file. This happens only until the second_file variable is set to 1, which makes the !second_file expression evaluate to false, so that block is not executed for the subsequent input(s). With the ($2 in my_array) condition we check whether the second field exists as a key in our array my_array; if it does, we print the first field $1 (which is from the second file) together with my_array[$2], which contains the first field from the first file with the same key.
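A reproduction of the join variant with the question's data, writing the sorted copies to temporary files so it also runs in shells without process substitution; the comma form of -o used here is equivalent to the whitespace-separated list above:

```shell
printf '1000 500\n1001 501\n1002 502\n1003 503\n1004 504\n1005 505\n' > nid8.txt
printf '2000 504\n2001 502\n2002 508\n2003 505\n2004 500\n2005 501\n' > nid9.txt
# join needs both inputs ordered on the join field (field 2).
sort -k2,2 nid8.txt > s8.txt
sort -k2,2 nid9.txt > s9.txt
# Key on field 2 of both files, output field 1 of each.
join -j 2 -o 1.1,2.1 s8.txt s9.txt > out.txt
cat out.txt
```

The unmatched keys (503 in nid8.txt and 508 in nid9.txt) are silently dropped, leaving the five pairs shown in the question.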
Compare the second columns of two text files and print the first columns of both files if they match
1,350,877,526,000
I would like to perform the script below, but without the creation of intermediate files (lsfs.out, df.out), on IBM AIX with ksh:

lsfs_out=`lsfs | sed -n '1d;p' | sort -b -k 3`
df_out=`df -k | sed -n '1d;p' | sort -b -k 7`
echo "$lsfs_out" > lsfs.out
echo "$df_out" > df.out
join -1 7 -2 3 df.out lsfs.out

The output of the first command is like below:

lsfs | sed -n '1d;p' | sort -b -k 3
/dev/hd4 -- / jfs2 4194304 -- yes no
/dev/hd11admin -- /admin jfs2 1048576 -- yes no

The output of the second command is:

df -k | sed -n '1d;p' | sort -b -k 7
/dev/hd4 2097152 836284 61% 9360 5% /
/dev/hd11admin 524288 523848 1% 7 1% /admin

I can't understand how to replace the file names for the join command with the commands' output. What I tried was useless. For example:

join -1 7 -2 3 <(echo "$df_out") <(echo "$lsfs_out")
ksh: 0403-057 Syntax error: `(' is not expected.

and one more:

join -1 7 -2 3 <`echo "$df_out"` <`echo "$lsfs_out"`
A file or path name is too long. < some output > Cannot find or open the file.
An awk solution (can be one-lined, of course):

(df -k ; lsfs ) | awk 'FNR==1 {next; } NF==7 { L[$7]=$0 ; next ; } { printf "%s %s\n",$0,L[$3];}'

where

FNR==1 {next; }                filters the header
NF==7 { L[$7]=$0 ; next ; }    stores each df line, indexed on its mount point (field 7)
{ printf "%s %s\n",$0,L[$3];}  prints each lsfs line joined with its df line (lsfs's mount point is field 3)

This assumes no filesystem path contains a space (the original script assumes that as well).
How to join results of two commands in IBM AIX ksh
1,350,877,526,000
I can't seem to get either grep or awk to do a relatively simple index pull of a list. I suspect it's because of adjacent duplicates in the index file, something I wouldn't have thought would cause an issue. Oddly looking for a solution online wasn't successful as all the queries I found are people who want to remove duplicates, not keep them! The Index file looks like this with ~40k entries, many being sorted duplicates: n0000003 n0000003 n0000008 n0000008 n0000017 n0000017 n0000017 n0000017 .....etc And the search file looks like this, with ~10k unique entries of each identifier: n0000003 216 -0.334 0.229 0.088 0.154 n0000008 16 0.117 0.200 0.508 0.621 n0000017 218 -0.353 0.196 0.042 0.084 ...etc What I need is output like this, with repeat output entries equaling the number of repeat index entries in the index file: n0000003 216 -0.334 0.229 0.088 0.154 n0000003 216 -0.334 0.229 0.088 0.154 n0000008 16 0.117 0.200 0.508 0.621 n0000008 16 0.117 0.200 0.508 0.621 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 ...etc But instead both grep and awk give only one entry each (making it identical to the search file). I figured a grep could handle repeat duplicates no problem but I can't find a workaround. These are commands I would have expected to work for example: grep -f index.txt searchfile.txt > output.txt awk -F'\t' 'NR==FNR{c[$1]++;next};c[$1]' index.txt searchfile.txt > output.txt Any advice on how I could get grep or awk to output the proper number of repeats would be great! Thanks so much! Andrew
I don't think you can do this with grep, no, but you can in awk. The simplest approach I can think of is to store the contents of searchfile.txt in memory and then print its lines each time you see an index: $ awk -F'\t' 'NR==FNR{c[$1]=$0;next}{if(c[$1]){print c[$1]}}' searchfile.txt index.txt n0000003 216 -0.334 0.229 0.088 0.154 n0000003 216 -0.334 0.229 0.088 0.154 n0000008 16 0.117 0.200 0.508 0.621 n0000008 16 0.117 0.200 0.508 0.621 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 If both files are sorted on the index, you can also use join: $ join -t$'\t' searchfile.txt index.txt n0000003 216 -0.334 0.229 0.088 0.154 n0000003 216 -0.334 0.229 0.088 0.154 n0000008 16 0.117 0.200 0.508 0.621 n0000008 16 0.117 0.200 0.508 0.621 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084 n0000017 218 -0.353 0.196 0.042 0.084
Is it possible to use grep or awk to report duplicate output lines corresponding to repeating entries in an index file?
1,350,877,526,000
I would like to join data from two CSV files based on matching column information. The data to match is from File1.csv column 5, and File2 column 1, and I want to append the information from File2 column 2 upon a match; if there is no match, leave empty double quotes.

File1.csv

"Z","P","W","K","1","1.18.24.59"
"S","K","D","X","9","1.14.19.238"
"R","M","P","Y","8","1.15.11.21"
"B","D","0","U","5","1.9.20.159"
"R","E","W","Q","6","135.0.0.1"
"K","D","K","R","9","1.9.74.13"

File2.csv

"65.9.7.19","374 22 53"
"1.9.74.13","123 256 51"
"1.18.24.59","23 25 41"
"1.15.11.21","98 77 8291"
"1.14.19.238","8827 145 8291"
"1.9.20.159","283 1 5734"

Desired Output

"Z","P","W","K","1","1.18.24.59","23 25 41"
"S","K","D","X","9","1.14.19.238","8827 145 8291"
"R","M","P","Y","8","1.15.11.21","98 77 8291"
"B","D","0","U","5","1.9.20.159","283 1 5734"
"R","E","W","Q","6","135.0.0.1",""
"K","D","K","R","9","1.9.74.13","123 256 51"
Here's one solution, using awk. Tested on GNU awk 4.1.3.

$ awk -F, 'NR==FNR{a[$1]=$2}NR!=FNR{print $0","(a[$6]?a[$6]:"\"\"")}' file2.csv file1.csv
"Z","P","W","K","1","1.18.24.59","23 25 41"
"S","K","D","X","9","1.14.19.238","8827 145 8291"
"R","M","P","Y","8","1.15.11.21","98 77 8291"
"B","D","0","U","5","1.9.20.159","283 1 5734"
"R","E","W","Q","6","135.0.0.1",""
"K","D","K","R","9","1.9.74.13","123 256 51"
$

The NR==FNR{a[$1]=$2} part matches the lines in file2 and adds them into an array, keyed on field #1. The NR!=FNR part then matches the lines in file1: print $0"," prints the whole line from file1 followed by a comma, and (a[$6]?a[$6]:"\"\"") prints the corresponding contents of the array built earlier, or just "" if no entry is found.
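The same one-liner, run against trimmed inline copies of the two sample files (a quoted heredoc keeps the embedded double quotes intact):

```shell
cat > file2.csv <<'EOF'
"1.18.24.59","23 25 41"
"1.9.74.13","123 256 51"
EOF
cat > file1.csv <<'EOF'
"Z","P","W","K","1","1.18.24.59"
"R","E","W","Q","6","135.0.0.1"
"K","D","K","R","9","1.9.74.13"
EOF
# First pass stores file2 field 2 keyed on field 1; second pass
# appends it (or "") to each file1 line, keyed on field 6.
awk -F, 'NR==FNR{a[$1]=$2}NR!=FNR{print $0","(a[$6]?a[$6]:"\"\"")}' file2.csv file1.csv > out.csv
cat out.csv
```

The quotes around each field make the keys match verbatim, so no quote-stripping is needed; the 135.0.0.1 row has no partner and gets "".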
Join two CSV files based on matching column data
1,542,262,746,000
I have two csv files:

file1:

C1, 1, 0, 1, 0, 1
C2, 1, 0, 1, 1, 0
C3, 0, 0, 1, 1, 0

file2:

C3, 1.2
C1, 2.3
C2, 1.8

I want to merge these two files based on the C column (which produces):

C1, 1, 0, 1, 0, 1, 2.3
C2, 1, 0, 1, 1, 0, 1.8
C3, 0, 0, 1, 1, 0, 1.2

And then remove the second-to-last column (to produce):

C1, 1, 0, 1, 0, 2.3
C2, 1, 0, 1, 1, 1.8
C3, 0, 0, 1, 1, 1.2
You just have to create a hash map from the second file's C column and use it while processing the first file, as below. The action right after FNR==NR applies to the first file named on the command line (file2 here), and the subsequent action happens on the last file. This works because of awk's special variables FNR and NR, which track line numbers per file and across all files, respectively.

awk -v FS="," -v OFS="," 'FNR==NR { unique[$1]=$2; next } $1 in unique { $NF=unique[$1]; }1' file2 file1
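A quick check of the one-liner with the sample rows; note that the stored replacement value keeps file2's leading space, so overwriting the last field both drops the old column and appends the new value in one move:

```shell
printf 'C1, 1, 0, 1, 0, 1\nC2, 1, 0, 1, 1, 0\nC3, 0, 0, 1, 1, 0\n' > file1
printf 'C3, 1.2\nC1, 2.3\nC2, 1.8\n' > file2
# Hash file2's value by its C key, then overwrite file1's last field.
awk -v FS="," -v OFS="," 'FNR==NR { unique[$1]=$2; next } $1 in unique { $NF=unique[$1]; }1' file2 file1 > out.csv
cat out.csv
```

Assigning to $NF makes awk rebuild the record with OFS, which is why OFS must be set to "," as well.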
Joining two csv files on common column and removing the second last column
1,542,262,746,000
From coreutils' manual about join:

-e string
    Replace those output fields that are missing in the input with string. I.e., missing fields specified with the -12jo options.

I don't understand the option at all. What do the following mean?

"those output fields that are missing in the input"
"missing fields specified with the -12jo options"

Thanks.
The slightly cryptic string -12jo refers to the four separate options -1, -2, -j and -o, of which the first three have to do with selecting which field in each file to join on, and the last with which fields from each file should be output. The -j option is an extension in GNU join, and -j n is the same as -1 n -2 n (where n is some integer).

The -e option comes into effect when you, with -a, request unpaired lines from one or both of the files that you join. An unpaired line will have missing data, as the line from one file did not correspond to a line in the other file. The -e option replaces those missing fields with the given string. Likewise, if you request, with -o, a field that does not exist on a particular line in a file, you would use -e to replace the empty values with a string.

Example: two files that contain manufacturing costs and sales income for a number of products. Each file has the fields: product ID, product name, some number.

$ cat expenses.txt
1 teacup 5
2 spoon 7
3 bowl 10

$ cat sales.txt
1 teacup 30
2 spoon 24

To get the expenses and sales for all products, while replacing the number (from either the first or second file) that may be missing with the string NONE, I would do

$ join -a1 -a2 -o0,1.2,1.3,2.3 -e NONE expenses.txt sales.txt
1 teacup 5 30
2 spoon 7 24
3 bowl 10 NONE

Here, I use the -a option twice to request all lines from both files (a "full outer join" in SQL speak). The -o option is used to get specific fields from each file (field 0 is the join field, which is the first field in each file by default), and -e to specify the string NONE to replace missing values with. As you can see, we get NONE as the "sales value" since the product with ID 3 was not mentioned in the second file.
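The worked example above can be replayed verbatim:

```shell
# Recreate the two product files.
printf '1 teacup 5\n2 spoon 7\n3 bowl 10\n' > expenses.txt
printf '1 teacup 30\n2 spoon 24\n' > sales.txt
# Full outer join: -a1 -a2 keep unpaired lines from both files,
# -e NONE fills the fields that -o requests but cannot find.
join -a1 -a2 -o0,1.2,1.3,2.3 -e NONE expenses.txt sales.txt > out.txt
cat out.txt
```

Dropping -e NONE from this command shows why it matters: the bowl line would then come out with a silently absent last field.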
What does `join -e` mean?
1,542,262,746,000
I'm trying to identify all lines in common based on the first column of one file. I'm using the following command: awk '{print $1}' File1 | fgrep -wf - File2 >Out File1: M01605:153:000000000-B55NK:1:1101:10003:14536 chr1 150129998 A Rev 18 M01605:153:000000000-B55NK:1:1101:10007:14573 chr17 44166311 C 38 44166311 M01605:153:000000000-B55NK:1:1101:10007:14573 chr17 44166500 G Rev 34 M01605:153:000000000-B55NK:1:1101:10009:9160 chr8 16716272 G 35 16716395 M01605:153:000000000-B55NK:1:1101:10009:9160 chr8 16716336 A 37 16716337 M01605:153:000000000-B55NK:1:1101:10009:9160 chr8 16716336 A 38 16716459 M01605:153:000000000-B55NK:1:1101:10010:14111 chr8 89574844 A 38 89574844 M01605:153:000000000-B55NK:1:1101:10010:19939 chr3 181151945 T 36 181151945 M01605:153:000000000-B55NK:1:1101:10011:22802 chr17 43984669 A 34 43984765 M01605:153:000000000-B55NK:1:1101:10011:22802 chr17 43984669 A 38 43984689 File2: M01605:153:000000000-B55NK:1:1101:10003:14536 2:N:0:1 GTTTGCGCCGATGTA M01605:153:000000000-B55NK:1:1101:10003:4882 2:N:0:1 GCACTGTAAAAAGTA M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10007:5336 2:N:0:1 GTGTTTGTGTAGCTA M01605:153:000000000-B55NK:1:1101:10008:14477 2:N:0:1 GGGCGGAGGTGAAGA M01605:153:000000000-B55NK:1:1101:10009:18543 2:N:0:1 AGTTCGAGCGCAGTG M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10010:14111 2:N:0:1 CTGCGTACTGATAGC M01605:153:000000000-B55NK:1:1101:10010:19939 2:N:0:1 TCCGTGGTGCCGGCA M01605:153:000000000-B55NK:1:1101:10011:22802 1:N:0:1 TGAGTTCGGATAAAG Out: M01605:153:000000000-B55NK:1:1101:10003:14536 2:N:0:1 GTTTGCGCCGATGTA M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10010:14111 2:N:0:1 CTGCGTACTGATAGC M01605:153:000000000-B55NK:1:1101:10010:19939 2:N:0:1 TCCGTGGTGCCGGCA 
M01605:153:000000000-B55NK:1:1101:10011:22802 1:N:0:1 TGAGTTCGGATAAAG Expected Out: M01605:153:000000000-B55NK:1:1101:10003:14536 2:N:0:1 GTTTGCGCCGATGTA M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10010:14111 2:N:0:1 CTGCGTACTGATAGC M01605:153:000000000-B55NK:1:1101:10010:19939 2:N:0:1 TCCGTGGTGCCGGCA M01605:153:000000000-B55NK:1:1101:10011:22802 1:N:0:1 TGAGTTCGGATAAAG Note the bolded lines are missing from the actual output generated and are what I want to be in the output file. It seems like grep is running correctly, but then condensing all identical lines down into only one output line. Any suggestions?
That's exactly what the join command is made for: it joins two files based on a common field: $ awk '{print $1}' File1 | join - File2 M01605:153:000000000-B55NK:1:1101:10003:14536 2:N:0:1 GTTTGCGCCGATGTA M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10007:14573 2:N:0:1 GGGGATAAGCGTTGC M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10009:9160 2:N:0:1 CAGAAGAGGTAATGT M01605:153:000000000-B55NK:1:1101:10010:14111 2:N:0:1 CTGCGTACTGATAGC M01605:153:000000000-B55NK:1:1101:10010:19939 2:N:0:1 TCCGTGGTGCCGGCA M01605:153:000000000-B55NK:1:1101:10011:22802 1:N:0:1 TGAGTTCGGATAAAG M01605:153:000000000-B55NK:1:1101:10011:22802 1:N:0:1 TGAGTTCGGATAAAG Your files may be sorted numerically but not alphabetically, as expected by join. If join is complaining, slightly modify the command above to sort the input with GNU sort: $ awk '{print $1}' File1 | sort | join - <(sort -k1,1 --stable File2) As your second file seems to have duplicated lines (see coments), you may want to change the second sort command to sort -k1,1 --stable --unique File2 (still assuming you are using GNU sort, use uniq).
Grep not returning identical matches from awk pipe
1,542,262,746,000
I have two files, each containing a timestamp and a count, as follows.

File1.txt

9 2016-06-22
3 2016-06-23
2 2016-06-24
1 2016-06-25
2 2016-06-26
2 2016-06-27

File2.txt

3 2016-06-23
2 2016-06-25
5 2016-06-27

I would like to create an output that uses the date column (column 2) of both files and produces a joined output as follows.

Expected result:

9 2016-06-22
3 3 2016-06-23
2 2016-06-24
1 2 2016-06-25
2 2016-06-26
2 5 2016-06-27

Using the paste command is very complex and involves manual effort to get the expected output. Can someone help me with this? Thank you.
The best solution is to use the join command:

join -j 2 -a 1 -e " " -o 1.1 2.1 1.2 File1.txt File2.txt

Not the most elegant solution, but if you want to learn shell scripting, this should also do the job:

while read line1; do
    file1_number=$(echo ${line1} | cut -d ' ' -f 1)
    file1_date=$(echo ${line1} | cut -d ' ' -f 2)
    line2=$(grep ${file1_date} File2.txt)
    file2_number=$(echo ${line2} | cut -d ' ' -f 1)
    if [[ -z "${file2_number}" ]]; then
        file2_number=" "
    fi
    echo ${file1_number} "${file2_number}" ${file1_date}
done < File1.txt
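A reproduction of the join command with a trimmed copy of the sample data; '-' is used instead of a space as the -e placeholder so the filled field stays visible, and the comma form of -o replaces the whitespace-separated list:

```shell
printf '9 2016-06-22\n3 2016-06-23\n2 2016-06-24\n1 2016-06-25\n' > File1.txt
printf '3 2016-06-23\n2 2016-06-25\n' > File2.txt
# Join on the date (field 2 of both files, already in sorted order);
# -a 1 keeps File1's unpaired dates, -e fills the missing count.
join -j 2 -a 1 -e '-' -o 1.1,2.1,1.2 File1.txt File2.txt > out.txt
cat out.txt
```

Dates present only in File1.txt come out with '-' in the second column, matching the blank slot in the expected result.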
Joining data from two files based on a column comparison
1,542,262,746,000
file1:

0000002|SLM DEV CORP |PO 857
0000003|S TOPPING |APT 19
0000004|JD BROS LTD |PO 118
0000005|ZKZ SERVICES |14699 CREDITVIEW RD

file2:

0000001|GTI CONSULTING |4513 GLADEBROOK CRES
0000002|SLM DEVELOPMENT CORP | SLM |PO BOX 857
0000003|S TOPPING |APT 19
0000004|JD PLETT BROS LTD |PO BOX 118
0000005|ZKZ SERVICES |ZKZ |14699 CREDITVIEW RD

expected output:

0000002|SLM DEVELOPMENT CORP | SLM |PO BOX 857
0000003|S TOPPING |APT 19
0000004|JD PLETT BROS LTD |PO BOX 118
0000005|ZKZ SERVICES |ZKZ |14699 CREDITVIEW RD

I have tried the join command:

join -j1 1 -j2 1 -t'|' -o 1.1 2.2 2.3 file1 file2

Due to the pipe delimiter, I got the wrong output. Also, I cannot use any other symbol as the delimiter, since any symbol can occur in column 2. I need to match column 1 in file2 and copy columns 2 and 3 in full. I also tried looping over the column 1 data of file1 in file2 and getting the output, but that takes too long since my files are very large.
Simple with awk, if the join field is unique:

awk -F"|" 'a[$1]++' file1 file2

-F"|" sets pipe as the delimiter. a[$1]++ is a condition; when the condition is true, the line is printed. The condition becomes true when the first field $1 appears more than once.

If the join field is not unique:

awk -F"|" 'a[$1]++&&FNR!=NR' file1 file2

FNR!=NR is also a condition that must be true. It applies only to the second file, file2, while it is processed. That condition can be dropped when you can guarantee that the first field is unique within each file.
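A runnable check with a trimmed copy of the data; the simple a[$1]++ form prints only file2's copy here because no id repeats inside file1 itself:

```shell
cat > f1 <<'EOF'
0000002|SLM DEV CORP |PO 857
0000003|S TOPPING |APT 19
EOF
cat > f2 <<'EOF'
0000001|GTI CONSULTING |4513 GLADEBROOK CRES
0000002|SLM DEVELOPMENT CORP | SLM |PO BOX 857
0000003|S TOPPING |APT 19
EOF
# Print any line whose first |-field has been seen before.
awk -F"|" 'a[$1]++' f1 f2 > out.txt
cat out.txt
```

The 0000001 record appears only in f2 and is therefore never printed; the ragged column counts are harmless because only $1 is ever inspected.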
How to join two files by matching a column with an irregular number of columns?
1,542,262,746,000
Is there an easy way to print all lines of file1 (so that the output has the same number of lines as the input file1), but print a message such as NoMatch where the first entry of a line in file1 does not match the first entries of file2?

file1:

Entry1 Entry2
a 2
b 3
c 4
d 5

file2:

a b
b a
d d

Desired output:

Entry1
a 2
b 3
NoMatch 4
d 5

I am trying

join -a1 -e "NoMatch" -11 -21 -o2.1 file1 file2

since I would like to keep the unpairable lines from file1 that do not match file2 and give a message for these cases, but this also keeps all of the records in file2 (which contains duplicated records). What am I doing wrong? Could this be because my second file is tab delimited and my first file is space delimited? Thanks so much for all the help...
With awk, read file2 first and save $1 in seen[$1]; then read file1 and, if $1 wasn't "seen", replace it with NoMatch:

awk 'FNR==NR{seen[$1]++; next} {(FNR==1 || ($1 in seen)) || $1="NoMatch"};1' file2 file1

If you prefer join, you need sorted input. You'll have to extract the header from file1 first, sort the remaining lines, then join the result with the sorted file2:

{ head -n1; sort | join -j1 -a1 -e "NoMatch" -o 2.1 1.2 - <(sort file2); } <file1

and if needed, pipe everything to uniq to avoid duplicate lines:

{ head -n 1; sort | \
    join -j1 -a1 -e "NoMatch" -o 2.1 1.2 - <(sort file2) | \
    uniq; } <file1
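The awk answer relies on a terse "||" assignment; here is the same logic spelled out with an explicit if, runnable against the sample data:

```shell
printf 'Entry1 Entry2\na 2\nb 3\nc 4\nd 5\n' > file1
printf 'a b\nb a\nd d\n' > file2
# Mark ids from file2 as seen, then rewrite file1's first field
# to NoMatch on every non-header line whose id was never seen.
awk 'FNR==NR { seen[$1]++; next }
     { if (FNR > 1 && !($1 in seen)) $1 = "NoMatch"; print }' file2 file1 > out.txt
cat out.txt
```

The header line is passed through untouched because of the FNR > 1 guard, and the output keeps file1's line count and order.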
How do I print all records in file 1 but note the records that do not match file2?
1,542,262,746,000
I have a tabular file like this, which is my index:

a X001
a X002
a X003
b X002
c X006
z X007
z X008
z X001

I want to search the following single-column file against that index and return each match from the second column of the index:

a
b
z

So the output from that search would be this:

X001
X002
X003
X002
X007
X008
X001

Edit: Here is exact info from my files (top 10 lines from each).

Index:

10803548 COG4190
10803554 COG1476
10803555 COG1192
10803559 COG3385
10803567 COG0071
10803570 COG1695
10803571 COG0467
10803573 COG3883
10803574 COG0714
10803576 COG1192

File I am searching:

10956722
11497860
11497860
11497924
11497924
11497924
11497924
11497924
11497979
11497979

The expected output of this search:

COG3704
COG1474
COG1474
COG2801
COG2801
COG2801
COG2801
COG2801
COG2223
COG2223
With join:

join -o1.2 <(sort -k1n index) <(sort -k1n file)

This will merge the two files on field number 1 and, where they match, print the second field of the first file (1.2).

Or with awk:

awk 'FNR==NR{a[$1]=1} FNR!=NR&&a[$1]{print $2}' file index

The file file is loaded into an array a. When the second file, index, is processed, awk checks whether the first field is an index in the array (a[$1]). If yes, it prints the second field $2.
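The awk variant, checked against a subset of the first example:

```shell
# A subset of the index and the single-column search file.
printf 'a X001\na X002\nb X002\nc X006\nz X007\n' > index
printf 'a\nz\n' > file
# The first file marks the wanted keys; the second pass prints
# column 2 of every index line whose key was marked.
awk 'FNR==NR{a[$1]=1} FNR!=NR&&a[$1]{print $2}' file index > out.txt
cat out.txt
```

Because the second column is printed once per matching index line, duplicate index entries produce duplicate output lines, which is exactly the behaviour the question asks for.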
Find multiple matches in a tabular file and print second column?
1,542,262,746,000
I have 3 csv files I want to join by the first column (the id column). Each file has the same 3 columns. Row example:

id | timestamp | Name
3792318, 2014-07-15 00:00:00, "A, B"

When I join the 3 csv files with

join -t, <(join -t, csv1 csv2) csv3 > out.csv

the out.csv file doesn't have the same number of columns for each row, probably because the delimiter is a comma and some rows (like in the example above) have commas in the contents of a cell.
Obviously, using a CSV parser would be better, but if we can safely assume that:

- The 1st field will never contain a comma;
- You only want the ids that are present in the 1st file (if an id is in file2 or file3 and not in file1, you ignore it);
- The files are small enough to fit in your RAM,

then this Perl approach should work:

#!/usr/bin/env perl
use strict;
my %f;
## Read the files
while (<>) {
    ## remove trailing newlines
    chomp;
    ## Replace any commas within quotes with '|'.
    ## I am using a while loop to deal with multiple commas.
    while (s/\"([^"]*?),([^"]*?)\"/"$1|$2"/){}
    ## match the id and the rest.
    /^(.+?)(,.+)/;
    ## The keys of the %f hash are the ids;
    ## each line with the same id is appended to
    ## the current value of the key in the hash.
    $f{$1}.=$2;
}
## Print the lines
foreach my $id (keys(%f)) {
    print "$id$f{$id}\n";
}

Save the script above as foo.pl and run it like this:

perl foo.pl file1.csv file2.csv file3.csv

The script above can also be written as a one-liner:

perl -lne 'while(s/\"([^"]*?),([^"]*)\"/"$1|$2"/){} /^(.+?)(,.+)/; $k{$1}.=$2; END{print "$_$k{$_}" for keys(%k)}' file1 file2 file3
Merge CSV files with field delimiters also occurring inside quotes