| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,538,656,457,000 |
I have a tsv file of words, and I want to write a bash script that counts how many quartets (groups of four words) are in the file and exports the file name and the number of quartets to a csv file.
For example for the file fileName.tsv: I,have,this,word,cat,home,dog,day
The result would be a csv file with fileName.tsv,2.
|
To get the number of quartets, count the number of words and use integer division to divide by four.
First I'd use sed 's/,/ /g' to substitute each , with a space so that the number of words can be easily parsed. Then I'd pipe that into wc -w to count the number of words. Finally I'd use bash to perform integer division with $(( x / 4 )). That looks like this:
$ cat fileName.tsv
I,have,this,word,cat,home,dog,day
$ sed 's/,/ /g' fileName.tsv
I have this word cat home dog day
$ sed 's/,/ /g' fileName.tsv | wc -w
8
$ echo $(( $(sed 's/,/ /g' fileName.tsv | wc -w) / 4 ))
2
You mentioned making a csv file with <filename,quartet>. I assume you'd want more than one line so you can use a loop in bash to parse each file matching a pattern.
for filename in *.tsv; do
    quartet=$(( $(sed 's/,/ /g' "$filename" | wc -w) / 4 ))
    echo "$filename,$quartet" >> output.csv
done
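If you'd rather skip the sed/wc pipeline, the whole count can also be done in awk alone. This is a sketch assuming, as in the question's example, one comma-separated line per file; the sample-file creation is only there to make the snippet self-contained:

```shell
# create the sample file from the question
printf 'I,have,this,word,cat,home,dog,day\n' > fileName.tsv

# split on commas, so NF is the word count; integer-divide by 4
awk -F',' '{ printf "%s,%d\n", FILENAME, int(NF/4) }' fileName.tsv > output.csv

cat output.csv   # fileName.tsv,2
```

For files with several lines you would sum NF across lines before dividing, but for the single-line case this matches the sed/wc approach exactly.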
| Count every fourth word in a file |
1,538,656,457,000 |
The strings command prints the printable characters in a binary file.
What does "printable character" actually mean here? I mean, the code from which the binary was made was itself printable.
|
The readable code has been converted into machine code, and the comments have been removed by the preprocessor.
However, literal strings in the program like "Hello, World!" are still there for use at run-time. Also, the names of symbols like function names and variable names are contained in a table for use by debug tools, unless they have been removed with the strip utility. The names of dynamic code libraries are also present.
Most of my C programs contain their own man page, which can be shown with a -H option. So strings would also report the whole man page, plus every print format string, error message etc. and a list of all library calls, like strcmp@@GLIBC_2.2.5.
| Printable characters in a binary file [closed] |
1,538,656,457,000 |
Why is this bash script read line code giving me errors?
read -p "Does this require cropping? (y/n)? " answer
case ${answer:0:1} in
y|Y )
mkdir cropped; for i in *.mp4; do ffmpeg -i "$i" -filter:v "crop=1900:1080:-20:0" cropped/"${i%.*}.mp4"; rm -r *.mp4; cd cropped; cp -r *.mp4 ../
;;
* )
mkdir no
;;
esac
When I give an answer, I get this back from terminal:
Does this require cropping? (y/n)? n
/usr/local/bin/prep: line 17: syntax error near unexpected token `;;'
/usr/local/bin/prep: line 17: ` ;;'
However, it works perfectly fine if my executed (YES) answer code is changed to something simple like the following, rather than the whole mkdir cropped; for i in *.mp4... line:
mkdir yes
|
You are missing the done on your for loop, so the ;;, the * ) branch and everything after them are being parsed as part of the loop body.
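For reference, this is the shape of the corrected block. It is a sketch: the ffmpeg line from the question is reduced to a placeholder comment so the structure is visible, the prefix test is written portably as y*|Y* instead of ${answer:0:1}, and answer is preset instead of read so the snippet runs non-interactively. The only structural change is the added done:

```shell
answer="n"   # stands in for: read -p "Does this require cropping? (y/n)? " answer

case $answer in
    y*|Y* )
        mkdir -p cropped
        for i in *.mp4; do
            : # ffmpeg -i "$i" -filter:v "crop=1900:1080:-20:0" cropped/"${i%.*}.mp4"
        done    # <-- this 'done' was missing, so the following ';;' landed inside the loop body
        ;;
    * )
        mkdir -p no
        ;;
esac
```

Running bash -n on the broken version reproduces the exact "syntax error near unexpected token `;;'" message from the question.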
| Why is this bash script read line code giving me errors? [closed] |
1,538,656,457,000 |
I am working on macOS and tried the size command on cc:
$ which cc
/usr/bin/cc
it does not work correctly
$ size /usr/bin/cc
size: /usr/bin/cc: unknown load command 0x32
size: /usr/bin/cc: unknown load command 0x32
size: /usr/bin/cc: unknown load command 0x32
size: /usr/bin/cc: file format not recognized
$ size /bin/ls
size: /bin/ls: unknown load command 0x32
size: /bin/ls: unknown load command 0x32
size: /bin/ls: unknown load command 0x32
size: /bin/ls: file format not recognized
and size is the latest version:
$ size --version
GNU size (GNU Binutils) 2.31.1
Copyright (C) 2018 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) any later version.
This program has absolutely no warranty.
but on Centos
[root@iz2ze9wve43n2nyuvmsfx5z ~]# size /usr/bin/cc
text data bss dec hex filename
754853 8488 81856 845197 ce58d /usr/bin/cc
What's the problem with command size?
$ file /usr/bin/cc
/usr/bin/cc: Mach-O 64-bit executable x86_64
$ size --help
Usage: size [option(s)] [file(s)]
Displays the sizes of sections inside binary files
If no input file(s) are specified, a.out is assumed
The options are:
-A|-B --format={sysv|berkeley} Select output style (default is berkeley)
-o|-d|-x --radix={8|10|16} Display numbers in octal, decimal or hex
-t --totals Display the total sizes (Berkeley only)
--common Display total size for *COM* syms
--target=<bfdname> Set the binary file format
@<file> Read options from <file>
-h --help Display this information
-v --version Display the program's version
this works
me at Max-2018 in ~/desktop
$ /Library/Developer/CommandLineTools/usr/bin/size /usr/bin/cc
__TEXT __DATA __OBJC others dec hex
4096 4096 0 4294979584 4294987776 100005000
$ ls /Library/Developer/CommandLineTools/usr/bin | grep size
llvm-size
size
size-classic
|
It appears that you have installed GNU's size, which does not understand recent Mach-O load commands (hence the unknown load command 0x32 errors), so it cannot parse current macOS binaries. Use Apple's version from the Command Line Tools package you already have installed: /Library/Developer/CommandLineTools/usr/bin/size.
If your OS is El Capitan (10.11) or later, you have to disable SIP (at least temporarily) in order to install into directories like /bin, /sbin and /usr (but not /usr/local).
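The underlying issue is just PATH precedence: whichever directory containing a size appears first in $PATH wins. A hypothetical illustration with two stub scripts standing in for the GNU and Apple binaries (all paths here are throwaway demo paths, not the real install locations):

```shell
# two fake 'size' commands to show how PATH order picks one
mkdir -p /tmp/pathdemo/gnu /tmp/pathdemo/apple
printf '#!/bin/sh\necho GNU size\n'   > /tmp/pathdemo/gnu/size
printf '#!/bin/sh\necho Apple size\n' > /tmp/pathdemo/apple/size
chmod +x /tmp/pathdemo/gnu/size /tmp/pathdemo/apple/size

env PATH="/tmp/pathdemo/gnu:$PATH"   size    # -> GNU size
env PATH="/tmp/pathdemo/apple:$PATH" size    # -> Apple size
```

So to prefer Apple's tools in your shell, put /Library/Developer/CommandLineTools/usr/bin ahead of the GNU binutils directory in PATH (and run hash -r so bash forgets any cached location).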
| Size command is not recognized, thought installed correctly |
1,538,656,457,000 |
I have written a shell script named startup.sh which does a lot of things (basically start a lot of things for me after turning on my local machine) - here is an excerpt:
#!/bin/bash
gnome-terminal --tab &
veracrypt --auto-mount favorites &
thunderbird &
~/Application/IDEA/bin/./idea.sh &
/usr/bin/slack &
echo myuser mypass | skypeforlinux --pipelogin &
sh bsync-project-folder.sh &
exit
If I open a console window and do:
. startup.sh
The shell script is executed and the window is closed afterwards.
Also working:
sh startup.sh
OR
./startup.sh
The shell script is executed and the terminal window stays open - however it does not return to the console and I have to stop script execution with CTRL + C (no matter whether I execute it with the command line interpreter or with ./).
However I want a clean exit of my script and then return to the same console with a success message. What am I missing?
|
When you start a script with . script_name (dot, space, script name) and your script contains exit, your window will be closed. The dot notation means the script runs within the current shell, so the exit exits that shell - i.e. your terminal window - not just the script.
Try adding >/dev/null 2>&1 to each of the lines (before the final &) to find out which of the commands still holds stdout, e.g.:
gnome-terminal --tab >/dev/null 2>&1 &
...
You may leave the exit at the end, but it serves no purpose here.
Run the script: ./startup.sh
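The difference between sourcing and executing, and what exit does in each case, can be seen with a tiny throwaway script (the /tmp path is just for the demo):

```shell
# a script that prints, exits, then would print again
printf 'echo hi\nexit\necho never\n' > /tmp/demo_src.sh

bash /tmp/demo_src.sh                      # child shell: prints "hi", the parent shell survives
bash -c '. /tmp/demo_src.sh; echo after'   # sourced: exit ends that shell, so "after" never prints
```

That is exactly why ". startup.sh" closes your terminal window while "./startup.sh" does not.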
| How to properly write and execute a shell script and exit correctly? [duplicate] |
1,538,656,457,000 |
How do I change all files (different types including .sh, .py, and other executable files) in a known directory to permission 775 without listing all of the file names?
I mean "all" of them in that directory, no exception.
UPDATED: The command below actually solved the problem for me. Any explanation of why "+" instead of "\;"?
find . -type f -name "*.*" -exec chmod 775 {} +
|
Use find together with chmod:
find path_to_dir -type f -name "*.*" -exec chmod 775 {} \;
Change *.* to match the files whose permissions you would like to change; *.* applies the change to every file in the directory whose name contains a dot. (To really match all files, including those without an extension, drop the -name "*.*" test entirely.)
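Since the follow-up asks about + versus \;: with \; find runs chmod once per file, while + appends as many file names as fit onto a single chmod command line (similar to xargs), which is much faster on large trees. A quick sketch in a scratch directory, using echo to make the invocations visible:

```shell
cd "$(mktemp -d)"                 # scratch directory so nothing real is touched
touch a.sh b.py c.txt

find . -type f -exec echo call: {} \;   # three lines: one command invocation per file
find . -type f -exec echo call: {} +    # one line: all files batched into one invocation

find . -type f -exec chmod 775 {} +
ls -l a.sh                              # -rwxrwxr-x ...
```

Both forms produce the same permissions in the end; + simply does it with far fewer processes.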
| chmod all files in a directory [duplicate] |
1,538,656,457,000 |
A few days ago I found the following command:
for i in 0 1 2 3 4 5 6 S ; do ln -s /etc/rc$i.d /etc/rc.d/rc$i.d ; done
As far as I understand, this command creates a symbolic link for each runlevel directory using the for loop, but what I can't really understand is the S in that enumeration - what is it supposed to do?
|
Those numbers aren't randomly selected; they're the runlevels of your system. The runlevel is used to determine which init scripts are run. They're mostly obsolete now. And if you're on Linux, it's highly likely that runlevels S and 1 are the same. Your documents might be really old, or they'd probably be using update-rc.d instead of manually creating symlinks.
So your loop is iterating over all runlevels: 0-6 and S.
| Interpreting this command |
1,538,656,457,000 |
I am trying to list a number of offending IP's with a one line command, and am not sure how to do the last little bit, maybe someone can point me in the right direction.
cat /var/log/syslog* | grep "SRC=" | cut -d " " -f 14 | sort | uniq -c | sort -n -r
In English...this should print all syslog files (also those rotated), search for entries of the Firewall and grab the SRC value (IP), count them and list them from highest to lowest. All I want now is to limit it to the top 5... Anybody know a command that can do that ?
Example entry in syslog:
Jan 11 12:01:52 xxxx kernel: [47261.722647] INPUT packet died: IN=eth0
OUT= MAC=44:8a:5b:a0:24:eb:00:31:46:0d:21:e8:08:00 SRC=xx.xx.xx.xx
DST=xx.xx.xx.xx LEN=40 TOS=0x00 PREC=0x00 TTL=239 ID=33840 PROTO=TCP
SPT=1024 DPT=22151 WINDOW=1024 RES=0x00 SYN URGP=0
The entries are made by my custom Firewall, not part of this Question
Example output of the command:
47 SRC=13.82.59.79
2 SRC=77.72.82.145
2 SRC=213.157.51.11
2 SRC=159.203.72.216
1 SRC=77.72.85.15
1 SRC=77.72.85.10
1 SRC=77.72.83.238
1 SRC=77.221.1.237
1 SRC=222.186.172.43
1 SRC=216.170.126.109
1 SRC=191.101.167.253
1 SRC=190.198.183.234
1 SRC=173.254.247.206
1 SRC=164.52.13.58
1 SRC=141.212.122.145
1 SRC=125.78.165.42
1 SRC=118.139.177.119
1 SRC=111.75.222.141
1 SRC=103.30.40.9
|
awk '/SRC=/ { print $13 }' /var/log/syslog* | sort | uniq -c | sort -n -r | head -n 5
This does away with the catting, grepping and cutting from the original pipeline and replaces them with awk. The head -n 5 at the end will give you the top five results.
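To sanity-check the field number, here is the same pipeline run against a fabricated three-entry log (the host name, MAC and IPs are invented for the demo; field 13 of these lines is the SRC= token):

```shell
cat > /tmp/fakelog <<'LOG'
Jan 11 12:01:52 host kernel: [47261.722647] INPUT packet died: IN=eth0 OUT= MAC=44:8a SRC=13.82.59.79 DST=10.0.0.1
Jan 11 12:01:53 host kernel: [47261.722648] INPUT packet died: IN=eth0 OUT= MAC=44:8a SRC=13.82.59.79 DST=10.0.0.1
Jan 11 12:01:54 host kernel: [47261.722649] INPUT packet died: IN=eth0 OUT= MAC=44:8a SRC=77.72.82.145 DST=10.0.0.1
LOG

awk '/SRC=/ { print $13 }' /tmp/fakelog | sort | uniq -c | sort -n -r | head -n 5
```

Note the original question used cut -d " " -f 14 while awk sees the SRC= token as $13 - cut counts every single space as a delimiter (the empty OUT= field produces a double space), whereas awk treats runs of whitespace as one separator.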
| Listing a Summary with limits |
1,538,656,457,000 |
I have a tar.gz archive on an external drive, and to extract it I currently need to copy it to my home directory first and then extract. Is there a way to do it in one go - extract to the /home/me directory without the need to copy the archive first?
|
Use the following command:
tar -xvzf filename.tar.gz -C /home/me
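A quick round-trip shows -C in action (the /tmp paths below are stand-ins for the external drive and home directory):

```shell
mkdir -p /tmp/drive /tmp/home_me
echo data > /tmp/drive/file.txt
tar -czf /tmp/drive/archive.tar.gz -C /tmp/drive file.txt   # build a test archive on the "drive"
tar -xzf /tmp/drive/archive.tar.gz -C /tmp/home_me          # extract straight to the target, no copy
cat /tmp/home_me/file.txt    # -> data
```

The -C option makes tar change to the given directory before extracting (or, on creation, before reading the named files), so the archive itself can stay wherever it is.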
| How to extract file from tar.gz archive to different directory in bash [duplicate] |
1,538,656,457,000 |
I have a new arch install with Gnome Window Manager and Cinnamon. I created ~/.xinitrc with the command start cinnamon, and restarted the computer.
Now the system boots into the login screen for Gnome and Cinnamon, but there is no command line shell available in the GUIs, and I'm unable to boot the computer into command line mode either.
|
Usually in Cinnamon or GNOME there is a terminal application. If there is no such application, install it.
But if you want to get back to a tty, just press Ctrl+Alt+F1 (or any other F key in the range 1-6).
| Get to command line from arch cinnamon [duplicate] |
1,538,656,457,000 |
Currently my text file looks like this..
David Webb Box 34 Rural Route 2 Nixa MO 65714 (417)555-1478 555-66-7788 09-13-1970
Martha Kent 1122 North Hwy 5 Smallville KS 66789 (785)555-2322 343-55-8845 04-17-1965
Edward Nygma 443 W. Broadway Gotham City NJ 12458 (212)743-3537 785-48-5524 08-08-1987
O'Reilly Baba 123 Torch Ln Joplin MO 64801 (417)222-1234 456-65-3211 12-13-1999
Martin Bob 44 Boss Rd Joplin MO 64801 (417)888-4565 456-98-1111 01-01-2007
The dates are in the 9th field and I want to display them as January 7, 2017 instead of 01-07-2017, for example.
How should I do that? If using options please explain what they do briefly. Doing this in bash. Needing to put it in a script and output to a new file to preserve original.
|
This can easily be done via GNU awk with its time functions (mktime and strftime), but sed can do it too:
sed '
/^[0-9]/{ #for last field with date
y|-|/|
s/^/date +"%B %d, %Y" -d /e #reformat string by date
b #go to end (print)
}
s/\(.*\)\s/\1\n/ #separate last field
P #print string without last field
D #operate just last field from start
' original.file |
paste - - > new.file
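If GNU sed's e flag isn't available, a plain awk lookup table does the same job without calling date at all. This is a sketch assuming, like the sed answer, that the date is the last whitespace-separated field in MM-DD-YYYY form (original.file / new.file are the names used above; the sample line here is just for the demo):

```shell
# one demo line in the question's layout, date last
printf 'Martha Kent 1122 North Hwy 5 Smallville KS 66789 (785)555-2322 343-55-8845 04-17-1965\n' > original.file

# map the numeric month to its name and drop leading zeros from the day
awk 'BEGIN { split("January February March April May June July August September October November December", m, " ") }
     { split($NF, d, "-"); $NF = m[d[1]+0] " " (d[2]+0) ", " d[3]; print }' original.file > new.file

cat new.file   # ... 343-55-8845 April 17, 1965
```

The d[1]+0 / d[2]+0 additions force awk to treat "04" as the number 4, which both indexes the month array and strips the zero-padding from the day.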
| reformat date to have month in words |
1,538,656,457,000 |
How can I delete directories using bash that do not contain directories named wav or mp3? I use macOS Sierra.
find . -type d \! -exec test -e '{}/wav' \; -print finds the directories not containing wav directories. How can I include mp3 to this command? And how can I delete the resulting directories?
My music library follows this structure:
/Musik/<Artist>/<format>/<Artist---Album>/<Track_Titel.wav>, where format is wav or mp3. There are many directories without any audio file but covers, e.g. Thus I can not just search for empty directories to delete those directories not containing audio files.
|
First of all, before you do this, make a backup of your files.
Seriously.
To find and remove Artist directories (and their contents, recursively) which do not directly contain directories (or files) titled either wav or mp3 (case sensitive), try the following:
find /Musik -mindepth 1 -maxdepth 1 -exec test \! -e {}/wav \; -exec test \! -e {}/mp3 \; -print
Only after you have confirmed the output matches the directories you expect to be deleted (and double checked your backup) should you then run:
find /Musik -mindepth 1 -maxdepth 1 -exec test \! -e {}/wav \; -exec test \! -e {}/mp3 \; -exec rm -rf {} \;
| bash: Deleting directories not containing given strings |
1,538,656,457,000 |
I would like to start learning Linux. Can anyone give me some sources and tips?
|
The Linux® Command Line by William E. Shotts, Jr.
Free, well written, beginner-friendly, step-by-step, comprehensive, and with external references.
And the sooner you get used to the man pages, the better.
Also learn to read what the screen tells. Skipping output is a bad habit. If you really don't want it, learn how to filter it out to keep it under control.
| I want to start learning Unix and Linux [closed] |
1,538,656,457,000 |
Given the /etc/passwd file, what is the command to print only the login names of users that do not have /sbin/nologin as a shell?
Also, in my /etc directory, what can I use as a command to count the # of files that start with the letter 's'? Kind of new to this, thank you!
|
To print only the users whose shell is not nologin, you can use awk in its simplest form:
awk -F/ '$NF != "nologin"' /etc/passwd
Here we use -F/ as the field delimiter and then $NF != "nologin", where $NF is the last field of the line/row. The default action in awk is to print, so it'll print the whole line.
Finding all files starting with an s can easily be done using find - GNU find in this case:
find /etc/ -maxdepth 1 -type f -name 's*' -printf '%P\n'
Here we use GNU find with /etc/ as the search path; -maxdepth 1 restricts the search to one level (no subdirectories).
-type f tells find to check regular files only.
-name 's*' matches names starting with s.
-printf '%P\n' prints each file's name with the starting-point path removed; %P is a printf-style format directive. See man find for more.
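If you want just the login names rather than the whole lines, split on : instead and print the first field; and for the second question, the same find piped into wc -l gives the count. A sketch against a fabricated two-line passwd file (the entries are invented for the demo):

```shell
# two fake /etc/passwd entries
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1::/usr/sbin:/sbin/nologin\n' > passwd.sample

awk -F: '$NF != "/sbin/nologin" { print $1 }' passwd.sample   # -> root

# count the files in /etc starting with 's'
find /etc -maxdepth 1 -type f -name 's*' | wc -l
```

With -F: the last field is the full shell path, so the comparison is against "/sbin/nologin" rather than just "nologin".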
| Command to print users that don't have /sbin/nologin as a shell |
1,427,484,694,000 |
I have a file named --help.tgz. I want to delete it but don't know how.
I tried rm "--help.tgz" and rm \-\-help.tgz, and neither worked. How can I delete a file with such a name?
|
Try: rm -- --help.tgz
The -- tells rm that there are no further flags to process and that everything else are the files/directories to be deleted. Most unix utilities use -- in a similar way.
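A safe way to try it out in a scratch directory; an alternative that works even with tools that don't honour -- is to prefix the name with ./ so it no longer begins with a dash:

```shell
cd "$(mktemp -d)"            # scratch directory for the demo
touch -- --help.tgz          # create a file whose name starts with --
rm -- --help.tgz             # '--' ends option parsing; the name is now an operand

touch ./--help.tgz
rm ./--help.tgz              # prefixing ./ also hides the leading dashes
```

Both invocations remove the file; without either trick, rm parses --help.tgz as an (unknown) option.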
| delete file with name --help.tgz [duplicate] |
1,427,484,694,000 |
I want to generate the numbers between 0001 and 9999 on Linux, separate each into two variables, and print them like this:
I will go for 00 and 01
I'm using bash on Linux, and looking to generate this output (I assume I could use seq or echo together, maybe):
Example: from number 0001 to 0005 the result would be:
I will go for 00 and 01
I will go for 00 and 02
I will go for 00 and 03
I will go for 00 and 04
I will go for 00 and 05
|
If the exercise is simply to generate that output:
seq -w 9999 | sed 's/../something something & and /'
This reads the numbers generated by seq into sed and replaces the first two digits with the text something something & and (where the & is replaced by those first two digits). The other group of two digits is untouched and remains at the end of the line.
With awk:
awk 'BEGIN {
for (i = 0; i <= 99; ++i)
for (j = 0; j <= 99; ++j) {
if (i == 0 && j == 0) continue
printf "something something %.2d and %.2d\n", i, j } }'
This uses a double loop to generate all the numbers from 00 to 99 in the inner loop for each number in the outer loop.
The inner if statement ensures not to output the result for 00 and 00.
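A shell-loop variant also shows the two-variable split the question mentions. This sketch uses GNU seq's -f to zero-pad and cut to slice the digits (in bash you could use ${n:0:2} and ${n:2:2} instead); it stops at 0005 for brevity where the question goes to 9999:

```shell
for n in $(seq -f '%04g' 1 5); do           # 0001 .. 0005, zero-padded to four digits
    first=$(printf '%s' "$n" | cut -c1-2)   # first two digits
    second=$(printf '%s' "$n" | cut -c3-4)  # last two digits
    printf 'I will go for %s and %s\n' "$first" "$second"
done
```

This prints exactly the five example lines from the question, and the two halves are available as separate variables inside the loop.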
| generate number and devided in line for linux |
1,427,484,694,000 |
I want to convert the lowercase sequences (a, t, c, g) into '-' using a Unix command, but it's not saving the file in place: with -i it shows errors, and without -i the code below prints the desired result to the terminal but doesn't change the file. As you can see, after the header information there are sequence pairs. I want to change only the sequence pairs, keeping the header info unchanged. The file looks like this:
0 chr1 11680 11871 chr6 28206109 28206336 - 4581
ctggagattctta-ttagtgatttgggctggggcc-tggccatgtgtattttttta-aatttccactgatgattttgctgcatggccggtgttgagaatgactgCG-CAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAGTTTAAACGAGATTGCCAGCACCGGGTATCATTCAC----------------------------------------------CATTTTTCTTTTCGTT
-TAGGGAGTCTTAGTCAAAGGTTTGGACCAAGTCCCTGGCCATGCAGATCTTTGTAGAATCTCCACTCGTGACTTTCCTGCATAACCAGAGTTGAGCATCTTTGAGTCAAGTGTGCCA-ACTTTCTTTGCTGTT-------------TAAATAAGGATGCCAACACCGCATGTCATTAACAGTCTCGTAGGTTGATTGATTTGTTGGCTGGCTCAAAAATGAGAGTTATTTTTCATTTTGTT
1 chr1 11872 12139 chr6 28206484 28206708 - 4257
AACTTGCCGTCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGACTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTTCATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCACTGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCAT-TGTTCATCTTCTGGCCCCTGTTGTCT
AGTTTTCTGTCTGCTAATT-TGCCACCAGTCATTTCCTATTACGTGTGTCTGCTGCCTCCTAGCCCAGGCT-----TGCCCTTCCTCCC--TCTTCTGAGGTGTCATAGGGTCGTGAC--------------------TTACCTGGTTTGGGGGAGTAGTTGG---------------AAGCTGAGTGA-GTGGTGGGGTTTTCTTATGCTAAAGACCTGCGTCCAGTATAGGAAGAGCCATGTGCCTCCACTCTGGCCCTTGTGGTCT
2 chr1 12177 12259 chr17 66149263 66149338 + 3811
GATTGGAGGAAAGATGAGTGAGAGCATCAACTTCTCTCACAACCTAGGCCAGTAAGTAGTGCTTGTGCTCATCTCCTTGGCTG
GGTTGGAGGGAAGATGAGTGAAGGGATCAATTTCTCTGATGACCTGGGCCGGTAGG-------TGTGGTGTCCTCTTTGTCTG
Desired Output:
0 chr1 11680 11871 chr6 28206109 28206336 - 4581
---------------------------------------------------------------------------------------------------------CG-CAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAGTTTAAACGAGATTGCCAGCACCGGGTATCATTCAC----------------------------------------------CATTTTTCTTTTCGTT
-TAGGGAGTCTTAGTCAAAGGTTTGGACCAAGTCCCTGGCCATGCAGATCTTTGTAGAATCTCCACTCGTGACTTTCCTGCATAACCAGAGTTGAGCATCTTTGAGTCAAGTGTGCCA-ACTTTCTTTGCTGTT-------------TAAATAAGGATGCCAACACCGCATGTCATTAACAGTCTCGTAGGTTGATTGATTTGTTGGCTGGCTCAAAAATGAGAGTTATTTTTCATTTTGTT
1 chr1 11872 12139 chr6 28206484 28206708 - 4257
AACTTGCCGTCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGACTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTTCATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCACTGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCAT-TGTTCATCTTCTGGCCCCTGTTGTCT
AGTTTTCTGTCTGCTAATT-TGCCACCAGTCATTTCCTATTACGTGTGTCTGCTGCCTCCTAGCCCAGGCT-----TGCCCTTCCTCCC--TCTTCTGAGGTGTCATAGGGTCGTGAC--------------------TTACCTGGTTTGGGGGAGTAGTTGG---------------AAGCTGAGTGA-GTGGTGGGGTTTTCTTATGCTAAAGACCTGCGTCCAGTATAGGAAGAGCCATGTGCCTCCACTCTGGCCCTTGTGGTCT
2 chr1 12177 12259 chr17 66149263 66149338 + 3811
GATTGGAGGAAAGATGAGTGAGAGCATCAACTTCTCTCACAACCTAGGCCAGTAAGTAGTGCTTGTGCTCATCTCCTTGGCTG
GGTTGGAGGGAAGATGAGTGAAGGGATCAATTTCTCTGATGACCTGGGCCGGTAGG-------TGTGGTGTCCTCTTTGTCTG
#for even lines
sed -n 2~2p h.txt| sed 's/a/-/g' | sed 's/t/-/g' | sed 's/c/-/g' | sed 's/g/-/g'
#for odd lines
sed -n 1~2p h.txt| sed -n 2~2p | sed 's/a/-/g'| sed 's/t/-/g' | sed 's/c/-/g' | sed 's/g/-/g'
|
I would use perl instead since that has the $. special variable which holds the current line number. So if $. modulo 2 is 0 (if $. % 2 == 0), then we change all occurrences of a, t, c or g to -:
$ perl -pe 's/[actg]/-/g if $. % 2 == 0' file
0 chr1 11680 11871 chr6 28206109 28206336 - 4581
--------------------------------------------------------------------------------------------------------CG-CAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAGTTTAAACGAGATTGCCAGCACCGGGTATCATTCAC----------------------------------------------CATTTTTCTTTTCGTT
cTAGGGAGTCTTAGTCAAAGGTTTGGACCAAGTCCCTGGCCATGCAGATCTTTGTAGAATCTCCACTCGTGACTTTCCTGCATAACCAGAGTTGAGCATCTTTGAGTCAAGTGTGCCA-ACTTTCTTTGCTGTT-------------TAAATAAGGATGCCAACACCGCATGTCATTAACAGTCTCGTAGGTTGATTGATTTGTTGGCTGGCTCAAAAATGAGAGTTATTTTTCATTTTGTT
1 chr1 11872 12139 chr6 28206484 28206708 - 4257
AACTTGCCGTCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGACTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTTCATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCACTGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCAT-TGTTCATCTTCTGGCCCCTGTTGTCT
AGTTTTCTGTCTGCTAATT-TGCCACCAGTCATTTCCTATTACGTGTGTCTGCTGCCTCCTAGCCCAGGCT-----TGCCCTTCCTCCC--TCTTCTGAGGTGTCATAGGGTCGTGAC--------------------TTACCTGGTTTGGGGGAGTAGTTGG---------------AAGCTGAGTGA-GTGGTGGGGTTTTCTTATGCTAAAGACCTGCGTCCAGTATAGGAAGAGCCATGTGCCTCCACTCTGGCCCTTGTGGTCT
2 chr1 12177 12259 chr17 66149263 66149338 + 3811
GATTGGAGGAAAGATGAGTGAGAGCATCAACTTCTCTCACAACCTAGGCCAGTAAGTAGTGCTTGTGCTCATCTCCTTGGCTG
GGTTGGAGGGAAGATGAGTGAAGGGATCAATTTCTCTGATGACCTGGGCCGGTAGG-------TGTGGTGTCCTCTTTGTCTG
To make the change in the original file, just use -i:
perl -i -pe 's/[actg]/-/g if $. % 2 == 0' file
However, I don't think you actually want only the even lines. You seem to want to change lower case residues to dashes on the sequence lines. If so, you can focus on lines that don't contain any spaces and make the change there:
$ perl -pe 's/[actg]/-/g if !/ /' file
0 chr1 11680 11871 chr6 28206109 28206336 - 4581
--------------------------------------------------------------------------------------------------------CG-CAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAGTTTAAACGAGATTGCCAGCACCGGGTATCATTCAC----------------------------------------------CATTTTTCTTTTCGTT
-TAGGGAGTCTTAGTCAAAGGTTTGGACCAAGTCCCTGGCCATGCAGATCTTTGTAGAATCTCCACTCGTGACTTTCCTGCATAACCAGAGTTGAGCATCTTTGAGTCAAGTGTGCCA-ACTTTCTTTGCTGTT-------------TAAATAAGGATGCCAACACCGCATGTCATTAACAGTCTCGTAGGTTGATTGATTTGTTGGCTGGCTCAAAAATGAGAGTTATTTTTCATTTTGTT
1 chr1 11872 12139 chr6 28206484 28206708 - 4257
AACTTGCCGTCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGACTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTTCATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCACTGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCAT-TGTTCATCTTCTGGCCCCTGTTGTCT
AGTTTTCTGTCTGCTAATT-TGCCACCAGTCATTTCCTATTACGTGTGTCTGCTGCCTCCTAGCCCAGGCT-----TGCCCTTCCTCCC--TCTTCTGAGGTGTCATAGGGTCGTGAC--------------------TTACCTGGTTTGGGGGAGTAGTTGG---------------AAGCTGAGTGA-GTGGTGGGGTTTTCTTATGCTAAAGACCTGCGTCCAGTATAGGAAGAGCCATGTGCCTCCACTCTGGCCCTTGTGGTCT
2 chr1 12177 12259 chr17 66149263 66149338 + 3811
GATTGGAGGAAAGATGAGTGAGAGCATCAACTTCTCTCACAACCTAGGCCAGTAAGTAGTGCTTGTGCTCATCTCCTTGGCTG
GGTTGGAGGGAAGATGAGTGAAGGGATCAATTTCTCTGATGACCTGGGCCGGTAGG-------TGTGGTGTCCTCTTTGTCTG
Once again, to modify the original file instead of printing to standard output, use -i:
perl -i -pe 's/[actg]/-/g if !/ /' file
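Since the question asked for sed: the same "lines without spaces are sequences" logic works there too, and a single bracket expression replaces the four chained s commands. A sketch on a tiny stand-in file (GNU sed shown; its -i edits the file in place):

```shell
printf 'hdr 1 2 3\nacGTacGT\n' > seqs.sample          # stand-in: one header line, one sequence line

sed '/ /!s/[atcg]/-/g' seqs.sample                    # headers (lines with spaces) untouched; prints --GT--GT
sed -i '/ /!s/[atcg]/-/g' seqs.sample                 # same substitution, written back to the file
```

The / /! address means "on lines that do not contain a space", which is what keeps the header lines intact.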
| How can I change lowercases to '-' in the even and odd lines using sed piping? |
1,427,484,694,000 |
I am new to the terminal world and would like to process some images in a directory. Some examples of images are as follows (they are from the foggy cityscapes dataset):
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.01.png
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.02.png
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.005.png
munster_000000_000019_leftImg8bit_foggy_beta_0.01.png
munster_000000_000019_leftImg8bit_foggy_beta_0.02.png
munster_000000_000019_leftImg8bit_foggy_beta_0.005.png
Note that _leftImg8bit_foggy_beta_ part of the names are common to all the images and the part before that is used to identify different images. I would like to first separate these images into three separate sub-directories with respect to beta suffix of 0.01 or 0.02 or 0.005. And after separating the files, I would like to remove the file name after _leftImg8bit, for all file names in a subdirectory, while retaining the .png extension.
Could someone help with the linux (CentOs to be specific) terminal commands as I am not so familiar with them. Thanks in advance.
|
With zsh:
autoload -Uz zmv # best in ~/.zshrc
mkmv() { mkdir -p -- $2:h && mv -- "$@"; }
zmv -n -P mkmv '(*)_leftImg8bit_foggy_beta_(*)(.png)' '$2/$1$3'
(remove the -n (dry-run) if happy).
Or with any POSIX-like shell (though without the safeguards of zmv):
for file in *_leftImg8bit_foggy_beta_*.png; do
dir=${file#*_leftImg8bit_foggy_beta_}
dir=${dir%.*}
mkdir -p -- "$dir" &&
  mv -- "$file" "$dir/${file%%_leftImg8bit_foggy_beta_*}.png"
done
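To check the loop's behaviour before pointing it at the real image set, run it against dummy files in a scratch directory (file names taken from the question):

```shell
cd "$(mktemp -d)"
touch frankfurt_000000_000294_leftImg8bit_foggy_beta_0.01.png \
      munster_000000_000019_leftImg8bit_foggy_beta_0.005.png

for file in *_leftImg8bit_foggy_beta_*.png; do
    dir=${file#*_leftImg8bit_foggy_beta_}   # strip the prefix  -> 0.01.png
    dir=${dir%.*}                           # strip .png        -> 0.01
    mkdir -p -- "$dir" &&
      mv -- "$file" "$dir/${file%%_leftImg8bit_foggy_beta_*}.png"
done

find . -type f    # ./0.01/frankfurt_000000_000294.png and ./0.005/munster_000000_000019.png
```

Each beta value becomes a subdirectory, and the files inside keep only their identifying prefix plus the .png extension.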
| process files and their names in a directory |
1,427,484,694,000 |
I have used the history -c command in the terminal, but it only works for the open session. If I log out and back in, the history is still remembered. How do I stop this from occurring?
I want to manually and permanently delete the history of commands typed in the terminal, preferably with a single command, and with an actual explanation of what to do rather than step-skipping tech talk, as I am a low-experience user; scripting is rarely used and easy to forget.
|
If you want to delete typed history manually, on the bash shell you can delete the ~/.bash_history file or edit it to remove the lines you want removed. Note that history -c clears only the in-memory history; bash rewrites ~/.bash_history when it exits, which is why the history reappears - so clear the file as well (for example with history -c && history -w).
To check which shell you use, type echo $SHELL.
| What to do about key history being re-remembered in terminal? |
1,427,484,694,000 |
I have been trying to upload files from a Debian machine (a Raspberry Pi) to Mega cloud storage via the CLI.
I have created the .megarc file in my home directory in the following format:
[Login]
Username = ******@gmail.com
Password = ***********
I am getting the following error:
$ megadf -h
ERROR: Can't login to mega.nz: API call 'us' failed: Server returned error EEXPIRED
Let me know if there are any fixes
|
The fix I could find is as follows:
Instead of installing and using the megatools package from the Raspbian/Debian package manager, download the installer package from https://mega.nz/cmd.
Update and upgrade using:
# apt-get update
# apt-get upgrade
Install the downloaded .deb file
# dpkg -i megacmd-Raspbian_9.0_armhf.deb
Log in to the Mega account using:
# mega-login username password
| Upload files from raspberry pi to Mega cloud using megatools |
1,427,484,694,000 |
I decided to slim down my .zshrc and declutter the oh-my-zsh entries. zplug can now auto-install referenced plugins and auto-update them, which is what I wanted to achieve. However, every time I open a terminal I get the following output:
[zplug] Start to update 0 plugins in parallel
[zplug] Elapsed time: 0.0074 sec.
==> Updating finished successfully!
It is of course not needed and I'd like it to be removed.
Here's the section of .zshrc where I have oh-my-zsh entries:
# Check if zplug is installed
[ ! -d ~/.zplug ] && git clone https://github.com/zplug/zplug ~/.zplug
source ~/.zplug/init.zsh && zplug update
zplug 'zplug/zplug', hook-build:'zplug --self-manage'
zplug "seebi/dircolors-solarized", ignore:"*", as:plugin
zplug "plugins/mvn", from:oh-my-zsh
zplug "plugins/gradle", from:oh-my-zsh
zplug "plugins/git", from:oh-my-zsh
zplug "plugins/sudo", from:oh-my-zsh
zplug "plugins/dnf", from:oh-my-zsh
# Supports checking out a specific branch/tag/commit
zplug "b4b4r07/enhancd", at:v1
# Support bitbucket
zplug "b4b4r07/hello_bitbucket", \
from:bitbucket, \
as:command, \
use:"*.sh"
zplug "zsh-users/zsh-completions", defer:0
zplug "zsh-users/zsh-autosuggestions", defer:2, on:"zsh-users/zsh-completions"
zplug "zsh-users/zsh-syntax-highlighting", defer:3, on:"zsh-users/zsh-autosuggestions"
zplug "zsh-users/zsh-history-substring-search", defer:3, on:"zsh-users/zsh-syntax-highlighting"
# Install plugins if there are plugins that have not been installed
if ! zplug check --verbose; then
printf "Install? [y/N]: "
if read -q; then
echo; zplug install
fi
fi
# Then, source plugins and add commands to $PATH
zplug load
|
Based on reviewing the code at https://github.com/zplug/zplug, those messages are emitted by zplug to stdout via printf, so you could silence them by changing this line:
source ~/.zplug/init.zsh && zplug update
to this one:
source ~/.zplug/init.zsh && zplug update > /dev/null
To inhibit the update altogether, simply remove the update command:
source ~/.zplug/init.zsh
| Remove message shown each time ZSH runs (with oh-my-zsh installed) |
1,427,484,694,000 |
I am working on an application using Delphi 7 as the front end and Postgres 9.0 as the back end.
I have to upload images to the server, so I use \lo_import and \lo_export for inserting images on the server and getting the images from the server.
I ran into a problem where I needed the LASTOID after a \lo_import so I could use the OID to update a row in my table, but I can't get the syntax correct in Windows.
Here is my question on stackoverflow.com. I have got the answer, but the script is a Linux shell command and I cannot run it in Windows:
psql -h 192.168.1.12 -p 5432 -d myDB -U my_admin << EOF
\lo_import '/path/to/my/file/zzz4.jpg'
UPDATE species
SET speciesimages = :LASTOID
WHERE species = 'ACAAC04';
EOF
and
echo "\lo_import '/path/to/my/file/zzz4.jpg' \\\\ UPDATE species SET speciesimages = :LASTOID WHERE species = 'ACAAC04';" | \
psql -h 192.168.1.12 -p 5432 -d myDB -U my_admin
I have tried this in Windows:
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h 192.168.1.12 -p 5432 -d myDB -U my_admin -c "\lo_import 'C://im/zzz4.jpg'";
Then immediately (programmatically) I'm doing:
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h 192.168.1.12 -p 5432 -d nansis -U nansis_admin -c " update species SET speciesimages = :LASTOID WHERE species='ACAAC24'"
But I get Syntax error at or near ":".
Can anyone tell me how to convert it to a Windows command?
|
I got this answer to my question on Stack Overflow: postgres-9-0-linux-command-to-windows-command-conversion.
Just put the commands in a file (say import.psql)
-- contents of import.psql
\lo_import '/path/to/my/file/zzz4.jpg'
UPDATE species
SET speciesimages = :LASTOID
WHERE species = 'ACAAC04';
then issue the command:
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h 192.168.1.12 -p 5432 -d myDB -U my_admin -f import.psql
| Postgres 9.0 linux command to windows command conversion [closed] |
1,427,484,694,000 |
I have multiple files, namely SRR3384742.Gene.out.tab, SRR3384743.Gene.out.tab, SRR3384744.Gene.out.tab and many more in that order. I am extracting the first and fourth columns from these files and storing them in an output file. When my script reads a new file, the extracted column should be added tab-separated next to the existing data instead of being appended below it.
Input files:
SRR3384742.Gene.out.tab
N_unmapped 313860 313860 313860
N_multimapping 5786679 5786679 5786679
N_noFeature 286816 31696770 438410
N_ambiguous 1283487 32117 65902
AT1G01010 301 0 301
AT1G01020 623 1 622
AT1G03987 5 5 0
AT1G01030 151 2 149
SRR3384743.Gene.out.tab
N_unmapped 780346 780346 780346
N_multimapping 4621162 4621162 4621162
N_noFeature 182428 28470016 362650
N_ambiguous 1451612 43059 117293
AT1G01010 154 3 151
AT1G01020 685 2 683
AT1G03987 0 0 0
AT1G01030 63 0 63
Output I am getting:
SRR3384742.Gene.out.tab
AT1G01010 301
AT1G01020 622
AT1G03987 0
AT1G01030 149
SRR3384743.Gene.out.tab
AT1G01010 151
AT1G01020 683
AT1G03987 0
AT1G01030 63
Output desired:
SRR3384742.Gene.out.tab SRR3384743.Gene.out.tab
AT1G01010 301 151
AT1G01020 622 683
AT1G03987 0 0
AT1G01030 149 63
I tried the following script:
for sample in *Gene.out.tab; do echo -en $sample "\n"; awk 'NR>4 {print $1 "\t" $4}' $sample; awk '{print $0, $sample}' OFS='\t' $sample; done > output
|
This should give you the output described in the comments, using GNU awk:
gawk 'FNR==1{names[c++]=FILENAME}
FNR>4{ lines[$1] = "x"lines[$1] ? lines[$1]"\t"$4 : $4; }
END{
    for(i=0;i<c;i++){
printf "\t%s",names[i]
}
printf "\n";
for(i in lines){
print i,lines[i]
}
}' *Gene.out.tab
SRR3384742.Gene.out.tab SRR3384743.Gene.out.tab
AT1G01010 301 151
AT1G01020 622 683
AT1G01030 149 63
AT1G03987 0 0
And, to get it all nicely aligned visually as well, pass it through column:
$ gawk 'FNR==1{names[c++]=FILENAME}FNR>4{ lines[$1] = "x"lines[$1] ? lines[$1]"\t"$4 : $4; } END{ for(i=0;i<c;i++){printf "\t%s",names[i];} printf "\n"; for(i in lines){ print i,lines[i]}}' *Gene.out.tab | column -s$'\t' -t
SRR3384742.Gene.out.tab SRR3384743.Gene.out.tab
AT1G01010 301 151
AT1G01020 622 683
AT1G01030 149 63
AT1G03987 0 0
FNR is a special awk variable that always holds the line number of the current file being processed. FILENAME is a GNU awk special variable that holds the name of the file currently being processed.
FNR==1{names[c++]=FILENAME}: if this is the first line of one of the input files, then use the variable c as the index for the names array whose values are the file names, and also increment its value by 1 (c++). After all files have been processed, names[0] will be the first file name, names[1] will be the second and so on.
FNR>4{ lines[$1] = "x"lines[$1] ? lines[$1]"\t"$4 : $4; }: This is equivalent to this:
if(FNR>4){
    if("x"lines[$1]){
        lines[$1] = lines[$1]"\t"$4
    }
    else{
        lines[$1] = $4
    }
}
If the current input file's line number is 5 or more, check whether this first field already has an associated value in the array lines. We check using "x"lines[$1] because if lines[$1] were 0, the test would be false, but x0 is true, so the x protects against that. If we do have a value, we append a tab and the current line's 4th field to it; if we do not, we set it to the current line's 4th field.
END{ ... }: do this after processing all input.
for(i=0;i<c;i++){printf "\t%s",names[i]}; printf "\n"; : print each file name in the names array, preceded by a tab. We want the leading tab to ensure that we have the same number of fields in the header line and in the content. After printing the file names, print a newline.
for(i in lines){print i,lines[i]}: for each index of the lines array, print the index (the ID) and then print the associated value that was stored in the first step.
Limitation: this requires storing all output data in memory. That really shouldn't be an issue on modern machines since we only store the IDs and just one value per ID per file, so it should be able to handle enormous amounts of input before choking on a reasonably decent machine, but it might become a problem with really enormous amounts of data.
| convert a new line to a tab formatted file |
1,427,484,694,000 |
I need help to find out a way to extract specific information of the lines below using Linux commands.
391,(INSIDE-A),to,(OUTSIDE-A),source,static,SRV_I_N1909,SRV_NAT_I_N1909,destination,static,REDE_AMX_MCK,REDE_AMX_MCK,translate_hits=4399,untranslate_hits=4413
431,(INSIDE-A),to,(OUTSIDE-A),source,static,WK_I_5.5.4.56,SRV_NAT_10.9.3.212,translate_hits=284903,untranslate_hits=8472
432,(INSIDE-A),to,(OUTSIDE-A),source,dynamic,GRP_WKS_HOSTS_,WK_NAT_10.9.7.229,destination,static,G_SRV_ENG_CL,G_SRV_E_CL,translate_hits=0,untranslate_hits=0
436,(INSIDE-A),to,(OUTSIDE-A),source,static,SRV_I_ND007,NAT_10.9.4.238,destination,static,R_MCK,R_MCK,translate_hits=1966,untranslate_hits=1966
437,(INSIDE-A),to,(OUTSIDE-A),source,static,WK_I_5.8.104.120,NAT_A_10.9.7.245,translate_hits=84908,untranslate_hits=1965
440,(INSIDE-A),to,(OUTSIDE-A),source,dynamic,REDE_NET1,NAT_A_10.9.7.247,destination,static,SRV_BT_10.3.33.9,SRV_BT_10.3.33.9,translate_hits=18970,untranslate_hits=18970
As you can see, the lines are different, desired information:
440, translate_hits=18970,untranslate_hits=18970
|
Assuming no field in the file has an embedded comma or newline character (i.e. it's a "simple CSV file"), you can get the first and the last two fields from each line with
$ awk -F , 'BEGIN { OFS=FS } { print $1, $(NF-1), $NF }' file.csv
391,translate_hits=4399,untranslate_hits=4413
431,translate_hits=284903,untranslate_hits=8472
432,translate_hits=0,untranslate_hits=0
436,translate_hits=1966,untranslate_hits=1966
437,translate_hits=84908,untranslate_hits=1965
440,translate_hits=18970,untranslate_hits=18970
NF is a special variable that contains the number of fields on each line, and we set both input and output field separator to a comma. In the print block, we print only the fields that we're interested in.
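As a quick illustration with made-up input, NF-relative addressing picks out the first and last two fields regardless of how many fields a line has:

```shell
printf 'a,b,c\n1,2,3,4,5\n' | awk -F, 'BEGIN{OFS=FS}{print $1, $(NF-1), $NF}'
# first line has 3 fields, second has 5; both yield field 1 plus the last two
```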
| How I can extract just some fields from a CSV line of text |
1,427,484,694,000 |
Consider the following code:
UPDATE user set password=PASSWORD('NEWPASSWORD_CAME_HERE') WHERE User='root';
Where it is written NEWPASSWORD_CAME_HERE I've putted my password (between the quote marks).
Yet when I executed this query I got this error:
ERROR 1046 (3D000): No database selected
Why have I got this error?, I followed different guides and this is the most traditional way I've seen so I can't understand why it failed.
|
First of all, you'd have to specify the database in your UPDATE statement, not only the table:
UPDATE mysql.user ...
otherwise MySQL can't know on which database you're operating (hence the error).
However, this is not the proper way to change a user's password in MySQL. Fiddling with the mysql database (which contains the server's metadata) is not recommended; a direct UPDATE would additionally require a FLUSH PRIVILEGES afterwards to take effect. Do this instead:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('NEWPASSWORD_CAME_HERE');
| Changing root mysql password failed when done from mysql CLI |
1,427,484,694,000 |
I am trying to build a custom kernel on ubuntu and I saw this document and it said I needed to install the packages
https://help.ubuntu.com/community/Kernel/Compile
To start, you will need to install a few packages. The exact commands to install those packages depends on which release you are using:
Hardy (8.04):
sudo apt-get install linux-kernel-devel fakeroot kernel-wedge build-essential
Note: The package makedumpfile is not available in Hardy.
Lucid (10.04):
sudo apt-get install fakeroot build-essential crash kexec-tools makedumpfile kernel-wedge
sudo apt-get build-dep linux
sudo apt-get install git-core libncurses5 libncurses5-dev libelf-dev asciidoc binutils-dev
sudo apt-get install linux-kernel-devel fakeroot kernel-wedge build-essential
sudo = permits users to execute a command as the super user
apt-get = get a package with the Advanced Packaging Tool (something like extract a package from a library, kind of, I believe)
install = operation to do
linux-kernel-devel - ?????????????? (what does this mean)
fakeroot - lets a user do file manipulations as a fake user
kernel-wedge
build-essential - ?????????????? (what does this mean)
|
Commands can be looked up with man; for example, man sudo brings up documentation for the sudo command. For package names like linux-kernel-devel, apt-cache show <name> (or a web search) gives a description, and documentation for installed packages lives under the /usr/share/doc/<name> directory. Briefly: apt-get is the command-line front-end to APT (the Advanced Package Tool, Debian/Ubuntu's package manager); build-essential is a meta-package that pulls in the compilers and tools (gcc, make, ...) needed to build software; linux-kernel-devel similarly pulls in packages useful for kernel development; kernel-wedge contains helper scripts used when packaging kernels; and fakeroot lets a build process pretend to have root privileges for file-ownership purposes.
| What does the commands sudo, apt-get, install, and fakeroot stand for? [closed] |
1,427,484,694,000 |
I know there must be a way to do it, only my skills are not that sharp.
|
Create empty files
You can simply touch a file to create it, and you can combine that with bash brace expansion to make this easier:
touch somefile # Will create a single file called `somefile`
touch somefile{01..10} # Will create 10 files called `somefile01` to `somefile10`
touch some{bar,baz} # Will create a file called `somebar` and one called `somebaz`
Creating files with content
You can redirect output from a command into a file with bash redirection:
echo "some content" > somefile
ls > somefile
Or for longer fixed input you can use heredocs:
cat <<EOF >somefile
some multi
line
file
EOF
| Create multiple non .txt files in a single directory without involving editors (vim, emacs, etc) [closed] |
1,427,484,694,000 |
I have a server running under Debian wheezy. I want to create a directory in /. I can't without running sudo mkdir myDirectory. Once in /myDirectory, I can't write anything or create a repertory. I tried to run as root: chmod -R /myDirectory 775, but in vain. I am sure that I am missing something obvious, but I can't find what. Could somebody point me in the right direction?
|
Indeed, it seems that you haven't understood the basics of file permissions.
ls -ld /myDirectory shows you that root is both the owner and the group of the new directory. I.e. if you access the directory then you do that as other, and you have defined (775) that other have no write permission in this directory. (Note, by the way, that chmod expects the mode before the path, i.e. chmod -R 775 /myDirectory; your chmod -R /myDirectory 775 would only have produced an error.)
Probably the best solution is to change the owner:
sudo chown $USER: /myDirectory
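To see what the digits of 775 mean in practice, here is a small check on a scratch directory (GNU stat assumed for the -c format): the three digits map to owner, group and other permissions respectively.

```shell
d=$(mktemp -d)
chmod 775 "$d"
stat -c '%a %A' "$d"   # octal mode and its symbolic rwx form, side by side
rmdir "$d"
```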
| I don't have writing access to a subdirectory in / |
1,427,484,694,000 |
find . -type f -name '*.c' -exec cat {} \; | sed '/^\s*#/d;/^\s*$/d;/^\s*\/\//d' | wc -l
Can anyone explain the meaning?
|
Explanation:
find . -type f -name '*.c' - find all files in current directory recursively with any symbols in name and .c extention. See man find
-exec cat {} \; - get content of files found on previous step. See -exec construction: -exec command {} +
sed '/^\s*#/d;/^\s*$/d;/^\s*\/\//d' - removes several types of "comments" (or something similar). The script contains 3 sections divided by ;, where:
/^\s*#/d - start of line (^), 0 or more whitespace characters (\s*, a GNU sed extension) and a # symbol; the trailing d command deletes matching lines;
/^\s*$/d - empty lines: the same as the previous part, but ending with $, which means end of line;
/^\s*\/\//d - lines with two slashes (\/\/, each slash quoted with a backslash) preceded by 0 or more whitespace characters.
wc -l - counting the number of lines of code
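A quick sanity check of the filter on a throwaway C file (GNU sed assumed, since \s is a GNU extension): of the four lines, only the actual code line should survive the three delete commands.

```shell
f=$(mktemp)
printf '#include <stdio.h>\n\n// a comment\nint main(void) { return 0; }\n' > "$f"
sed '/^\s*#/d;/^\s*$/d;/^\s*\/\//d' "$f" | wc -l   # the #include, blank and // lines are dropped
rm -f "$f"
```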
| This is the command to find number of lines of code that i have written? [closed] |
1,427,484,694,000 |
I want to make multiple copies of a file. I found a readily available solution and I tried. Surprisingly it did not work.
Code:
for i in {1,2,3,4}; do cp MainFile.asy 'CopyFile_$i.asy'; done
Present output:
Folder location
MainFile.asy
CopyFile_$i.asy
I am surprised where I went wrong?
More info:
Attempt1:
Attempt2: from below accepted answer and it worked
Attempt3: from below answer and it did not work
|
The problem is the single quotes which prevent $i from being expanded. Change it to this:
for i in {1,2,3,4}; do cp MainFile.asy "CopyFile_$i.asy"; done
For a more generic version that works in more shells maybe try:
for i in 1 2 3 4; do cp MainFile.asy "CopyFile_$i.asy"; done
Or this without manually entering each value in the range:
for i in $(seq 1 4); do cp MainFile.asy "CopyFile_$i.asy"; done
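A self-contained check in a scratch directory (the file content here is just a placeholder):

```shell
cd "$(mktemp -d)"
echo 'draw(unitsquare);' > MainFile.asy
for i in 1 2 3 4; do cp MainFile.asy "CopyFile_$i.asy"; done
ls CopyFile_*.asy   # CopyFile_1.asy through CopyFile_4.asy, each a copy of MainFile.asy
```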
| WSL make multiple copies of a file |
1,590,319,044,000 |
Is there any way to manipulate terminal input before it gets executed? Example:
apt-get update is entered in the terminal. Now I want to change it to sudo apt-get update before it gets executed.
What I tried:
I thought about a possibility to execute a script which would then execute the entered command with the manipulated Input, but I didn't find a way to run that script while another program should be executed.
I appreciate every answer. I'm on Linux Mint Serena.
Clarification:
Whenever a command is invoked there should be a script or whatsoever running in the background and check if it matches a pattern. If no then execute input and if yes then manipulate that command and execute -> That's why my example with apt-get update to sudo apt-get update. The only thing I can` figure out is, how that script would get that command before it would be executed.
Possible purposes:
When trying to access a specific file/directory do a password prompt
Creating shortcuts (cd _1 will be changed to cd ~/Desktop)
|
UH, something completely new (to me): read the answer how to intercept every command, which uses PS4.
I just tested it:
bash: PS4='$(echo $(date) $(history 1) >> /tmp/trace.txt) TRACE: '
bash: set -x
bash: date
TRACE: date
Fr 28. Jul 14:13:24 CEST 2017
As described in the answer above, PS4 ... is evaluated for every command being executed .... So if you substitute this to pass any of your commands through your wrapper and validate for execution, this way you can intercept any of your commands.
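A related, hedged sketch: bash's DEBUG trap also fires before every command and exposes it in $BASH_COMMAND, so a wrapper could at least inspect (log, warn about, or refuse) each command there. A throwaway script demonstrates it:

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
trap 'echo "about to run: $BASH_COMMAND" >&2' DEBUG
echo hello
EOF
bash "$script"   # stderr gets "about to run: echo hello" before "hello" appears
rm -f "$script"
```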
| Manipulate terminal input |
1,590,319,044,000 |
I was trying to use !! on my new install of debian and I get the following error:
$ sudo !!
sudo: !!: command not found
I do I gain use of !!?
Also what can I call !! so that I can actually google something about it?
|
You are referencing the history function of your shell when you refer to !!
I'm not sure what shell you are using. From man bash:
HISTORY EXPANSION
The shell supports a history expansion feature that is similar to the
history expansion in csh.
...
Event Designators
An event designator is a reference to a command line entry in the his‐
tory list. Unless the reference is absolute, events are relative to
the current position in the history list.
! Start a history substitution, except when followed by a blank,
newline, carriage return, = or ( (when the extglob shell option
is enabled using the shopt builtin).
!n Refer to command line n.
!-n Refer to the current command minus n.
!! Refer to the previous command. This is a synonym for `!-1'.
Is there anything in your shell history? When you type the history command, do you get any output? Also check that history expansion is enabled: in bash, set -o | grep histexpand should say "on" (it can be turned on with set -H, and is off by default in non-interactive shells). If it is off, the shell passes !! literally to sudo, which produces exactly the error you see.
I'm not able to duplicate the error you see:
~$ ls -l | head -1
total 54324
~$ sudo !!
sudo ls -l | head -1
total 54324
~$
| How to install the '!!' command? |
1,590,319,044,000 |
Redirect the output to two different files, One should have new output whenever the commands execute and the other should have both new & old content.
For example:
openstack port create --network testnetwork1 test1-subport110 --disable-port-security --no-security-group
I need to redirect output into 2 different file. File A.txt and B.txt. Whenever executed the openstack port create command the new output should be in A.txt and old & new output should be in B.txt.
I want like below,
cat A.txt
port2UUID
cat B.txt
port1.UUID
port2.UUID
Kindly help me. Thanks in advance
|
cmd | tee A.txt >> B.txt
Or
cmd | tee -a B.txt > A.txt
Either of these would tee (think of a plumber's T) cmd's output both into A.txt and, in append mode, into B.txt.
With the zsh shell, you can also do:
cmd > A.txt >> B.txt
(where zsh does the T'ing internally by itself when redirecting the same file descriptor several times).
To include cmd's stderr into the inflow of the T, use:
cmd 2>&1 | tee A.txt >> B.txt
Or in zsh:
cmd > A.txt >> B.txt 2>&1
Or:
cmd >& A.txt >>& B.txt
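A small reproduction of your port1.UUID/port2.UUID example, using the tee form (which works in any POSIX shell):

```shell
cd "$(mktemp -d)"
printf 'port1.UUID\n' > B.txt                 # B.txt already holds the old output
printf 'port2.UUID\n' | tee A.txt >> B.txt    # new output goes to A.txt and is appended to B.txt
cat A.txt   # only the new line
cat B.txt   # old line followed by the new line
```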
| Redirect the output to two different files, One should have new output whenever the commands execute and the other should have both new & old content |
1,590,319,044,000 |
I have a text file containing a list of strings. The strings are separated by newlines and have the same length, 8-digit. I need to split larger file into smaller chunks, where each chunk contains 4 strings, all strings in the same sequence as they are in a large file.
So I need to create 16 files, 15 files x 4 string each + 1 file x 2 strings. The files should be named as list1.txt, list2.txt, etc.
What is simplest way to solve this using tools such as awk, sed, etc.?
|
You can easily use split.
split --lines=4 --additional-suffix=".txt" --numeric-suffixes inputfile list
where inputfile is, obviously, the input file. Note that the numeric suffixes start at 00 by default (list00.txt, list01.txt, ...); GNU split also accepts --numeric-suffixes=1 to start counting at 1, which matches your list1.txt, list2.txt naming apart from the zero-padding.
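As a sanity check of the resulting file count (GNU coreutils assumed; the 8-digit numbers here are stand-ins for your strings), 62 lines split 4 per file should yield 15 full files plus one 2-line file:

```shell
cd "$(mktemp -d)"
seq 10000001 10000062 > input    # 62 eight-digit stand-in strings, one per line
split --lines=4 --additional-suffix=".txt" --numeric-suffixes=1 input list
ls list*.txt | wc -l             # 16 files: list01.txt .. list16.txt
wc -l < list16.txt               # the last file holds the remaining 2 lines
```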
| Split text file into smaller chunks |
1,590,319,044,000 |
I need to copy a file, so that a destination file has some specific string on beginning of each line, and it needs to be a bash one liner. So no script and loops, just bol.
bol - bash one liner
I personally need this done with command that uses grep program. I appreciate if you can solve it any way possible, I just don't have that much use of it, if not with grep.
EDIT: Okay, can't be done with grep, sed is okay.
|
$ sed 's/^/specific string/' input >output
You said you needed to use grep, okay...
$ sed 's/^/specific string/' input | grep . >output
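For example, on made-up input every line gets the prefix:

```shell
printf 'one\ntwo\n' | sed 's/^/specific string: /'
# each line now starts with "specific string: "
```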
| How to copy the file while making changes to every line [bol]? |
1,590,319,044,000 |
If I have typed in Service apache and service tomcat for example,
how do I switch to previously written stuff?
|
By default, the Up/Down arrow keys step through previously entered commands, and Ctrl+R searches the history incrementally. On some distributions the Page up/Page down keys are additionally bound (via /etc/inputrc) to search the history for commands beginning with what you have already typed.
| How can I autocomplete a console line I wrote before? |
1,590,319,044,000 |
I have these files in MAC which have weird ._ character before filenames/folders. Which I want to delete in one shot. Is there a way to do it in commandline?
eg.
._js
._css
._image
if I go into normal image folder. I see another swarm of these files along with the actual files.
|
In bash, this will delete everything in the current working directory which has the prefix ._:
rm ._*
To do the same recursively in all subdirectories:
find . -name '._*' -type f -delete
If what you actually wanted to do was change their names to a form without the prefix, you can run:
for f in ._*
do
    mv -- "$f" "${f#._}"
done
(A for loop over the glob is safer than parsing the output of ls, which breaks on unusual file names.)
| how to delete swarm of ._ files using commandline |
1,590,319,044,000 |
There are multiple .tar files in a directory. I am trying to extract them all.
The following command works
for a in $(ls -1 *.tar); do tar -xvf $a; done
But when I try following command, it prints all the file names but does nothing. It does not extract the .tar files.
% tar -xvf *.tar
Solarized-Dark-Cyan-3.0.3.tar
Solarized-Dark-Green-3.0.3.tar
Solarized-Dark-Magenta-3.0.3.tar
Solarized-Dark-Orange-3.0.3.tar
Solarized-Dark-Red-3.0.3.tar
Solarized-Dark-Violet-3.0.3.tar
Why is that, given unzip '*.zip' works for multiple .zip files.
|
unzip handles '*.zip'-style arguments, tar doesn’t. There might be archive extractors with this feature, with the ability to extract tarballs, but I’m not aware of any.
You should avoid using ls for this:
for a in *.tar; do tar -xvf "$a"; done
tar -xvf *.tar tries to extract the second and further tarballs from the first one, which usually won’t do anything.
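A self-contained check of the loop, with hypothetical theme directories standing in for the Solarized tarballs:

```shell
cd "$(mktemp -d)"
mkdir theme-a theme-b
touch theme-a/index.theme theme-b/index.theme
tar -cf a.tar theme-a
tar -cf b.tar theme-b
rm -r theme-a theme-b          # keep only the tarballs
for a in *.tar; do tar -xf "$a"; done
ls -d theme-*                  # both directories are back
```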
| Untar multiple files in a directory |
1,590,319,044,000 |
I have a massive file of customer account information that is currently sorted like this, into one column. I am looking to split each row, using the : as the separator. But in doing so, for each row, when separated, I wanted to make into a new column, placing the data after each : into the respective column. My ultimate goal is to make this into CSV form to import somewhere for data analytics and/or to build a database.
firstName:John
middleName:null
lastName:Doe
companyName:John Doe Corp
suffix:null
primaryEmail:[email protected]
primaryPhone:555.555.5555
secondaryEmail:[email protected]
secondaryPhone:null
Also, this is not the total amount of rows per customer. Each customer is 55 rows.
|
Using perl, which is present on any desktop or server Linux distro:
perl -lne '
BEGIN{$,=","}
($k,$v)=split":",$_,2;
next unless defined $v;
for($k,$v){s/"/""/g,$_=qq{"$_"}if/[$,"]/}
$k=$t{$k}//=$t++;
if(exists$f[$k]){print@f;@f=()}
$f[$k]=$v;
END{print@f;print STDERR sort{$t{$a}<=>$t{$b}}keys%t}
' your_file
This should convert the file to standard CSV, except that the header (the first line with the field names) will be printed to stderr, after the whole file has been processed. You can save it somewhere with ... >body 2>hdr and then cat hdr body > final_file.csv.
This does not attach any special significance to empty lines, etc: a record is considered as comprised of a cluster of fields which have different names, no matter what order they're in.
Fields which contain either , or " will be put inside "...", and any inner " will be escaped by doubling it as "" (using the CSV convention).
You can adjust the field separator by changing $,="," to eg. $,="|" (or $,="\t" for Tabs). You can get rid of the quoting & escaping by removing the for($k,$v){ ... } line.
This could be done in awk (NOT in sed or tr, though), only that it will be a bit more complicated, as awk has no way to print entire arrays at once (you have to loop through them), nor the ability to split a string in a limited number of fields (you'd have to use a substr trick for that).
| Using sed, awk, or tr to split each row, using colon (:) as the separator, into CSV format |
1,590,319,044,000 |
I have a file containing space for example like this:
ACTTTTTTTTGSGSGSGSG TTT
RTATATTATRSSTSTSTST HHH
I want to eliminate the space and get the result:
ACTTTTTTTTGSGSGSGSG__TTT
RTATATTATRSSTSTSTST__HHH
|
With sed, assuming that the purpose is to replace each blank space with an underscore (_), for all blanks spaces in the lines
sed 's/ /_/g' file
Tests
$ cat file
ACTTTTTTTTGSGSGSGSG TTT
RTATATTATRSSTSTSTST HHH
$ sed 's/ /_/g' file
ACTTTTTTTTGSGSGSGSG__TTT
RTATATTATRSSTSTSTST__HHH
| How to replace space by __ |
1,590,319,044,000 |
I came across the command
du -xh / | grep -P "G\t"
I am interested in the switch -P of grep and what does it do. Also, can anyone explain what the "G\t" part does?
Please do not explain du -xh or the basics of the command grep.
$ du -xh / | grep -P "G\t"
5.1G /var/oracle/XE/datafile
5.1G /var/oracle/XE
5.1G /var/oracle
1.1G /var/lib
6.9G /var
1.9G /opt/softwareag/webMethods/install/fix
1.9G /opt/softwareag/webMethods/install
1.2G /opt/softwareag/webMethods/Designer
1.3G /opt/softwareag/webMethods/common
1.9G /opt/softwareag/webMethods/CCE
1.2G /opt/softwareag/webMethods/IntegrationServer/instances/default/replicate/salvage
3.0G /opt/softwareag/webMethods/IntegrationServer/instances/default/replicate
2.3G /opt/softwareag/webMethods/IntegrationServer/instances/default/packages
5.5G /opt/softwareag/webMethods/IntegrationServer/instances/default
5.5G /opt/softwareag/webMethods/IntegrationServer/instances
5.7G /opt/softwareag/webMethods/IntegrationServer
16G /opt/softwareag/webMethods
16G /opt/softwareag
16G /opt
1.1G /usr/share
3.0G /usr
11G /u01/app/oracle/oradata/XE
11G /u01/app/oracle/oradata
12G /u01/app/oracle
12G /u01/app
12G /u01
39G /
|
As explained in the grep manual, -P enables the use of PCREs, i.e. Perl Compatible Regular Expressions.
The PCRE expression G\t matches a G followed by a tab.
The effect is that you only get a listing of directories whose size is listed in gigabytes (or whose name happens to match the pattern).
An alternative pipeline that more reliably matches the G at the end of the first tab-delimited column only:
... | awk -F '\t' '$1 ~ /G$/'
Would you also want to see the entries that are shown in units smaller or larger than gigabytes, then change G into [KMGTPEZY].
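A quick check of that awk filter on made-up du-like lines: only the gigabyte entries pass through.

```shell
printf '1.1G\t/var/lib\n512M\t/tmp\n39G\t/\n' | awk -F '\t' '$1 ~ /G$/'
# the 512M line is filtered out
```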
| Command du -xh / | grep -P "G\t" explained? |
1,590,319,044,000 |
I have this script
#!/bin/bash
module load bedtools/2.21.0
bamfiles=(
/temp/hgig/fi1d18/1672_WTSI-COLO_023_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_023_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_005_w3/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_005_w3.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_036_2post/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_036_2post.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_021_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_021_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_027_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_027_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_011_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_011_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_176_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_176_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_170_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_170_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_141_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_141_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLIVM_005_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLIVM_005_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_099_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_099_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_085_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_085_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_075_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_075_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_027_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_027_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_021_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_021_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_036_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_036_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_005_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_005_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_023_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_023_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_121_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_121_1pre.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_013_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_013_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-OESO_005_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-OESO_005_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_011_a_RNA/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_011_a_RNA.dupmarked.bam
/temp/hgig/fi1d18/1672_WTSI-COLO_019_1pre/mapped_sample/HUMAN_1000Genomes_hs37d5_RNA_seq_WTSI-COLO_019_1pre.dupmarked.bam
)
for file in "${bamfiles[@]}"; do
fname=$(basename "$file")
fdir=$(dirname "$file")
bamtofastq -i "$file" -fq "${fdir}/${fname%.bam}.fq"
done
I run this
[fi1d18@cyan01 ~]$ chmod +x run.sh
[fi1d18@cyan01 ~]$ run.sh
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
basename: invalid option -- 's'
Try `basename --help' for more information.
./run.sh: line 30: bamtofastq: command not found
[fi1d18@cyan01 ~]$
[fi1d18@cyan01 ~]$ run.sh
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
./run.sh: line 32: bamtofastq: command not found
[fi1d18@cyan01 ~]$
|
Given the limited information in your question, and assuming your bamtofastq command is this one from the bedtools package, I have come up with the following:
#!/bin/bash
bamfiles=(
/path/to/file1.bam
/path/to/file2.bam
/path/to/file3.bam
)
for file in "${bamfiles[@]}"; do
fname=$(basename "$file")
fdir=$(dirname "$file")
bedtools bamtofastq -i "$file" -fq "${fdir}/${fname%.bam}.fq"
done
This assumes you want to manually hand jam all your bam files into the script and want the .fq files to reside in the same directory as their corresponding bam files. Note that your bamtofastq: command not found errors simply mean that no stand-alone bamtofastq is on your PATH; with bedtools it is a subcommand, invoked as bedtools bamtofastq as above. If this is not the case please provide more information that could help us answer your question more efficiently.
| Running a pipelin in linux [closed] |
1,590,319,044,000 |
I have a log file like
name = CE_20_122 assigned_hostnames = host1 cpuset_name = usr_1397032
name = CE_21_122 assigned_hostnames = host4 cpuset_name = usr_1397028
name = CE_22_122 assigned_hostnames = host4 cpuset_name = usr_1397024
.
.
.
name = CE_76_122 assigned_hostnames = host27 cpuset_name = usr_1397012
name = CE_77_122 assigned_hostnames = host28 cpuset_name = usr_1397128
The command
sort logfile
sorts the lines as a whole.
How do I sort lines by one of the columns, e.g. by hostX or by usr_X?
|
by hostX:
sort -nk 6.6
by usr_X:
sort -nk 9.6
Here -n sorts numerically and -k F.C starts the sort key at character C of field F. With the default (whitespace) field separator each field includes its leading blanks, so character 6 of field 6 (" host1") is the first digit after "host", and character 6 of field 9 (" usr_1397032") is the first digit after "usr_".
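A minimal check of why the numeric key matters, with hypothetical host/usr values chosen so numeric and lexical order differ (host2 vs host10):

```shell
cd "$(mktemp -d)"
printf '%s\n' \
  'name = CE_20 assigned_hostnames = host10 cpuset_name = usr_2' \
  'name = CE_21 assigned_hostnames = host2 cpuset_name = usr_10' > logfile
sort -nk 6.6 logfile   # host2 sorts before host10
sort -nk 9.6 logfile   # usr_2 sorts before usr_10
```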
| text file sorting |
1,590,319,044,000 |
When I execute man foo the terminal shows only the first page and then pauses. Then I have to manually press the keys to scroll here and there.
After executing man foo how do I get an output which is continuously scrolling till the end of it. It'll be bonus if I could control the speed of scrolling.
|
man itself only calls $PAGER to display the man page. $PAGER is usually set to less, which does not support that kind of automatic scrolling.
You can simply set $PAGER to any other command that does support such a feature.
You can also simply do something like:
man man|perl -pe 'sleep 1'
Of course you can also make it sleep two seconds for each line, or use fractional delays for finer speed control, e.g. man man | perl -MTime::HiRes=sleep -pe 'sleep 0.2'. ;)
| Auto continuous scrolling of man output |
1,590,319,044,000 |
I'm having an interesting issue with XFCE Terminal/Gnome Terminal (not reproducible in XTerm), where executing bash or logging in using login or su will open a new Bash instance inside a Bash instance as shown:
_randall@manbearpig:/home/randall[root@manbearpig randall]#
Ctrl+D and exit both exit back to the original bash instance. How do I make these terminal emulators behave like Xterm, which opens the new user account or bash instance over the original one?
|
I don't understand the problem. Typing bash, login or su is SUPPOSED to start a new shell.
What is it that you expect to happen?
I cannot see where your system is doing anything wrong.
If you want to open another TERMINAL program, then type gnome-terminal or whatever the program name is.
Bash is a shell, where you type commands; gnome-terminal, xterm, konsole (and lots more) are just terminal emulators which show the output of a shell (bash/sh/dash/ksh/csh/zsh...).
| Duplicate bash prompts |
1,590,319,044,000 |
I would like to edit something like this;
ABC
abc
123
Into something like this;
Aa1Bb2Cc3
|
Assuming the strings are exactly 3 characters long, you can do it with awk, for example. Something like this does the work:
awk 'BEGIN {FS="" } {a=a$1;b=b$2;c=c$3} END {print a b c}' input_file >output_file
If the length is different and constant over the lines you can use something like:
awk -v N=3 'BEGIN {FS="" } { for(i=1;i<=N;i++) a[i]=a[i]$i} END { for(i=1;i<=N;i++) {printf "%s",a[i]};}' input_file >output_file
Replace 3 with the number of chars per line
This will work well on GNU awk. Not sure about other variants.
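To check the N=3 variant against the sample input from the question (a trailing print "" is added here only so the result ends with a newline):

```shell
printf 'ABC\nabc\n123\n' |
  awk -v N=3 'BEGIN {FS=""} {for(i=1;i<=N;i++) a[i]=a[i]$i} END {for(i=1;i<=N;i++) printf "%s", a[i]; print ""}'
# -> Aa1Bb2Cc3
```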
| How to convert multiple columns from file to single string |
1,590,319,044,000 |
I have a huge .csv file in this format:
"acc","lineage"
"MT993865","B.1.509"
"MW483477","B.1.402"
"MW517757","B.1.2"
"MW517758","B.1.2"
"MW592770","B.1.564"
...
i.e, the first column is a string representing the accession_id of the data sample and the second column is a covid variant lineage. I would like to extract accession_ids along with their lineage for a few specific variants of interest, for example, Omicron i.e. B.1.1.529. I tried to grep the file with -w but since . is a non-word character, it fetches me the results of variants that extend omicron for example, B.1.1.529.1
For elaborated discussion, please have a look at this bash script I wrote:
# filter data based on the selected lineages (refer to variants_lineage.txt for more info) as given below.
# File with metadata
metadata_file="$HOME/thesis/SARS-CoV2-data/metadata.csv"
cat "$metadata_file" | tr -d '"' | tr ',' $'\t' > adj_metadata.tsv
# list of lineages of interest
selected_lineages=("B.1.1.7" "B.1.351" "P.1" "B.1.617.2" "B.1.1.5290" "C.37" "B.1.621" "B.1.429" "B.1.427" "CAL.20C" "P.2" "B.1.525" "P.3" "B.1.526" "B.1.617.1" )
pattern=$(echo ${selected_lineages[*]}|tr ' ' '|')
if [ -f "adj_metadata.tsv" ]
then
echo "File exists"
for lineage in ${selected_lineages[@]}
do
echo "Filtering for lineage $lineage"
grep -w "$lineage" adj_metadata.tsv >> filtered_metadata.tsv
done
else
echo "Adjusted metadata file does not exist."
fi
# Check for the uniqueness of the filtered_metadata.csv file, this should fetch the list of selected_lineages
cut -d$'\t' -f2 filtered_metadata.tsv | sort | uniq
Any suggestions/advice are very much appreciated.
And also please feel free to comment on improvements that are not related to the question.
Thank you in advance.
|
Method 1
Since the string in your .csv is always between double-quotes ", you could include the quotes in your match. You then simply use single quotes ' for the expression.
Example:
asdf.csv:
"foo","B.1.1.529"
"bar","B.1.1.529.1"
╰─$ grep '"B.1.1.529"' ./asdf
"foo","B.1.1.529"
As you see B.1.1.529.1 will not match in this case.
Method 2
While method 1 would work with your input data, it would not with the adj_metadata.tsv as it is stripped of all quotes. You could of course modify your script to first match and then pipe the output through tr, but that would include unnecessary work.
What you could do there is anchor the regular expression to the end of the line with $
Example:
adj-metadata.tsv:
foo B.1.1.529
bar B.1.1.529.1
╰─$ grep "B.1.1.529$" adj_metadata.tsv
foo B.1.1.529
The only modification you'll need to make to your script with this method is to add \$ at the right spot in your grep command:
#!/bin/bash
# filter data based on the selected lineages (refer to variants_lineage.txt for more info) as given below.
# File with metadata
metadata_file="$HOME/thesis/SARS-CoV2-data/metadata.csv"
cat "$metadata_file" | tr -d '"' | tr ',' $'\t' > adj_metadata.tsv
# list of lineages of interest
selected_lineages=("B.1.1.7" "B.1.351" "P.1" "B.1.617.2" "B.1.1.5290" "C.37" "B.1.621" "B.1.429" "B.1.427" "CAL.20C" "P.2" "B.1.525" "P.3" "B.1.526" "B.1.617.1" )
#replace all occurrences of "." with "\." in every element of the array
#(the lineages contain no whitespace, so the word splitting here is safe)
selected_lineages=($(printf '%s\n' "${selected_lineages[@]}" | sed 's/\./\\./g'))
if [ -f "adj_metadata.tsv" ]
then
echo "File exists"
for lineage in ${selected_lineages[@]}
do
echo "Filtering for lineage $lineage"
grep -w "$lineage\$" adj_metadata.tsv >> filtered_metadata.tsv
done
else
echo "Adjusted metadata file does not exist."
fi
# Check for the uniqueness of the filtered_metadata.csv file, this should fetch the list of selected_lineages
cut -d$'\t' -f2 filtered_metadata.tsv | sort | uniq
Note: While . is usually used as an expression for any character, you would need to escape with a \ to search for a literal . like so: B\.1\.1\.529$.
You could still keep it without \, for the sake of simplicity while typing.
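A quick check of the anchored, dot-escaped pattern against two sample lines:

```shell
printf 'foo\tB.1.1.529\nbar\tB.1.1.529.1\n' | grep 'B\.1\.1\.529$'
# matches only the first line (B.1.1.529), not the sub-lineage B.1.1.529.1
```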
| grep to find an exact word match with a period in it |
1,590,319,044,000 |
Is there any way to search a string after given match ?
e.g. If I run dmidecode then it will give a lot of information. e.g.
BIOS Information
Vendor: ABCD
Version: 123456(V1.01)
Release Date: 01/01/1970
Address: 0xE0000
Runtime Size: 128 kB
ROM Size: 8192 kB
Characteristics:
PCI is supported
BIOS is upgradeable
BIOS shadowing is allowed
Boot from CD is supported
Selectable boot is supported
BIOS ROM is socketed
EDD is supported
Japanese floppy for NEC 9800 1.2 MB is supported (int 13h)
Japanese floppy for Toshiba 1.2 MB is supported (int 13h)
5.25"/360 kB floppy services are supported (int 13h)
5.25"/1.2 MB floppy services are supported (int 13h)
3.5"/720 kB floppy services are supported (int 13h)
3.5"/2.88 MB floppy services are supported (int 13h)
8042 keyboard services are supported (int 9h)
CGA/mono video services are supported (int 10h)
ACPI is supported
USB legacy is supported
Targeted content distribution is supported
UEFI is supported
BIOS Revision: 1.21
Firmware Revision: 1.21
Now if I grep "version" then it will list various matches from the dmidecode output.
Instead, is there any way to search for the "Version" line right after the ^BIOS word and stop at the first match?
So output will be like:
Version: 123456(V1.01)
|
$ sudo dmidecode |
awk '/^BIOS/ { ++Bios } Bios && /Version/ { print; exit; }'
Version: 02PI.M505.20110824.LEO
$
We just count that BIOS went past, and trigger on the Version line.
As the BIOS is the first block (on my system), and grep has a max-count option, this should also work
sudo dmidecode | grep -m 1 'Version'
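The awk program can also be exercised without root by feeding it a saved sample instead of live dmidecode output (the sample below is abbreviated from the question):

```shell
printf '%s\n' \
  'Handle 0x0000' \
  'BIOS Information' \
  '        Vendor: ABCD' \
  '        Version: 123456(V1.01)' \
  '        BIOS Revision: 1.21' |
  awk '/^BIOS/ { ++Bios } Bios && /Version/ { print; exit }'
# ->         Version: 123456(V1.01)
```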
| Search a string after a match |
1,590,319,044,000 |
My current directory contains two directories test1 and test2. file1 is present in test1.
How can I create symbolic link in folder test2/lin for file1?
After the link operation Link file in test2/lin should point to test1/file1
|
The symlink resolution by the system is relative to the target (unless the link is absolute of course).
So it has to be considered as if you went into the final directory. In this case that would be (with explicit naming of the target):
cd test2/lin
ln -s ../../test1/file1 file1
The source doesn't change, that's the useful "content" of the symlink. So if you don't change directory, instead:
ln -s ../../test1/file1 test2/lin/file1
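A self-contained run-through in a scratch directory (assuming the layout from the question):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p test1 test2/lin
echo hello > test1/file1

ln -s ../../test1/file1 test2/lin/file1

cat test2/lin/file1        # -> hello
readlink test2/lin/file1   # -> ../../test1/file1
```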
| How to link file in subdirectory to another subdirectory in shell script |
1,590,319,044,000 |
I want to apply diff to compare two directories recursively and store its output in c.txt.
When I use diff -qr dir1 dir2 >c.txt it stores something like Files dir1/a.txt and dir2/a.txt differ. But I want to store only a in it. Any suggestions?
|
Looks like you need to compare only files with the same name. diff exits with code 1 if files are different and with 0 otherwise.
for f in dir1/*
do
    b=$(basename "$f")
    if [[ -f dir2/$b ]] && ! diff -q "$f" "dir2/$b" >/dev/null; then
        echo "${b%.*}" >> c.txt   # strip the extension to store just "a"
    fi
done
That is easier than processing the diff output.
| diff to store only file name in another file |
1,590,319,044,000 |
echo INPUT | MAGIC > OUTPUT
INPUT: a random number that can be 0-999999999999 (so very big)
OUTPUT: a number between: 0-1023
MAGIC: a solution where the random smaller/bigger input is "converted" to the interval that the OUTPUT uses, so 0-1023
example:
INPUT: 0
OUTPUT: 0
another example:
INPUT: 1634
OUTPUT: 609
INPUT needs to "overflow" the OUTPUT.
|
echo Enter the INPUT:
read INPUT
echo OUTPUT: $(echo "$INPUT % 1024" | bc)
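If you would rather avoid the bc dependency, the same reduction works with bash's built-in arithmetic (INPUT is hard-coded here purely for illustration; in the script above it comes from read):

```shell
INPUT=1634
echo "OUTPUT: $(( INPUT % 1024 ))"   # -> OUTPUT: 610
```

This agrees with the bc version: both map any non-negative INPUT into 0-1023.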
| Generate between interval while having overflow |
1,590,319,044,000 |
To expand on my previous question, I have another file pattern. I am trying to rename the first column: entries ranging from seq1 to seq20 (seq1-seq20) become seq1, and similarly entries ranging from seq21 to seq60 (seq21-seq60) become seq2. The file name is file.txt and the format is as follows:
seq22 19301 20914 fill_color=green_a0
seq55 16726 18337 fill_color=green_a0
seq10 167934 169650 fill_color=green_a0
seq36 200621 202367 fill_color=red_a0
seq7 160164 161903 fill_color=green_a0
seq56 31356 33104 fill_color=green_a0
seq25 15030 16656 fill_color=green_a0
seq43 99693 101326 fill_color=red_a0
seq19 66168 67689 fill_color=green_a0
seq50 55955 57479 fill_color=green_a0
seq9 454456 456277 fill_color=green_a0
seq35 282633 284453 fill_color=green_a0
seq10 354264 355872 fill_color=green_a0
seq36 10125 11742 fill_color=red_a0
seq3 106668 110910 fill_color=green_a0
Output file look like
seq2 19301 20914 fill_color=green_a0
seq2 16726 18337 fill_color=green_a0
seq1 167934 169650 fill_color=green_a0
seq2 200621 202367 fill_color=red_a0
seq1 160164 161903 fill_color=green_a0
seq2 31356 33104 fill_color=green_a0
seq2 15030 16656 fill_color=green_a0
seq2 99693 101326 fill_color=red_a0
seq1 66168 67689 fill_color=green_a0
seq2 55955 57479 fill_color=green_a0
seq1 454456 456277 fill_color=green_a0
seq2 282633 284453 fill_color=green_a0
seq1 354264 355872 fill_color=green_a0
seq2 10125 11742 fill_color=red_a0
seq1 106668 110910 fill_color=green_a0
I tried with this
sed -e "s/seq[1:20]*/seq1/" -e "s/seq[21:60]*/seq2/" file.txt
and
awk 'NR>=seq1 && NR<=seq20{sub("seq*","seq1",$0)} 1' file.txt
|
I suggest:
awk '{gsub(/[^0-9]/,"",$1); if($1+0<21){$1="seq1"} else {$1="seq2"}; print}' file
gsub(/[^0-9]/,"",$1) removes all but numbers from first column.
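Checking the command on two sample lines, one from each range:

```shell
printf '%s\n' \
  'seq22 19301 20914 fill_color=green_a0' \
  'seq10 167934 169650 fill_color=green_a0' |
  awk '{gsub(/[^0-9]/,"",$1); if($1+0<21){$1="seq1"} else {$1="seq2"}; print}'
# -> seq2 19301 20914 fill_color=green_a0
#    seq1 167934 169650 fill_color=green_a0
```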
| Renaming object or element in particular range in a column of text file |
1,667,644,510,000 |
Please help with this command. I wrote this command:
awk -F":" '{tab[$5]+=1} END { for (i in tab) {print i,tab[i]}}' AnnuaireBis.txt
which gives this output:
Ketou 4
Anneho 4
Panhouignan 4
Bohicon 2
Kpedekpo 2
but I want to get this format:
Please, help me. Thank you so much
|
You don't even need to put both elements inside the sprintf portion:
mawk -F':' '$!_=sprintf("%-15s",$!_)'
Ketou 4
Anneho 4
Panhouignan 4
Bohicon 2
Kpedekpo 2
Confirmed to work for gawk, mawk-1, mawk-2, and nawk
| awk commande-line [closed] |
1,667,644,510,000 |
I have a problem that I think sed is probably perfect for, but I don't know enough about it to figure out how to employ it correctly.
Here's what I have - a file like this, but much longer:
https://www.npmjs.com
https://www.npmjs.com/package/rabin
https://www.politico.com/news/magazine/blah/blah
https://www.raspberrypi.org
https://www.raspberrypi.org/documentation/blah
https://www.raspberrypi.org/products/raspberry-pi-zero-w/
https://www.reddit.com
https://www.reddit.com/
https://www.reddit.com/r/geology/blah/blah/blah
https://www.reddit.com/r/commandline/blah/blah/blah
...thousands more...
What I need are just the items in bold, that is there are many series of URLs that share a domain name, and I need the last URL in each series for the whole text file.
So just the ones that have an arrow in front
https://www.npmjs.com
->https://www.npmjs.com/package/rabin
->https://www.politico.com/news/magazine/blah/blah
https://www.raspberrypi.org
https://www.raspberrypi.org/documentation/blah
->https://www.raspberrypi.org/products/raspberry-pi-zero-w/
https://www.reddit.com
https://www.reddit.com/
https://www.reddit.com/r/geology/blah/blah/blah
->https://www.reddit.com/r/commandline/blah/blah/blah
...thousands more...
Any ideas?
Thank you!
|
This did the trick:
cat input.txt | \
gawk -e '{match($0, /(https?:\/\/(?:www.)?[a-zA-Z0-9-]+?[a-z0-9.]+)/, url)} \
!a[url[1]]++{ \
b[++count]=url[1] \
} \
{ \
c[url[1]]=$0 \
} \
END{ \
for(i=1;i<=count;i++){ \
print c[b[i]] \
} \
}' > output.txt
The regular expression could probably be simplified quite a bit, and maybe made to capture more variance in domain names, but for my case it worked fine. The awk command is modified from this answer. (Funny how someone removed the 'bash' tag from my question, and the answer that actually helped me was tagged with 'bash'...)
Reflecting on this problem more, I suppose you could also use awk to append the matched domain to the end as a separate "field", use sort/uniq to select the last, and then remove the domain "field" at the end, or rather use awk to print only the first "field", i.e. the original URL, after the sort/uniq.
| Remove all URLs in a series with the same domain except the last occurrence, in a long list of many URL series |
1,667,644,510,000 |
I attached both picture and *.txt file https://1drv.ms/t/s!Aoomvi55MLAQh1jODfUxa-xurns_ of a sample work file. In this file Reactions which only start with "r1f", "r2f", "r3f"......and so on. And for each reaction the reaction rates is situated couple of lines later with a "+" sign.
I want to change the first and 3rd numbers in the reaction rates by +/-75%, so there will be 4 changed values for each reaction.
So if there are 6 reactions in the Prob01.txt file, then I want to have 6*4=24 txt files, each with only one change in the reaction rates.
That means for the first reaction alone I want four Prob01.txt files comprising the 4 changes to reaction 1.
|
How about this .... definitely a sledgehammer.
Invoke it as thisScript Prob01.txt 0.75 0.25 to apply combinations of +/-75% change on the 1st and +/-25% on the 3rd values on each reaction and write them to separate files
#!/bin/bash
#takes $inputFile $pct1 $pct3
#note $pct is the multiplier expressed as a decimal
#global variables
#$w : the line number and original figures, space separated
#$newFile : the new file name
#$o : the original figures in the original format
#$n : the new figures in the correct format
inputFile=$1
#strip the suffix (.txt) from the inputFile name
outFile=${inputFile%.*}
pct1=$2
pct3=$3
function domath {
# takes $value $multiplier
local m=$(echo 1+$2 | bc -l)
local theanswer=$(echo $1 $m | awk '{printf "%7.6E\n" , $1*$2}' | sed -E -e 's/[Ee]\+*/E/g' -e 's/^([^-])/+\1/g')
echo $theanswer
}
function makechange {
#takes $reaction $case
#compose new file name
newFile=${outFile}_$1_$(printf "%02g" $2).txt
#make a copy
cp $inputFile $newFile
#change the appropriate line
sed -i "${w[0]}s/$o/$n/" $newFile
}
#get all the reaction names
grep -Po "^r[0-9]+f(?=:)" "$inputFile" > stepA
#get all the figures and their line numbers in case duplicates occur
grep -Pon "^\+[^\!]*" "$inputFile" > stepB
for ((i=1; i<=$(cat stepA | wc -l); i++)); do
reaction=$(sed "${i}q;d" stepA)
figures=$(sed "${i}q;d" stepB | sed 's/:/ /g')
w=($figures)
#retrieve the old string
o=$(echo $figures | grep -Po "(?<= ).*")
#compose the new string for each of the 4 cases
for ((j=1; j<=4; j++)); do
case $j in
1)
n=$(echo "$(domath ${w[1]} $pct1) ${w[2]} ${w[3]}")
;;
2)
n=$(echo "$(domath ${w[1]} -$pct1) ${w[2]} ${w[3]}")
;;
3)
n=$(echo "${w[1]} ${w[2]} $(domath ${w[3]} $pct3)")
;;
4)
n=$(echo "${w[1]} ${w[2]} $(domath ${w[3]} -$pct3)")
;;
esac
#make the changes
makechange $reaction $j
done
done
#clean up
rm step{A..B}
| How to manipulate numbers in a file? [closed] |
1,667,644,510,000 |
Suppose I have 100 texts under the same dir, i.e., text1.txt, text2.txt, ..., text100.txt. I want to extract certain lines (e.g., the first 100 lines) from each text, and save the lines to another 100 new texts respectively; that is, each new text has 100 lines.
I know head -100 text1.txt > text1_new.txt, head -100 text2.txt > text2_new.txt ... can make it. But are there any more efficient methods to extract them simultaneously in the terminal?
Thanks!!
|
One way would be
find . -name "text*.txt" -type f -print0 | xargs -0 -I{} sh -c 'f="{}"; head -100 "$f" > "${f%.txt}_new.txt"'
find . -name "text*.txt" -type f finds all text files in the directory
-print0 prints the file path with a null character to preserve spaces
xargs -0 takes the null terminated arguments
-I{} is used as placeholder for the argument
sh -c executes dash with a command string
f="{}" saves the argument in variable f
head -100 "$f" the head command
"${f%.txt}_new.txt" replaces ".txt" with "_new.txt" in the argument
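If all the files sit in the current directory, a plain shell loop with the same suffix substitution is an alternative (a sketch; the text*.txt glob is assumed from the question):

```shell
for f in text*.txt; do
    head -n 100 "$f" > "${f%.txt}_new.txt"
done
```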
| How to obtain certain lines from several texts simultaneously? |
1,667,644,510,000 |
I have a directory that has several subdirectories in it. Each subdirectory has several files in it. I want to delete all the files in the subdirectories except the .pdf ones. And leave the subdirectories alone. I used
find . -type f ! -iname "*.pdf" -delete
But I have to be in the subdirectory to make it work. I want to do it recursively too.
|
In bash, to delete all of the non-pdf files in all of the subdirectories of the current one:
shopt -s extglob
rm */!(*.pdf)
The initial */ matches every subdirectory, and the extglob option enables the !( ... ) pattern that says: match all files except what's inside the parenthesis; in this case, the pattern to exclude is *.pdf. If you may also have files with .PDF as an extension, use this instead:
rm */!(*.[pP][dD][fF])
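If you prefer find (which also recurses deeper than one level), a safe habit is a dry run first; -mindepth 2 keeps files directly in the parent directory untouched:

```shell
# Dry run from the parent directory: list what would be deleted.
find . -mindepth 2 -type f ! -iname '*.pdf' -print
# Once the list looks right, replace -print with -delete:
# find . -mindepth 2 -type f ! -iname '*.pdf' -delete
```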
| Removing certain files in a series of subdirectories |
1,667,644,510,000 |
How can I associate one public IP address per Namespace in my Ubuntu 14.04 server? I need to launch one specific process per Namespace then per public IP address.
I want to do that:
configuration:
nameSpace1 use: publicIP1
nameSpace2 use: publicIP2
nameSpace3 use: publicIP3
Terminal:
nameSpace1 ffmpeg etc...
nameSpace2 ffmpeg etc...
nameSpace2 youtube-dl etc...
nameSpace2 streamlink etc...
nameSpace3 ffmpeg etc...
|
Partial answer, based on some assumptions that may not be true:
In general, network namespaces provide control over what network interface is visible to what processes. Assigning a network namespace to an interface will make sure it's only seen by processes running in that namespace. Conversely, processes running in that namespace will only see network interfaces in their own namespace.
So there are basically two ways to let processes running in a namespace communicate with the outside: (1) Move an existing network interface connecting to the outside into the namespace using ip link set, and (2) make a virtual ethernet pair (a sort of "pipe"), put one end in the namespace, let the other end stay global, and set up routing, optionally NAT etc. for it.
OVH seems to own (at least) the 176.31.0.0/16, and according to this help page and a bit of whois sleuthing, they have divided it up into 4096 blocks of 16 IP addresses, where the first and the last four of each block are reserved for internal use. This also means they either have less than 4096 customers who bought RIPE blocks, or they do carrier grade NAT, and you'll be sharing your IP addresses with someone else.
So, if you bought a /28 RIPE block and got for example the block 176.31.154.160/28, the 15th address (176.31.154.175) is for broadcast, the 14th address (176.31.154.174) is the gateway, and you can use addresses 176.31.154.161 to 176.31.154.171.
For OVH, a failover IP is added by creating a virtual interface of eth0. So I'd assume your 11 usable addresses would be handled in the same way. However, some people report problems when doing that, which may or may not be related to the particular way they are trying to do it. You'll also likely need a macvlan instead of a virtual address interface/label to be able to move it into the network namespace. So by convention, you'll likely end up with mac0 (long form mac0@eth0) instead of the eth0:0 used below.
Assuming that this is the correct way, whatever is needed to make it work, you'd end up with interfaces eth0:0 up to eth0:10 for each usable address. You can then create a namespace (ip netns add space0), move the interface into this namespace (ip link set eth0:0 netns space0), set up the correct routing in the namespace (ip netns exec space0 ip route add default via 176.31.154.174 dev eth0:0), and run your application inside that namespace (ip netns exec space0 ffmpeg ...).
If you are just using netspaces and not full-blown containers, handling DNS resolving is a bit fiddly and restricting it to the namespace in my experience doesn't always work (and I haven't found a good way to fix it yet), though I had a very different setup which will probably not apply to your case. So you might experience problems if you are using hostnames instead of raw IP addresses in the applications, but fixing that is probably worth another question.
Edit: The more I think about it, the more it seems that the OVH setup with an address block with its own broadcast address is really meant to serve as a means to run multiple VMs: Each VM should get its own IP address (instead of a network namespace), and then you have something like a private LAN where your VMs can communicate with each other. So we're likely abusing the concept quite a bit by using namespaces instead.
Edit: Looking at OVH's VPS offers instead of help pages, I see they offer a "13 geolocation and 16 IPs" options. If that's what you bought, it again depends how they set this up. I couldn't find any help pages about that, so I assume that information is in the email you received. If you can just assign those IP addresses directly to your eth0 interface, then the procedure is essentially the same, though you'll probably need different gateways.
Also note if your just running a web server, you can often tell the web server to only bind to a specific IP (or even bind to several IPs, and act differently depending on which IP you are bound to), so you don't need network spaces in this case.
| One public IP address per Namespace [closed] |
1,667,644,510,000 |
For instance, consider the following example. I have a process running on my machine and I only want a specific string in my output.
When I run the following command
ps -ef | grep pmon
I get
oracle 3680 1 0 Oct04 ? 00:00:08 ora_pmon_SEED
I want the command to only display "SEED"
SEED
|
In that specific example you could pipe it through to cut:
ps -ef | grep pmon | cut -d _ -f 3
Why you would want to, however, is a bit of a mystery.
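An awk alternative that does the match and the extraction in one step (assuming, as in the example, that the process name is the last field and the wanted string is its final underscore-separated piece; the [p]mon trick keeps the pattern from matching awk's own entry in the ps output):

```shell
ps -ef | awk '/[p]mon/ { n = split($NF, parts, "_"); print parts[n] }'
```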
| How do I get a specific string using ps command in linux [closed] |
1,667,644,510,000 |
I have a directory with several duplicate files, created by a program. The duplicates have the same name (except for a number), but not all files with the same name are duplicates.
What's a simple command to delete the duplicates (ideally a single line limited to GNU coreutils, unlike the question about scripts)?
Example filename:
parra1998.pdf
parra1998(1).pdf
parra1998(2).pdf
|
A quick and dirty solution is to hash the files, then search the hashes which appear more than once and delete those whose filename is numbered.
For instance:
sha1sum * > files.sha1sum
cat files.sha1sum | cut -f1 -d" " | sort | uniq -c | grep -v " 1 " | sed --regexp-extended 's/^[^0-9]+[0-9] //g' | xargs -n1 -I§ grep § files.sha1sum | sed --regexp-extended 's/^[^ ]+ +//g' | grep -v '(' | xargs -n1 -I§ rm "§"
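Since the duplicates in the question all follow the name(N).pdf pattern, another approach is to attack the names directly: byte-compare each numbered file with its base file and remove it only when they really are identical. This is a sketch under that naming assumption; the echo makes it a dry run, drop it to actually delete:

```shell
for f in *\(*\).pdf; do
    base="${f%%\(*}.pdf"                  # parra1998(1).pdf -> parra1998.pdf
    if [ -f "$base" ] && cmp -s -- "$f" "$base"; then
        echo rm -- "$f"
    fi
done
```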
| Command to delete duplicate files from current directory [duplicate] |
1,667,644,510,000 |
I piped one echo command into the other
$ echo a b c d e f g h i | echo
$ echo $?
0
Contrary to my intuition, there was no output; however, there was also no error returned. I expected that echo a b c d | echo would just be a redundant alternative to echo a b c d.
But it is not the case, Why were the arguments lost on their way through the pipe?
|
This is due to echo not reading from standard input. Pipes are only useful for sending the standard output from one command to the standard input of the next command.
Since the output of echo a b c ... is not consumed by the second echo, it is lost and there is no output from the pipe, except for the single newline from the second echo.
Since the last echo successfully outputs a blank line, the exit status is zero.
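You can see the difference by replacing the second echo with a command that does read standard input, such as xargs:

```shell
echo a b c d e | xargs echo   # -> a b c d e
echo a b c d e | echo         # -> just a blank line
```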
| Why does "echo a b c d e | echo" display no result? [duplicate] |
1,667,644,510,000 |
So I have tried over 10 different attempts and I'm just stumped
Here are two inverted-tree diagrams. Issue a command to change the left diagram to the right diagram. Assume that you are in your home directory and use relative pathnames. [home] is your home directory:
Here is the diagram:
[home] [home]
| |
+----------+----------+ +-------------+
| | | | |
systems ideas courses ideas courses
| |
notes systems
|
notes
Here's what I have tried so far:
mv system courses/system
mv [home] home
mv [home] [home]
mv [home]/courses/ideas/systems/notes [home]/ideas/courses/system/notes
and many more.. any ideas?
I'm using SSH Secure Shell to do this by the way.
|
You want to move the systems directory into the courses directory. Just:
mv systems courses/
| What command would I issue in order to complete this question? |
1,667,644,510,000 |
I have trouble executing a binary file, both from the GUI and the command line. I am running Ubuntu 17.10. Here are the logs:
julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ ls
data docs snes9x-gtk
julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ ./snes9x-gtk
bash: ./snes9x-gtk: Aucun fichier ou dossier de ce type
PS: The last line is in French; it means "No such file or directory".
I also have this issue with the Super Meat Boy installer I have downloaded from Humble Bundle.
UPDATE :
Using file, I have :
julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ file ./snes9x-gtk
./snes9x-gtk: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.9, not stripped
I tried the command /lib/ld-linux.so.2 ./snes9x-gtk (because it is the interpreter) and it was not found. After some research on Internet, I found it in the package lib32z1, and after installing it, when I retried the command, I get error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory.
By using the command ldd I have as output :
julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ ldd ./snes9x-gtk
linux-gate.so.1 => (0xf7f82000)
libX11.so.6 => not found
libdl.so.2 => /lib32/libdl.so.2 (0xf7f5b000)
libXext.so.6 => not found
libGL.so.1 => not found
[...]
libm.so.6 => /lib32/libm.so.6 (0xf7e54000)
libgcc_s.so.1 => not found
libc.so.6 => /lib32/libc.so.6 (0xf7c81000)
/lib/ld-linux.so.2 (0xf7f84000)
There is a lot of missing dependencies...
I tried to fix both libX11 and libXext, but I had issues :
I assumed libX11 was in package libx11-6 but after trying to install it, it says that it is already installed. Same for libXext and package libxext-6.
Do you have any suggestions ? Thanks.
|
These are 32-bit binaries; to get them running on your Ubuntu system, you need to install :i386 packages. The i386 architecture should already be enabled, but just in case, run
sudo dpkg --add-architecture i386
sudo apt update
Then install the missing libraries, e.g.
sudo apt install libx11-6:i386 zlib1g:i386
etc. To find packages containing the libraries you need, install apt-file:
sudo apt install apt-file
sudo apt-file update
apt-file search libX11.so.6
| Can not execute binary file on Ubuntu 17.10 [closed] |
1,667,644,510,000 |
I want to log in to a remote server and I don't know the remote server password. I am running the command below:
cat id_rsa.pub | ssh [email protected] | cat > authorized_keys
or
scp id_rsa.pub [email protected]:~./ssh/authorized_keys
But it is asking for a password.
|
There is no way to access the server and add or edit anything without any authentication, because in the end you are trying to edit the authorized_keys file. If there were no authentication, anyone could copy and paste anything they wanted into that file, and that would be a horrible security vulnerability.
1,667,644,510,000 |
The command:
mount -t cifs -o username=root //ipadress/map/mnt/map
So I mean like what does "mount" mean? What does "-t" mean? Etc.
|
Whenever you are wondering what a command means, the first step is to run man command (where "command" is the command in question, so man mount in this case). That will bring up the command's manual which usually includes a short description of what it does and also an explanation of the various options. Admittedly, the man pages are not always very clear to new users, but they're always a good start.
Now, let's have a look at man mount. The first lines are:
NAME
mount - mount a filesystem
So mount is the command you use to mount a filesystem. Mounting a file system simply means attaching it to a directory. So that when you cd into that directory, you see the contents of the filesystem. The most common scenario is where the filesystem is a hard drive or a hard drive partition. So, on your Linux box, your main hard drive partition is mounted on the root (/) directory. On a Windows machine, it is mounted at C:\. Same basic idea.
Now, the things that start with - are command line options, also known as "switches" or "flags". The -t specifies the filesystem type:
-t, --types fstype
The argument following the -t is used to indicate the filesystem
type. The filesystem types which are currently supported depend
on the running kernel. See /proc/filesystems and /lib/mod‐
ules/$(uname -r)/kernel/fs for a complete list of the filesys‐
tems. The most common are ext2, ext3, ext4, xfs, btrfs, vfat,
sysfs, proc, nfs and cifs.
In this case, you are mounting a remote directory using CIFS, the Common Internet File System. This is basically a protocol for file sharing, essentially. It's an easy and portable way of mounting a remote directory onto your local machine.
The -o is how you set the various possible options for mounting. Here, you are only setting one option: the username of the user to whom the files in the mounted filesystem will belong. Specifically, you will be mounting as root so everything on that filesystem will appear to belong to the root user.
The final argument is what you are mounting. The general format of the mount command is:
mount [OPTIONS] -t FILESYSTEM TARGET MOUNTPOINT
The TARGET is what you are attempting to mount. In your case, you seem to want to mount the directory /map/mnt/map which is found on the machine with the IP address ip. If you were to actually run the command to, for example, mount something from the machine on your local network with the IP of 192.168.1.10, you would run:
mount -t cifs -o username=root //192.168.1.10/map/mnt/map TARGET
However, the command is incomplete. You also need a target, the directory where this will be mounted. The mountpoint. This can be any directory on your local machine, preferably an empty one1. so, to mount the remote directory /map/mnt/map from the server 192.168.1.10 onto your local directory /mnt/myshare (create the directory first with sudo mkdir /mnt/myshare), you would run:
mount -t cifs -o username=root //ipadress/map/mnt/map /mnt/myshare
1If you choose a non-empty directory, any files in it will be masked by the contents of the mounted filesystem. Unmounting will bring them back, but it can be a cause for some consternation, so choose an empty directory for this.
| What does every part of the "mount -t cifs -o username=root //ipadress/map/mnt/map" command mean? [closed] |
1,667,644,510,000 |
I have a simple text file named as 'file.txt'
I want to create a .zip file which only include that 'file.txt'.
I tried cat file.txt | zip newZipFun.zip -@. But it compressed my parent folder.
Additionally I need to output my .zip to a different location too.
|
If you can't just do zip newZipFun.zip file.txt as larsks suggested, you can imagine doing
find . -name "toto*" | xargs zip totos.zip
where "toto*" is the name of all files starting with toto, in the current working directory and its subdirectories
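For the second part of the question (writing the .zip to a different location, containing only the file itself), you can give zip a path for the archive and use -j to "junk" the directory part of the stored name. A sketch, where the output directory /tmp/out is just an example:

```shell
# Create the archive in another directory, containing only file.txt.
# -j ("junk paths") stores just the file name, not its directory prefix.
mkdir -p /tmp/out
zip -j /tmp/out/newZipFun.zip file.txt
```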
| txt file to zip file |
1,667,644,510,000 |
hieupa@cpt00108094a:/media$ ll
total 16
drwxr-xr-x 4 root root 4096 Th04 4 13:47 ./
drwxr-xr-x 24 root root 4096 Th04 4 09:29 ../
drwxrwxrwx+ 3 hieupa hieupa 4096 Th04 4 13:47 hieupa/
drwxr-x---+ 2 root root 4096 Th04 4 13:45 hieupalocal/
I have just created a user hieupa, but it does not have root ownership like that.
So how can I change hieupa hieupa to root root, and then delete hieupalocal?
Thanks.
|
If you just want the directory ownership changed:
sudo chown root:root hieupa
If you want it to be recursive, affecting all the files within, run this instead:
sudo chown -R root:root hieupa
To delete the entire directory hieupalocal including the contents.
sudo rm -r hieupalocal
If you are prompted on whether you should "descend into write-protected directory", then just press "y" for yes. Alternatively, to skip the prompts, you can "force" the rm with:
sudo rm -rf hieupalocal
| How to set root permissions for one directory and delete another? |
1,667,644,510,000 |
Where does Linux keep the list of valid commands that can be called from the command prompt (using ENTER key after command is typed)?
Is this list exhaustive or are there ways to type other things at the command prompt that are NOT in this list; and if so, what are they? (ie: CTRL+C -- to break from the command prompt, etc)
If you don't know the answers to those two questions, feel free to answer this instead:
Where does the source code begin on Linux when hitting ENTER at the command line after typing a command?
Where does the source code begin on Linux if ANY command is executed at the command prompt?
My questions may be user specific based on security, so to keep it simple, let's use user root for brevity.
|
Nowhere.
Linux is the kernel, and just the kernel. Any commands are either shell built-ins, which are listed in the documentation of each shell, or executable binaries, usually located in the bin and sbin directories under /, /usr and /usr/local. Shells themselves are also binaries located in those directories.
There are no limits what binaries are included in a Linux distribution. Certain binaries are considered standard tools (echo, ls, grep etc.) but there are no requirements for any developer to include them.
The last questions make no sense. Source code is what you write to create an executable binary.
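There is, however, a practical way to enumerate everything your current shell would accept as a command. In bash, for instance (a sketch; the output depends entirely on your PATH and shell configuration):

```shell
# List every command name bash currently knows about:
# builtins, functions, aliases, and executables found on $PATH.
compgen -c | sort -u
# Or just count them:
compgen -c | sort -u | wc -l
```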
| Exhaustive list of Linux commands for command prompt [closed] |
1,667,644,510,000 |
Why is there no summary option in coreutils ls command, like MS-DOS/Windows has?
With summary option I mean:
count the files and dirs and sum up their sizes.
Update:
It should read: "Even DOS/Windows has one."
It's:
command.com vs. sh
cmd.exe vs. bash
with clear points for the latters.
But for some reason, and that is the question, Linux/Unix has no summary in the directory listing.
And instead of fixing that, statements go out that this is right and the right thing to do and "well done"... Only after that threads explodes with solutions to fix this vacancy by scripting!
It seems to me a good example of the X-Y Problem:
User wants to do X.
User doesn't know how to do X, but thinks they can fumble their way to a solution if they can just manage to do Y.
User doesn't know how to do Y either.
User asks for help with Y.
Others try to help user with Y, but are confused because Y seems like a strange problem to want to solve.
After much interaction and wasted time, it finally becomes clear that the user really wants help with X, and that Y was a dead end.
Imagine the following:
You sit in a restaurant, the waiter brings the bill. He has listed all the dishes, but no summary! You have to do it yourself - he has already "well done".
Or hasn't he?
Closing remark:
Of course I know - and love - the UNIX toolkit. But the basic functions should be provided by the tool itself. To add a few numbers - at the right place, and especially in such a commonly needed case - is no big deal. And I see no reason not to do it.
Conclusion:
My understanding is now: It's POSIX!
The POSIX standard has no mention of a summary. And that's it.
It's carved in stone.
People don't even think about X. They are used to dealing with Y.
Nevertheless, it is astonishing how completely the possibility that it could also be otherwise is lost from view.
|
But the basic functions should be provided by the tool itself.
This is correct. The UNIX/Linux philosophy is to have commands/tools that do one thing only, and do that thing extremely well. Then the stdout of one command is fed to the stdin of another, via a pipe, to produce complex results.
The purpose of the ls command is, from its manpage, to "list information about files". It is able to produce a listing including or excluding patterns, sorted by different criteria, showing different information, etc. but it's all it does. To calculate the total size of a group of files is not ls's duty. In fact, this is what du does. The du command, again from its manpage, is a tool to "summarize disk usage of the set of files".
So, this might not be the answer you were looking for, but ls behaves like this by UNIX design. A design which has proven itself efficient during many decades, I might add.
On the other hand, the philosophy of Windows, as I have noticed, is different. To me, it looks like it aims to provide the user with some general-purpose tool that does a bit of everything, perhaps in the purpose of not having an (inexperienced) user learn too many commands.
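Following that design, a DOS-style summary is simply assembled from the existing single-purpose tools. A sketch for the current directory:

```shell
# Number of entries (files and directories, including hidden ones):
ls -A | wc -l
# Total size of the directory tree, in human-readable form:
du -sh .
```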
| coreutils ls summary |
1,667,644,510,000 |
I ran this command on Fedora, which I anyway wanted to uninstall, so I decided to check out this command: sudo rm -rf /* just for fun. As soon as I ran this command, the GUI stopped working and patches of black started appearing, I thought the work was done, and did a forced shutdown.
[By the way, I was multibooting Windows 10, Linux Mint, Garuda Linux & Fedora]
When I rebooted, I was expecting Garuda Linux's Grub to show up, but nothing happened and DELL's Support Assist showed up. Then I learned from the BIOS, that the EFI partition was completely erased, which makes sense as it was the /boot/efi directory in Fedora.
Then I had to go through the all the recovery stuff to get my OSs booting again.
I was worried that, like the EFI partition that was mounted in Fedora and got completely erased, all my data partitions would also be erased by the command.
But when I checked after getting everything working again, everything was still there. And even the Fedora partition had some space used.
I then formatted the Fedora partition from g-parted of Garuda Linux.
Now I wonder what the command sudo rm -rf /* really does, just to check I didn't lose any other data.
|
sudo rm -rf /* (-r means to remove directories and their contents recursively, -f means to ignore nonexistent files and never prompt for confirmation, and /* just expands to everything in /) removes everything in /, and as you found out with /boot/efi, this also includes mounted filesystems. The reasons why some data was not removed can be:
Partitions from your other distributions/operating systems were not mounted, rm can't remove data from unmounted devices/filesystems.
They were mounted as read only.
As Kamil pointed out, you could stop the recursive remove by the force shutdown in time for some data to survive.
As for why the Fedora partition had some space used, it depends on how you checked it. Even an empty filesystem has some space used (metadata, filesystem reserve etc.) and for example GParted will show this, but that doesn't necessarily mean some data survived.
| What does ' sudo rm -rf /* ' do? |
1,667,644,510,000 |
Suppose I have 4 directories :
Directory_1 Directory_2 Directory_3 Directory_4
Is there a command line in a terminal Linux I can use to copy a file to all of these directories?
Is there a command line in a terminal Linux I can use to remove a file to all of these directories in the same time?
|
Is there a command line in a terminal Linux I can use to copy a file to all of these directories?
Yes, but it's not something obvious to a beginner
tee Directory_{1..4}/file <file >/dev/null
Another approach is to use four separate commands
cp file Directory_1
cp file Directory_2
cp file Directory_3
cp file Directory_4
or with a shell such as bash that understands the {1..4} expression, a loop covering all four directories
for d in Directory_{1..4}; do cp file "$d"; done
Is there a command line in a terminal Linux I can use to remove a file to all of these directories in the same time?
Yes. rm with a wildcard
rm Directory_*/file
| Remove or copy a file all at once |
1,667,644,510,000 |
I tried to find info online, but I could not find any. It seems that many people use a specific sequence of numbers, without actually providing any explanation why.
More specifically, my $PS1 in bash is the following:
\[\033[38;5;21m\][\[\033[38;5;20m\]\u@\[\033[38;5;1m\]\h \W\[\033[38;5;21m\]]\[\033[0m\]\$
I cannot understand what the 38;5 sequence is. Does anybody know what the 38;5 is?
I know what it does, but I do not know what it is! I mean, I know that I have to use it in order to assign the next value (i.e. in 38;5;1m, the 1m is the next value) as the foreground colour and use values from 256 colours, but I do not know why 38 and why 5, and what other options there are and what these options represent.
For example, why after 38 do we have to use either 5 or 2 and not 1 or 3? Is there any general form that both the 38 and 48 codes correspond to? For example, is there any general form of code that is something like <code>;<switch>;<value>, which the 38 and 48 have?
Any help?
|
Originally the codes came from DEC as part of their VT52/VT100/VT220 series serial display consoles. These were later standardised as part of ECMA and ANSI, and over time extended.
You can see one such early ECMA standards document from 1979, specifically page 40 of the document (page 48 of the PDF file) section 7.2.63 SGR. The ESC [ 38 sequence is reserved for future use. These colour tables are that future use.
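In those extended sequences, 38 selects the foreground and 48 the background, and the next parameter picks the colour model: 5 means "colour n from the 256-colour palette" (38;5;n), while 2 means a direct 24-bit colour (38;2;r;g;b) on terminals that support it. A quick sketch:

```shell
# Foreground colour 21 from the 256-colour palette, then reset attributes:
printf '\033[38;5;21mhello\033[0m\n'
# Same palette, but as a background colour:
printf '\033[48;5;20mhello\033[0m\n'
# Direct 24-bit ("truecolor") foreground, where the terminal supports it:
printf '\033[38;2;255;140;0mhello\033[0m\n'
```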
| What is the 38;5 sequence in $PS1? |
1,667,644,510,000 |
This is actually for brew search
but to simplify, let's say I do:
echo -e 'somedir\notherdir'
somedir
otherdir
Can I use otherdir as an argument without doing this?
ls $(echo -e 'somedir\notherdir' | tail -n1)
otherdir
Update to Clarify Goal
To be more clear somedir and otherdir go to stdout, can I run a command on only otherdir
Without capturing the output in $(), then executing the command on that?
I just wanted to know if it's possible, although I think it can't be done.
|
You can always use xargs which is intended for this kind of thing: convert output streams to list of arguments:
printf 'somedir\notherdir\n' |
tail -n1 |
xargs -rd '\n' ls -l --
Beware that -d '\n', needed here for arguments to be taken as whole lines, requires the GNU implementation of xargs.
If that last line doesn't contain blanks nor quotes nor backslashes, you can omit it.
-r, needed to avoid running ls if the input is empty, is also from GNU xargs but nowadays widely supported (and will be in the next version of the POSIX standard).
If you don't want to use command substitution, in bash, you can always read the lines of the output of the command into an array and run the command on the last element of that array:
readarray -t lines < <(printf 'somedir\notherdir\n')
if (( ${#lines[@]} > 0 )); then
  ls -l -- "${lines[@]: -1}"
fi
Or use perl:
printf 'somedir\notherdir\n' |
perl -lne 'exec qw(ls -l --), $_ if eof && $.'
Note that with command substitution, since it strips all trailing newline characters, you can't directly differentiate between an empty input and the last line being empty. In both cases, you get an empty expansion.
One advantage of using command substitution, is that you can check whether the command has succeeded (based on its exit status) before deciding to use its output:
die() { [ "$#" -eq 0 ] || printf>&2 '%s\n' "$@"; exit 1; }
last_line=$(
set -o pipefail
cmd | tail -n 1
) || die "cmd failed"
ls -l -- "$last_line"
Or you can use grep to also check that the output is non-empty:
if
last_line=$(
set -o pipefail
cmd |
tail -n 1 |
grep '^'
)
then
ls -l -- "$last_line"
else
echo>&2 "Either cmd failed or it didn't produce any output"
fi
grep '^' matches on the beginning of a line. So it will return true if the output contains at least one line (whether it's empty or not). Beware though that if the input doesn't end in a newline character, it's not considered as a line as per the POSIX definition of a line, and some grep implementations could fail in that case and even discard that non-delimited line.
| how to to execute a command output as an argument without saving it using $()? |
1,519,164,598,000 |
What does the -d mean in the below?
sed 's/^ *//' < /tmp/list.txt | xargs -d '\n' mv -t /app/dest/
|
From man xargs:
--delimiter=delim
-d delim
Input items are terminated by the specified character. Quotes and
backslash are not special; every character in the input is taken
literally. Disables the end-of-file string, which is treated like
any other argument. This can be used when the input consists of
simply newline-separated items, although it is almost always
better to design your program to use --null where this is
possible. The specified delimiter may be a single character, a
C-style character escape such as \n, or an octal or hexadecimal
escape code. Octal and hexadecimal escape codes are understood as
for the printf command. Multibyte characters are not supported.
Note the comments to the question and take them to heart. This answer could easily have been found by either checking your own man page, or using your favorite search engine to search for an on-line version of the man page, or searching for man xargs -d.
| What does the "-d" stand for in xargs -d [closed] |
1,519,164,598,000 |
When I run the mkdir {2009..2011}-{1..12} command, why can not I see it consecutively like 2009-1 2009-2 2009-3 ... 2009-12?
|
The listing is sorted lexically in columns. So 12 comes before 2 because it sorts on the first digit (as if it were a word with ab coming before b in standard sorting) and 1 comes before 2.
The normal way to handle this would be to include a leading zero on the single digits. 2009-01, 2009-02, ..., 2009-09, 2009-10. You can achieve this with mkdir {2009..2011}-{01..12}.
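A quick demonstration of the difference (bash brace expansion; zero-padded ranges like {01..12} need bash 4 or later):

```shell
# Without padding, lexical order interleaves the months
# (first entries: 2009-1, 2009-10, 2009-11, 2009-12, 2009-2, ...):
mkdir demo1 && (cd demo1 && mkdir 2009-{1..12} && ls)
# With a leading zero, lexical order matches chronological order
# (2009-01, 2009-02, ..., 2009-12):
mkdir demo2 && (cd demo2 && mkdir 2009-{01..12} && ls)
```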
| Why can not I see the screen output consecutively? |
1,519,164,598,000 |
I want to move files from a parent directory to a sub-directory using commands only, because I have only SSH access to the remote server.
I have files in /var/www/html/ and I want to move them to /var/www/html/myfolder
UPDATE: I somehow followed these steps and was able to move the files.
Check this Answer
|
Ok, I'll bite. You use the mv command.
Say that you are in directory foo:
ls foo/*
foo/file1 foo/file2
foo/bar:
Now you want to move file1 and file2 from directory foo to directory foo/bar:
mv -v file1 file2 bar/
file1 -> bar/file1
file2 -> bar/file2
Result:
ls foo
bar
ls foo/bar/
file1 file2
https://www.linux.com/learn/how-move-files-using-linux-commands-or-file-managers
| How to move files from parent diretory to sub-directory using command? [duplicate] |
1,519,164,598,000 |
How are apps like vim and w3m built? I was trying to find info about this online but I couldn’t really find much.
|
You could check the project's source code repositories. For instance, the vim source code can be found at: https://github.com/vim/vim
However, you probably want to start a few steps earlier, as such large and grown projects tend to be very complicated.
But I don't think this was your intended question. I assume you are rather interested in how to create terminal UIs. Behind the scenes this is basically writing a bunch of magic characters with printf which make the terminal do stuff like switching to alternative buffers or printing in different colors. There is even a Wikipedia page on the codes. But normally you would use a library that is abstracting away the low level parts like ncurses.
For more details, you could for instance check this article which looks like it answers your actual question.
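To make those "magic characters" concrete, here is a hand-rolled sketch of the raw mechanism; real TUI programs normally go through ncurses rather than emitting the escape sequences themselves:

```shell
printf '\033[?1049h'   # switch to the alternate screen buffer (as vim does)
printf '\033[2J\033[H' # clear it and move the cursor to the top-left corner
printf '\033[1;32mHello from a hand-rolled TUI\033[0m'
sleep 2
printf '\033[?1049l'   # switch back, restoring the previous screen contents
```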
| How to make CLI applications in Unix? [closed] |
1,519,164,598,000 |
I am new to UNIX.
How do I achieve the below
input: text1=ABC/text2=DEF
output: text1,text2
Thanks
|
Your question is not very exact in terms of the structure of the string you would like to convert. So I'm going to guess that your input characteristics are:
Multiple KEY=VALUE pairs are provided in single line.
Each pair will be separated from other pairs by / character.
/ must be placed only between pairs (not at the start or the end of string).
No consecutive repetition of / is allowed.
In each pair, key cannot be empty, but value can be empty (= is optional if value is empty).
Each key and value cannot contain = and/or / character.
And you needed to extract keys, then output them delimited by comma...
Sed-based Approach (Cheat)
This can be done from your script by running your input through one-line sed-based search/replace operation:
sed 's/=[^/]*//g;y/\//,/'
Translation: Remove all instances of = together with consecutive non-/ characters following it; then, replace all / characters with comma.
Example code follows (should run on any POSIX shell, not just GNU Bash):
#!/bin/sh
# This is ssv-keys-sed.sh
echo -n "input: "
IFS= read -r INPUT
echo -n "output: "
echo "$INPUT" | sed 's/=[^/]*//g;y/\//,/'
Example run:
$ sh ssv-keys-sed.sh
input: keyA=valueA/ k e y B =/keyC/keyD=valueD
output: keyA, k e y B ,keyC,keyD
Shell Script Approach (Full Parsing)
If you insist on doing this by using shell script-based parsing rather than substitution-based cheat above, you can toy with IFS word separator variable and for loop. Be sure to take notice of quoting (and lack of thereof) in different contexts; this can make or break the program, since we are tinkering with shell's internal word separator.
If you use a shell script variable unquoted, its value will be split by the delimiters specified in the IFS variable, then taken as multiple tokens.
If you enclose a shell script variable in double quotes, its value will be used as a whole, not split.
If you enclose text in single quotes, it will not be treated as a variable; everything written inside single quotes is taken literally.
Following script should run on any POSIX shell, not just GNU Bash...
#!/bin/sh
# This is ssv-keys-parse.sh
# Show input prompt
echo -n "input: "
# Read one line from standard input into variable INPUT,
# no parsing or escape-processing
IFS= read -r INPUT
# Prepare empty output variable OUTPUT
OUTPUT=""
# Set parsing separator for extracting pairs
IFS="/"
# Extract each pair
# ^ Note that all pairs will be extracted before the loop is run,
# so the separator set inside the loop won't affect pair extraction.
for PAIR in $INPUT
do
# Set parsing separator for extracting key
IFS="="
for KEY in $PAIR
do
# Stop at the first split part of key-value pair (i.e. key)
break
done
# If this is not the first key in the output, append comma to the output
if [ -n "$OUTPUT" ]
then
OUTPUT="$OUTPUT,"
fi
# Append the extracted key to the output
OUTPUT="$OUTPUT$KEY"
done
# Emit output
echo "output: $OUTPUT"
Example run:
$ sh ssv-keys-parse.sh
input: keyA=valueA/ k e y B =/keyC/keyD=valueD
output: keyA, k e y B ,keyC,keyD
P.S. My test runs are done with Debian Almquist Shell installed as /bin/sh, and sed being GNU sed.
| find a string and get the previous string delimited by a comma [closed] |
1,519,164,598,000 |
I understand what the "cat" command does.
I.e.
cat file1 file2 > file3
Will put the contents of file1 and file2 in file3. (If I am not mistaken)
But what exactly does:
cat file1 | file2 > file3
do?
I don't have a UNIX machine to test this on, and I can't google " | ", hence my question.
|
It's a pipeline. It will redirect the standard output of the first command into the standard input of the second command.
cat file1 | grep example
For example, the above command will catenate the requested file into grep's stdin.
The command you posted would fail.
cat file1 | file2 > file3
file2 isn't an executable and thus the operation would stop there.
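A working pipeline, by contrast, chains programs whose standard output feeds the next program's standard input, for example:

```shell
# Count how many lines of file1 contain the word "error":
cat file1 | grep error | wc -l
# The same thing without the extra cat:
grep -c error file1
```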
| What does "|" between arguments in cat command do? |
1,519,164,598,000 |
I would like to create a new .wc command.
|
With the exception of shell builtins, commands are just programs. This means your question reduces to, "How do I write a program?"
(Or, "How do I write a script?" which amounts to the same thing, since a script is just another type of program. The distinction between scripting and programming is not important to get into here.)
wc is a good example, because it is not a shell builtin. It is just another program on the system, typically installed in /usr/bin/wc or /bin/wc, depending on the OS.
In order to make your new command behave like the existing ones on the system, the program implementing it has to be installed somewhere in the PATH. It is common on Linux distributions to put $HOME/bin into the user's PATH if the directory is present on login. If you want the command to be available to all users on the system, you probably want to put it somewhere else, like /usr/local/bin.
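As a minimal sketch (the name mywc and the ~/bin location are just example choices):

```shell
# 1. Create a personal bin directory and a tiny script inside it.
mkdir -p ~/bin
cat > ~/bin/mywc <<'EOF'
#!/bin/sh
# A trivial wc-like command: count lines on stdin or in the named files.
wc -l "$@"
EOF
chmod +x ~/bin/mywc

# 2. Make sure ~/bin is on your PATH (many distros already do this):
export PATH="$HOME/bin:$PATH"

# 3. It now behaves like any other command:
printf 'one\ntwo\n' | mywc
```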
| How to create new script , role like wc [closed] |
1,519,164,598,000 |
I googled this question many times but could not find a good reason why the knowledge of command line is important.Some say GUI cannot be used always but there are no examples supporting their statement.
|
On command line you need to learn a few comparatively low level but extremely flexible tools, which serve as building blocks for anything more complex. You only need to learn them once and you can combine them later for any task you need to do.
GUI-based tools tend to be more powerful, but also tend to be more specialized and inflexible. - They are good for one single (complex) task. If you need to do something else you need to find another tool, hope it is doing exactly the right thing and you then need to learn how to use yet another tool.
| Why is learning command line imporatant? [duplicate] |
1,519,164,598,000 |
I have a couple of questions about the find command.
How to show how many files and directories (only the result numbers) within the /var directory (and below) are owned by someone other than you or root.
same as above, but this time is to show how many users.
Modify the command to show those other owners (in alphabetical order) with the output numbered. It should be something like this:
1 avahi-autoipd
2 colord
3 daemon
4 dirmngr
5 libuuid
6 lp
7 man
8 mdm
9 ntp
10 speech-dispatcher
11 syslog
Each of these questions should use 3 separate but very similar command lines.
EDIT:
I figured it out by myself
sudo find /var/ -not -user root -not -user myusername | wc -l
sudo find /var/ -not -user root -not -user myusername -printf '%u\n' | sort -u | wc -l
sudo find /var/ -not -user root -not -user myusername -printf '%u\n' | sort -u
These command lines will do the same thing too:
Part1:
sudo ls -oAu1QBR /var | tr -s ' ' | cut -d' ' -f3 | grep -Ev '(\"|root|^[[:space:]]*$)' | grep -v ${USER} | wc -l
Part2:
sudo ls -oAu1QBR /var | tr -s ' ' | cut -d' ' -f3 | grep -Ev '(\"|root|^[[:space:]]*$)' | grep -v ${USER} | sort -u | wc -l
Part3:
sudo ls -oAu1QBR /var | tr -s ' ' | cut -d' ' -f3 | grep -Ev '(\"|root|^[[:space:]]*$)' | grep -v ${USER} | sort -u | nl
|
Have I already mentioned I like zsh's glob qualifiers?
files_in_var_not_owned_by_me_or_root=(/var/**/*(^u0u$UID))
echo $#files_in_var_not_owned_by_me_or_root
typeset -U owners_of_files_in_var
zstat -s -A owners_of_files_in_var +uid -- $files_in_var_not_owned_by_me_or_root
echo $#owners_of_files_in_var
i=1
for x in ${(o)owners_of_files_in_var}; do
printf '%4d %s\n' $((i++)) $x
done
| Find Command: display the file numbers [closed] |
1,349,361,022,000 |
So I was surfing the net and stumbled upon this article. It basically states that FreeBSD, starting from Version 10 and above will deprecate GCC in favor of Clang/LLVM.
From what I have seen around the net so far, Clang/LLVM is a fairly ambitious project, but in terms of reliability it cannot match GCC.
Are there any technical reasons why FreeBSD is choosing LLVM as its compiler infrastructure, or does the whole matter boil down to the eternal GNU/GPL vs. BSD licenses?
This question has (somehow) relevant information about the usage of GCC in FreeBSD
|
Summary: The primary reason for switching from GCC to Clang is the incompatibility of GCC's GPL v3 license with the goals of the FreeBSD project. There are also political issues to do with corporate investment, as well as user base requirements. Finally, there are expected technical advantages to do with standards compliance and ease of debugging. Real world performance improvements in compilation and execution are code-specific and debatable; cases can be made for both compilers.
FreeBSD and the GPL: FreeBSD has an uneasy relationship with the GPL. BSD-license advocates believe that truly free software has no usage restrictions. GPL advocates believe that restrictions are necessary in order to protect software freedom, and specifically that the ability to create non-free software from free software is an unjust form of power rather than a freedom. The FreeBSD project, where possible, tries to avoid the use of the GPL:
Due to the additional complexities that can evolve in the commercial
use of GPL software, we do, however, endeavor to replace such software
with submissions under the more relaxed FreeBSD license whenever
possible.
FreeBSD and the GPL v3: The GPL v3 explicitly forbids the so-called Tivoisation of code, a loophole in the GPL v2 which enabled hardware restrictions to disallow otherwise legal software modifications by users. Closing this loophole was an unacceptable step for many in the FreeBSD community:
Appliance vendors in particular have the most to lose if the large
body of software currently licensed under GPLv2 today migrates to the
new license. They will no longer have the freedom to use GPLv3
software and restrict modification of the software installed on their
hardware... In short, there is a large
base of OpenSource consumers that are suddenly very interested in
understanding alternatives to GPL licensed software.
Because of GCC's move to the GPL v3, FreeBSD was forced to remain using GCC 4.2.1 (GPL v2), which was released way back in 2007, and is now significantly outdated. The fact that FreeBSD did not move to use more modern versions of GCC, even with the additional maintenance headaches of running an old compiler and backporting fixes, gives some idea of the strength of the requirement to avoid the GPL v3. The C compiler is a major component of the FreeBSD base, and "one of the (tentative) goals for FreeBSD 10 is a GPL-free base system".
Corporate investment: Like many major open source projects, FreeBSD receives funding and development work from corporations. Although the extent to which FreeBSD is funded or given development by Apple is not easily discoverable, there is considerable overlap because Apple's Darwin OS makes use of substantial BSD-originated kernel code. Additionally, Clang itself was originally an in-house Apple project, before being open-sourced in 2007. Since corporate resources are a key enabler of the FreeBSD project, meeting sponsor needs is probably a significant real-world driver.
Userbase: FreeBSD is an attractive open source option for many companies, because the licensing is simple, unrestrictive and unlikely to lead to lawsuits. With the arrival of GPL v3 and the new anti-Tivoisation provisions, it has been suggested that there is an accelerating, vendor-driven trend towards more permissive licenses. Since FreeBSD's perceived advantage to commercial entities lies in its permissive license, there is increasing pressure from the corporate user base to move away from GCC, and the GPL in general.
Issues with GCC: Apart from the license, using GCC has some perceived issues. GCC is not fully-standards compliant, and has many extensions not found in ISO standard C. At over 3 million lines of code, it is also "one of the most complex and free/open source software projects". This complexity makes distro-level code modification a challenging task.
Technical advantages: Clang does have some technical advantages compared to GCC. Most notable are much more informative error messages and an explicitly designed API for IDEs, refactoring and source code analysis tools. Although the Clang website presents plots indicating much more efficient compilation and memory usage, real world results are quite variable, and broadly in line with GCC performance. In general, Clang-produced binaries run more slowly than the equivalent GCC binaries:
While using LLVM is faster at building code than GCC... in most
instances the GCC 4.5 built binaries had performed better than
LLVM-GCC or Clang... in the rest of the tests the performance was
either close to that of GCC or well behind. In some tests, the
performance of the Clang generated binaries was simply awful.
Conclusion: It's highly unlikely that compilation efficiency would be a significant motivator to take the substantial risk of moving a large project like FreeBSD to an entirely new compiler toolchain, particularly when binary performance is lacking. However, the situation was not really tenable. Given a choice between 1) running an out-of-date GCC, 2) Moving to a modern GCC and being forced to use a license incompatible with the goals of the project or 3) moving to a stable BSD-licensed compiler, the decision was probably inevitable. Bear in mind that this only applies to the base system, and support from the distribution; nothing prevents a user from installing and using a modern GCC on their FreeBSD box themselves.
| Why is FreeBSD deprecating GCC in favor of Clang/LLVM? |
1,349,361,022,000 |
I understand how to define include shared objects at linking/compile time. However, I still wonder how do executables look for the shared object (*.so libraries) at execution time.
For instance, my app a.out calls functions defined in the lib.so library. After compiling, I move lib.so to a new directory in my $HOME.
How can I tell a.out to go look for it there?
|
The shared library HOWTO explains most of the mechanisms involved, and the dynamic loader manual goes into more detail. Each unix variant has its own way, but most use the same executable format (ELF) and have similar dynamic linkers¹ (derived from Solaris). Below I'll summarize the common behavior with a focus on Linux; check your system's manuals for the complete story.
(Terminology note: the part of the system that loads shared libraries is often called “dynamic linker”, but sometimes “dynamic loader” to be more precise. “Dynamic linker” can also mean the tool that generates instructions for the dynamic loader when compiling a program, or the combination of the compile-time tool and the run-time loader. In this answer, “linker” refers to the run-time part.)
In a nutshell, when it's looking for a dynamic library (.so file) the linker tries:
directories listed in the LD_LIBRARY_PATH environment variable (DYLD_LIBRARY_PATH on OSX);
directories listed in the executable's rpath;
directories on the system search path, which (on Linux at least) consists of the entries in /etc/ld.so.conf plus /lib and /usr/lib.
The rpath is stored in the executable (it's the DT_RPATH or DT_RUNPATH dynamic attribute). It can contain absolute paths or paths starting with $ORIGIN to indicate a path relative to the location of the executable (e.g. if the executable is in /opt/myapp/bin and its rpath is $ORIGIN/../lib:$ORIGIN/../plugins then the dynamic linker will look in /opt/myapp/lib and /opt/myapp/plugins). The rpath is normally determined when the executable is compiled, with the -rpath option to ld, but you can change it afterwards with chrpath.
In the scenario you describe, if you're the developer or packager of the application and intend for it to be installed in a …/bin, …/lib structure, then link with -rpath='$ORIGIN/../lib'. If you're installing a pre-built binary on your system, either put the library in a directory on the search path (/usr/local/lib if you're the system administrator, otherwise a directory that you add to $LD_LIBRARY_PATH), or try chrpath.
| Where do executables look for shared objects at runtime? |
1,349,361,022,000 |
What is the Fedora equivalent of the Debian build-essential package?
|
The closest equivalent would probably be to install the below packages:
sudo dnf install make automake gcc gcc-c++ kernel-devel
However, if you don't care about exact equivalence and are ok with pulling in a lot of packages you can install all the development tools and libraries with the below command.
sudo dnf groupinstall "Development Tools" "Development Libraries"
On Fedora version older than 32 you will need the following:
sudo dnf groupinstall @development-tools @development-libraries
| What is the Fedora equivalent of the Debian build-essential package? |
1,349,361,022,000 |
I need to compile some software on my Fedora machine. Where's the best place to put it so not to interfere with the packaged software?
|
Rule of thumb, at least on Debian-flavoured systems:
/usr/local for stuff which is "system-wide"—i.e. /usr/local tends to be in a distro's default $PATH, and follows a standard UNIX directory hierarchy with /usr/local/bin, /usr/local/lib, etc.
/opt for stuff you don't trust to make system-wide, with per-app prefixes—i.e. /opt/firefox-3.6.8, /opt/mono-2.6.7, and so on. Stuff in here requires more careful management, but is also less likely to break your system—and is easier to remove since you just delete the folder and it's gone.
| Where should I put software I compile myself? |
1,349,361,022,000 |
What benefit could I see by compiling a Linux kernel myself? Is there some efficiency you could create by customizing it to your hardware?
|
In my mind, the only benefit you really get from compiling your own linux kernel is:
You learn how to compile your own linux kernel.
It's not something you need to do for more speed / memory / xxx whatever. It is a valuable thing to do if that's the stage you feel you are at in your development. If you want to have a deeper understanding of what this whole "open source" thing is about, about how and what the different parts of the kernel are, then you should give it a go. If you are just looking to speed up your boot time by 3 seconds, then... what's the point... go buy an ssd. If you are curious, if you want to learn, then compiling your own kernel is a great idea and you will likely get a lot out of it.
With that said, there are some specific reasons when it would be appropriate to compile your own kernel (as several people have pointed out in the other answers). Generally these arise out of a specific need you have for a specific outcome, for example:
I need to get the system to boot/run on hardware with limited resources
I need to test out a patch and provide feedback to the developers
I need to disable something that is causing a conflict
I need to develop the linux kernel
I need to enable support for my unsupported hardware
I need to improve performance of x because I am hitting the current limits of the system (and I know what I'm doing)
The issue lies in thinking that there's some intrinsic benefit to compiling your own kernel when everything is already working the way it should be, and I don't think that there is. Though you can spend countless hours disabling things you don't need and tweaking the things that are tweakable, the fact is the linux kernel is already pretty well tuned (by your distribution) for most user situations.
| What is the benefit of compiling your own linux kernel? |
1,349,361,022,000 |
Will the executable of a small, extremely simple program, such as the one shown below, that is compiled on one flavor of Linux run on a different flavor? Or would it need to be recompiled?
Does machine architecture matter in a case such as this?
int main()
{
return (99);
}
|
It depends. Something compiled for IA-32 (Intel 32-bit) may run on amd64, as Linux on Intel retains backwards compatibility with 32-bit applications (with suitable software installed). Here's your code compiled on a RedHat 7.3 32-bit system (circa 2002, gcc version 2.96) and then the binary copied over to and run on a CentOS 7.4 64-bit system (circa 2017):
-bash-4.2$ file code
code: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
-bash-4.2$ ./code
-bash: ./code: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
-bash-4.2$ sudo yum -y install glibc.i686
...
-bash-4.2$ ./code ; echo $?
99
Ancient RedHat 7.3 to Centos 7.4 (essentially RedHat Enterprise Linux 7.4) is staying in the same "distribution" family, so will likely have better portability than going from some random "Linux from scratch" install from 2002 to some other random Linux distribution in 2018.
Something compiled for amd64 would not run on 32-bit only releases of Linux (old hardware does not know about new hardware). This is also true for new software compiled on modern systems intended to be run on ancient old things, as libraries and even system calls may not be backwards portable, so may require compilation tricks, or obtaining an old compiler and so forth, or possibly instead compiling on the old system. (This is a good reason to keep virtual machines of ancient old things around.)
Architecture does matter; amd64 (or IA-32) is vastly different from ARM or MIPS so the binary from one of those would not be expected to run on another. At the assembly level the main section of your code on IA-32 compiles via gcc -S code.c to
main:
pushl %ebp
movl %esp,%ebp
movl $99,%eax
popl %ebp
ret
which an amd64 system can deal with (on a Linux system--OpenBSD by contrast on amd64 does not support 32-bit binaries; backwards compatibility with old archs does give attackers wiggle room, e.g. CVE-2014-8866 and friends). Meanwhile on a big-endian MIPS system main instead compiles to:
main:
.frame $fp,8,$31
.mask 0x40000000,-4
.fmask 0x00000000,0
.set noreorder
.set nomacro
addiu $sp,$sp,-8
sw $fp,4($sp)
move $fp,$sp
li $2,99
move $sp,$fp
lw $fp,4($sp)
addiu $sp,$sp,8
j $31
nop
which an Intel processor will have no idea what to do with, and likewise for the Intel assembly on MIPS.
You could possibly use QEMU or some other emulator to run foreign code (perhaps very, very slowly).
However! Your code is very simple, so it will have fewer portability issues than most things. Programs typically make use of libraries that have changed over time (glibc, openssl, ...); for those, one may also need to install older versions of various libraries (RedHat, for example, typically puts "compat" somewhere in the package name for such)
compat-glibc.x86_64 1:2.12-4.el7.centos
or possibly worry about ABI changes (Application Binary Interface) for way old things that use glibc, or more recently changes due to C++11 or other C++ releases. One could also compile static (greatly increasing the binary size on disk) to try to avoid library issues, though whether some old binary did this depends on whether the old Linux distribution was compiling most everything dynamic (RedHat: yes) or not. On the other hand, things like patchelf can rejigger dynamic (ELF, but probably not a.out format) binaries to use other libraries.
However! Being able to run a program is one thing, and actually doing something useful with it another. Old 32-bit Intel binaries may have security issues if they depend on a version of OpenSSL that has some horrible and not-backported security problem in it, or the program may not be able to negotiate at all with modern web servers (as the modern servers reject the old protocols and ciphers of the old program), or SSH protocol version 1 is no longer supported, or ...
| Will a Linux executable compiled on one "flavor" of Linux run on a different one? |
1,349,361,022,000 |
I want to install tmux on a machine where I don't have root access. I already compiled libevent and installed it in $HOME/.bin-libevent, and now I want to compile tmux, but configure always ends with configure: error: "libevent not found", even though I tried to point to the libevent directory in Makefile.am by modifying LDFLAGS and CPPFLAGS. Nothing seems to work.
How can I tell the system to look in my home dir for the libevent?
|
Try:
DIR="$HOME/.bin-libevent"
./configure CFLAGS="-I$DIR/include" LDFLAGS="-L$DIR/lib"
(I'm sure there must be a better way to configure library paths with autoconf. Usually there is a --with-libevent=dir option. But here, it seems there is no such option.)
| Why can't gcc find libevent when building tmux from source? |
1,349,361,022,000 |
I was wondering: when installing something, there's an easy way of double clicking an install executable file, and on the other hand, there is a way of building it from source.
The latter one, downloading a source bundle, is really cumbersome.
But what is the fundamental difference between these two methods?
|
All software consists of programs, which are distributed as source packages. Every source package needs to be built first to run on your system.
Binary packages are ones that have already been built from source by someone, with general features and parameters chosen so that a large number of users can install and use them.
Binary packages are easy to install.
However, they may not have all the options of the upstream package.
So when installing from source, you need to build the source code yourself. That means you need to take care of the dependencies yourself. You also need to be aware of the package's features so that you can build it accordingly.
Advantages of installing from source:
You can install the latest version and can always stay updated, whether it be a security patch or a new feature.
Allows you to trim down the features while installing so as to suit your needs.
Similarly you can add some features which may not be provided in the binary.
Install it in a location you wish.
In case of some software you may provide your hardware specific info for a suitable installation.
In short, installing from source gives you heavy customization options but takes a lot of effort, while installing from a binary is easier but may not let you customize as you wish.
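A minimal sketch of the "install it in a location you wish" point, using a throwaway Makefile in place of a real source package (the names and the prefix are all arbitrary):

```shell
mkdir -p /tmp/srcdemo && cd /tmp/srcdemo
printf '#!/bin/sh\necho hello from source\n' > hello.sh
# A trivial Makefile with an overridable PREFIX, like most source packages.
printf 'PREFIX ?= /usr/local\ninstall:\n\tmkdir -p $(PREFIX)/bin\n\tcp hello.sh $(PREFIX)/bin/hello\n\tchmod +x $(PREFIX)/bin/hello\n' > Makefile
make install PREFIX="$HOME/.local"   # no root needed for a home-dir prefix
"$HOME/.local/bin/hello"             # prints "hello from source"
```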
Update: adding the argument related to security from the comments below. Yes, it is true that when installing from a binary you cannot verify the integrity of the source code. But then it depends on where you got the binary from. There are lots of trusted sources from which you can get the binary of any new project; the only negative is time. It may take a while for the binary of an update, or even of a new project, to appear in our trusted repositories.
And above all things, about software security, I'd like to highlight this hilarious page at bell-labs provided by Joe in the below comments.
| What is the difference between building from source and using an install package? |
1,349,361,022,000 |
I wish to install OpenVPN on OpenBSD 5.5 using OpenVPN source tarball.
According to the instructions here, I have to install lzo and
add CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
directives to "configure", since gcc will not find them otherwise.
I have googled extensively for a guide on how to do the above on OpenBSD, but there is none.
This is what I plan to do:
Untar the source tarball to a freshly created directory
Issue the command
./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
Issue the command make
Issue the command make install
Which of the following syntax is correct?
./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
or
./configure --CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
or
./configure --CFLAGS="-I/usr/local/include" --LDFLAGS="-L/usr/local/lib"
|
The correct way is:
./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
but this may not work with all configure scripts. It's probably better to set environment variables such as CPATH and LIBRARY_PATH (see gcc man page).
An example:
export CPATH=/usr/local/include
export LIBRARY_PATH=/usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib
in your .profile, for instance. LD_LIBRARY_PATH can be needed at run time for shared libraries when no run path is embedded in the binary (whether one is depends on the OS, the build tools and the options used, but setting it shouldn't hurt).
| What is the correct syntax to add CFLAGS and LDFLAGS to "configure"? |
1,349,361,022,000 |
Let's say I work for a large services organisation outside the US/UK. We use UNIX and Linux servers extensively.
Reading through this article it mentions that it would be easy to insert a backdoor into a C compiler, then any code compiled with that compiler would also contain a backdoor. Now given recent leaks regarding the NSA/GCHQ's mandate to put backdoors/weaknesses in all encryption methods, hardware and software, the compiler is now a critical point of failure. Potentially all standard UNIX/Linix distributions could be compromised. We cannot afford to have our systems, data and our customers data compromised by rogue governments.
Given this information, I would like to build a trusted compiler from scratch, then I have a secure base to build on so I can build the Operating System and applications from source code using that compiler.
Question
What is the correct (and secure way) to go about compiling a compiler from source code (a seemingly chicken-egg scenario) then compiling a trusted Unix/Linux distribution from scratch?
You can assume I or others have the ability to read and understand source code for security flaws, so source code will be vetted first before compiling. What I am really after is a working guide to produce this compiler from scratch securely and can be used to compile the kernel, other parts of the OS and applications.
The security stack must start at the base level if we are to have any confidence in the operating system or applications running on that stack. Yes I understand there may be hardware backdoors which may insert some microcode into the compiler as it's being built. Not much we can do about that for the moment except maybe use chips not designed in the US. Let's get this layer sorted for a start and assume I could build it on an old computer potentially before any backdoors were inserted.
As Bruce Schneier says: "To the engineers, I say this: we built the internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it."
Extra links:
http://nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all&_r=0
http://theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying
|
AFAIK the only way to be completely sure of security would be to write a compiler in assembly language (or modifying the disk directly yourself). Only then can you ensure that your compiler isn't inserting a backdoor - this works because you're actually eliminating the compiler completely.
From there, you may use your from-scratch compiler to bootstrap e.g. the GNU toolchain. Then you could use your custom toolchain to compile a Linux From Scratch system.
Note that to make things easier on yourself, you could have a second intermediary compiler, written in C (or whatever other language). So you would write compiler A in assembly, then rewrite that compiler in C/C++/Python/Brainfuck/whatever to get compiler B, which you would compile using compiler A. Then you would use compiler B to compile gcc and friends.
| How to compile the C compiler from scratch, then compile Unix/Linux from scratch |
1,349,361,022,000 |
./configure always checks whether the build environment is sane...
I can't help but wonder what exactly a insane build environment is. What errors can this check raise?
|
This comes from automake, specifically from its AM_SANITY_CHECK macro, which is called from AM_INIT_AUTOMAKE, which is normally called early in configure.ac. The gist of this macro is:
Check that the path to the source directory doesn't contain certain “unsafe” characters which can be hard to properly include in shell scripts and makefiles.
Check that ls appears to work.
Check that a new file created in the build directory is newer than the configure file. If it isn't (typically because the clock on the build system is not set correctly), the build process is likely to fail because build processes usually rely on generated files having a more recent timestamp than the source files they are generated from.
| ./configure: What is an insane build environment? |
1,349,361,022,000 |
This is an issue that really limits my enjoyment of Linux. If the application isn't on a repository or if it doesn't have an installer script, then I really struggle where and how to install an application from source.
Comparatively to Windows, it's easy. You're (pretty much) required to use an installer application that does all of the work in a Wizard. With Linux... not so much.
So, do you have any tips or instructions on this or are there any websites that explicitly explain how, why and where to install Linux programs from source?
|
Normally, the project will have a website with instructions for how to build and install it. Google for that first.
For the most part you will do either:
Download a tarball (tar.gz or tar.bz2 file), which is a release of a specific version of the source code
Extract the tarball with a command like tar zxvf myapp.tar.gz for a gzipped tarball or tar jxvf myapp.tar.bz2 for a bzipped tarball
cd into the directory created above
run ./configure && make && sudo make install
Or:
Use git or svn or whatever to pull the latest source code from their official source repository
cd into the directory created above
run ./autogen.sh && make && sudo make install
Both configure and autogen.sh will accept a --prefix argument to specify where the software is installed. I recommend checking out Where should I put software I compile myself? for advice on the best place to install custom-built software.
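One detail worth knowing about the extraction step: modern GNU tar auto-detects the compression, so a plain tar xf handles .tar.gz and .tar.bz2 releases alike. A throwaway demonstration (names arbitrary):

```shell
cd /tmp
mkdir -p myapp-1.0 && echo demo > myapp-1.0/README
tar czf myapp-1.0.tar.gz myapp-1.0 && rm -r myapp-1.0
tar xf myapp-1.0.tar.gz    # no z or j flag needed; tar sniffs the format
cat myapp-1.0/README       # prints "demo"
```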
| How to compile and install programs from source |
1,349,361,022,000 |
I'm trying to install a Debian package from source (via git). I downloaded the
package, changed to the package’s directory and ran ./configure command but
it returned bash: ./configure: No such file or directory. What can be the
problem? A configure.ac file is located in the program folder.
./configure
make
sudo make install
|
If the file is called configure.ac, do
$> autoconf
This depends on M4 and Automake being installed.
If you're not sure what to do, try
$> cat README
They must mean that you use autoconf to generate an executable configure file.
So the order is:
$> autoconf
$> ./configure
$> make
$> make install
| Can not run configure command: "No such file or directory" |
1,349,361,022,000 |
I am trying to upgrade apache 2.2.15 to 2.2.27. While running config.nice taken from apache2.2.15/build I am getting following error:
checking whether the C compiler works... no
configure: error: in `/home/vkuser/httpd-2.2.27/srclib/apr':
configure: error: C compiler cannot create executables
I have tried searching online but had no luck. I have also tested the C compiler by compiling and running a small test.c program, and it runs fine. There were a few solutions given online, like installing the 'kernel-devel' package, but they did not resolve the issue. How can I get this to work?
Following is the config.log generated:
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by configure, which was
generated by GNU Autoconf 2.67. Invocation command line was
$ ./configure --prefix=/opt/myapp/apache2.2 --with-mpm=worker --enable-static-support --enable-ssl=static --enable-modules=most --disable-authndbd --disable-authn-dbm --disable-dbd --enable-static-logresolve --enable-static-rotatelogs --enable-proxy=static --enable-proxyconnect=static --enable-proxy-ftp=static --enable-proxy-http=static --enable-rewrite=static --enable-so=static --with-ssl=/opt/myapp/apache2.2/openssl --host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32 --with-included-apr
## --------- ##
## Platform. ##
## --------- ##
hostname = dmcpq-000
uname -m = x86_64
uname -r = 2.6.18-348.12.1.el5
uname -s = Linux
uname -v = #1 SMP Mon Jul 1 17:54:12 EDT 2013
/usr/bin/uname -p = unknown
/bin/uname -X = unknown
/bin/arch = x86_64
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /opt/myapp/Entrust/GetAccess/Runtime/Apache22/bin
PATH: /usr/kerberos/sbin
PATH: /usr/kerberos/bin
PATH: /usr/local/sbin
PATH: /usr/local/bin
PATH: /sbin
PATH: /bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /root/bin
## ----------- ##
## Core tests. ##
## ----------- ##
configure:2793: checking for chosen layout
configure:2795: result: Apache
configure:3598: checking for working mkdir -p
configure:3614: result: yes
configure:3629: checking build system type
configure:3643: result: x86_64-unknown-linux-gnu
configure:3663: checking host system type
configure:3676: result: x86_32-unknown-linux-gnu
configure:3696: checking target system type
configure:3709: result: x86_32-unknown-linux-gnu
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=-m32
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_LDFLAGS_set=set
ac_cv_env_LDFLAGS_value=-m32
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=set
ac_cv_env_host_alias_value=x86_32-unknown-linux-gnu
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=x86_32-unknown-linux-gnu
ac_cv_mkdir_p=yes
ac_cv_target=x86_32-unknown-linux-gnu
## ----------------- ##
## Output variables. ##
## ----------------- ##
APACHECTL_ULIMIT=''
APR_BINDIR=''
APR_CONFIG=''
APR_INCLUDEDIR=''
APR_VERSION=''
APU_BINDIR=''
APU_CONFIG=''
APU_INCLUDEDIR=''
APU_VERSION=''
AP_BUILD_SRCLIB_DIRS=''
AP_CLEAN_SRCLIB_DIRS=''
AP_LIBS=''
AWK=''
BUILTIN_LIBS=''
CC=''
CFLAGS='-m32'
CORE_IMPLIB=''
CORE_IMPLIB_FILE=''
CPP=''
CPPFLAGS=''
CRYPT_LIBS=''
CXX=''
CXXFLAGS=''
DEFS=''
DSO_MODULES=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP=''
EXEEXT=''
EXTRA_CFLAGS=''
EXTRA_CPPFLAGS=''
EXTRA_CXXFLAGS=''
EXTRA_INCLUDES=''
EXTRA_LDFLAGS=''
EXTRA_LIBS=''
GREP=''
HTTPD_LDFLAGS=''
HTTPD_VERSION=''
INCLUDES=''
INSTALL=''
INSTALL_DSO=''
INSTALL_PROG_FLAGS=''
LDFLAGS='-m32'
LIBOBJS=''
LIBS=''
LIBTOOL=''
LN_S=''
LTCFLAGS=''
LTFLAGS=''
LTLIBOBJS=''
LT_LDFLAGS=''
LYNX_PATH=''
MKDEP=''
MKINSTALLDIRS=''
MK_IMPLIB=''
MODULE_CLEANDIRS=''
MODULE_DIRS=''
MOD_ACTIONS_LDADD=''
MOD_ALIAS_LDADD=''
MOD_ASIS_LDADD=''
MOD_AUTHNZ_LDAP_LDADD=''
MOD_AUTHN_ALIAS_LDADD=''
MOD_AUTHN_ANON_LDADD=''
MOD_AUTHN_DBD_LDADD=''
MOD_AUTHN_DBM_LDADD=''
MOD_AUTHN_DEFAULT_LDADD=''
MOD_AUTHN_FILE_LDADD=''
MOD_AUTHZ_DBM_LDADD=''
MOD_AUTHZ_DEFAULT_LDADD=''
MOD_AUTHZ_GROUPFILE_LDADD=''
MOD_AUTHZ_HOST_LDADD=''
MOD_AUTHZ_OWNER_LDADD=''
MOD_AUTHZ_USER_LDADD=''
MOD_AUTH_BASIC_LDADD=''
MOD_AUTH_DIGEST_LDADD=''
MOD_AUTOINDEX_LDADD=''
MOD_BUCKETEER_LDADD=''
MOD_CACHE_LDADD=''
MOD_CASE_FILTER_IN_LDADD=''
MOD_CASE_FILTER_LDADD=''
MOD_CERN_META_LDADD=''
MOD_CGID_LDADD=''
MOD_CGI_LDADD=''
MOD_CHARSET_LITE_LDADD=''
MOD_DAV_FS_LDADD=''
MOD_DAV_LDADD=''
MOD_DAV_LOCK_LDADD=''
MOD_DBD_LDADD=''
MOD_DEFLATE_LDADD=''
MOD_DIR_LDADD=''
MOD_DISK_CACHE_LDADD=''
MOD_DUMPIO_LDADD=''
MOD_ECHO_LDADD=''
MOD_ENV_LDADD=''
MOD_EXAMPLE_LDADD=''
MOD_EXPIRES_LDADD=''
MOD_EXT_FILTER_LDADD=''
MOD_FILE_CACHE_LDADD=''
MOD_FILTER_LDADD=''
MOD_HEADERS_LDADD=''
MOD_HTTP_LDADD=''
MOD_IDENT_LDADD=''
MOD_IMAGEMAP_LDADD=''
MOD_INCLUDE_LDADD=''
MOD_INFO_LDADD=''
MOD_ISAPI_LDADD=''
MOD_LDAP_LDADD=''
MOD_LOGIO_LDADD=''
MOD_LOG_CONFIG_LDADD=''
MOD_LOG_FORENSIC_LDADD=''
MOD_MEM_CACHE_LDADD=''
MOD_MIME_LDADD=''
MOD_MIME_MAGIC_LDADD=''
MOD_NEGOTIATION_LDADD=''
MOD_OPTIONAL_FN_EXPORT_LDADD=''
MOD_OPTIONAL_FN_IMPORT_LDADD=''
MOD_OPTIONAL_HOOK_EXPORT_LDADD=''
MOD_OPTIONAL_HOOK_IMPORT_LDADD=''
MOD_PROXY_AJP_LDADD=''
MOD_PROXY_BALANCER_LDADD=''
MOD_PROXY_CONNECT_LDADD=''
MOD_PROXY_FTP_LDADD=''
MOD_PROXY_HTTP_LDADD=''
MOD_PROXY_LDADD=''
MOD_PROXY_SCGI_LDADD=''
MOD_REQTIMEOUT_LDADD=''
MOD_REWRITE_LDADD=''
MOD_SETENVIF_LDADD=''
MOD_SO_LDADD=''
MOD_SPELING_LDADD=''
MOD_SSL_LDADD=''
MOD_STATUS_LDADD=''
MOD_SUBSTITUTE_LDADD=''
MOD_SUEXEC_LDADD=''
MOD_UNIQUE_ID_LDADD=''
MOD_USERDIR_LDADD=''
MOD_USERTRACK_LDADD=''
MOD_VERSION_LDADD=''
MOD_VHOST_ALIAS_LDADD=''
MPM_LIB=''
MPM_NAME=''
MPM_SUBDIR_NAME=''
NONPORTABLE_SUPPORT=''
NOTEST_CFLAGS=''
NOTEST_CPPFLAGS=''
NOTEST_CXXFLAGS=''
NOTEST_LDFLAGS=''
NOTEST_LIBS=''
OBJEXT=''
OS=''
OS_DIR=''
OS_SPECIFIC_VARS=''
PACKAGE_BUGREPORT=''
PACKAGE_NAME=''
PACKAGE_STRING=''
PACKAGE_TARNAME=''
PACKAGE_URL=''
PACKAGE_VERSION=''
PATH_SEPARATOR=':'
PCRE_CONFIG=''
PICFLAGS=''
PILDFLAGS=''
PKGCONFIG=''
PORT=''
POST_SHARED_CMDS=''
PRE_SHARED_CMDS=''
RANLIB=''
RM=''
RSYNC=''
SHELL='/bin/sh'
SHLIBPATH_VAR=''
SHLTCFLAGS=''
SH_LDFLAGS=''
SH_LIBS=''
SH_LIBTOOL=''
SSLPORT=''
SSL_LIBS=''
UTIL_LDFLAGS=''
ab_LTFLAGS=''
abs_srcdir=''
ac_ct_CC=''
ap_make_delimiter=''
ap_make_include=''
bindir='${exec_prefix}/bin'
build='x86_64-unknown-linux-gnu'
build_alias=''
build_cpu='x86_64'
build_os='linux-gnu'
build_vendor='unknown'
cgidir='${datadir}/cgi-bin'
checkgid_LTFLAGS=''
datadir='${prefix}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE}'
dvidir='${docdir}'
errordir='${datadir}/error'
exec_prefix='${prefix}'
exp_bindir='/opt/myapp/apache2.2/bin'
exp_cgidir='/opt/myapp/apache2.2/cgi-bin'
exp_datadir='/opt/myapp/apache2.2'
exp_errordir='/opt/myapp/apache2.2/error'
exp_exec_prefix='/opt/myapp/apache2.2'
exp_htdocsdir='/opt/myapp/apache2.2/htdocs'
exp_iconsdir='/opt/myapp/apache2.2/icons'
exp_includedir='/opt/myapp/apache2.2/include'
exp_installbuilddir='/opt/myapp/apache2.2/build'
exp_libdir='/opt/myapp/apache2.2/lib'
exp_libexecdir='/opt/myapp/apache2.2/modules'
exp_localstatedir='/opt/myapp/apache2.2'
exp_logfiledir='/opt/myapp/apache2.2/logs'
exp_mandir='/opt/myapp/apache2.2/man'
exp_manualdir='/opt/myapp/apache2.2/manual'
exp_proxycachedir='/opt/myapp/apache2.2/proxy'
exp_runtimedir='/opt/myapp/apache2.2/logs'
exp_sbindir='/opt/myapp/apache2.2/bin'
exp_sysconfdir='/opt/myapp/apache2.2/conf'
host='x86_32-unknown-linux-gnu'
host_alias='x86_32-unknown-linux-gnu'
host_cpu='x86_32'
host_os='linux-gnu'
host_vendor='unknown'
htcacheclean_LTFLAGS=''
htdbm_LTFLAGS=''
htdigest_LTFLAGS=''
htdocsdir='${datadir}/htdocs'
htmldir='${docdir}'
htpasswd_LTFLAGS=''
httxt2dbm_LTFLAGS=''
iconsdir='${datadir}/icons'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
installbuilddir='${datadir}/build'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/modules'
localedir='${datarootdir}/locale'
localstatedir='${prefix}'
logfiledir='${localstatedir}/logs'
logresolve_LTFLAGS=''
mandir='${prefix}/man'
manualdir='${datadir}/manual'
nonssl_listen_stmt_1=''
nonssl_listen_stmt_2=''
oldincludedir='/usr/include'
other_targets=''
pdfdir='${docdir}'
perlbin=''
prefix='/opt/myapp/apache2.2'
progname=''
program_transform_name='s,x,x,'
proxycachedir='${localstatedir}/proxy'
psdir='${docdir}'
rel_bindir='bin'
rel_cgidir='cgi-bin'
rel_datadir=''
rel_errordir='error'
rel_exec_prefix=''
rel_htdocsdir='htdocs'
rel_iconsdir='icons'
rel_includedir='include'
rel_installbuilddir='build'
rel_libdir='lib'
rel_libexecdir='modules'
rel_localstatedir=''
rel_logfiledir='logs'
rel_mandir='man'
rel_manualdir='manual'
rel_proxycachedir='proxy'
rel_runtimedir='logs'
rel_sbindir='bin'
rel_sysconfdir='conf'
rotatelogs_LTFLAGS=''
runtimedir='${localstatedir}/logs'
sbindir='${exec_prefix}/bin'
shared_build=''
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/conf'
target='x86_32-unknown-linux-gnu'
target_alias=''
target_cpu='x86_32'
target_os='linux-gnu'
target_vendor='unknown'
configure: exit 1
|
From the output you've given, you are trying to compile a 32-bit build of Apache on a 64-bit system. This comes from the input to configure here:
--host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32
Also see the output lines confirming this:
configure:3629: checking build system type
configure:3643: result: x86_64-unknown-linux-gnu
configure:3663: checking host system type
configure:3676: result: x86_32-unknown-linux-gnu
configure:3696: checking target system type
configure:3709: result: x86_32-unknown-linux-gnu
Here it is using a 64 bit build system but a 32 bit host/target. Further down we see:
ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=-m32
This flag tells gcc to produce 32-bit objects. Your error that the C compiler cannot create executables is likely caused by not having a 32-bit toolchain present.
Testing your ability to compile 32 bit objects
You can test this by compiling a small C example with the -m32 flag.
// Minimal C example
#include <stdio.h>
int main()
{
printf("This works\n");
return 0;
}
Compiling:
gcc -m32 -o m32test m32test.c
If this command fails, then you have a problem with your compiler being able to build 32 bit objects. The error messages emitted from the compiler may be helpful in remedying this.
Remedies
Build for a 64 bit target (by removing the configure options forcing a 32 bit build), or
Install a 32 bit compiler toolchain
| configure: error: C compiler cannot create executables |
1,349,361,022,000 |
Possible Duplicate:
What does a kernel source tree contain? Is this related to Linux kernel headers?
I know that if I want to compile my own Linux kernel I need the Linux kernel headers, but what exactly are they good for?
I found out that under /usr/src/ there seem to be dozens of C header files. But what is their purpose, aren't they included in the kernel sources directly?
|
The header files define an interface: they declare how the functions in the source files can be called, without providing their implementations.
They are used so that a compiler can check if the usage of a function is correct as the function signature (return value and parameters) is present in the header file.
For this task the actual implementation of the function is not necessary.
You could do the same with the complete kernel sources, but you would install a lot of unnecessary files.
Example: if I want to use the function
int foo(double param);
in a program, I do not need to know how foo is implemented; I just need to know that it accepts a single parameter (a double) and returns an integer.
| What exactly are Linux kernel headers? [duplicate] |
1,349,361,022,000 |
I'd like to try using a kernel other than the one provided by my distro -- either from somewhere else, or as customized by me. Is this difficult or dangerous?
Where do I start?
|
Building a custom kernel can be time consuming -- mostly in the configuration, since modern computers can do the build in a matter of minutes -- but it is not particularly dangerous if you keep your current, working kernel, and make sure to leave that as an option via your bootloader (see step #6 below). This way, if your new one does not work, you can just reboot the old one.
In the following instructions, paths inside the source tree take the form [src]/whatever, where [src] is the directory you installed the source into, e.g. /usr/src/linux-3.13.3. You probably want to do this stuff su root as the source tree should remain secure in terms of write permissions (it should be owned by root).
While some of the steps are optional, you should read them anyway as they contain information necessary to understanding the rest of the process.
Download and unpack the source tarball.
These are available from kernel.org. The latest ones are listed on the front page, but if you look inside the /pub/ directory, you'll find an archive going all the way back to version 1.0. Unless you have special reason to do otherwise, you are best off just choosing the "Latest Stable". At the time of this writing, this is a 74 MB tar.xz file.
Once the tarball is downloaded, you need to unpack it somewhere. The normal place is in /usr/src. Place the file there and:
tar -xJf linux-X.X.X.tar.xz
Note that individual distros usually recommend you use one of their source packages instead of the vanilla tree. This contains distro specific patches, which may or may not matter to you. It will also match the kernel include headers used to compile some userspace tools, although they are most likely identical anyway.
In 15+ years of building custom kernels (mostly on Fedora/Debian/Ubuntu), I've never had a problem using the vanilla1 source. Doing that doesn't really make much difference, however, beyond the fact that if you want the absolute latest kernel, your distro probably has not packaged it yet. So the safest route is still to use the distro package, which should install into /usr/src. I prefer the latest stable so I can act as a guinea pig before it gets rolled out into the distros :)
Start with a basic configuration [optional].
You don't have to do this -- you can just dive right in and create a configuration from scratch. However, if you've never done that before, expect a lot of trial and error. This also means having to read through most of the options (there are hundreds). A better bet is to use your existing configuration, if available. If you used a distro source package, it probably already contains a [src]/.config file, so you can use that. Otherwise, check for a /proc/config.gz. This is an optional feature added in the 2.6 kernel. If it exists, copy that into the top level of the source tree and gunzip -c config.gz > .config.
If it doesn't exist, it maybe because this option was configured as a module. Try sudo modprobe configs, then check the /proc directory for config.gz again.
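The gunzip -c idiom used here writes the decompressed text to stdout and leaves the compressed file intact. This sketch uses a stand-in config.gz so it can run anywhere; on a real system the source would be /proc/config.gz:

```shell
# Stand-in for /proc/config.gz so the sketch is runnable anywhere.
printf 'CONFIG_EXT4_FS=y\n' | gzip -c > config.gz
# Decompress to .config without consuming the original.
gunzip -c config.gz > .config
cat .config
```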
The distro config is not very ideal in the sense that it includes almost every possible hardware driver. This does not matter to the kernel's functionality much, since they are modules and most of them will never get used, but it very significantly increases the time required to build. It is also awkward in that it requires an initramfs to contain certain core modules (see step #4 below). However, it is probably a better starting point than the default.
Note that the configuration options shift and change from one kernel version to the next, and when you run one of the make config programs below your .config will first be parsed and updated to match the new version. If the configuration is from a vastly older version, this may lead to strange results, so pay attention when you do the configuration. AFAIK it won't work at all the other way around (using a config from a newer version).
Create a .config.
[src]/.config is a text file used to configure the kernel. Don't edit this file directly. Changing options is often not a simple matter of replacing a Y with an N, etc; there is usually a set of interdependencies and branching possibilities. Instead, you want to use one of the config targets from the kernel makefile (meaning, enter make _____ on the command line from the top level source directory):
make config is the most basic but probably not to most people's taste. It is a sequence of questions -- a lot of questions -- and if you change your mind you have to start again.
make oldconfig is like make config except, if you already have a .config from a previous version, will skip questions except those pertaining to new options. There can still be a lot of those and most of them will be irrelevant to you so again, I don't recommend it.
make menuconfig is my (and I think most others') preferred method. It builds and executes a TUI interface (colored menus that will work on a terminal). This requires you have the -dev package for ncurses installed. It is fairly self-explanatory, except for the search which is accessible via /; the F1 "help" provides an explanation for the current option. There is an alternate version, make nconfig, with a few extra features, wherein F2 "syminfo" is the equivalent of menuconfig's F1.
make xconfig is a full GUI interface. This requires qmake and the -dev package for Qt be installed, as again, it's a program that is compiled and built. If you were not using these previously, that may be a substantial download. The reason I prefer menuconfig to the GUI version is that option hierarchies are presented using successive screens in the former but open accordion-like in the latter.
One of the first things you should (but don't have to) do is to add a "Local version" string (under General Setup). The reason for this is mentioned in #5 below.
"Labyrinthine" is a good way to describe the option hierarchy, and getting into detail with it is well beyond the scope of a Q&A like this one. If you want to sit down and go through everything, set aside hours. Greg Kroah-Hartman (long time lead dev for the linux kernel) has a free online book about the kernel (see References below) which contains a chapter about configuration, although it was written in 2006. My advice is to start with a reasonable base from your current distro kernel (as per #2) and then go through it and uncheck all the things you know you don't need. You'll also probably want to change some of the "module" options to "built-in", which brings us to my next point...
About initramfs [optional]
An "initramfs" is a compressed filesystem built into the kernel and/or loaded at boot time. Its primary purpose is to include modules that the kernel will need before it can access those in /lib/modules on the root filesystem -- e.g., drivers for the device containing that filesystem. Distros always use these partially because the drivers are mutually incompatible, and so cannot be all built into the kernel. Instead, ones appropriate to the current system are selected from inside the initramfs.
This works well and does not represent any kind of disadvantage, but it is probably an unnecessary complication when building your own kernel.2 The catch is, if you don't use an initramfs, you need to make sure the drivers for your root filesystem (and the device it's on) are built into the kernel. In menuconfig, this is the difference between an M (= module) option and a * (= built-in) option. If you don't get this right, the system will fail early on in the boot process. So, e.g., if you have a SATA harddisk and an ext4 root filesystem, you need drivers for those built-in. [If anyone can think of anything else that's a must-have, leave a comment and I'll incorporate that here].
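A quick way to verify the built-in vs. module distinction is to grep the finished .config. A stand-in .config is created here so the sketch is self-contained; the two option names are just examples:

```shell
# Stand-in .config; on a real system you would grep [src]/.config instead.
printf 'CONFIG_EXT4_FS=y\nCONFIG_SATA_AHCI=m\n' > .config
# =y means built-in, =m means module; root-filesystem drivers need =y
# when you boot without an initramfs.
grep -E '^CONFIG_(EXT4_FS|SATA_AHCI)=' .config
```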
If you do want to use an initramfs, you'll have to select the appropriate options in General Setup. There's a skeleton guide to creating one built into the kernel in [src]/Documentation/filesystems/ramfs-rootfs-initramfs.txt, but note that the distros don't do this; they use an external gzipped cpio file. However, that doc does contain a discussion of what should go in the initramfs (see "Contents of initramfs").
Build and install the kernel.
The next step is easy. To make the kernel, just run make in the [src] directory. If you are on a multi-core system, you can add -j N to speed things up, where N is the number of cores you want to dedicate + 1. There is no test or check. Once that's done, you can make modules. On a fast box, all this should take < 10 minutes.
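The cores-plus-one rule of thumb can be computed rather than guessed. This sketch only prints the command it would run:

```shell
# One job per core plus one, per the rule of thumb above.
jobs=$(( $(nproc) + 1 ))
echo "make -j $jobs"
```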
If all goes well, make INSTALL_MOD_STRIP=1 modules_install. This will create a directory in /lib/modules matching the version number of the kernel plus the "Local version" string mentioned in step 3, if any. If you did not use a "Local version" string, be careful if you already have a kernel of the same version that you depend upon, because these modules will replace those.3 INSTALL_MOD_STRIP=1 is optional, for the significance see here.
You can then make install to install the kernel to a default location. My recommendation, though, is to do it yourself to ensure no existing files get overwritten. Look in [src]/arch/[ARCH]/boot for a file named bzImage4, where [ARCH] is x86 if you are on an x86 or x86-64 machine (and something else if you are on something else). Copy that into /boot and rename it to something more specific and informative (it doesn't matter what). Do the same thing with [src]/System.map, but rename it according to the following scheme:
System.map-[VERSION]
Here, [VERSION] is exactly the same as the name of the directory in /lib/modules created by make modules_install, which will include the "Local version" string, e.g., System.map-3.13.3-mykernel.
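The manual copy-and-rename step might look like the following; the version string and file names are placeholders, and a scratch directory stands in for /boot so the sketch is safe to run without root:

```shell
VERSION=3.13.3-mykernel          # must match the /lib/modules directory name
BOOT=./fake-boot                 # stand-in for /boot so the sketch is harmless
mkdir -p "$BOOT"
touch bzImage System.map         # stand-ins for the real build outputs
cp bzImage "$BOOT/vmlinuz-$VERSION"
cp System.map "$BOOT/System.map-$VERSION"
ls "$BOOT"
```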
Configure the GRUB 2 bootloader.
If you are not using grub (the majority of linux desktop users are), this obviously doesn't apply to you. You should have a /etc/grub.d/40_custom file with not much in it. If not, create it owned by root and chmod 755 (it must be executable). To that add:
menuentry 'My new kernel, or whatever' {
set root='hd0,1'
linux /boot/[name-of-kernel] root=/dev/sda1 [other kernel options]
}
If you are using an initramfs, you should also have a last line initrd /path/to/initramfs. Beware the set root= line. The example presumes grub was installed onto the first partition of the first hard drive (hd0,1). If you have multiple drives, you might want to use the partition UUID instead and replace that line with:
search --no-floppy --fs-uuid --set=root [the UUID of the partition]
Unless grub is not on your root filesystem, this should also correspond to the root= directive on the linux line, which indicates your root filesystem (the one with /sbin/init and /lib/modules). The UUID version of that is root=UUID=[the UUID].
You can look at your existing /boot/grub2/grub.cfg for a clue about the device name. Here's a brief guide to such under grub 2. Once you are happy, run grub2-mkconfig -o /boot/grub2/grub.cfg (but back up your current grub.cfg first). You may then want to edit that file and move your entry to the top. It should still contain a listing for your old (running) kernel, and your distro may have a mechanism which duplicated an entry for the new kernel automatically (because it was found in /boot; Fedora does this, hence, using a distinct title with menuentry is a good idea). You can remove that later if all goes well.
You can also just insert the menuentry into grub.cfg directly, but some distros will overwrite this when their kernel is updated (whereas using /etc/grub.d/ will keep it incorporated).
That's it. All you need to do now is reboot. If it doesn't work, try and deduce the problem from the screen output, reboot choosing an old kernel, and go back to step 3 (except use the .config you already have and tweak that). It may be a good idea to make clean (or make mrproper) between attempts but make sure you copy [src]/.config to some backup first, because that will get erased. This helps to ensure that objects used in the build process are not stale.
Regarding kernel headers et al.
One thing you should likely do is symlink (ln -s -i) /lib/modules/X.X.X/source and /lib/modules/X.X.X/build to the /usr/src directory where the source tree is (keep that). This is necessary so that some userspace tools (and third party driver installers) can access the source for the running kernel.
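The two symlinks might be created as below. Stand-in directories are used so the sketch runs without root, and -sfn replaces the interactive -s -i form so it works unattended; the version strings are placeholders:

```shell
SRC=./fake-src/linux-3.13.3            # stand-in for /usr/src/linux-X.X.X
MOD=./fake-modules/3.13.3-mykernel     # stand-in for /lib/modules/X.X.X
mkdir -p "$SRC" "$MOD"
# Point the module directory's source and build links at the source tree.
ln -sfn "$PWD/fake-src/linux-3.13.3" "$MOD/source"
ln -sfn "$PWD/fake-src/linux-3.13.3" "$MOD/build"
ls -l "$MOD"
```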
An issue related to this are .h files in /usr/include, etc. These change very gradually, and are backward compatible. You have two choices:
Leave the ones used by your distro. If you update the whole system regularly, the distro will install new ones periodically anyway, so this is the "least hassle" option.
Use make headers_install.
Since they are backward compatible (meaning "a program built against a C library using older kernel headers should run on a newer kernel"), you don't have to get too fussy about this. The only potential issue would be if you build a custom kernel and keep it for a while, during which time the distro updates the "kernel-headers" package to a newer version than used to build your kernel, and there turns out to be some incompatibility (which would only apply to software subsequently compiled from source).
References
Here are some resources:
[src]/README includes a brief guide to building and installing.
The [src]/Documentation directory contains a lot of information that may be helpful in configuration.
Much of Greg K-H's book Linux Kernel in a Nutshell (available there for free as a series of PDF's) revolves around building the kernel.
Grub 2 has an online manual.
1. "Vanilla" refers to the original, unadulterated official source as found at kernel.org. Most distros take this vanilla source and add some minor customizations.
2. Note that there are circumstances that require an initramfs because some userspace is needed in order to mount the root filesystem -- for example, if it is encrypted, or spread across a complex RAID array.
3. It won't remove modules that are already there if you didn't build them, however, which means you can add a module later by simply modifying your configuration and running make modules_install again. Note that building some modules may require changes to the kernel itself, in which case you also have to replace the kernel. You'll be able to tell when you try to use modprobe to insert the module.
4. This file may be named something different if you used a non-standard compression option. I'm not sure what all the possibilities are.
| Configuring, compiling and installing a custom Linux kernel |
1,349,361,022,000 |
Sometimes in the sources of projects I see "*.in" files. For example, a bunch of "Makefile.in"s. What are they for and/or what does the ".in" part mean? I assume that this has something to do with autoconf or make or something like those, but I'm not sure.
I've tried searching for ".in file extension", "autoconf .in file extension", "autoconf .in", "autoconf dot in", and other variants, with no luck.
|
It's just a convention that signifies the given file is for input; in my experience, these files tend to be a sort of generic template from which a specific output file or script is generated.
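A toy illustration of the pattern: a hypothetical Makefile.in holds @placeholders@ that a configure-style step substitutes to produce the real Makefile (autoconf's generated configure scripts do essentially this, with many more variables):

```shell
# Hypothetical template; @prefix@ is a placeholder to be filled in.
cat > Makefile.in <<'EOF'
prefix = @prefix@
EOF
# A configure script would perform substitutions like this one.
sed 's|@prefix@|/usr/local|' Makefile.in > Makefile
cat Makefile
```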
| What are .in files? |
1,349,361,022,000 |
I understand that source based distributions like Gentoo or Slackware do not need *-dev versions of programs. They include the source code as well as header files for compiling everything locally.
But I never saw *-dev packages in Arch Linux, although it is package based. I ran across lots of *-dev packages in other distributions.
|
The -dev packages usually contain header-files, examples, documentation and such, which are not needed to just running the program (or use a library as a dependency). They are left out to save space.
ArchLinux usually just ships these files with the package itself. This costs a bit more disk space for the installation but reduces the number of packages you have to manage.
| Why are there no -dev packages in Arch Linux? |