1,380,717,179,000
I have a dataset that has a large amount of data in it:

    ID Number: A00001
    Name: John Smith
    Address: 123 Any Street
    City: AnyTown
    State: Ohio
    Zip: 12345

    ID Number: A00002
    Name: Jane Doe
    Address: 123 Any Street
    City: AnyTown
    State: Nebraska
    Zip: 12346

    ID Number: C00003
    Name: Jim Shields
    Address: 123 Any Street
    City: AnyTown
    State: Alaska
    Zip: 12347

    ID Number: D11111
    Name: Mary Ellis
    Address: 123 Any Street
    City: AnyTown
    State: Nevada
    Zip: 12348

and I want to pull data out and separate it so it appears like this:

    ID Number: A00001
    Name: John Smith
    Zip: 12345
    =========================
    ID Number: A00002
    Name: Jane Doe
    Zip: 12346
    =========================
    ID Number: C00003
    Name: Jim Shields
    Zip: 12347
    =========================
    ID Number: D11111
    Name: Mary Ellis
    Zip: 12348
    =========================

I have tried about every grep and egrep option I could find, but the closest I could get was putting a blank line (new line) between every line of output.
grep is a pattern-matching tool, not a text-reformatting tool. Use something like sed, awk, or perl instead. For example:

    $ awk '/^(ID Number|Name|Zip):/; /^[[:blank:]]*$/ { print "=========================" }'
    ID Number: A00001
    Name: John Smith
    Zip: 12345
    =========================
    ID Number: A00002
    Name: Jane Doe
    Zip: 12346
    =========================
    ID Number: C00003
    Name: Jim Shields
    Zip: 12347
    =========================
    ID Number: D11111
    Name: Mary Ellis
    Zip: 12348

The [[:blank:]]* is there to match lines that look empty but actually contain horizontal whitespace such as spaces or tabs, which is more common than you'd think because it's hard to see with your eyes alone.

Or, with perl:

    perl -l -n -e 'print if /^(ID Number|Name|Zip):/; print "=" x 25 if /^\h*$/' input.txt

Or with sed. First, if you have GNU sed or some other sed that understands the Perl RE \h for "horizontal space":

    sed -n -E -e '/^(ID Number|Name|Zip):/p; s/^\h*$/=========================/p' input.txt

Otherwise, with any sed:

    sed -n -E -e '/^(ID Number|Name|Zip):/p; s/^[[:blank:]]*$/=========================/p' input.txt
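As a quick self-contained check of the awk approach (the records.txt file and its contents here are made up, shortened from the question's layout):

```shell
# Hypothetical input reproducing the question's record layout
cat > records.txt <<'EOF'
ID Number: A00001
Name: John Smith
Address: 123 Any Street
Zip: 12345

ID Number: A00002
Name: Jane Doe
Address: 123 Any Street
Zip: 12346
EOF

# Keep only the wanted fields; turn each blank separator line into a divider
awk '/^(ID Number|Name|Zip):/;
     /^[[:blank:]]*$/ { print "=========================" }' records.txt
```

The unwanted Address lines disappear and each blank line becomes a divider row.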
Is there a way to format output with grep or egrep with a separator between instances of the output?
I want to send output of a shell script, including user-entered text, to the terminal and a logfile. I thought some combination of tee and exec might do it, but I've had no luck so far.

I know tee by itself can echo and capture what the user enters in the terminal:

    $ tee logfile
    Hello (I entered this at runtime)
    Hello (I entered this at runtime)
    ^C
    $ cat logfile
    Hello (I entered this at runtime)

But I need to see (on both the terminal and in the logfile) what the user enters in response to commands invoked within the shell script. tee doesn't seem to be able to do that consistently. For example:

    $ read message 2>&1 | tee logfile
    Hello (I entered this at runtime)
    $ cat logfile

Nothing was captured there. I expected to see "Hello (I entered this at runtime)" in the file just like before.

I also tried combining tee with exec in the shell script like so:

    $ cat test.bash
    #!bin/bash
    # Note: in this simplified version of this file, I'm not looking at $1, $2,
    # or anything else passed in, but will need to eventually
    rm -f logfile
    exec &> >(tee -a logfile)
    echo "Say \"Hello\"" 2>&1
    read -p "> " 2>&1

Unfortunately, adding exec did not help:

    $ ./test.bash
    Say "Hello"
    > Hello (I entered this at runtime)
    $ cat logfile
    Say "Hello"
    >

As you can see, it captured the output of the echo command and the read command, but not what I entered into the terminal in response to the read command. Is there a way to do it?

I know the script command ("make typescript of terminal session") can capture everything on the screen and put it in a logfile. But the script command can't be invoked in a useful way from within a shell script. (Can it?) script needs to be invoked first, and then the user has to invoke the desired shell script. But I want the user to only have to invoke one command, with its parameters, and then have the command take care of running everything else and logging everything.

Then there's all the "extra" information (e.g. color codes, backspaces) script captures that makes it hard to read the resulting logfile in an arbitrary text editor. I just want to see the "human-readable" characters in the logfile. I don't want to see whether the user corrected a spelling error; I just want to see that they had "Hello" on the screen when they finished editing and hit Enter. Although I suppose the extra information could be stripped out after capture.
The script implementation from util-linux at least is scriptable. You could do for instance:

    SHELL=/bin/sh script -qec '
      # any sh code here
      echo Whatever
      cat # user input' file.log

And file.log will capture all that is written to the terminal, including the echo of what you type. That also includes the transformations performed by the tty line discipline, like the conversion of LF to CRLF.

script also adds a header such as:

    Script started on 2021-09-29 14:58:59+01:00 [TERM="screen.xterm-256color" TTY="/dev/pts/8" COLUMNS="191" LINES="54"]

and a footer:

    Script done on 2021-09-29 14:58:59+01:00 [COMMAND_EXIT_CODE="0"]

which, from a shell with ksh-style process substitution support, you can remove with:

    SHELL=/bin/sh script -qec '...' >(sed '1d;$d' > file.log)

Or, as suggested by @zevzek, tell script to write the log including header/footer to /dev/null, but redirect script's output (which with -q doesn't have a header nor footer) to tee to do the logging:

    (set -o pipefail; SHELL=/bin/sh script -qec '...' /dev/null | tee file.log)

Or with zsh with multi_ios on (as it is by default):

    SHELL=/bin/sh script -qec '...' /dev/null >&1 > file.log

To disable the output post-processing of the tty line discipline of the pseudo-tty started by script, you could disable it there and at least re-enable the NL -> CRNL conversion on the host tty with something like:

    SHELL=/bin/sh HOST_TTY="$(tty)" script -qec '
      stty -opost && stty < "$HOST_TTY" opost onlcr || exit
      ...' file.log

(assuming the commands in ... do not restore that output processing).
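The sed '1d;$d' filter used above is doing nothing script-specific: it just deletes the first and last line of whatever it reads, which is exactly where script places its header and footer. A quick illustration on plain text (the log lines are invented):

```shell
# Simulate a typescript with a header and footer around the real content;
# sed '1d;$d' deletes line 1 and the last line, keeping the middle
printf '%s\n' 'Script started ...' 'Say "Hello"' '> Hello' 'Script done ...' |
    sed '1d;$d'
```

Only the two middle lines survive, which is what you want in file.log.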
Is there a way to send all shell script output to both the terminal and a logfile, *plus* any text entered by the user?
I have a series of 5 commands working on several files located in several subdirectories. Suppose I want to redirect the output of every command to a new column in the same file; how can I do that? Kindly help me.

For example, in the series of commands below, I would like to redirect the output of command 1 into the 1st column of pop.txt, the output of command 2 into the 2nd column of pop.txt, and the output of command 3 into the 3rd column of pop.txt.

Command 1:

    sed -n -e '1,33p' myfile | awk '{ sum += $3 } END { print sum }' >> ../pop.txt

Command 2:

    sed -n -e '34,40p' myfile | awk '{ sum += $3 } END { print sum }' >> ../pop.txt

Command 3:

    sed -n -e '41,49p' myfile | awk '{ sum += $3 } END { print sum }' >> ../pop.txt

Kindly let me know if I am not clear. Thank you.
If your shell supports process substitution, you could use the paste command as follows:

    paste \
      <(sed -n -e '1,33p' myfile | awk '{ sum += $3 } END { print sum }') \
      <(sed -n -e '34,40p' myfile | awk '{ sum += $3 } END { print sum }') \
      <(sed -n -e '41,49p' myfile | awk '{ sum += $3 } END { print sum }') >> ../pop.txt

However, processing the same file multiple times in this way is inefficient; you could instead use a single awk command like:

    awk '{sum+=$3}
         NR==33{s1=sum; sum=0}
         NR==40{s2=sum; sum=0}
         NR==49{print s1,s2,sum; exit}' myfile >> ../pop.txt

(Responding to a follow-up question in comments.) To add the current directory name as the first column, you could do:

    awk -v cwd="$(basename "$PWD")" '
      {sum+=$3}
      NR==33{s1=sum; sum=0}
      NR==40{s2=sum; sum=0}
      NR==49{print cwd,s1,s2,sum; exit}
    ' myfile

BTW, avoid constructs like for f in `ls -d ./*/` for the reasons discussed in Bash Pitfall #1.
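As a toy run of the single-pass idea (the file contents and the shorter line ranges are made up so the example stays small):

```shell
# Hypothetical 4-line input with a numeric third column
printf '%s\n' 'a x 1' 'b y 2' 'c z 3' 'd w 4' > myfile

# Sum column 3 of lines 1-2 into s1, then lines 3-4, and print
# both sums side by side as two columns of one output line
awk '{sum+=$3} NR==2{s1=sum; sum=0} NR==4{print s1, sum; exit}' myfile
```

This prints the two sums on one line, which is the "columns of the same file" layout the question asks for.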
Redirect outputs to different columns of the same file
I can't seem to figure out how I can put a fdisk command from a bash script in a multiline variable. Here is my code:

    hdds="$(sudo fdisk -l | grep "Disk /dev/sd" | awk '{print$2}' | sed 's/://g')"

When I execute this bash script it puts everything on a single line, like:

    /dev/sda /dev/sdb

When I execute this command outside of the bash script it works like it should:

    sudo fdisk -l | grep "Disk /dev/sd" | awk '{print$2}' | sed 's/://g' | wc -l

where the output is 2. I have tried putting everything in quotes, without quotes and whatnot, but nothing seems to work.
Are you calling the variable with echo, by any chance? If so, put it in double quotes, like this:

    echo "$hdds"

or use printf:

    printf "%s" "$hdds"
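A minimal demonstration of the difference (the variable is set by hand here rather than from fdisk):

```shell
# A multiline value, as the command substitution would produce
hdds='/dev/sda
/dev/sdb'

echo $hdds     # unquoted: the shell word-splits, the newline becomes a space
echo "$hdds"   # quoted: the embedded newline is preserved
```

The variable itself always held the newline; only the unquoted expansion flattens it.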
Output fdisk bash command to multiline variable
I was checking what exactly each field is in the output of ls -l. The example in this post answers that question. But now I'm wondering what the type of each field is. The strings are obvious, but what about the numbers, like 10 or 2048: are they integers or strings? Is there any way I can check the type of each field?
The text produced by commands in the terminal is text strings. Viewing raw binary data in the terminal, without converting it to some text representation, generally has a tendency to produce "garbled output" (for example, when running cat on a compressed file) and may in some cases even put the terminal in an unusable state.

The numbers in the output of ls -l are text strings. To read them as integers into e.g. a C program, the program would have to read them as text strings and later convert them to integers using e.g. strtol(), or use scanf() (or equivalent) with the correct format string. Shell tools that read integers, such as awk for example, generally already do this conversion behind the scenes according to the rules implemented by the tool.

Side note: A C program should not parse the numbers from the output of ls -l. It should call stat() or lstat() on the files returned by readdir().
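A small illustration of that behind-the-scenes conversion in awk, using a made-up ls -l style line: the size field is plain text until it is used arithmetically.

```shell
# Field 5 ("2048") is a string; the multiplication forces awk to
# convert it to a number on the fly
printf -- '-rw-r--r-- 1 root root 2048 Jan 1 file\n' |
    awk '{ print $5, $5 * 2 }'
```

The same token is printed once as the text it always was, and once as the number awk coerced it into.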
How can I know the type of each element in the output of ls -l?
I am using a bash wrapper script around rsync, so I run with the --dry-run option and then grep its output for "deleting", so I can make sure I am not losing data by some mistake. (I have not been able to find deleted files in the log output via --log-file, but that's a separate question.)

However, even with this extra safety step, I have still been able to mess up. I accidentally swapped the usual source and destination folders on the rsync command, and some files NOT present in the destination folder were deleted from the source folder. I indeed saw the files being marked as deleted in the wrapper script output, but I missed the mistake because the path names are relative, and I assumed they were at the other (usually destination) end.

I tried the --relative option to rsync as suggested in the similar question "Absolute path in rsync dry-run statistics", but it does not have the desired effect. (I am creating a new question as I do not yet have the reputation to comment on that one, and I do not think I should be asking on an answer; please correct me if I am wrong.)

Any idea how I can achieve this via some rsync command-line option?
I've made the same mistake, and searched for an answer to this question too. I haven't found a way to change rsync's basic behavior in the way you've described. I work around it by creating a unique file in the source folder before I do my dry-run. If I see that file show up in rsync's log marked for deletion, I know something's wrong.
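That workaround can be sketched roughly like this. Note the directory names and the canary filename are illustrative, and a canned log line stands in for a real rsync dry run here so the sketch is self-contained:

```shell
# Toy setup standing in for the real source/destination trees
mkdir -p src dst

# Drop a uniquely named canary file into the intended source tree
canary=".rsync-canary.$$"
touch "src/$canary"

# With real rsync the log would come from something like:
#   rsync -a --delete --dry-run src/ dst/
# A canned log line simulates an accidentally swapped invocation
dryrun_log="deleting $canary"

# If the dry run proposes deleting the canary, the source and
# destination arguments have almost certainly been swapped
if printf '%s\n' "$dryrun_log" | grep -q "deleting .*$canary"; then
    echo "ABORT: dry run wants to delete the canary; check your arguments" >&2
fi
```

Remember to remove the canary from the source tree once the real sync has been verified.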
Absolute path names on rsync output
I executed the following code in my Bash console in an Ubuntu 16.04 environment:

    cat <<-'DWA' > /opt/dwa.sh
    DWA() {
    test='test'
    read domain
    find /var/www/html/ -exec cp /var/www/html/${domain} /var/www/html/${test} {} \;
    sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
    sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
    mysql -u root -p << MYSQL
    create user '${test}'@'localhost' identified by '${psw}';
    create database ${test};
    GRANT ALL PRIVILEGES ON ${test}.* TO ${test}@localhost;
    MYSQL
    }
    DWA
    DWA

Everything was redirected as I desired besides the code in the last row (the last DWA, which serves as a function call). Why was everything besides the last DWA function call copied, but not this one? Maybe some conflict with the DWA before the last one?
The last DWA is being removed because you are using this as your delimiter. The delimiters tell your shell "everything between these matching strings is part of my here doc". The delimiters are not part of the doc and are therefore stripped when the here doc is read. The reason the DWA prior to that remained is because the delimiter must be at the start of the line.

I typically see people use EOF or EOL, but this string can be whatever you want, so long as it is unique and does not appear within your document. I recommend modifying to this:

    cat <<-'EOF' > /opt/dwa.sh
    #!/bin/bash
    DWA() {
    test='test'
    read domain
    find /var/www/html/${domain} -exec cp /var/www/html/${domain} /var/www/html/${test} {} \;
    sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
    sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
    mysql -u root -p << MYSQL
    create user '${test}'@'localhost' identified by '${psw}';
    create database ${test};
    GRANT ALL PRIVILEGES ON ${test}.* TO ${test}@localhost;
    MYSQL
    }
    DWA
    EOF

If you do actually want those variables to expand prior to being sent to dwa.sh, you should unquote EOF.

I find this page to be a very concise resource for here documents: http://tldp.org/LDP/abs/html/here-docs.html
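A stripped-down, hypothetical example of the same effect: the quoted EOF delimiter lines are removed, a DWA line inside the body survives, and ${name} is not expanded because the delimiter is quoted.

```shell
# The two EOF lines delimit the here doc and never reach demo.sh;
# everything between them is copied verbatim (no expansion: 'EOF' is quoted)
cat <<'EOF' > demo.sh
DWA() { echo "running ${name}"; }
DWA
EOF

cat demo.sh
```

demo.sh ends up with exactly two lines: the function definition and the DWA call.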
cat here-document copied everything except the function call
I have a bash script which executes a command and calculates a pair of values; its output can look like this:

    a,b (10.0000000000, 10.0000000000) -> volt (2088133.7088034691, -222653.3238934391)

In case of invalid parameters or errors the program can show different error messages. Is there a safe way to parse the two volt values and store them in two variables in a bash script?
It depends on how reliable the output is and what happens when you get the "different error messages", i.e., how that would have to be handled. As a basic approach, with what you have above, you could use awk:

    $ awk -F"[)(, ]" '{printf "var1=%s\nvar2=%s\n", $11,$13}'
    var1=2088133.7088034691
    var2=-222653.3238934391

A "safe way" would depend on what those error messages do to the output... A more robust approach would be to use awk's built-in NF variable to calculate the relevant fields:

    awk -F"[)(, ]" '{printf "var1=%s\nvar2=%s\n", $(NF-3),$(NF-1)}'
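A self-contained run of the NF variant on the sample line from the question:

```shell
line='a,b (10.0000000000, 10.0000000000) -> volt (2088133.7088034691, -222653.3238934391)'

# Count fields from the end of the line, so extra tokens earlier in the
# line do not shift the positions of the two volt values
echo "$line" | awk -F'[)(, ]' '{ printf "var1=%s\nvar2=%s\n", $(NF-3), $(NF-1) }'
```

The two printed lines can then be captured into bash variables however you prefer (e.g. with read).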
Get two values from command output in a bash script
I am trying to capture MySQL traffic and pass it to the strings command as follows:

    tcpdump -i any -s 0 -l -w - dst port 3306 | strings

This is working as expected and printing all MySQL queries, like:

    select * from mytables
    show databases

But when I try to redirect the output to a file, it's not printing the output to the /tmp/out file:

    tcpdump -i any -s 0 -l -w - dst port 3306 | strings > /tmp/out

Can someone explain the behaviour of the above command and why it is not redirecting the output to the file?
I found the solution: the strings command is buffering its output. I disabled the buffering by using the stdbuf -i0 -o0 -e0 command.

After changing the whole command to the following, output started going to the /tmp/final file:

    tcpdump -i any -s 0 -l -w - dst port 3306 | stdbuf -i0 -o0 -e0 strings > /tmp/final

References:

    stdbuf man page
    buffering in standard streams
    linux stdbuf - line-buffered stdin option does not exist
strange behaviour of strings command [duplicate]
When I run top I see a screen similar to this:

        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
        654 root      20   0   25.5g 147780  70084 S   3.7   0.9  72:08.69 Xorg
       1687 me        20   0 9165008   1.4g 268488 S   1.3   8.9 163:06.01 qutebrowser
        316 root     -51   0       0      0      0 S   0.7   0.0   1:09.86 irq/51-DELL095A:00
        683 me        20   0  810360  58212  41792 S   0.7   0.4  12:35.61 picom
        702 me        20   0  246876  18032  14616 S   0.7   0.1  26:04.86 i3bar
        827 me        20   0  665528  16316   9980 S   0.7   0.1  32:41.79 conky
    2874827 me        20   0 6008688 209960 123060 S   0.7   1.3   0:10.84 QtWebEngineProc
        675 me        20   0  175796  24656  15216 S   0.3   0.2   0:14.27 i3
        771 me         9 -11 1022184  21952  12056 S   0.3   0.1  82:23.43 pulseaudio

but the program is interactive, so I see the screen changing as time passes (which is OK, because the info does change over time). Furthermore, top | grep whatever doesn't seem to ever return. What if I want a snapshot of that state for sending it to a text processing utility?

A comment suggested to go for top -b -n 1, but if I pipe that into some other program (or, for simplicity, if I > some-file), I see that it also has a kind of heading:

    top - 21:35:47 up 5 days,  7:24,  1 user,  load average: 0.16, 0.41, 0.37
    Tasks: 243 total,   1 running, 241 sleeping,   1 stopped,   0 zombie
    %Cpu(s):  2.3 us,  3.1 sy,  0.0 ni, 93.8 id,  0.0 wa,  0.0 hi,  0.8 si,  0.0 st
    MiB Mem :  15787.2 total,   1840.9 free,   5693.1 used,   8253.3 buff/cache
    MiB Swap:  16384.0 total,  16375.2 free,      8.8 used.   6256.7 avail Mem

It would be nice to have a clean tabular report of all running processes, and nothing more. Should I use a utility other than top? Is ps -e maybe what I should reach for?

The scenario in which I plan to use this solution is a PID picker for a debugger: the user would write the name of the process, and the PID picker would filter the list of running processes based on that; then, when the user selects the desired process, the corresponding PID is used to attach the debugger.
I have been pretty successful with the ps -o output format field names. The output specifiers are mentioned a couple of times early in the Linux ps(1) man page, first a short way into the EXAMPLES section:

    To see every process with a user-defined format:
        ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
        ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
        ps -Ao pid,tt,user,fname,tmout,f,wchan

The man page's OUTPUT FORMAT CONTROL section has the detail for the option:

    -o format
        User-defined format. format is a single argument in the form of a
        blank-separated or comma-separated list, which offers a way to specify
        individual output columns. The recognized keywords are described in
        the STANDARD FORMAT SPECIFIERS section below. Headers may be renamed
        (ps -o pid,ruser=RealUser -o comm=Command) as desired. If all column
        headers are empty (ps -o pid= -o comm=) then the header line will not
        be output.

Examples: Recently I used this as a lightweight way to get the resident set size (RAM usage) of processes for monitoring purposes:

    ps -eo pid=,rss=,comm=

Example output:

    3874689  9104 sshd
    3875080  8788 sshd

Replacing comm= with args= produces:

    3874689  9104 sshd: userone [priv]
    3875080  8788 sshd: usertwo [priv]

I.e., args produces the command name with arguments, which is probably more expected from ps.

It's a good idea to read the man page in more detail about these options, including the STANDARD FORMAT SPECIFIERS section referenced in the -o description. For clarity, I didn't copy the whole description above. It's also a good idea to experiment with them and make sure you understand the output format, including separator characters.

Other uses: I have used ps -o with the %cpu and %mem specifiers as well as etime, etimes, bsdstart, and lstart. It can be very useful when you want the process listing features of ps but want simpler output to parse in your script/code.
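For the PID-picker use case specifically, a headerless listing is easy to filter; here is a rough sketch (the pattern "sh" is just a stand-in for whatever name the user typed):

```shell
# PID and command name only, no header line
ps -eo pid=,comm=

# Filter by (part of) the command name, printing matching PIDs
ps -eo pid=,comm= | awk -v name="sh" '$2 ~ name { print $1 }'
```

Matching on a name the user typed is a simple substring/regex match here; a real picker might anchor the pattern or quote regex metacharacters.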
How can I obtain a full tabular list of the running processes on my system (with their command name and PID at least) for text processing consumption?
The application in question is DaVinci Resolve. I start it from a terminal, and when I close it, the message "Socket disconnected" from the app is written to the terminal output. Then the bash prompt appears as normal. I start typing a new command, and suddenly another "Socket disconnected" message appears in the terminal, and this interferes with the input I made. It looks like this:

    [andrew@unihost ~]$ davinci-resolve
    ...
    # Now I exit the application.
    Host 'Fusion' Removed
    FusionScript Server [37457] Terminated
    Socket disconnected
    [andrew@unihost ~]$ ls
    ls
    Socket disconnected
    wtf!!!^C
    [andrew@unihost ~]$

Video demo: https://youtu.be/arcCOjrN7kw

Why is this happening, and is there a way to prevent it? My guess is that there is a subprocess of the main process which is still alive even after the main process dies. I have found this answer. Is that the developer's fault? Can I work around this somehow (maybe nohup for the child process)?
Preliminary note

I haven't tested davinci-resolve at all. This answer is designed to be generic.

Analysis

Your shell waits for davinci-resolve to exit before it puts itself back in the foreground and prints the prompt. Apparently some child (or further descendant) of the main davinci-resolve prints the unwanted message after the main process exits and the shell reacts.

Solution

A solution may be as easy as:

    davinci-resolve | cat

The trick is cat won't exit until all processes writing to the pipe close their end of the pipe. The troublesome child probably inherits the stdout from the main davinci-resolve, so cat will wait for it. Normally this will work even if the unwanted message is printed to stderr or /dev/tty (i.e. it bypasses our cat). What matters is that the child keeps the pipe open, even if it's printing elsewhere.

There are disadvantages:

- The exit status of the entire pipe will come from cat, not from davinci-resolve. In some shells you can do something about it.
- stdout and stderr from davinci-resolve (and its descendants) will lose sync, because the former goes via cat and the latter doesn't.
- If you press Ctrl+C then you will kill the cat, possibly before the other processes finish printing, so you may miss some output you do want to see. Additionally, if the troublesome message gets printed to stderr then it will be printed anyway, possibly after you see the prompt. You can make the cat immune to Ctrl+C though:

      davinci-resolve | sh -c 'trap "" INT; exec cat'

- The troublesome process may close or redirect its stdout early and still print to stderr. In this case cat will not wait for it.
- The troublesome process may be designed to remain, and the unwanted message may not mean the process exits. If the process remains and keeps the pipe open then our cat will remain; you obviously don't want this. It seems unlikely any descendant of davinci-resolve remains (unless there's a bug), but in general it may happen.

For some of these reasons you may want to pass both stdout and stderr via cat. Making the cat immune to Ctrl+C is still a good idea:

    davinci-resolve 2>&1 | sh -c 'trap "" INT; exec cat'

Note that now you cannot tell apart stderr of davinci-resolve (and its descendants) from stdout; they both go via cat and its stdout. It shouldn't be a problem, as you wanted them to mix in the terminal anyway. If you ever want to redirect or capture them separately then you should drop our contraption and start from scratch.

It may be that the troublesome process closes or redirects its stdout and stderr early, and prints the unwanted message directly to /dev/tty (example). In this case our cat cannot help.

Shell function

You can implement our solution as a shell function:

    davinci-resolve() {
      command davinci-resolve "$@" 2>&1 | sh -c 'trap "" INT; exec cat'
    }

The function supports passing arguments to davinci-resolve, but its exit status comes from cat, not from the davinci-resolve process (if it's a problem then see the already given link for ideas).
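The core claim, that cat waits for every writer to close the pipe, can be demonstrated with a toy pipeline standing in for davinci-resolve:

```shell
# The background subshell inherits the pipe as its stdout, so cat keeps
# reading until that "lingering child" exits too, one second later
( echo parent; ( sleep 1; echo child ) & ) | cat
```

"parent" appears immediately, "child" about a second later, and only then does cat exit; the shell prompt would come back after both, instead of the child's message landing on top of it.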
How to prevent a child process from interfering with the bash prompt?
    sda
    |-sda1 ntfs  Recovery       F49E60439E5FFD0C
    |-sda2 vfat                 0E62-991B                              61.6M 36% /boot/efi
    |-sda3
    |-sda4 ntfs                 5EEC0B13EC0AE55D
    |-sda5 ext4  Home           f142204b-a3c1-4ed4-b255-944659cef7bd 239.8G 59% /home
    |-sda6 swap                 97399d72-dc57-4f90-a8c7-e9409582ccd9            [SWAP]
    |-sda7 ext4  Backup         da05c9bb-d9f3-44dd-872d-ee106d971561  67.4G 44% /mnt/Backup
    |-sda8 ext4                 093e80bf-e5f4-4b61-9f6c-26981b9710ab  29.9G 49% /
    `-sda9 ext4  SwapPossible   115a401a-2c85-4b54-9d04-9a7051193249
    sdb
    |-sdb1 ntfs  PortableData   4E1AEA7B1AEA6007                      12.8G 98% /media/nikh/PortableData
    `-sdb2 ext4  PortableBackup bcc13a36-eae7-4c36-a9b1-98e641d41fb4 256.8G 14% /media/nikh/PortableBackup1

My question is: why do sda9 and sdb2 start with a backquote, while the others start with a pipe character? Does it have some special meaning?
It's just a layout thing: lsblk draws the partitions as a tree, where |- marks an entry that still has siblings below it and `- marks the last entry in its branch. It makes it easier for the user to see that sda9 and sdb2 are the last ones in their respective lists. Nothing to worry about.
Output of lsblk includes both pipes and back ticks
I was running a lengthy scientific simulation (which took almost a week to run) on my Linux workstation, with a command that looked like this:

    time ./simulation

So besides getting the output files from the simulation, my aim was to also get the exact time it took to run. However, unfortunately, I ran the command ls -la before copying the time output, and my terminal window only shows a limited number of lines. I have now changed the number of lines to unlimited in the terminal settings, but I still can't scroll up to see the time information. Is there a way to see that information without having to re-run the simulation?
You can no longer access those lines in mate-terminal. Increasing the number of scrollback lines doesn't help either: the terminal doesn't remember all the lines and reveal only the configured amount; it only ever remembers the configured amount.

That being said, if the given terminal tab is still open, there's still some chance that the data wasn't actually overwritten and is recoverable through deep investigation, similarly to how deleted files can still be recovered from disk if they weren't actually overwritten. The chance of succeeding decreases with every additional line that was scrolled out: if the data scrolled out by 5 lines, it's most likely recoverable; if it scrolled out by millions of lines, then most likely it was overwritten. (If you closed the given terminal tab, it's hopeless to recover the data: it's stored on the disk in an encrypted file, and the encryption key is zeroed out in memory when the terminal is closed.)

Such an investigation requires understanding VTE's scrollback handling, digging into the memory and open files of the terminal process, and carefully examining these data. It would probably take days of heavy work, with no guarantee. Having access to the entire raw drive slightly increases the chance further, although it significantly increases the required time and makes it problematic to do remotely.

If it were about some highly critical data (e.g. the password to your bitcoin wallet containing your life savings), you could start studying VTE's internals and do this investigation, or hire someone (e.g. me) to do this for you. Given that it's "just" a week of running something, it's highly unlikely to be worth it for you; it's cheaper just to re-run the thing.

(Note: I wrote most of the code handling VTE's (i.e. mate-terminal's) scrollback buffer.)
How to see previous output in terminal window?
I have a script that filters several text files using grep and awk in a loop. My issue is with creating an output file for each input file after filtering. This is my script, grep_multi.sh:

    path=$(find /home/folder/file/source -iname "Tracert*" )
    for i in "$path"
    do
        grep -E '^%%.*.%%$'\|'IPv4 type' $i | awk '/%%TRACERT:/ {sfx = $0; next} {print $1","$2","$3","$4","$5","$6","$7" "$8","sfx}' > filter.result.$i
    done

When running the script I get an error like this:

    ./grep_multi.sh: line 5: filter.result.$i: ambiguous redirect

This is the content of the $path variable:

    $ find /home/folder/file/source -iname "Tracert*"
    /home/folder/file/source/Tracert_1.txt
    /home/folder/file/source/Tracert_2.txt
    /home/folder/file/source/Tracert_3.txt
    /home/folder/file/source/Tracert_4.txt
    /home/folder/file/source/Tracert_5.txt
    /home/folder/file/source/Tracert_6.txt
    /home/folder/file/source/Tracert_7.txt
    /home/folder/file/source/Tracert_8.txt

And this is a sample input file, tracert_1.txt:

    O&M    #108
    %%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    RETCODE = 0  Operation succeeded
    The result is as follows
    ------------------------
    Record index  Response number  First response time(ms)  Second response time(ms)  Third response time(ms)  IP type  Peer IP address
    1  3  1  1  1  IPv4 type  10.10.10.10
    2  3  1  1  1  IPv4 type  10.10.10.10
    3  0  NULL  NULL  NULL  IPv4 type  Timeout
    4  0  NULL  NULL  NULL  IPv4 type  Timeout
    5  3  1  1  1  IPv4 type  10.10.10.10
    6  3  1  1  1  IPv4 type  10.10.10.10
    7  3  1  1  1  IPv4 type  10.10.10.10
By quoting the variable "$path" you are causing the loop to run once, with $i expanding to the whole list of paths. So your redirection ends up something like:

    > filter.result./home/folder/file/source/Tracert_1.txt /home/folder/file/source/Tracert_2.txt ...

which is "ambiguous". See this somewhat related question: Why is looping over find's output bad practice?

You don't really need a shell loop, and you don't need grep either. You can select the "IPv4 type" lines and redirect to a file whose name is derived from the current FILENAME, all using awk:

    awk '
      /%%TRACERT:/ {sfx = $0; next}
      /IPv4 type/  {print $1","$2","$3","$4","$5","$6","$7" "$8","sfx > "filter.result." FILENAME}
    ' tracert_*.txt

For local files, this will produce outputs like:

    $ head filter.result*
    ==> filter.result.tracert_1.txt <==
    1,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    2,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    3,0,NULL,NULL,NULL,IPv4,type Timeout,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    4,0,NULL,NULL,NULL,IPv4,type Timeout,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    5,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    6,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    7,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%

    ==> filter.result.tracert_2.txt <==
    1,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    2,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    3,0,NULL,NULL,NULL,IPv4,type Timeout,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    4,0,NULL,NULL,NULL,IPv4,type Timeout,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    5,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    6,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%
    7,3,1,1,1,IPv4,type 10.10.10.10,%%TRACERT: IPTYPE=IPv4, LOCALIP4="10.10.10.10", PEERIP4="10.10.10.10", MAXHOP=15;%%

To use it with find, you could do something like:

    find /home/folder/file/source -iname "Tracert*" -execdir awk '
      /%%TRACERT:/ {sfx = $0; outfile = "filter.result." substr(FILENAME,3); next}
      /IPv4 type/  {print $1","$2","$3","$4","$5","$6","$7" "$8","sfx > outfile}
    ' {} +

which will place the output files in the same directories where the input files are found. If the files are actually in a single directory, it would be simpler to cd there and then use the "local" awk command.
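The substr(FILENAME,3) in the find variant exists because -execdir hands awk paths of the form ./name. A tiny check of what that produces (demo.txt is made up):

```shell
printf 'x\n' > demo.txt

# FILENAME is "./demo.txt" here, so substr(FILENAME, 3) drops the "./"
awk 'FNR == 1 { print "filter.result." substr(FILENAME, 3) }' ./demo.txt
```

Without the substr, the redirection target would be "filter.result../demo.txt", which is a different (and confusing) path.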
grep multiple text file inputs and create a different output for each file
In a directory, after running ls -d -- 0.*_*.txt | sort -t. -k1, I obtained output filenames such as:

    0.230_0.203059.txt

How can I further split each of them into two columns, like:

    0.230 0.203059

and write them into a txt file?
There is some aversion against parsing the output of ls (see https://mywiki.wooledge.org/ParsingLs among others), but that aside:

ls -d -- 0.*_*.txt

The -- guards against filenames that start with a dash, and the -d keeps ls from listing the contents of any directory that happens to match the pattern; in your set-up, you will probably need neither. The ls will generate filenames such as 0.230_0.203059.txt, i.e. with the .txt extension still attached, so you will need to strip that too:

ls 0.*_*.txt | sort -t. -k1 | sed 's/\.txt$//;s/_/ /' > outfile.txt

should do that. (Escaping the dot and anchoring with $ makes sure sed only removes a literal .txt at the end of the name.)
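As a quick sanity check of the sed part on its own (the filenames below are made up; the dot is escaped and anchored so only the trailing .txt is removed):

```shell
printf '%s\n' 0.230_0.203059.txt 0.100_0.5.txt |
  sed 's/\.txt$//; s/_/ /'
# → 0.230 0.203059
#   0.100 0.5
```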
Grep: separate output into two column and save them into txt file
1,380,717,179,000
I have a binary that I will be running multiple times in parallel, each instance executed with different input from the command line. I wanted htop to list only these processes so that I can compare the usage of memory based on the CLI inputs. I tried htop -p, but this lists only one process even if I give multiple process IDs as input. Is there any way to get the output with the input being multiple process IDs, or with part of the process name? Example, as I hope to see it in htop:

PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
356 root 20 0 52952 7980 6632 S 0.0 0.8 0:00.00 ./test 1
357 root 20 0 2356 416 352 S 0.0 0.8 0:00.00 ./test 2
358 root 20 0 2356 332 268 S 0.0 0.8 0:00.00 ./test 3

Many thanks!
From man htop: F4, \ Incremental process filtering: type in part of a process command line and only processes whose names match will be shown. To cancel filtering, enter the Filter option again and press Esc. So, once you start htop, type \test and press Enter to filter in only commands containing test.
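If you'd rather stick with htop -p, it does accept a comma-separated PID list, and pgrep -d, will build one for you. A sketch (the backgrounded sleep processes here merely stand in for the real ./test instances):

```shell
sleep 30 & p1=$!
sleep 30 & p2=$!
sleep 1                        # give both processes a moment to start

pids=$(pgrep -d, -x sleep)     # e.g. "4321,4324" — PIDs joined with commas
echo "$pids"
# htop -p "$pids"              # run this interactively to see only those PIDs

kill "$p1" "$p2"
```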
htop - See/Filter all the instances of a binary
1,380,717,179,000
I have an output from xmllint and egrep that I would like to print as two fields next to each other, e.g.:

(xmlinput) xmllint --format | egrep reference\|sourcefile

<reference>ItemX</reference>
<sourcefile>://filepath/blah/blah/</sourcefile>
<reference>ItemY</reference>
<sourcefile>://filepath/blah/blah/</sourcefile>
.
.
<reference>ItemW</reference>
<sourcefile>://filepath/blah/blah/</sourcefile>

Is there a way to output the reference and the sourcefile elements next to each other? e.g.:

(xmlinput) xmllint --format | egrep reference\|sourcefile

<reference>ItemX</reference><sourcefile>://filepath/blah/blah/</sourcefile>
<reference>ItemY</reference><sourcefile>://filepath/blah/blah/</sourcefile>
.
.
<reference>ItemW</reference><sourcefile>://filepath/blah/blah/</sourcefile>
[your command] | paste -d '' - - will join consecutive lines.
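For example, on input shaped like the question's (using '\0' to denote an empty delimiter, which is the portable spelling — some paste implementations reject a literal empty string):

```shell
printf '%s\n' '<reference>ItemX</reference>' '<sourcefile>://filepath/x</sourcefile>' \
              '<reference>ItemY</reference>' '<sourcefile>://filepath/y</sourcefile>' |
  paste -d '\0' - -
# → <reference>ItemX</reference><sourcefile>://filepath/x</sourcefile>
#   <reference>ItemY</reference><sourcefile>://filepath/y</sourcefile>
```

Each `-` tells paste to take one line from standard input, so `- -` consumes two lines per output line.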
Bash output two results next to each other
1,380,717,179,000
I have a script that connects to a network device. It runs some commands, and basically I want the SSH commands that are being executed to appear in the output. To make myself clearer: I connect to one device and execute the command execute fmscript showlog, and it shows some output. The thing is, in the log file where the output is redirected, I can't see the execute fmscript etc. command printed into the file, just the results.

sshpass -p 12345678 ssh [email protected] 'execute fmscript showlog FG300D-1' > output.txt

And this is the result:

FMG-VM64-KVM # Starting log (Run on device)
FG300D-1 $ fnsysctl ps -A
PID UID GID STATE CMD

On the first line you can see FMG-VM64#. What I want to see is "FMG-VM64#execute fmscript showlog FG300D-1". Any way of doing this? I tried with tee with no success.
How about

CMD='execute fmscript showlog FG300D-1'
echo "$CMD" > output.txt
sshpass -p 12345678 ssh [email protected] "$CMD" >> output.txt

Double-quoting $CMD in the echo keeps its spacing intact and avoids glob expansion.
How to output SSH commands that are being sent to a server
1,380,717,179,000
I am working on a project to calculate my overtime at work with a shell script. I have two inputs and want to find out if my number is over 800 (= 8 hours); if it is bigger, then it has to print out the result, my difference, to my text file.

if [ $var1 -gt 800 ]; then
`expr $var1-800`
echo Overtime: "" >> $path

and then I'm lost, because I don't know how to print out the result of my calculation.
Try this using modern bash (don't use backticks or expr):

if ((var1 > 800)); then
    overtime=$((var1 - 800))   # variable assignment from the arithmetic
    echo "Overtime: $overtime"
fi

Or simply:

if ((var1 > 800)); then
    overtime="Overtime: $((var1 - 800))"   # concatenation of string + arithmetic
fi

Check bash arithmetic.
Echo calculation to text file [duplicate]
1,380,717,179,000
When I run curl https://example.com -o example.html 2>&1 | grep -P "\d.*\d" I get this output (in red): 100 1270 100 1270 0 0 318 0 0:00:04 0:00:04 0:00:00 318 What I want is just to show the last line of cURL's progress bar as it updates (I actually only want certain columns to be shown, but showing the whole line is fine) Wanted output (at time t = 1 sec): user@comp:~/Desktop/$ curl https://example.com -o example.html 2>&1 | grep -P "\d.*\d" 25 1270 25 318 0 0 318 0 0:00:01 0:00:01 0:00:03 318 Wanted output (at time t = 2 sec): user@comp:~/Desktop/$ curl https://example.com -o example.html 2>&1 | grep -P "\d.*\d" 50 1270 50 636 0 0 318 0 0:00:02 0:00:02 0:00:02 318 Wanted output (at time t = 3 sec): user@comp:~/Desktop/$ curl https://example.com -o example.html 2>&1 | grep -P "\d.*\d" 75 1270 75 954 0 0 318 0 0:00:03 0:00:03 0:00:01 318 Wanted output (at time t = 4 sec): user@comp:~/Desktop/$ curl https://example.com -o example.html 2>&1 | grep -P "\d.*\d" 100 1270 100 1270 0 0 317 0 0:00:04 0:00:04 0:00:00 0 I've tried using watch with cURL and grep, but it still doesn't work (no output) watch --no-title --interval 1 "curl http://example.com -o test2.html" | grep -P '\d.*\d'
You cannot directly | grep the output of curl progress because of pipe buffering. Some great insights on this issue is available in this previous question: Turn off buffering in pipe As for a solution, using stdbuf and tr: curl https://example.com -o example.html 2>&1 | stdbuf -oL tr -s '\r' '\n' | while read i; do echo -en "\r$i "; done With: stdbuf -oL: stdout of tr will be line buffered for proper processing by the while loop tr -s '\r' '\n': replace carriage return with newline from curl output while read i; do echo -en "\r$i "; done: simple bash solution for a progress line
Grep cURL output
1,380,717,179,000
Disclaimer: This is a more general question of the one I asked on biostars.org about parallel and writing to file. When I run a program (obisplit from obitools package) sequentially, it reads one file and creates a number of files based on some criterion (not important here) in the original file: input_file.fastq |____ output_01.fastq |____ output_02.fastq |____ output_03.fastq However, when I split the input file and run them in parallel (version from ubuntu repo: 20141022), find . * | grep -P "^input_file" | parallel -j+3 obisplit -p output_{/.}_ -t variable_to_split_on {/} I would expect to get files input_file_a.fastq |____ output_input_file_a_01.fastq |____ output_input_file_a_02.fastq |____ output_input_file_a_03.fastq input_file_b.fastq |____ output_input_file_b_01.fastq |____ output_input_file_b_02.fastq |____ output_input_file_b_03.fastq input_file_c.fastq |____ output_input_file_c_01.fastq |____ output_input_file_c_02.fastq |____ output_input_file_c_03.fastq but the output is only printed to console. Is there something inherent in parallel which causes this printing to console or could this be the way obisplit is behaving for whatever reason? Is there a way to convince each core commandeered by parallel to print to a specific file instead of the console?
It sound as if obisplit behaves differently if output is redirected. You can ask GNU Parallel to output to files: seq 10 | parallel --results output_{} echo this is input {} >/dev/null (or if your version is older: seq 10 | parallel echo this is input {} '>' output_{} ) It will create output_#,output_#.err,output_#.seq.
why is this parallel process not writing output to files but printing to console instead?
1,380,717,179,000
I have a stream of info from a serial input (GPS Antennae) and wish to output that info into a text file on every input (every second in this case) but instead of appending it to the end of the file as > would do after the initial overwrite I would like it to overwrite it every second so only the latest info is displayed. I have tried \r which achieves the effect in bash but not the output file. cat /dev/ttyACM0 | grep --line-buffered -E "GNGGA" | awk 'BEGIN {FS=","};{printf "%s%s\t\t%s%s\t\t%s%s\t%s%s","Time= ",$2,"Lat= ",$3,"Lon= " ,$5,"Alt= " ,$10; fflush(stdout) }' > somefiles.txt This includes the initial input, a grep to focus on one line and awk to get the specific parts of the info I need, they don't affect the overwrite issue as far as I know. Time= 155325.00 Lat= 7428.77433 Lon= 82845.15963 Alt= 21.5 This is the output that starts by overwriting the somefiles.txt but then appends until you stop and run the command again. So is there a way to make only the latest input appear as one line in the text file? Thanks
You can print or printf straight to a file within awk, and close it after every write. That would make awk reopen and truncate it on the next print. awk -vfile=test.out '{print $0 > file; close(file)}' (Strictly speaking you get a race condition here, another process might try to read the file just between the truncate and write, so it would seem empty (or worse but less likely, partial).)
How to overwrite output file on every input update
1,380,717,179,000
I'd like to print a command's output along with its input. For example, for a call such as

echo "Hello world" | wc -c

I want the following output:

12,Hello world

Is there any way to do this using standard Unix (or GNU) tools?
tee and paste solution: echo "Hello world" | tee >(wc -c) | tac | paste -s -d, - 12,Hello world
Combine command output along with the input [duplicate]
1,380,717,179,000
I've installed some utilities from the CLI and got quite a long verbose output describing what was installed directly, what needed some dependencies, what is no longer needed, etc. Is there a way to grep something from the output of this last command? There is a very specific word I need. Thanks,
I needed something that does that after I had already run the installation command, not for installation commands still to come. While I don't know of a command that recovers the output after the installation command has been executed, what I did was to copy the output from the terminal itself into a text editor like Vi or Nano, and then search for all instances of the desired phrase.
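For future runs (which this approach doesn't cover), tee lets you both watch the output live and keep a copy to grep afterwards. A sketch — the echo lines below are only a stand-in for the real install command:

```shell
# Stand-in for e.g. "apt-get install foo"; 2>&1 also captures warnings/errors
{ echo "installing foo"; echo "warning: missing dependency bar"; } 2>&1 |
  tee /tmp/install.log

grep 'warning' /tmp/install.log
# → warning: missing dependency bar
```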
Grep something specific of the results of last execution?
1,380,717,179,000
I have the following scenario. I have a Perl script that takes an ID and looks up some arguments from a DB. Let's say look_up_args.pl 234 prints the following:

abc 123 "something with spaces"

I have another shell script, script.sh, that does the following:

some_command --param1 $1 --param2 $2 --par3 "$3" ...

What I am trying to do is to call the script with those arguments. I have tried the following 2 methods:

./script.sh `./look_up_args.pl 234`
./script.sh $(./look_up_args.pl 234)

Still, whenever I run script.sh, $3 seems to contain only "something", causing my script to fail. I am looking for a way to pass the quoted string without any form of shell expansion/etc... The third parameter may contain other special bash characters, but will always be quoted.
I wouldn't recommend this, but try: eval "./script.sh $(./look_up_args.pl 234)" This should work, but keep in mind that eval will evaluate whatever look_up_args.pl happens to output, meaning you leave yourself vulnerable to code injection. A better option would be what @thrig suggested in the comments: use a standardized data format to pass data between tools. Even a newline-delimited string would be a fine format for a shell-style processing pipeline.
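A middle ground worth knowing: xargs in its default mode splits on whitespace but honors single and double quotes, so the quoted third argument survives without resorting to eval — something like ./look_up_args.pl 234 | xargs ./script.sh. A demonstration, with printf standing in for script.sh and the echo simulating the Perl script's output:

```shell
# Simulated look_up_args.pl output; printf '[%s]\n' stands in for ./script.sh
echo 'abc 123 "something with spaces"' | xargs printf '[%s]\n'
# → [abc]
#   [123]
#   [something with spaces]
```

Unlike eval, xargs will not run embedded command substitutions or redirections, though it still has its own quirks with unmatched quotes.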
How to pass arguments to a script that were generated by another script
1,454,934,370,000
I have a PHP script I'm running from Cron. I want to save its output to file, but also want to save the shell output to a different file. Ideally, I'd like to have this in one line. As such, I tried the following: script "/folder/log/file.errors."`date +"%Y-%m-%d.%H-%M-%S"`".txt" && /usr/bin/php /folder/file.php > "/folder/log/file.php."`date +"%Y-%m-%d.%H-%M-%S"`".txt" But it only runs the first command (before &&). Likewise, if I use ; instead of &&. When I run this as two separate commands, it works just fine: root:~# script "/folder/log/file.errors."`date +"%Y-%m-%d.%H-%M-%S"`".txt" root:~# /usr/bin/php /folder/file.php > "/folder/log/file.php."`date +"%Y-%m-%d.%H-%M-%S"`".txt" How can I join these two commands into one command/line? Also, when run via Cron, will it be necessary for me to run an exit command after the above code in order for script to properly save to file?
Per terdon's request, I am posting this comment as an answer, so that the question can be marked as "answered". Instead of relying on script logging, especially if this will eventually be a cron job, consider sending output and error messages to one or more designated file(s) in your PHP code. When you run it in cron, it will create a session log unless you divert it with something like a >/dev/null 2>&1 directive; so, as a debug tool, you can check that cron log.
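A sketch of that idea in the shell itself: compute the timestamp once, then send stdout and stderr to separate files. The sh -c line stands in for /usr/bin/php /folder/file.php, and /tmp stands in for /folder/log:

```shell
ts=$(date +"%Y-%m-%d.%H-%M-%S")

# Stand-in for: /usr/bin/php /folder/file.php
sh -c 'echo "normal output"; echo "an error" >&2' \
  > "/tmp/file.php.$ts.txt" 2> "/tmp/file.errors.$ts.txt"

cat "/tmp/file.php.$ts.txt"      # → normal output
cat "/tmp/file.errors.$ts.txt"   # → an error
```

Evaluating date once also avoids the two slightly different timestamps the original two-command version could produce.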
script Multiple Commands
1,454,934,370,000
When I run this command, it still outputs the same message when nothing is in the waste bin directory. How could I get the command to output a different message when there are no files in the bin?

#! /bin/bash
#! listwaste - Lists the names of all the files in your waste bin and their size
#! Daniel Foster 23-11-2015
echo "The files that are in the waste bin are:"
ls -1 ~/bin/.waste/

I know this should be simple, but I'm just starting out and I am guessing I should be using an if statement or something similar. Thanks for the help in advance.
Oneliner: trash() { ls -1 ~/bin/.waste; }; [[ $(trash | wc -l) -eq 0 ]] && echo no waste || echo -e "waste:\n$(trash)" Better formatted: trash() { ls -1 ~/bin/.waste; } [[ $(trash | wc -l) -eq 0 ]] && echo no waste || echo -e "waste:\n$(trash)" Nerd formatted: #!/bin/bash function trash() { ls -1 ~/bin/.waste } if [[ $(trash | wc -l) -eq 0 ]]; then echo 'There are no files in the waste bin.' else echo 'The files that are in the waste bin are:' trash fi All 3 examples are performing the exact same functionality, only formatted differently, depending on preference. If you want to actually be able to run the command listwaste, then put this in a script named listwaste, make sure you make it executable (chmod +x), and save the script in a directory that is listed in your $PATH. You can echo $PATH to see these directories that contain executables that you will be able to call directly from the shell.
How use an if statement to change the output message
1,454,934,370,000
I have a txt file with lines like below, starting with Parameters: {"raw_message"=>"MSH....... and ending with </HL7Message>"}. I'm wanting to grep, and output to a file, the words between raw and transformed, which appears halfway through each line. The file appears like below:

Parameters: {"raw_message"=>"MSH....... "transformed_data".....</HL7Message>"}
Parameters: {"raw_message"=>"MSH....... "transformed_data".....</HL7Message>"}
Parameters: {"raw_message"=>"MSH....... "transformed_data".....</HL7Message>"}
Parameters: {"raw_message"=>"MSH....... "transformed_data".....</HL7Message>"}

Best case scenario: the MSH following the >MSH begins the output, but there are many MSH instances in each line. So I figured it could be logical to grep between the markers and then rip the raw and transformed parts out:

raw_message"=>"MSH......preceding words followed by transformed

Some possible words preceding transformed:

LAB\r", "transformed
00355", "transformed

So I'd want:

MSH....LAB\r
MSH....00355

Any assistance would be much appreciated!
I tried: sed -n "/<raw>/,/<\/transformed>/p" HL7prod.txt > HL7prod2.txt Example Line Parameters: {"raw_message"=>"MSH|^~\\&||CDFGTL|||20144543000||ATG^A05|TLGTADM.1.13773085|P|2.1\rEVN|A08|11111111111|||MDFGQ8833^HLPS^GEGES^^^^\rPID|1||K11111111|K1111111|HOLVBVFS^LGDSA^^^^||19GHYSSD|F|^^^^^||^^^^^^^^|||||||K01045435547691\rPV1|1|P|K.ER^^||||LKIJK^Liujn^Jeggrs^H^^^MD|||ER||||||N||ER|||||||||||||||||||||DFLHL|ABD DFIN|PRE|||111111111||||||||\rZCS||^^^^||||00355", "transformed_data"=>"<HL7Message><MSH><MSH.1>|</MSH.1><MSH.2>^~\\&amp;</MSH.2><MSH.3><MSH.3.1>CDFLH</MSH.3.1></MSH.3><MSH.4><MSH.4.1>COCTL</MSH.4.1></MSH.4><MSH.5/><MSH.6/><MSH.7><MSH.7.1>201506331000</MSH.7.1></MSH.7><MSH.8/><MSH.9><MSH.9.1>ADT</MSH.9.1><MSH.9.2>A08</MSH.9.2></MSH.9><MSH.10><MSH.10.1>TLGGBGM.1.13773076</MSH.10.1></MSH.10><MSH.11><MSH.11.1>P</MSH.11.1></MSH.11><MSH.12><MSH.12.1>2.1</MSH.12.1></MSH.12></MSH><EVN><EVN.1><EVN.1.1>A08</EVN.1.1></EVN.1><EVN.2><EVN.2.1>201506125500</EVN.2.1></EVN.2><EVN.3/><EVN.4/><EVN.5><EVN.5.1>MDHYQ6633</EVN.5.1><EVN.5.2>LUJKL</EVN.5.2><EVN.5.3>JYTEDFG</EVN.5.3><EVN.5.4/><EVN.5.5/><EVN.5.6/><EVN.5.7/></EVN.5></EVN><PID><PID.1><PID.1.1>1</PID.1.1></PID.1><PID.2/><PID.3><PID.3.1>K0567432372</PID.3.1></PID.3><PID.4><PID.4.1>K5894336</PID.4.1></PID.4><PID.5><PID.5.1>HOLDFGEER</PID.5.1><PID.5.2>AAAAS</PID.5.2><PID.5.3/><PID.5.4/><PID.5.5/><PID.5.6/></PID.5><PID.6/><PID.7><PID.7.1>1111111111</PID.7.1></PID.7><PID.8><PID.8.1>F</PID.8.1></PID.8><PID.9><PID.9.1/><PID.9.2/><PID.9.3/><PID.9.4/><PID.9.5/><PID.9.6/></PID.9><PID.10/><PID.11><PID.11.1/><PID.11.2/><PID.11.3/><PID.11.4/><PID.11.5/><PID.11.6/><PID.11.7/><PID.11.8/><PID.11.9/></PID.11><PID.12/><PID.13><PID.13.1/></PID.13><PID.14/><PID.15/><PID.16/><PID.17/><PID.18><PID.18.1>K0101333333333</PID.18.1></PID.18></PID><PV1><PV1.1><PV1.1.1>1</PV1.1.1></PV1.1><PV1.2><PV1.2.1>P</PV1.2.1></PV1.2><PV1.3><PV1.3.1>K.ER</PV1.3.1><PV1.3.2/><PV1.3.3/></PV1.3><PV1.4/><PV1.5/><PV1.6/><PV1.7><PV1.7.1>JTOLOKS</PV1.7.1><PV1.7.2>Ldasf
s</PV1.7.2><PV1.7.3>Jtuygikd</PV1.7.3><PV1.7.4>H</PV1.7.4><PV1.7.5/><PV1.7.6/><PV1.7.7>MD</PV1.7.7></PV1.7><PV1.8/><PV1.9/><PV1.10><PV1.10.1>ER</PV1.10.1></PV1.10><PV1.11/><PV1.12/><PV1.13/><PV1.14/><PV1.15/><PV1.16><PV1.16.1>N</PV1.16.1></PV1.16><PV1.17/><PV1.18><PV1.18.1>ER</PV1.18.1></PV1.18><PV1.19/><PV1.20/><PV1.21/><PV1.22/><PV1.23/><PV1.24/><PV1.25/><PV1.26/><PV1.27/><PV1.28/><PV1.29/><PV1.30/><PV1.31/><PV1.32/><PV1.33/><PV1.34/><PV1.35/><PV1.36/><PV1.37/><PV1.38/><PV1.39><PV1.39.1>COTOLA</PV1.39.1></PV1.39><PV1.40><PV1.40.1>ABD XXXX</PV1.40.1></PV1.40><PV1.41><PV1.41.1>PRE</PV1.41.1></PV1.41><PV1.42/><PV1.43/><PV1.44><PV1.44.1>111111111</PV1.44.1></PV1.44><PV1.45/><PV1.46/><PV1.47/><PV1.48/><PV1.49/><PV1.50/><PV1.51/><PV1.52/></PV1><ZCS><ZCS.1/><ZCS.2><ZCS.2.1/><ZCS.2.2/><ZCS.2.3/><ZCS.2.4/><ZCS.2.5/></ZCS.2><ZCS.3/><ZCS.4/><ZCS.5/><ZCS.6><ZCS.6.1>111111</ZCS.6.1></ZCS.6></ZCS><GT1><GT1.6><GT1.6.1/></GT1.6></GT1><ZRF><ZRF.1><ZRF.1.1>COTYUL</ZRF.1.1></ZRF.1><ZRF.2><ZRF.2.1>CDFTL</ZRF.2.1><ZRF.2.2>K.ER</ZRF.2.2></ZRF.2></ZRF></HL7Message>"} Would Want: MSH|^~\\&||CDFGTL|||20144543000||ATG^A05|TLGTADM.1.13773085|P|2.1\rEVN|A08|11111111111|||MDFGQ8833^HLPS^GEGES^^^^\rPID|1||K11111111|K1111111|HOLVBVFS^LGDSA^^^^||19GHYSSD|F|^^^^^||^^^^^^^^|||||||K01045435547691\rPV1|1|P|K.ER^^||||LKIJK^Liujn^Jeggrs^H^^^MD|||ER||||||N||ER|||||||||||||||||||||DFLHL|ABD DFIN|PRE|||25679506645657||||||||\rZCS||^^^^||||00355
If you want just the text between your patterns on each line, do the following: sed 's/.*raw\(.*\)transformed.*/\1/' \(.*\) remembers the text that is output using \1. Other stuff onthe line is not output.
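Applied to a cut-down version of the question's log line (the HL7 payload is shortened here for readability), anchoring on the literal field names makes the intent clearer:

```shell
printf '%s\n' 'Parameters: {"raw_message"=>"MSH|A05|LAB\r", "transformed_data"=>"<HL7Message/>"}' |
  sed 's/.*raw_message"=>"\(.*\)", "transformed.*/\1/'
# → MSH|A05|LAB\r
```

Because \(.*\) is greedy, everything between the raw_message opening quote and the last ", "transformed on the line is captured, which is what you want here since each record sits on one line.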
How to grep between words in a log?
1,454,934,370,000
Okay I'm pretty new to Linux and I have setup a Raspberry Pi 2 with Raspbian and it's running a bunch of things for tracking aircraft as well as a receiver (RTL-SDR). It's all working great and I'm using Dump1090 for decoding/demodulating the signal. Dump1090 has an --interactive mode that displays a row/column layout in the terminal when started normally (not as a daemon) which is the signal the receiver is picking up from the plane. I have Dump1090 starting as a daemon now and I'd like to see the --interactive output when I start a session with a command or something. The daemon is being started with --interactive already. Obviously when I SSH in I don't see this output and I'd like to know if there's a way I can sort of "alt tab" to view its output. Is this not possible? Do I need to install something in particular on the OS in order to do this? Thanks in advance. Edit: I went with terdon's comment and using the advice from garethTheRed. Outputting to a file and accessing the file to see output works in the terminal fine so I will go with that. I also use this in conjunction with the web server that shares the data. I will mark garethTheRed's answer as accepted as it's the only one for coherency sake while keeping my explanation of also using terdon's advice which is a solution to my problem. Thanks for the help.
I haven't got dump1090 or a receiver to confirm any of this, but if you read further down the GitHub page you've linked, you'll see that if you run:

./dump1090 --interactive --net

it will start its built-in web server. You can then connect to the Raspberry Pi's port 8080 with your web browser by entering its IP address and port in the browser's address bar:

http://<IP address of Raspberry Pi>:8080/

(don't forget that colon : before the 8080). Your browser should then show your live traffic.
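If the web interface isn't an option, the approach the asker settled on (per terdon's comment) also works: redirect the daemon's output to a file, then inspect that file from any SSH session. A toy demonstration of the pattern, with a short loop standing in for dump1090:

```shell
# Stand-in for: ./dump1090 --interactive > /var/log/dump1090.log 2>&1 &
( for i in 1 2 3; do echo "tick $i"; done ) > /tmp/daemon.log 2>&1 &
wait

tail -n 2 /tmp/daemon.log
# → tick 2
#   tick 3
```

For a live daemon you would use tail -f on the log file to follow the output as it is written.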
How to view a daemon's output in a session on Raspbian/Debian?
1,454,934,370,000
I have an access_log whose size is nearly 2GB. I want to analyze it from 2015:09:55 to 2015:09:57, but during this period there are more than 500 items. So I want to output and download the access_log only from 2015:09:55 to 2015:09:57 to my computer and open it in my EmEditor. I am not good at SSH commands. I tried this, which is not working:

[root@server ~]# cat /var/log/httpd/access_log | (grep "2015:09:55"||grep "2015:09:56||grep "2015:09:57") > /home/usr/log_access.txt
You can specify a range with regexes as delimiters with sed. As an example, this prints everything from the first line matching 2015:09:5[6-9] up to the next line matching 2015:09:6:

ssh host "sed -n '/2015:09:5[6-9]/,/2015:09:6/ p' /var/log/httpd/access_log"

To capture the output in a local file, use redirection, that is:

ssh host "sed -n '/2015:09:5[6-9]/,/2015:09:6/ p' /var/log/httpd/access_log" >log_snippet
output log file from some time to some time via ssh
1,454,934,370,000
I'm having trouble with sending shutdown -h 0 to an LXC Debian container (i.e. executing this command in the container) with the Python pexpect module (in a Python script). In this module the user can "expect" (= wait for) certain substrings in the process output, amongst others EOF. That leads me to my question, so that I can debug further why EOF isn't recognized in the output: I need to know what I can "expect" after termination of the process, in order to wait for the process to end. I can't simply wait for the process, because the pexpect module hides non-blocking functions for that. The pexpect module (see http://www.bx.psu.edu/~nate/pexpect/pexpect.html#pexpect.spawn.expect for details) wraps the reception of EOF in the read system call in a (duck)type and makes it usable in pexpect.expect (an encapsulation of possible output/feedback of a process). I've been wondering because some processes like ls are expected to terminate with EOF, i.e. EOF in the pexpect sense (example at http://pexpect.sourceforge.net/pexpect.html).
EOF indicates that no further input is to be expected on a resource which could otherwise provide an endless amount of data (e.g. a stream). On Unix-like systems it is not a character written onto the stream: it is a condition, reported when read() returns 0 bytes because the writing end of the pipe, socket, or pty has been closed. (A terminal in canonical mode lets you trigger that condition with a keystroke such as Ctrl-D, but no EOF byte travels in-band.) As processes use streams for inter-process communication, this is how a reading process learns that the sending process has finished its output, and pexpect's EOF "pattern" is essentially a wrapper around detecting that condition on the child's pty. Note about the pexpect use case in the question: pexpect doesn't seem to be suitable for copying files of an LXC container; it got stuck, and the time offset of the pexpect output caused the confusion.
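A shell-level illustration of how a reader experiences EOF: read succeeds for each line, then fails once the writer has closed the pipe, and that failure is what terminates the loop — no special character is ever seen in the data:

```shell
printf 'a\nb\n' | while read -r line; do echo "got: $line"; done
echo "loop ended because read reported EOF"
# → got: a
#   got: b
#   loop ended because read reported EOF
```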
Do all Linux processes write EOF to stdout when they are terminating/have finished terminating?
1,454,934,370,000
I am trying to use part of the hostname output in Linux and substitute it into a file using sed. For example, I am running hostname | tail -c 4, which shows the last few characters of the hostname, and I then want to use this output to replace some other text inside a file. Assuming the part of the hostname that tail -c 4 shows is "1234", how can I take this further and use it as the replacement? I can do this manually with:

hostname | tail -c 4 ; sed -i 's/oldtext/1234/g' filename.txt

but I'm not sure how I can achieve this with a script. Any ideas?
You could store the output of the first command in a variable:

$ var=$(hostname | tail -c 4)

Once stored, you can then use it as the replacement with sed:

$ sed -i.bak "s/oldtext/$var/g" filename.txt

(Note that tail -c 4 counts the trailing newline as one of the four bytes; command substitution strips that newline, so $var ends up holding the last three characters of the hostname. Use tail -c 5 if you really want four.)
Using hostname output in linux with sed
1,454,934,370,000
I have a mail.log line, and using sed and pipes I can extract the subject, the sender, and the recipient of the mail:

echo "Jul 15 09:04:38 mail postfix/cleanup[36034]: 4A4E5600A5DE0: info: header Subject: The tittle of the message from localhost[127.0.0.1]; from=<sender01@mydomain> to=<recipient01@mydomain> proto=ESMTP helo=<mail.mydomain>" | sed -e 's/^.*Subject: //' -e 's/\]//' -e 's/from localhost//' -e 's/^.\];//' | sed -e 's/\[127.0.0.1; //' -e 's/proto=ESMTP helo=<mail.mydomain>//'

This gives me the output:

The tittle of the message from=<sender01@mydomain> to=<recipient01@mydomain>

My desired output is:

Jul 15 09:04:38 The tittle of the message from=<sender01@mydomain> to=<recipient01@mydomain>

How do I extract the date and add it to the output?
Ugly, but put this at the beginning of the sed statement:

-e 's/^\([[:alpha:]]\+ [[:digit:]]\+ [[:digit:]]\+:[[:digit:]]\+:[[:digit:]]\+\).*Subject:\(.*\)/\1\2/'

(In basic regular expressions + is not special, so it has to be written \+ — a GNU sed extension — or you can pass -E and use the unescaped + of extended syntax.) Or, if you always know that ' mail postfix' will be in the text at that position, you can just use:

-e 's/^\(.*\) mail postfix.*Subject:\(.*\)/\1\2/'

Other variations are possible. The key is to capture the date, skip over the parts you don't care about, and capture the remainder that you still need to process. To capture, surround with \( and \); to print what you've captured, use \n, where n is the position of a particular capture group (the first is 1, the second is 2, etc.). And now that you know this, you can probably figure out how to eliminate all of the separate directives (-e), use multiple capture groups, and get it down to a single sed expression.
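A quick check of the second variant against a shortened version of the line from the question:

```shell
line='Jul 15 09:04:38 mail postfix/cleanup[36034]: header Subject: The title from=<a@x> to=<b@x>'
printf '%s\n' "$line" |
  sed 's/^\(.*\) mail postfix.*Subject:\(.*\)/\1\2/'
# → Jul 15 09:04:38 The title from=<a@x> to=<b@x>
```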
print the date of a mail.log line
1,454,934,370,000
I try to install rar using yaourt; the problem is that I get 5k+ results and can't filter out the package containing rar. Neither piping to grep nor to head helps — the first hundreds of lines are lib"rar"ies. What could I do to get around this?
You use an AUR helper that actually works and is not fundamentally insecure: cower -s '^rar$' aur/rar 5.3.0-1 (668, 6.91) A command-line port of the rar compression utility
finding rar in yaourt
1,454,934,370,000
I had asked a question a while ago about how to pull specific info from a log file, and got some useful answers, but I don't quite have what I need yet. What I have so far: #!/bin/bash mkdir result cd $1 ls | while read -r file; do egrep "ERROR|FAIL|WARN" > ../errors-$file done cd .. mv errors-* result Here's what I want in total: the script scans through a directory of logs, grabs each line of text (some are multi-line) that includes ERROR|FAIL|WARN, and outputs them in a directory called "result" with each individual file being named based on their source file. It works when my target log directory only has one text file in it, but it doesn't work when my target directory has directories and log files at different levels within it. I know it's because the "../errors-$file" bit in the script, but I'm not sure how to change it. Any help would be great, and I apologize in advance if it's poorly phrased.
I'll poke at your loop itself, try to walk through what it's actually doing, and then look at how you might achieve what you want. How you initiate your loop is overly complex: ls | while read -r file. You're piping the results of ls to your while read; not a big deal, it still works, though I would recommend a for loop, more like for file in ./*. This will loop over the same files but is a little more straightforward. Also, the ls command can return output that is not just the filenames (ever notice that ls has different font colors for directories/executables?), and this can cause other commands grief. Your command:

egrep "ERROR|FAIL|WARN" > ../errors-$file

is trying to egrep for ERROR|FAIL|WARN in nothing, and redirecting the output to ../errors-$file. The syntax for egrep is:

egrep [search pattern] [location]

So you're missing the location for egrep to look in. You can fix this by adding the file after your pattern — and using find instead of the glob so that files in subdirectories are picked up too — so it becomes:

#!/bin/bash
mkdir result
cd "$1"
for file in $(find ./ -type f)
do
    egrep "ERROR|FAIL|WARN" "$file" > "../result/errors-$(basename "$file")"
    echo "-------------------------------" >> "../result/errors-$(basename "$file")"
    egrep "denied" -B 1 "$file" >> "../result/errors-$(basename "$file")"
done
Script that scans through logs, pulls specific data, and creates an output directory
1,454,934,370,000
I am writing a bash script for backup with log storage. I use my defined function as follows: logit () { echo " $1 $2 " >> /path/to/log } For logit 'Starting Backup' $(date +'%D %T') I get this output: Starting Backup 01/11/22 so the time is missing, apparently the stdout function has shortened it. With echo $(date +'%D %T') I also get the time in the stdout. I would also like to use my function for logs, e.g. logit 'DB-LOGS' $(cat /path/to/sql) results in DB-LOG mysqldump: Again, some stdout is missing here. What should I change or add to the function to get complete output?
Double quote the command substitution: logit 'Starting Backup' "$(date +'%D %T')" Without quotes, the result of $(date ...) goes through word splitting and filename globbing. There's no shortening: assuming $IFS is still set to its default value (which does contain the space character), the date and time are passed as separate arguments to the function, the time ends up in $3 which you don't use.
bash function does not show me date or cat arguments completely [duplicate]
1,454,934,370,000
I have a.txt,b.txt,c.txt. Each has different numbers as below: a.txt: 12 14 111 1 15 2 b.txt 12 18 22 23 1 2 c.txt 12 14 15 16 17 1200 The output should contain all the numbers from each file, but without any duplication. Is there a command to export such a thing into a text file? The actual text files include hundreds of rows.
You could do it like this if there are a larger number of files:

grep '' *.txt | cut -d: -f2 | sort -u > output.txt

(grep '' matches every line; with more than one input file, grep prefixes each match with its filename and a colon, which cut -d: -f2 strips off again before sort -u removes the duplicates.)
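If the filename prefix isn't a concern, sort alone can do the merge and de-duplication in one step; -n keeps the numbers in numeric rather than lexical order. A self-contained sketch using small sample files in /tmp:

```shell
printf '12\n14\n1\n' > /tmp/a.txt
printf '12\n18\n1\n' > /tmp/b.txt

sort -nu /tmp/a.txt /tmp/b.txt > /tmp/output.txt
cat /tmp/output.txt
# → 1
#   12
#   14
#   18
```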
how to export all numbers that are unique in a few text files into another file?
1,454,934,370,000
Say I type git help to learn about the merge command. I don't want to read all the output just the lines that contain merge and their surrounding lines. I thought this would be a common question but couldn't find it. I think grep can be used somehow to do this.
Yes, grep can do something like this: its -C option will show the context of a match. Thus git help | grep -C2 merge will show lines containing “merge”, with two lines of context above and below. I find it more convenient to use less: git help | less then search using /. git help won’t tell you much though, you’ll need git help merge which will open the relevant manpage for you. Some terminal emulators also allow searching after the fact; for example, GNOME Terminal has a Search menu, and you can press CtrlShiftF to start a search.
How can I search terminal output?
1,454,934,370,000
I have a command such as bar > /dev/null and I want to know the exit status of bar. I read some posts about ${PIPESTATUS[0]}, but that works when one pipes the output via |, and I can't make it work with > instead. What am I missing?
> isn't a command. This means that bar will be the last command executed. You can check for failure with a standard if statement: if ! bar > /dev/null; then echo "bar command failed" fi You can also access its return code with $? if you are interested in something more than zero or non-zero: bar > /dev/null if [ "$?" -eq 45 ]; then echo "bar returned exit code 45" fi
exit status and no output
1,454,934,370,000
I am trying to assign a resolved IP address to a variable: ip=$(ping -q -c1 -W1 google.com | grep -Eo "([0-9]+\.?){4}" | head -n 1) | echo $ip or ip=`resolveip google.com | head -n 1` | echo $ip Both echos return empty output. Without assigning to variables, the commands work fine. What am I doing wrong?
What you want to achieve seems to be outputting the content of the shell variable $ip after assigning the output of the ping command to it - that is, you want to first assign command output to a variable, and then print the resulting variable content on the console. However, you are using the pipe (|) operator to link these two commands. This is the wrong operator for two reasons. First, that operator is used to redirect the output of a command to the input of another; however, a variable assignment doesn't produce output, and the echo command doesn't read from stdin. While that isn't the root cause of your problem, you may run into problems in more complex situations when using the | operator in that non-intended way. More seriously (and likely the actual reason for the observed behavior), the commands on both ends of such a pipeline are started simultaneously so that the receiving command can process the output from the sending command "as it comes" (in particular, without needing to buffer the entire output). In your case, this means that the echo $ip command is executed immediately, and likely before the shell had time to fill the variable with the output from ping. So, the solution for both issues is to use ; instead, for simple sequential execution of two commands that only happen to be written on the same line: ip=$(ping -q -c1 -W1 google.com | grep -Eo "([0-9]+\.?){4}" | head -n 1) ; echo $ip
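To see the difference without touching the network, here is a minimal sketch with echo standing in for the ping pipeline:

```shell
# With a pipe, both sides start at once (each in its own subshell
# in bash), so echo still sees the old, empty value:
unset ip
ip=$(echo 1.2.3.4) | echo "via pipe: '$ip'"

# With a semicolon, the assignment completes before echo runs:
ip=$(echo 1.2.3.4) ; echo "via semicolon: '$ip'"
```

The first echo prints an empty string; the second prints 1.2.3.4.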
Assigned variable has empty output from nslookup and ping commands
1,454,934,370,000
How can I merge the output of two commands in one, using one command? command1 output: ID NAME1 COLUMN2 xxx-1 aaa bbb xxx-2 ccc ddd xxx-3 eee fff xxx-4 nnn mmm command2 output: COLUMN3 COLUMN4 ID kkk www xxx-3 kkk ppp xxx-1 kkk qqq xxx-4 lll ttt xxx-2 kkk rrr xxx-2 NOTE: command1 xxx-2 returns ccc (NAME1 field) Expected one command and substitution happening on the screen, no files involved. Expected result: COLUMN3 COLUMN4 NAME1 kkk www eee kkk ppp aaa kkk qqq nnn lll ttt ccc kkk rrr ccc Many Thanks. EDITED: Added 2 dumb scripts with the results of each command for testing. command1.sh > #!/bin/sh if [[ -z "$1" ]]; then echo 'ID NAME1 COLUMN2 3cc45fe6-gqee-321f-c143-w3d1d278912c aaa bbb bab bab 4a39466b-211d-48e2-a86b-db022c10fe59 ccc ddd ddd daa ddd adw45fe6-fqxe-261g-k172-a7d1x277112d eee fff fff f28894d0-cf40-4cff-a19a-a6893f88dd67 nnn mmm mamm mmm' elif [[ -n "$1" ]]; then if [[ "$1" == "3cc45fe6-gqee-321f-c143-w3d1d278912c" ]]; then echo "aaa" elif [[ "$1" == "4a39466b-211d-48e2-a86b-db022c10fe59" ]]; then echo "ccc" elif [[ "$1" == "adw45fe6-fqxe-261g-k172-a7d1x277112d" ]]; then echo "eee" elif [[ "$1" == "f28894d0-cf40-4cff-a19a-a6893f88dd67" ]]; then echo "nnn" else echo "Error from server (NotFound)" fi fi command2.sh > #!/bin/sh echo 'COLUMN3 COLUMN4 ID kkk www wwaaw www www adw45fe6-fqxe-261g-k172-a7d1x277112d kkk pppppppppppp paaapp ppp ppp 3cc45fe6-gqee-321f-c143-w3d1d278912c kkk qqq qqq qqqqqqqqqqqqqqq f28894d0-cf40-4cff-a19a-a6893f88dd67 lll tttttttttttt ttttttttt ttt ttt 4a39466b-211d-48e2-a86b-db022c10fe59 kkk rrrrrr rrrrrr rrrraarrrrr rrr 4a39466b-211d-48e2-a86b-db022c10fe59'
awk 'NR==FNR{a[$1]=$2; next} {$3=a[$3]} 1' <(command1) <(command2) might be what you're looking for. I tweaked the above and re-ran given the output the 2 command scripts you added produces: $ awk ' NR==FNR { map[$1]=$2; next } { key=$NF; sub(/[^[:space:]]+[[:space:]]*$/,""); print $0 map[key] } ' <(./command1.sh) <(./command2.sh) COLUMN3 COLUMN4 NAME1 kkk www wwaaw www www eee kkk pppppppppppp paaapp ppp ppp aaa kkk qqq qqq qqqqqqqqqqqqqqq nnn lll tttttttttttt ttttttttt ttt ttt ccc kkk rrrrrr rrrrrr rrrraarrrrr rrr ccc I wrote it such that the mapping will work and the output will retain the same spacing as the input whether the 3rd field of the first input stream or the 2nd field of the 2nd input stream has spaces or not or if the spaces are blanks or tabs or whether the fields are fixed-width or not. The only fields that cannot have spaces are fields 1 and 2 of input 1 and field 3 of input 2.
Substitute or merge output of two commands in one, using one line command
1,454,934,370,000
Having to deal with an environment where ack and the like are neither available nor installable, this command tries to limit the search to only the relevant files when looking for a string in a C++ project: grep pattern --color -- /project/path/**/*.*([chCH]|cc|cxx|[ch]pp|py) This does the job. Now, to make this a bit more convenient, the goal is to put it into a shell script. Let's say it's named wrapped_grep. Here is the content of wrapped_grep: #!/usr/bin/env bash shopt -s extglob # enable advanced pattern matching grep $1 --color -- /project/path/**/*.*([chCH]|cc|cxx|[ch]pp|py) But trying to launch wrapped_grep pattern doesn't produce any output, even when the equivalent direct grep query does find matches as expected. What is missing in this script to provide the same result as the direct grep invocation?
The extglob shell option enables the *([chCH]|cc|cxx|[ch]pp|py) part of your expression, but the **/ part requires the globstar option globstar If set, the pattern ** used in a pathname expansion con‐ text will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match. So you likely need shopt -s extglob globstar
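A self-contained sketch of the effect (the demo tree and file names below are made up for illustration):

```shell
# Build a little tree with matching and non-matching files:
mkdir -p demo/src/deep
echo 'int pattern;' > demo/src/main.cpp
echo 'pattern'      > demo/src/deep/util.py
echo 'pattern'      > demo/src/notes.txt    # extension not in the list

# With both options set, the recursive glob finds only the source
# files (shopt must run before the line containing the glob is parsed):
bash -c '
shopt -s extglob globstar
grep -l pattern -- demo/**/*.*([chCH]|cc|cxx|[ch]pp|py)
'
```

This lists demo/src/deep/util.py and demo/src/main.cpp, but not demo/src/notes.txt.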
Shellscript `grep` execution not working as in the interactive shell
1,454,934,370,000
I used ls ~ on RHEL 7 but I got <F6>q as output! What does it mean? [user@server2 ~]$ ls /home/user/ <F6>q [user@server2 ~]$ ll total 4 -rw-r--r--. 1 root root 340 Sep 18 17:16 <F6>q [user@server2 ~]$ cat <F6>q -bash: F6: No such file or directory [user@server2 ~]$ touch test [user@server2 ~]$ ls <F6>q test [user@server2 ~]$ vim <F6>q -bash: F6: No such file or directory [user@server2 ~]$
It means that your file is named <F6>q. These are not undisplayable characters, as comment-answers and other actual answers suggest. You can see them displayed, right in front of you. ☺ In any case, <F6> is not any of the forms that ls emits for undisplayable characters. [user@server2 ~]$ cat <F6>q -bash: F6: No such file or directory [user@server2 ~]$ vim <F6>q -bash: F6: No such file or directory [user@server2 ~]$ You need to learn about shell syntax. You are running the cat and vim commands with their standard inputs redirected from the file F6 and their standard outputs redirected to the file q, with no actual command arguments. The former redirection fails, because there is no file named F6, your file rather being named <F6>q, and the latter redirection is consequently not attempted at all. Here is the same command, with whitespace showing how the shell is parsing it: [user@server2 ~]$ cat < F6 > q -bash: F6: No such file or directory [user@server2 ~]$ vim < F6 > q -bash: F6: No such file or directory [user@server2 ~]$ To pass a file name containing shell metacharacters to a command as-is, without the shell responding to the metacharacters, they must be quoted: vim '<F6>q' or escaped: vim \<F6\>q Given what happens in VIM when you press a function key in ex command input mode, it is fairly easy to accidentally generate files with names like these using VIM.
What does <F6>q mean in ls command output?
1,454,934,370,000
To edit the root crontab in Debian I do for example sudo crontab -e. To exit from the preferred text editor (Nano), I do Ctrl+X. So far so good, but what if I want that each time I exit crontab, a text is echoed into the console (into "stdout")? The purpose is to echo a reminder message like: If you haven't already, change p in password[p] to your password! To make sure I'm clear here --- I want that each time the user finishes editing the crontab and quits back to the console, the message appears. Is there any way to do so in the current release of Bash?
You can assign the $EDITOR variable a script which first calls an editor and then produces the output: #! /bin/bash vim "$1" echo "foo bar baz" and use this call EDITOR=/path/to/script.sh crontab -e
Echo something to console each time you quit crontab
1,454,934,370,000
When we redirect output from a production server to a local system, some unwanted characters appear, e.g. ^[[032m. These are actually color codes, which show up when the output is redirected. When the same script is executed on the server without redirection, some parts of the output appear in color. If we open this file directly in Notepad or any other tool, it shows different characters, as below. Is there any possible solution to prevent this, before or after redirection?
If you have GNU sed, you can use that to remove the color escapes from the stream: somecmd |sed -Ee 's/\x1b\[[0-9;]+m//g' > outputfile The sed command substitutes (s///) the escape character (\x1b), followed by an open bracket (\[), and any number of digits or semicolons ([0-9;]+) and a following m, with nothing.
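A quick self-check of the substitution with GNU sed (using * instead of + so the bare reset sequence ESC[m would also be caught):

```shell
# Emit a line with a green "OK" and strip the escapes again:
printf 'status: \033[032mOK\033[0m done\n' |
    sed -E 's/\x1b\[[0-9;]*m//g'
# prints: status: OK done
```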
unwanted output characters eg. `^[[032m` from a script
1,454,934,370,000
Literally, I want to print to any displayed line on the terminal. I remember we've learned this in university, but was ages ago. Is there a command for this? Like this: ___________ ___________ |blah | |blah | |bla | |blah | |randomtext | |blah | |xy | -----> |blah | |hjkl | |blah | |prompt> | |prompt> | |___________| |___________|
You can move the cursor to any X,Y co-ordinate with the tput cup command eg tput cup 10 3 will take you to line 10, column 3 (co-ordinates start at 0,0 as the top-left) so a simple script such as clear echo line 1 echo line 2 echo line 3 tput cup 1 5 echo another line tput cup 10 0 will result in output similar to line 1 line another line line 3 $ (where the $ is your prompt). The first tput command moves the cursor back up to the earlier line allowing the echo to overwrite what was already there.
How to modify a given line of the terminal?
1,454,934,370,000
I feel like the space between the numbers and the paths is too much and I believe that less space would make lines easier to follow. Is there an easy way reduce that space?
That's a TAB, you see a 7 column gap because your terminal has tab stops every 8 columns. You could change the tabstop spacing on the terminal with for instance: tabs 4 To set the tab stops every 4 columns instead of 8, or pipe the output to: expand -t4 To convert TABs to spaces with tabstops every 4 column. Or expand -t4,/8 To expand the tabs but with the first after the 4th column, and the other ones every 8 column as usual. Or convert the first TAB to one space (but beware it would misalign the output when you display more than 10 lines) by piping to: sed $'s/\t/ /'
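For instance (assuming a line with a literal TAB after the history number, as dirs -v produces):

```shell
# The TAB after "0" is padded out only to the next 4-column stop:
printf ' 0\t~/some/dir\n' | expand -t4
```

With -t4 the gap here shrinks to two spaces instead of the usual six.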
Reduce space between numbers and paths in "dirs -v" output
1,526,039,478,000
I'm confused on how bash interprets blanks while executing a script. In the end, I want a script that is fed by a csv file containing software used by an organisation and it should return the CVEs for these programs, by calling a python script to fetch this information. My file would look this, but with over 500 lines: "CRYSTAL REPORTS 2008";"SAP";"reporting software";;"Mr. Smith" And is fed to my script like $ getcve.sh < test.csv When I run this small script I get strange results concerning the wordcount (which I wanted to use to loop through the python script's output to store in another file): Read from file Source: CVE-2010-2590 CVE-2010-3032 Words in variable: 2 CVE-2010-2590 CVE-2010-3032 Words processed: 1 However, when I hardcode "SAP CRYSTAL REPORTS 2008" in the script, the count changes to what I would expect: Hardcoded query Query: "SAP CRYSTAL REPORTS 2008" Source: CVE-2010-2590 CVE-2010-3032 Words in variable: 2 CVE-2010-2590 CVE-2010-3032 Words processed: 2 The script itself looks like this: #!/bin/bash clear echo "Hardcoded query" query='"SAP CRYSTAL REPORTS 2008"' echo "Query: "$query var2=$(python3 $HOME/cve-search/bin/search_fulltext.py -q "$query" | tr '\n' ' ') echo "Source: "$var2 i=0 echo "Words in variable: "$(echo "$var2"|wc -w) for cve in $var2 do echo $cve i=$[ $i+1 ] done echo "Words processed: "$i echo echo "Read from file" IFS_OLD=$IFS IFS=";" while read title firm desc version manager do query='"'$(echo $firm $title $version | tr -d '"')'"' var3=$(python3 $HOME/cve-search/bin/search_fulltext.py -q "$query" | tr '\n' ' ') echo "Source: "$var3 i=0 echo "Words in variable: "$(echo "$var3"|wc -w) for cve in $var3 do echo $cve i=$[ $i+1 ] done echo "Words processed: "$i done IFS=$IFS_OLD Is there a trick or method to get the same results as the hard coded query when reading from a file? 
I stumbled onto this by toying a little bit around (shell scripting is new for me) and this weird result is bothering me ^^" Thank you in advance for your help :)
Your problem comes from IFS=";" I think: this modification will have an impact on the for loop. Try: IFS_OLD=$IFS IFS=";" while read title firm desc version manager do query='"'$(echo $firm $title $version | tr -d '"')'"' var3=$(python3 $HOME/cve-search/bin/search_fulltext.py -q "$query" | tr '\n' ' ') echo "Source: "$var3 i=0 echo "Words in variable: "$(echo "$var3"|wc -w) IFS=" " for cve in $var3 do echo $cve i=$[ $i+1 ] done IFS=";" echo "Words processed: "$i done IFS=$IFS_OLD
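An alternative that avoids the save/restore dance altogether: prefixing the assignment to read scopes it to that one command, so the loop body keeps the default IFS. A sketch with the question's field layout:

```shell
# IFS=';' applies only to read; inside the body, word splitting
# (and thus any inner for loop) behaves normally again.
printf '%s\n' '"NAME";"FIRM";"desc";;"Manager"' |
while IFS=';' read -r title firm desc version manager
do
    echo "title=$title firm=$firm manager=$manager"
done
```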
Shell script - blanks are not always recognized as such?
1,526,039,478,000
Below is the output of uptime from Solaris where I am extracting the third last column: uptime 8:30pm up 23 day(s), 17:46, 5 users, load average: **2.79**, 1.79, 1.53 I always need to get the third last column highlighted in bold above i.e. 2.79 echo '8:30pm up 23 day(s), 17:46, 5 users, load average: 2.79, 1.79, 1.53' | awk '{ print substr($10, 1, length($10)-1) }' Output: 2.79 But at times it fails when uptime output has 18 hr(s) instead of 17:46 as seen below: echo '8:44pm up 23 day(s), 18 hr(s), 5 users, load average: 1.08, 1.12, 1.22' | awk '{ print substr($10, 1, length($10)-1) }' Output: average A simple solution could be searching for the columns from the last column minus 3, i.e. the 3rd column from last, as the last three columns are always numerical and don't change. However, I'm not sure how to. Can you please suggest?
Read man cut and do something like: uptime | cut -d, -f4 | cut -d: -f2
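If you'd rather count fields from the end, as the question suggests, awk can do that directly; a sketch that also strips the trailing comma:

```shell
# $(NF-2) is the third field from the last; sub() removes the comma.
echo '8:44pm up 23 day(s), 18 hr(s), 5 users, load average: 1.08, 1.12, 1.22' |
    awk '{ v = $(NF-2); sub(/,$/, "", v); print v }'
# prints: 1.08
```

This works unchanged for both shapes of the uptime output, since the last three fields are always the load averages.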
Unable to get the nth column data due to changing output of uptime command
1,526,039,478,000
I'm able to get the output of a failing lp command from a remotehost to my local script like below: until ssh -q root@remotehost 'lp -d Brother_HL_L2350DW_series /root/moht/Printed/`basename "$FILE"`' 2>&1 | tee /home/printererror.log do echo "Issue is: `cat /home/printererror.log`" sleep 230 done The issue is that the until does not loop even if the lp command fails. If I change my until code and remove 2>&1 | tee /home/printererror.log like below, then it works fine and starts looping for the failing lp command. But, as you see, I'm then unable to grab the error message after removing tee: until ssh -q root@remotehost 'lp -d Brother_HL_L2350DW_series /root/moht/Printed/`basename "$FILE"`' I want the until to loop for the failing lp command while logging the respective failure messages to the local echo.
The until is considering the exit status of tee. Looking at your code it's not at all obvious why you should need tee, though, so I'd suggest you just remove it until ssh -q root@remotehost 'lp …' >/home/printererror.log 2>&1 do : … done
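If the copy through tee is genuinely needed, another option worth knowing is bash's pipefail, which makes a pipeline report the first failing command's status instead of tee's, so until would see the ssh/lp failure:

```shell
#!/usr/bin/env bash
set -o pipefail   # pipeline exit status = rightmost non-zero status

# Demo: false fails but tee succeeds; with pipefail the pipeline
# as a whole now reports failure.
false | tee /dev/null
echo "pipeline status: $?"
# prints: pipeline status: 1
```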
Get the output of a remote ssh to local
1,526,039,478,000
I want to print the following variables to the standard output file slurm-XXXXX.out produced by Slurm. Right now, I am generating a separate .info file for every job. echo "SLURM Job ID : ${SLURM_JOB_ID}" >> $SLURM_SUBMIT_DIR/$jobName.$JOBID.info echo "SLURM Job name : ${SLURM_JOB_NAME}" >> $SLURM_SUBMIT_DIR/$jobName.$JOBID.info echo "SLURM Node list : ${SLURM_JOB_NODELIST}" >> $SLURM_SUBMIT_DIR/$jobName.$JOBID.info Thanks.
Just echoing these variables, without the redirection to your info file, will send the output to the .out file, e.g. echo "SLURM Job ID : ${SLURM_JOB_ID}"
How do I print slurm variables to standard slurm output?
1,526,039,478,000
I'm currently doing some Bash scripting and have built a tool that returns log information. However, the script sometimes returns a lot of lines (depending on what is going on on the network) and Ctrl+S helps to read the printed lines by "freezing" the output. I've read that the Bash pauses flow-control (XOFF) when pressing that key combination until Ctrl+Q is pressed. Is there a way that I can print a message when pressing Ctrl+S in the Bash before it pauses? For example, when the user presses Ctrl+S a message like Stopped - press Ctrl+Q to proceed (or whatever) appears before the Bash pauses the output?
That behaviour has nothing to do with bash (or any shell for that matters). It's the terminal device driver that pauses output (stops sending data to the terminal) when it receives the ^S character (or whatever is set by stty stop) from the terminal. Applications started by your shell in that terminal won't see that ^S character even if they read from the terminal device. The shell is just another application that runs in the terminal and whose job is just to interpret command lines and start the corresponding command in separate processes. While the commands are running, the shell does nothing, it just waits for them to finish so it can prompt you for another command to enter. Actually, modern shells including bash disable that flow control process when they (their command line editor) interacts with the terminal device. You'll notice that when you press Ctrl+S at the bash prompt, and assuming it's in emacs mode, bash handles it as an incremental search widget invocation. Here for a message to be issued when you press Ctrl+S, it's the terminal device driver (line discipline) that you would need to modify. It feels a bit silly though to be sending something in a reply to the terminal sending "Please stop sending". Another approach could be to wrap your shell session into some pseudo-terminal wrapper that puts the host terminal device in raw mode and offers its own pseudo-terminal device to bash, intercepts those ^S character to write the message. That could be done with GNU screen for instance which is a terminal emulator within a terminal. Add to your ~/.screenrc: bindkey "\023" eval 'hstatus "Stopped - press Ctrl+Q to proceed"' xoff bindkey "\021" eval "hstatus screen" xon Here with the message issued in the hardstatus line of your terminal.
Print a message before the Bash pauses its output
1,526,039,478,000
I have a function that takes a filename. It then runs a command and filters the output (exclusive) between two patterns, then it outputs comma separated values with the filename output of the command. Here is the function and the expected output: get_cons_obs() { local line="${1}" "command" -i "${line}" 2>&1 \ | awk '/^ERROR$/{print "ERROR"} /^START$/{flag=1;next} /^END$/{flag=0} flag' \ | xargs printf "${line},%s\n" } file01,thing01 file01,thing02 file01,thing03 . . . Is it possible to combine awk command and the xargs printf command? I can't seem to append the "flagged" lines with the $line variable.
It sounds like you want to pass the shell variable line into awk so that you can print it when flag is non-zero Ex. awk -v line="$line" '. . . flag {printf "%s,%s\n", line, $0}' See also Use a shell variable in awk
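Put together, a self-contained sketch of the flagged extraction (printf stands in for the real command's output, and the line value is made up):

```shell
line="file01"
# Everything between START and END is printed, prefixed with $line:
printf 'junk\nSTART\nthing01\nthing02\nEND\ntrailing\n' |
    awk -v line="$line" '
        /^ERROR$/ { print "ERROR" }
        /^START$/ { flag = 1; next }
        /^END$/   { flag = 0 }
        flag      { printf "%s,%s\n", line, $0 }
    '
# prints: file01,thing01
#         file01,thing02
```

This removes the need for the trailing xargs printf entirely.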
Append text to flagged output of awk
1,526,039,478,000
I ran across a problem formatting the output during runtime while preserving the format of the code in some scripts. The below works for me, just wanna check that it's reasonable? It seems to dirty up the code a bit, I'd be interested in a better solution - I don't want to create an over complicated bash function that handles every possible use-case for the sake of formatting a few lines. I'd be interested in a portable solution that allows for clean code and predictable output. printf "\nLine one.... Line 2 - output on next line Line 3, output on its own newline" I noticed that printf picks up on the newlines automatically, including your indentation, which allows for easy formatting of output within the file, but can work against your formatting within your script, if you are working in an indented block - if [ $# -ne 1 ]; then printf "\nLine one... Line 2 - output on next line Line 3, output on its own newline" fi Had I indented Lines 2 and 3 appropriately within their block, printf would have picked up the spaces (tabs) and output them in my script message at runtime. Can I do something like this, splitting up lines but preserving my format both within my script and within my output? I'd like to keep my lines under 80 characters in width, within reason, but I want to also use printf formatting as normal, controlling newlines with \n instead of the lack of quotes picking up on my newlines automatically. Is multiple printf statements the only way to do this? Edit / solution code below this line Referring to l0b0's accepted answer below, I used the %b argument opposed to %s, and initialized the 'lines' variable with double quotes instead of single quotes. The %b arg allowed printf to parse the escape sequences within my lines, and double quotes seemed to allow for the passing of local variables I had created earlier to simplify colorizing output for success / error messages. 
RED=$(tput setaf 1) NORMAL=$(tput sgr0) lines=( "\nEnter 1 to configure vim with the Klips repository, any other value to exit." \ "The up-to-date .vimrc config can be found here: https://github.com/shaunrd0/klips/tree/master/configs" \ "${RED}Configuring Vim with this tool will update / upgrade your packages${NORMAL}\n") printf '%b\n' "${lines[@]}" read cChoice To clarify my indentation / spacing within this question - vim is configured to expand tab symbols to spaces with the below lines in .vimrc - " Double-quotes is a comment written to be read " Two Double-quotes ("") is commented out code and can be removed or added " Set tabwidth=2, adjust Vim shiftwidth to the same set tabstop=2 shiftwidth=2 " expandtab inserts spaces instead of tabs set expandtab
If you use Tab characters for indentation (all but extinct nowadays) you can use this trick in here documents: if … then cat <<- EOF first second EOF fi If you replace four spaces with tabs in that command it'll print the same as printf '%s\n' first second. That said, printf '%s\n' … is probably a much easier solution – that way each line is a separate argument to printf: $ lines=('Line one...' 'Line 2 - output on next line' 'Line 3, output on its own newline') $ printf '%s\n' "${lines[@]}" Line one... Line 2 - output on next line Line 3, output on its own newline
Clean formatting of output within bash scripts
1,526,039,478,000
When I type a command, it authenticates with a remote server and does a bunch of things. If it passes, it gives me a bash shell with its environment. My goal is to save the log it prints to my terminal for future inspection. Let's say my command is called target. When I type target, it throws the following msg: Version: V1.94 Options: Date: Thu Jun 6 17:18:39 2019 OS: CentOS release 6.10 (Final) .... [DEV]target> Notice the last line implies I've entered the target shell environment. I've tried echo $(target) > output.log, but it gets stuck until I type Ctrl+C. Environment: CentOS 6.10 Update: Let's say I want to make this script run automatically when I activate my machine. However, it will still be stuck while executing the target command because it entered a shell and didn't leave. #!/bin/bash script -c target output.log FILENAME=output.log aws s3 cp "$FILENAME" s3://logs/
You may want to try script: script -c target output.log This would start the command target and save a transcript of the whole session, until target terminates, into the file output.log. If the command (your target) is several words long, quote the full command. See the manual for script (man script).
Echo shell log while entering and then exit
1,526,039,478,000
I have a LAMP environment with /var/www/html/x which is a MediaWiki website. I have a few more MediaWiki websites, but I'd like to print the version of the x one to the terminal. I need this so I know what the last version was before I manually update MediaWiki. Inside that dir, there's a file named RELEASE-NOTES-1.32 with the text: == MediaWiki 1.32 == === Changes since MediaWiki 1.32.0-rc.2 === MORE_TEXT........... Maybe I should just print line 3 as with awk 'NR>3' /var/www/html/x/RELEASE-NOTES-* but maybe there's a better way to know the full version of a given MediaWiki installation. What would be the best, most stable way to print this?
If the wiki is up, the most robust method is to just ask (via the generator property of the siteinfo API, for example) - changes to that are subject to a deprecation policy, while any internal structure you rely on could change without warning. If that's not an option, you can try parsing out the value of $wgVersion from includes/DefaultSettings.php. E.g. ack '\$'"wgVersion\s*=\s*'([\w\d.-]+)';" --output='$1' mediawiki/includes/DefaultSettings.php Checking the release notes works as well, if you always use proper releases, and only care about the major version.
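If ack isn't available either, plain sed over includes/DefaultSettings.php can do the same job. This is a sketch under the assumption that the line looks exactly like $wgVersion = '1.32.0'; (quoting and spacing can vary between releases), with a stand-in file for the demo:

```shell
# Stand-in for the real includes/DefaultSettings.php:
printf "%s\n" "\$wgVersion = '1.32.0';" > DefaultSettings.php

# Capture whatever sits between the quotes on the $wgVersion line
# ([$] matches a literal dollar sign):
sed -n "s/^[$]wgVersion *= *'\([^']*\)';.*/\1/p" DefaultSettings.php
# prints: 1.32.0
```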
How to print the version of a specific MediaWiki installation to terminal
1,526,039,478,000
In a CentOS bash, (or, if there is a general way in other bashes, like Ubuntu, better), how can I differentiate commands typed by me with the output of the commands? I ask this because when I use a command who outputs a lot in the screen, it is hard to find where it starts. I want to, for example, decorate my commands with a bright color and the output with a darker color, or, indent the output by 4. Which may be like: [root@westerngun ~]# ps aux | grep myname <- brighter xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx <- darker xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Or: [root@westerngun ~]# ps aux | grep myname xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx <- indented by 4 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
For starters, bash is bash, no matter where it's running. The only important thing to note is which version. bash 4 has some new features not available in bash 3 for instance. That said, you can colorize up your prompt fairly easily by setting PS1 ("Prompt String 1") to set one command apart from another. For instance, when I log in on one of my home machines, I see this: When scrolling through my terminal history, I can just key off of the cyan text in my prompt to know when one command ends and another begins. For reference, my PS1 is as follows: \[\e[38;5;14m\]\u\[\e[38;5;8m\]@\[\e[38;5;6m\]\h\[\e[38;5;8m\]:\[\e[38;5;10m\]\w \e[31m${?##0}\n\[\e[$(((($?>0))*31))m\]\$\[\e[0m\] This shows my username, hostname, and CWD in a string which I could copy and paste into (e. g.) an scp command, followed by the exit code of the previous command if it was not zero.
Differentiate/decorate the command line input and output (with color or indentation)
1,526,039,478,000
So I am using getent to reverse lookup a domain name to an IP. I need this IP in the munin-node config. I have the following code, but it just prints the IP and doesn't append to the config file. HOSTIP= getent hosts google.nl | awk '{print $1}' echo "allow ^$HOSTIP" >> /etc/munin/munin-node.conf
Your command is wrong: HOSTIP= getent An assignment must not have a space after the "=" in bash; written that way, it just runs getent with an empty HOSTIP in its environment. And you need to put the command inside command substitution, "$()", to capture its output: HOSTIP=$(getent hosts google.nl | awk '{print $1}') echo $HOSTIP 2800:3f0:4001:801::2003
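The difference between the two forms can be seen without any network at all; echo stands in for the getent pipeline here:

```shell
# Form 1: a space after "=" makes this a temporary (empty)
# environment variable for the echo command - HOSTIP stays unset:
unset HOSTIP
HOSTIP= echo '2800:3f0:4001:801::2003' > /dev/null
echo "form 1: '$HOSTIP'"
# prints: form 1: ''

# Form 2: command substitution actually captures the output:
HOSTIP=$(echo '2800:3f0:4001:801::2003')
echo "form 2: '$HOSTIP'"
# prints: form 2: '2800:3f0:4001:801::2003'
```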
Appending a string to a file with variable content
1,526,039,478,000
Due to my laziness, I have written an extremely "messy" series of scripts in order to auto-initiate my openvpn. The configuration file I am using comes from vpnbook.com/freevpn. To get the password I use: lynx --dump --nolist vpnbook.com/freevpn | grep -i password | sort -u | cut -b 18,19,20,21,22,23,24 The password is returned from the website. Then, I use an expect script to automatically log in (the user name is always vpnbook, but the password changes depending on the week): #!/usr/bin/expect -f spawn openvpn /vpn/vpnbook-ca1-tcp80.ovpn ### my vpn configuration file ### expect "*?sername:*" send -- "vpnbook\r" expect "*?assword:*" ### This next line sends the password that changes by the week, which I... ###...unfortunately need to update manually (for lack of a better method): send -- "weekly-password\r" The problems I am running into when attempting to automatically update the password: 1) I can't call lynx directly from the expect environment. 2) Since the password changes, I am not certain how to replace the unique pass-phrase from the previous week with the updated version in: send -- "unique-previous-password\r" 3) I am not certain how to use the string output from the lynx function as an input variable for editing the password from the previous week (located in my expect script). Quite clearly, I am not "the brightest" programmer (nor am I the most efficient). However, at the end of the day, my only goal is to fully initialize my vpn by typing a single command (as I mentioned before, I am lazy). Any help would be appreciated, thanks!
In your expect/tcl script, you can use: send "$env(PASSWORD)\r" And call your expect script with: PASSWORD=$(elinks -dump...) /path/to/your/expect/script Note that you can use cut -b18-24 for short.
How can I update a unique string in a shell-script with the output of a seperate function?
1,526,039,478,000
I have a command in my csh script that can give at least 2 lines of output but may be more. I am turning these lines into separate variables that I then want to pass to another command. How do I set my second command to loop through and run for each variable outputted from command 1? Below is my what I have that turns the output of command 1 into variables. set vars = `echo "command 1"` set numRows = $#vars if ($numRows < 2) then echo "ERROR! $numRows rows!" exit endif `echo '/command2 -L '$vars[1]'`
Note, I'm using bash, not csh, because I don't hate myself. But you can do all this in csh, you'd just have to translate. If you want to work in bash instead, simply run "bash --login" first and then you're working in bash. To do the sort of task you describe in a shell script, we use pipes and not loops like you would in a programming language. Don't get me wrong, there are looping structures in csh and bash, but for what you've described, we do it differently. If I had a command that produces multiple lines of output and I wanted those lines to be acted on one at a time by another command, I'd connect the two commands with a pipe | , like this maybe: cat file.txt | grep "some words" The grep command processes each line coming in from STDIN, which is linked by the pipe to STDOUT of the cat command. This is a trivial example, but it serves. Another: echo 'one,two,three' | tr ',' '\n' That will replace all the commas with newlines, creating a three line output from the one line input. If I wanted to add an extension to the names of all files in a directory, I might do something like this: cd directory for filename in * do mv "$filename" "$filename.extension" done The * is a file globbing pattern. File globbing is when a pattern on the command line is replaced with any filename in the current directory that matches the globbing pattern. The * means "anything".
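And when the second command really must run once per output line of the first (as with the command2 -L from the question), the usual idiom is a while-read loop fed by a pipe; a sketch with printf standing in for "command 1":

```shell
# printf stands in for "command 1"; each of its output lines
# lands in $row in turn.
printf 'alpha\nbeta\ngamma\n' |
while read -r row
do
    # Real use would be something like: command2 -L "$row"
    echo "processing: $row"
done
```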
How to loop through variables
1,526,039,478,000
Official docker is installed on this Mac El Capitan. While running a bash file, one of the commands is to start the docker daemon if it is not running: [[ $(docker-machine status) == "Stopped" ]] && docker-machine start eval $(docker-machine env) I am guessing it was not running because I got the below output, which I was hoping the above code would handle gracefully. What needs to be done for that to happen? Starting "default"... (default) Check network to re-create if needed... (default) Waiting for an IP... Machine "default" was started. Waiting for SSH to be available... Detecting the provisioner... Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command. Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.100:2376": tls: DialWithDialer timed out You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'. Be advised that this will trigger a Docker daemon restart which will stop running containers. However, running the script a second time, all went well.
Docker on OS X (or macOS as it's now called) runs inside a Linux virtual machine, usually using VirtualBox as the hypervisor. So when you start docker using docker-machine start, it will take a little while for the virtual machine and all of the services on it to start and become available. So to work around this, you could do something like the following: [[ $(docker-machine status) == "Stopped" ]] && docker-machine start sleep 10 eval $(docker-machine env) You may wish to adjust the value passed to sleep if this turns out to be too much or not enough time, as the amount of time it takes for your virtual machine to become available depends on the hardware on your OS X host as well as the virtual hardware allocated to the Docker guest.
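A fixed sleep 10 is a guess about how long the VM needs; a sketch of an alternative (not from the original answer) is to poll until the machine answers. The retry helper below is generic; in the question's scenario the call would be something like retry 12 docker-machine env, demonstrated here with a plain-shell stand-in since docker-machine is not assumed to be present:

```shell
# Generic retry helper (a sketch, not part of the original answer):
# run a command until it succeeds or the attempt limit is reached.
# In the question's scenario the call would be something like
#   retry 12 docker-machine env >/dev/null 2>&1 && eval "$(docker-machine env)"
retry() {
    max=$1; shift
    n=1
    while [ "$n" -le "$max" ]; do
        if "$@"; then
            return 0
        fi
        n=$((n + 1))
        sleep 1
    done
    return 1
}

# Demonstrated with a plain command since docker-machine is not assumed here:
retry 3 true && echo "command eventually succeeded"
# prints: command eventually succeeded
```

This waits only as long as needed and gives up cleanly after the attempt limit instead of silently proceeding.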
shell script to ensure Docker Daemon is running - Official docker on OSx
1,526,039,478,000
I want the user to be able to name a file to be created, and then have the output of the script redirected to that file. The script will create a long listing of a directory and count the number of files in that directory. This is what I have so far: echo -n "Please enter the name of a file to be created: " read FILENAME touch FILENAME ls -l exampledir echo ls -l exampledir | wc -l I might not be using read correctly, I'm not too sure. Edit I figured most of it out, here is the new code: echo -n "Please enter the name of a file to be created: " read FILENAME ls -l Assign_7 > FILENAME echo >> FILENAME ls -l Assign_7 | wc -l >> FILENAME The only thing I can't figure out is how to get the file to be named what the user entered. It has the right stuff inside, but right now the file is always called FILENAME 2nd EDIT Figured it out. Just needed to add $ before each FILENAME
Quite simple: #!/bin/bash echo -n "Please enter the name of a file to be created: " read FILENAME ls -l Assign_7 > $FILENAME echo >> $FILENAME ls -l Assign_7 | wc -l >> $FILENAME This is called Parameter Expansion, see man bash for a complete explanation. Also, there is a bit shorter form that simplifies the script a bit and makes bash open the output file only once: #!/bin/bash echo -n "Please enter the name of a file to be created: " read FILENAME exec > $FILENAME ls -l Assign_7 echo "" ls -l Assign_7 | wc -l This latter form redirects the current shell's stdout to a file by executing the exec > $FILENAME command, thus the output of all subsequent commands the shell executes is also directed to that file.
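The asker's second edit is the whole point: without the $, the shell treats FILENAME as a literal file name rather than a variable. A quick self-contained demonstration (the name report.txt is made up):

```shell
# Demonstration (made-up file names) of why the $ matters:
# "> FILENAME" creates a file literally named FILENAME, while
# > "$FILENAME" expands to the value stored in the variable.
cd "$(mktemp -d)"
FILENAME="report.txt"
echo "one" > FILENAME       # wrong: redirects to a file called FILENAME
echo "two" > "$FILENAME"    # right: redirects to report.txt
ls
```

The ls at the end shows both files side by side: the literal FILENAME and the intended report.txt.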
Redirecting the output of a script into a file? [closed]
1,526,039,478,000
So my goal is to answer my other question I am working on. Which is Connect to Open Wifi. Currently I am getting close, as I know there is only one Wifi connection, I would like to export the results of sudo iwlist wlan0 scan | grep ESSID to a text file. The output currently is: ESSID: "MyNetworkSSID" # Which would end up being in the file What I want is a text file that says only "MyNetworkSSID"
Don't bother with grep. Pipe it directly to awk as follows: $ sudo iwlist wlan0 scan | awk -F ':' '/ESSID:/ {print $2;}' "BTWifi-with-FON" "BTHub5-FTQN" "BTWifi-X" "4GEEOnetouchY800z_2DEB" This carries out a regexp search for ESSID: and then splits that line on a colon (-F ':'), after which it prints the second element of that split (print $2). Or, pipe it through perl: $ sudo iwlist wlan0 scan | perl -nle '/ESSID:(.*)$/ && print $1' This causes perl to run the command (-e) on each line of the input (-n) and adds a line feed at the end of each line (-l). The command is a regex which searches for ESSID: and captures the remaining line ((.*)$). On finding this match, it prints the capture (&& print $1).
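Since the question also wants the surrounding quotes gone from the file, here is the same awk approach run on simulated iwlist output (the sample lines are made up), with a gsub() added to strip the double quotes before redirecting into the file:

```shell
# Simulated "iwlist" output (the sample lines are made up), filtered as
# in the answer, with gsub() stripping the double quotes before the
# result is redirected into the file.
printf '          Cell 01 - Address: 00:11:22:33:44:55\n                    ESSID:"MyNetworkSSID"\n' |
    awk -F ':' '/ESSID:/ { gsub(/"/, "", $2); print $2 }' > essid.txt
cat essid.txt   # prints: MyNetworkSSID
```

Swapping the printf for the real sudo iwlist wlan0 scan leaves the awk part unchanged.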
sudo iwlist wlan0 scan | grep ESSID > essid.txt (How do I export without the word ESSID in txt file)
1,526,039,478,000
Is there a way to append my_computer.sh with the command on line 330 by only referencing the line number? or with the fewest keystrokes possible? 326 pip install -U pip 327 man pip 328 cat >> ~/my_computer.sh 329 pip -V 330 sudo pip install virtualenv 331 ls 332 vi .gitignore 333 ll 334 virtualenv ENV 335 ll 336 vi .gitignore 337 source bin/activate 338 cd ENV/ 339 source bin/activate 340 history
It depends on your shell and the supported features. With bash, e.g., you can type: echo !330 >> my_computer.sh History expansion replaces !330 with the text of the command at history line 330 before echo runs, so that command is appended to the file.
append file with history line number values
1,526,039,478,000
What I want to do is to ask the user to enter one line and send it to standard output. I tried to get the input to carry out a conditional question, but there's no way to expand an input (in this case it is input 0, but it can be output 1 or error 2). In this scenario I want to ask and read the user input from standard input and send it to standard input/output, and after getting this, verify if it has something (if it is non-zero). The code looks like (maybe it looks confused; in this case you can skip it and use the description above) #!/usr/bin/bash read>&0 if [[ -n $my_input_expanded ]] ; then echo "hello word" fi Here read plays the role of asking the user for the input (it can be another command, but I don't know which one can be used here), and the variable $my_input_expanded was some operation like redirection of input 0> (I know this is just for files, but it's something like this), but instead of redirection it is expansion of the input.
You really should study a beginner's guide rather than trying to guess at the syntax. Shell script is strange. Really. Here's a version that does what you've described: #!/bin/sh # read a line of text into the variable "$line" IFS= read -r line # check if the result contains any text if [ -n "$line" ] then # it does, so output a simple response including the given text echo "hello: $line" fi
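Wrapping the same logic in a function makes both branches easy to exercise by piping input to it (a sketch, not part of the original answer):

```shell
# The answer's logic wrapped in a function so both branches can be
# exercised by piping input to it.
greet() {
    IFS= read -r line
    if [ -n "$line" ]; then
        echo "hello: $line"
    fi
}
printf 'world\n' | greet    # prints: hello: world
printf '\n' | greet         # empty line: prints nothing
```

The -n test is what distinguishes "the user typed something" from "the user just pressed Enter".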
Expand input, output or error
1,282,094,234,000
What is the Fedora equivalent of the Debian build-essential package?
The closest equivalent would probably be to install the below packages: sudo dnf install make automake gcc gcc-c++ kernel-devel However, if you don't care about exact equivalence and are ok with pulling in a lot of packages, you can install all the development tools and libraries with the below command. sudo dnf groupinstall "Development Tools" "Development Libraries" On Fedora versions older than 32 you will need the following: sudo dnf groupinstall @development-tools @development-libraries
What is the Fedora equivalent of the Debian build-essential package?
1,282,094,234,000
I'd like to compress and package everything, including files and folders in current directory, into a single ZIP file on Ubuntu. What would be the most convenient command for this (and name of the tool needed to be installed if any)? Edit: What if I need to exclude one folder or several files?
Install zip and use zip -r foo.zip . You can use the flags -0 (none) to -9 (best) to change the compression rate Excluding files can be done via the -x flag. From the man-page: -x files --exclude files Explicitly exclude the specified files, as in: zip -r foo foo -x \*.o which will include the contents of foo in foo.zip while excluding all the files that end in .o. The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels. Also possible: zip -r foo foo -x@exclude.lst which will include the contents of foo in foo.zip while excluding all the files that match the patterns in the file exclude.lst. The long option forms of the above are zip -r foo foo --exclude \*.o and zip -r foo foo --exclude @exclude.lst Multiple patterns can be specified, as in: zip -r foo foo -x \*.o \*.c If there is no space between -x and the pattern, just one value is assumed (no list): zip -r foo foo -x\*.o See -i for more on include and exclude.
Zip everything in current directory
1,282,094,234,000
In the Debian family of OSes, dpkg --search /bin/ls gives: coreutils: /bin/ls That is, the file /bin/ls belongs to the Debian package named coreutils. (see this post if you are interested in a package containing a file that isn't installed) What is the Fedora equivalent?
You can use rpm -qf /bin/ls to figure out what package your installed version belongs to: [09:46:58] ~ $ rpm -qf /bin/ls coreutils-8.5-7.fc14.i686 [09:47:01] ~ $ Update: Per your comment, the following should work if you want only the name of the package (I just got a chance to test): [01:52:49] ~ $ rpm -qf /bin/ls --queryformat '%{NAME}\n' coreutils [01:52:52] ~ $ You can also use dnf provides /bin/ls to get a list of all available repository packages that will provide the file: # dnf provides /bin/ls Last metadata expiration check: 0:17:06 ago on Tue Jun 27 18:04:08 2017. coreutils-8.25-17.fc25.x86_64 : A set of basic GNU tools commonly used in shell scripts Repo : @System coreutils-8.25-17.fc25.x86_64 : A set of basic GNU tools commonly used in shell scripts Repo : updates coreutils-8.25-14.fc25.x86_64 : A set of basic GNU tools commonly used in shell scripts Repo : fedora
Which Fedora package does a specific file belong to?
1,282,094,234,000
I am currently looking for a website or a tool that would allow me to compare the package state of a particular software in different Linux distributions. For instance, which version of gimp is provided by Mint, Ubuntu, Debian Sid and Fedora 18? An immediate interest would be to be able to avoid reinventing the wheel when packaging software (for instance re-use patches from other distros).
The whohas package may help you. Example % whohas pidgin|grep "pidgin " MacPorts pidgin 2.10.6 https://trac.macports.org/browser/trunk/dports/net/pidgin/Portfile Slackware pidgin 2.7.11-i486-3sl slacky.eu Slackware pidgin 2.7.0-i486-1 salixos.org Slackware pidgin 2.7.0-i486-1 slackware.com OpenBSD pidgin 2.9.0-gtkspell 8.3M OpenBSD pidgin 2.9.0 8.3M 16-Aug-201 Mandriva pidgin 2.10.6-0.1.i586 http://sophie.zarb.org/rpms/a6ec6cd30f5fa024d14549eea375dba4 Fink pidgin 2.10.6-1 http://pdb.finkproject.org/pdb/package.php/pidgin FreeBSD pidgin 2.10.6 net-im http://www.freebsd.org/cgi/pds.cgi?ports/net-im/pidgin FreeBSD e17-module-everything-pidgin 20111128 x11-wm http://www.freebsd.org/cgi/pds.cgi?ports/x11-wm/e17-module-everything-pidgin NetBSD pidgin 2.10.6nb5 10M 2012-12-15 chat http://pkgsrc.se/chat/pidgin Ubuntu pidgin 1:2.10.0-0ubuntu2. 695K oneiric http://packages.ubuntu.com/oneiric/pidgin Ubuntu indicator-status-provider-pidgin 0.5.0-0ubuntu1 7K oneiric http://packages.ubuntu.com/oneiric/indicator-status-provider-pidgin Debian pidgin 2.7.3-1+squeeze3 706K stable http://packages.debian.org/squeeze/pidgin Debian pidgin 2.10.6-2 591K testing http://packages.debian.org/wheezy/pidgin Debian indicator-status-provider-pidgin 0.6.0-1 33K testing http://packages.debian.org/wheezy/indicator-status-provider-pidgin Source Mage funpidgin 2.5.0 test Source Mage funpidgin 2.5.0 stable Source Mage pidgin 2.10.6 test Source Mage pidgin 2.10.5 stable Gentoo pidgin 2.10.6 http://gentoo-portage.com/net-im/pidgin Gentoo pidgin 2.10.4 http://gentoo-portage.com/net-im/pidgin
Is there a tool/website to compare package status in different Linux distributions?
1,282,094,234,000
How can I write a simple derivation to package a program for nix and how can I create a PR to include it in nixpkgs? (I am writing this as I can't find simple explanations)
NB: this answer is not yet fully complete, but it's already a good starting point. I plan to add more language-specific stuff later (or maybe to create one question per language to keep this answer… """reasonably""" short). Here are a few references: Quickstart in the manual : we will go in more details here The Nix Pills (and the section 6 in particular) : they are great but take a bottom-up approach by first explaining all the internals of nix that I find confusing… you may not need to learn all of that to write your first derivation. Here we will take a top-bottom approach, i.e. give the function that most users want to use directly and explain how it works after. The documentation for stdenv which is quite nice but certainly contains a lot of information and few (incomplete) examples Another answer of mine, but specific to building pre-compiled binaries Your first derivation A derivation is, informally, a recipe to build a program. When you cook, you need some ingredients (a.k.a. sources and dependencies in nix) and some steps to combine your ingredients into a cake (a.k.a. programs…). A simple C program So let's start with a simple example, the simplest C program that I can imagine. You can write it in a file program.c (for this example) in any folder you like: #include <stdio.h> int main() { printf("Hello, World!\n"); return 0; } The derivation Then, we need to say to nix how to compile this program.
So create a file derivation.nix: { stdenv }: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; src = ./.; nativeBuildInputs = [ ]; buildInputs = [ ]; buildPhase = '' gcc program.c -o myprogram ''; installPhase = '' mkdir -p $out/bin cp myprogram $out/bin ''; } This describes a function, where the inputs are the dependencies (the "ingredients" of the cake; here only stdenv that provides useful utilities) and that outputs a derivation thanks to stdenv.mkDerivation: informally, you can imagine that this process will output a folder with all the compiled files. I provided some information to mkDerivation: the source: here the sources are in the current folder, but you can also take sources from the web as we will see later the dependencies nativeBuildInputs needed to compile the program (gcc is always included by default… so you don't need to specify anything here) the dependencies buildInputs needed to run the program (you will typically put the libraries here) the instructions buildPhase to build the program (it is a bash script). At the beginning of this phase you are dropped in a folder containing the sources the instructions installPhase to describe how to "install" the program (see below). There are actually many more phases (to uncompress the sources, to patch, to configure…) but we don't need them for this example. What is the installPhase doing? The install phase is here to say where the final executable/libraries/assets/… should be located. In a typical Linux environment, the binaries are usually copied in /bin, /usr/bin or /usr/local/bin, the libraries in /lib or /lib64, the assets in /share… and it can quickly be a mess when all programs put their own stuff at the same place.
In Nix all programs have their own folder in a path like /nix/store/someUniqueHash-programName-version (the value of this path being set to $out in the installPhase) and the binaries then go to $out/bin, the libraries to $out/lib, the assets to $out/share… reproducing the typical Linux folder hierarchy. So if you are not sure where you should put a file, you surely want to check where you would put it in a normal linux distribution and prepend $out/ to the path (there are a few exceptions, like we use $out/bin instead of $out/usr/local/bin since there is no longer a reason to have a local folder). Note that many build systems (cmake…) have a variable like PREFIX to say where the program should be installed: PREFIX might typically be / or /usr/local, and this will install binaries to PREFIX/bin etc. In this case, we can often simply set PREFIX=$out and run the usual compilation commands. When you install the program, Nix will then do the job of properly creating links to the files of the installed software, for instance in NixOs the binaries installed globally are linked in /run/current-system/sw/bin $ ls /run/current-system/sw/bin -al | grep firefox lrwxrwxrwx 1 root root 70 janv. 1 1970 firefox -> /nix/store/152drilm2qhjimzfx8mch0hmqvr27p29-firefox-99.0.1/bin/firefox Therefore in our example, in the install phase we just need to create the folder $out/bin and copy the binary obtained during the build phase… And it's exactly what we did!
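As a stand-alone illustration of the PREFIX convention itself (outside nix, assuming make is available; the hello.sh and Makefile below are made up, and $out here is just a temporary directory standing in for the nix output path):

```shell
# Stand-alone illustration of the PREFIX convention: a made-up
# Makefile installs a made-up script under $(PREFIX)/bin, and $out
# here is just a temporary directory standing in for the nix output.
out=$(mktemp -d)
work=$(mktemp -d); cd "$work"
printf '#!/bin/sh\necho "installed under PREFIX"\n' > hello.sh
{
    printf 'PREFIX ?= /usr/local\n'
    printf 'install:\n'
    printf '\tmkdir -p $(PREFIX)/bin\n'
    printf '\tcp hello.sh $(PREFIX)/bin/hello\n'
    printf '\tchmod +x $(PREFIX)/bin/hello\n'
} > Makefile
make install PREFIX="$out"
"$out/bin/hello"   # prints: installed under PREFIX
```

Setting PREFIX at install time is exactly what mkDerivation-style builds do with PREFIX=$out: the Makefile itself needs no change.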
Then, just run $ nix-build At the end, you should have a new folder result present, and this folder is linked to the $out folder of your derivation: $ ls -al | grep result lrwxrwxrwx 1 leo users 55 sept. 13 20:59 result -> /nix/store/xi0hx472hzykl6xjw0hnmh0zjyp6sc52-program-1.0 You can then execute the binary using: $ ./result/bin/myprogram Hello, World! Congratulations, you made your first derivation! We will see below how to package more complex applications. But before, let's see how to install the package and contribute to nixpkgs. Can I install it on my system? You can of course install this derivation. Copy your files (except default.nix, it is not needed) in /etc/nixos and change your list of installed packages into: environment.systemPackages = with pkgs; [ (callPackage ./derivation.nix {}) ] Here you go! You can also install it imperatively on any system using $ nix-env -i -f default.nix How can I submit my derivation to nixpkgs? All package expressions in the nixpkgs project are located in https://github.com/NixOS/nixpkgs and you can add your own package there! To do so, first fork (to do Pull Requests) and clone your repository. Then copy the derivation.nix in pkgs/CATEGORY/PACKAGE/default.nix where CATEGORY is appropriately chosen depending on the range of application of your program and PACKAGE is the name of your program. Of course, the nixpkgs repo does not contain the sources of the program, so you should change the source attribute to point to an external source (see below). Then, the list of all programs available in nixpkgs is located in pkgs/top-level/all-packages.nix so you should add a line: myprogram = callPackage ../CATEGORY/PACKAGE { }; in this file (programs are sorted alphabetically). To test it, go to the root of the repo and call $ nix-build -A myprogram it should compile your program and create a result folder to test it as before. Once it is done, commit and submit your work as a pull request! 
If you are not familiar with git or want more details, you might like this thread https://discourse.nixos.org/t/how-to-find-needed-librarys-for-closed-source-bin-applications/39118/43?u=tobiasbora What if the sources are online? Most of the time you will be trying to download sources that are hosted online. No problem, just change your src attribute for instance if you download from github (see the list of fetchers here): { stdenv, lib, fetchFromGitHub }: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; # For https://github.com/myuser/myexample src = fetchFromGitHub { owner = "myuser"; repo = "myexample"; rev = "v${version}"; # If there is a release like v1.0, otherwise put the commit directly sha256 = lib.fakeSha256; # dummy hash: the first build will fail with an error that prints the correct hash to put here (you can also use nix-prefetch-git) }; buildPhase = '' gcc program.c -o myprogram ''; installPhase = '' mkdir -p $out/bin cp myprogram $out/bin ''; } Make sure to change the sha256 line with your own hash (needed to verify that the downloaded files are correct). lib.fakeSha256 is a dummy hash, so the first time you compile it will give an error saying that the hash is wrong and printing the true hash. So replace lib.fakeSha256 with this value (there are also tools like nix-prefetch-git but I have to admit that I don't use them). WARNING: if you instead use the hash of another program already in the cache, it will not give any error; instead it will pick the source of the other package!
Note also that nix will automatically try to do the right thing with the source, in particular it will automatically unpack compressed files downloaded with src = fetchurl { url = "http://example.org/libfoo-source-${version}.tar.bz2"; sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m"; }; How to use libraries Now, let us complicate the program a bit by using a library, ncurses for this example. We will use the ncurses hello-world program: #include <ncurses.h> int main(int argc, char ** argv) { initscr(); // init screen and sets up screen printw("Hello World"); // print to screen refresh(); // refreshes the screen getch(); // pause the screen output endwin(); // deallocates memory and ends ncurses return 0; } If you compile this program directly as we did above you will get an error program.c:1:10: fatal error: ncurses.h: No such file or directory Which is expected as we have not added ncurses as a dependency. To do that, add the ncurses library to the (space separated) list buildInputs (you must also add it in the first line in the input dependencies): doing so will make sure that the binaries of the programs in buildInputs are available, that the compiler searches for the header files in the include subdirectory… Also update the compilation command with -lncurses: { stdenv, ncurses }: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; src = ./.; buildInputs = [ ncurses ]; buildPhase = '' gcc -lncurses program.c -o myprogram ''; installPhase = '' mkdir -p $out/bin cp myprogram $out/bin ''; } Compile and run the program as before, that's it! Debug a program, part 1: nix-shell It can sometimes be annoying to debug a program using nix-build as nix-build will not cache the compilation: every time it fails, it restarts the compilation from scratch the next time (this is needed to ensure reproducibility). However in practice this can be a bit annoying… nix-shell has been created (also) to solve this problem.
If you run the gcc command to compile the above file it will fail directly as gcc and the ncurses library are not installed globally (and it's a feature, for instance it allows multiple projects to use different versions of the same library). To create a shell in which these are installed, just run nix-shell, it will automatically check what the dependencies of the program are: $ nix-shell $ gcc -lncurses program.c -o myprogram $ ./myprogram We will see later more advanced usages of nix-shell. Save time using the default phases and hooks The recipe to compile a program is often the same, as many programs are compiled simply using: $ ./configure --prefix=$out $ make $ make install So nix will by default try the above commands (and more, as it tries to patch, test…), that's why many programs in nixpkgs do not really bother writing any phase. Most of the phases are really configurable: you can for instance enable/disable some parts of the phases, provide some parameters like makeFlags = [ "PREFIX=$(out)" ]; to add flags to the makefile… The whole documentation of these phases is provided in the manual, more specifically in this subsection. If you really want to check what is being run, you can check the function genericBuild in the file pkgs/stdenv/generic/setup.sh that then calls the default phases described above, unless they are overwritten by the derivation. You can also directly read the used code from the nix-shell as we will see later. Note that these default phases can also be overwritten by dependencies. For instance, if your program uses cmake, adding nativeBuildInputs = [ cmake ]; will automatically adapt the configure phase to use cmake (this can also be configured as documented here). Similar behavior will occur with scons, ninja, meson… More generally, nix defines many "hooks" that will run before or after a given phase, in order to modify the build process. Just including them in nativeBuildInputs should be enough to trigger them.
Most hooks are documented here, among others you have: autoPatchelfHook that automatically patches (often proprietary) binaries to make them usable in nix (see also my other answer here) For instance, we can use CMake in our (ncurses) program as follows: create a file CMakeLists.txt containing the usual cmake rules to compile a program: cmake_minimum_required(VERSION 3.10) # set the project name project(myprogram) # Configure curses as a dependency find_package(Curses REQUIRED) include_directories(${CURSES_INCLUDE_DIR}) # add the executable add_executable(myprogram program.c) # Link the curses library target_link_libraries(myprogram ${CURSES_LIBRARIES}) # Explains how to install the program install(TARGETS myprogram DESTINATION bin) It is now possible to simplify our derivation.nix a lot: { stdenv, ncurses, cmake }: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; src = ./.; buildInputs = [ ncurses cmake ]; } Debug a program with nix-shell and internals of nix: part 2 NB: this section is not necessary to understand the rest, you can skip it safely. We saw above how nix-shell could be used to drop us in a shell with all the required dependencies to save compilation time by exploiting caching. In this shell, one can of course run the usual commands to compile a program as before, but it is sometimes good to run the exact same commands as the ones run by the nix builder. This is also the opportunity to learn a bit more about the internals of nix (we also refer to the Nix pills for more details and to the wiki). When you write a derivation, nix will derive from it a .drv file that explains in a simple json format how to build the package.
To see that file, you can run: $ nix-shell # (or "nix-shell -A myprogram" if you run it from nixpkgs) $ nix show-derivation $(nix-instantiate | sed 's/!.*//') { "/nix/store/4ja3vvab4wswalczr7k0lw17dxb69nf7-program-1.0.drv": { "outputs": { "out": { "path": "/nix/store/qv8s0lm7w0az90xjc90dy7rvjqmic9zz-program-1.0" } }, "inputSrcs": [ "/nix/store/9krlzvny65gdc8s7kpb6lkx8cd02c25b-default-builder.sh", "/nix/store/zrpp5wmrq39ylqy73pbk3plvw5sx59vh-example" ], "inputDrvs": { "/nix/store/1av43alhcb8a894sz2cnnf9aldfdyb0h-stdenv-linux.drv": [ "out" ], "/nix/store/6pj63b323pn53gpw3l5kdh1rly55aj15-bash-5.1-p16.drv": [ "out" ], "/nix/store/p6y4zvhi9vjg8h7hli0ix9jxkl225ahk-ncurses-6.3-p20220507.drv": [ "dev" ], "/nix/store/w6jf92i16rghx0jr4ix33snq4d237l8i-cmake-3.24.0.drv": [ "out" ] }, "system": "x86_64-linux", "builder": "/nix/store/1b9p07z77phvv2hf6gm9f28syp39f1ag-bash-5.1-p16/bin/bash", "args": [ "-e", "/nix/store/9krlzvny65gdc8s7kpb6lkx8cd02c25b-default-builder.sh" ], "env": { "buildInputs": "/nix/store/kn8gbpi8bfxkzg6slyskz4y0d2pkl0xk-ncurses-6.3-p20220507-dev /nix/store/xjg2fzw513iig1cghd4mvcq5fh2cyv4y-cmake-3.24.0", "builder": "/nix/store/1b9p07z77phvv2hf6gm9f28syp39f1ag-bash-5.1-p16/bin/bash", "cmakeFlags": "", "configureFlags": "", "depsBuildBuild": "", "depsBuildBuildPropagated": "", "depsBuildTarget": "", "depsBuildTargetPropagated": "", "depsHostHost": "", "depsHostHostPropagated": "", "depsTargetTarget": "", "depsTargetTargetPropagated": "", "doCheck": "", "doInstallCheck": "", "mesonFlags": "", "name": "program-1.0", "nativeBuildInputs": "", "out": "/nix/store/qv8s0lm7w0az90xjc90dy7rvjqmic9zz-program-1.0", "outputs": "out", "patches": "", "propagatedBuildInputs": "", "propagatedNativeBuildInputs": "", "src": "/nix/store/zrpp5wmrq39ylqy73pbk3plvw5sx59vh-example", "stdenv": "/nix/store/bj5n3k01mq8bysw0rcdm7jxvhc620pd3-stdenv-linux", "strictDeps": "", "system": "x86_64-linux", "version": "1.0" } } } The exact output is not really important, but note that there are a 
few important parts. First, the derivation specifies the output folder, the sources and dependencies, and some environment variables that will be available during the build, which nix-shell automatically populated for us. See the "out" attribute? You already have it properly configured thanks to nix-shell: $ echo $out /nix/store/qv8s0lm7w0az90xjc90dy7rvjqmic9zz-program-1.0 and more importantly these lines: "builder": "/nix/store/1b9p07z77phvv2hf6gm9f28syp39f1ag-bash-5.1-p16/bin/bash", "args": [ "-e", "/nix/store/9krlzvny65gdc8s7kpb6lkx8cd02c25b-default-builder.sh" ], This means that to produce the outputs, nix will simply run the builder /nix/store/…/bin/bash (here it's simply the bash interpreter) with the arguments -e /nix/store/9krlzvny65gdc8s7kpb6lkx8cd02c25b-default-builder.sh This file is quite simple: $ cat /nix/store/9krlzvny65gdc8s7kpb6lkx8cd02c25b-default-builder.sh source $stdenv/setup genericBuild And if you type $ cat $stdenv/setup you will realize that it is exactly equal to the pkgs/stdenv/generic/setup.sh file that configured the default phases! Therefore, in the nix-shell, you can run all the phases at once using something like this (creating a different $out folder allows you not to write in the read-only /nix/store): cd empty_directory # important to make sure the "source" folder does not already exist, otherwise you get an error like "unpacker appears to have produced no directories". Sources will be unpacked in a subdirectory, and it must be removed every time you restart the download process (otherwise we get the above error). export out=/tmp/out # Create a temporary folder to put the output of the derivation set -x # Optional: to display all the command lines, useful to debug sometimes source $stdenv/setup # In order to load the default phase of the derivation set +e # Do not quit the shell on error/Ctrl-C ($stdenv/setup adds a "set -e") genericBuild # start the build process.
You can also just specify a few phases to run by replacing the last line with: phases="buildPhase" genericBuild To get the list of phases, you can do: echo "$phases" If it is empty, then the default is given for instance via $ typeset -f genericBuild | grep 'phases=' phases="${prePhases:-} unpackPhase patchPhase ${preConfigurePhases:-} configurePhase ${preBuildPhases:-} buildPhase checkPhase ${preInstallPhases:-} installPhase ${preFixupPhases:-} fixupPhase installCheckPhase ${preDistPhases:-} distPhase ${postPhases:-}" Packaging other languages The instructions provided above certainly work for many languages and cases, but some languages provide some other tools to deal with their own requirements in terms of environment variables and dependencies (for instance we can't really use pip to install python dependencies). It is hard to list on this page all the existing languages, so here is some generic advice to follow: The nixpkgs manual contains basically one section per language, and it is therefore usually a good place to start The nixos wiki sometimes contains additional information for a given language. Check if it helps The nixpkgs repo contains thousands of programs… someone has certainly packaged a program like the one you are trying to package before. Get inspiration, using search online (sometimes it seems to miss some entries) or rg (a nicer grep) to search in your local copy to find derivations using the tools you want to use. For simplicity I will however put below some cases that you may often encounter. How to package (often proprietary) binaries I already made a quite extensive answer here. You are certainly interested in solution 4 (autoPatchElf) or 5-6 (buildFHSUserEnv)… Basically copy your binaries to $out/bin and if you are lucky adding autoPatchelfHook in your nativeBuildInputs should be enough (if the program has assets you can also copy it to $out/opt and put in $out/bin some links or scripts that call the programs in $out/opt).
How to package shell scripts Let's consider the file myshellscript.sh: #!/usr/bin/bash echo "Hello, world" Just use { stdenv }: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; src = ./.; installPhase = '' mkdir -p $out/bin cp myshellscript.sh $out/bin chmod +x $out/bin/myshellscript.sh # not needed if the file is already executable ''; } and the bash script will automatically be patched by the patchShebangsAuto hook that is present by default in the fixup phase. Read further to see how to use trivial builders to make this derivation even smaller! Wrappers, or how to add executables Let's say that our package needs some executables to work, say cowsay. Because nix tries to maintain "hermeticity" (a.k.a. purity) between packages to limit conflicts as beautifully explained here (maybe different programs need different versions of cowsay), you cannot assume that cowsay will be "available", i.e. present in the $PATH environment variable. Therefore you need to add cowsay to this variable right before calling your program. This is done via a so-called "wrapper" replacing the original program, that will set up $PATH (and more environment variables if needed) before calling the actual program. Note that we will later see tools that make this step even simpler for simple bash scripts, but wrappers are useful in many contexts and it's surely not a waste of time to learn how to use them now. So let's package this myshellscript.sh script: #!/usr/bin/bash cowsay "My first wrapper!" 
using this derivation.nix: { lib, stdenv, cowsay, makeBinaryWrapper}: stdenv.mkDerivation rec { name = "program-${version}"; version = "1.0"; src = ./.; nativeBuildInputs = [ makeBinaryWrapper # You can also use makeWrapper to use a bash wrapper, but this won't be compatible with macOS, which expects binary loaders ]; buildInputs = [ cowsay ]; installPhase = '' mkdir -p $out/bin cp myshellscript.sh $out/bin chmod +x $out/bin/myshellscript.sh wrapProgram $out/bin/myshellscript.sh \ --prefix PATH : ${lib.makeBinPath [ cowsay ]} ''; } Note how we added the input cowsay and how we created the wrapper using: wrapProgram $out/bin/myshellscript.sh \ --prefix PATH : ${lib.makeBinPath [ cowsay ]} ''; in order to add cowsay to the path. Now if you nix-build (do not forget the usual default.nix file) you can see that ./result/bin/myshellscript.sh is now a binary file (that you can still somehow read with less)… since it is hard to see exactly what this file is doing, you may want to use makeWrapper instead of makeBinaryWrapper, but be aware that it won't work on macOS for "security" reasons. Here you would read something like: $ cat result/bin/myshellscript.sh #! /nix/store/1b9p07z77phvv2hf6gm9f28syp39f1ag-bash-5.1-p16/bin/bash -e PATH=${PATH:+':'$PATH':'} PATH=${PATH/':''/nix/store/mrl0n0kphz0xwvv8qbk2xyz2x1pr2f76-cowsay-3.04/bin'':'/':'} PATH='/nix/store/mrl0n0kphz0xwvv8qbk2xyz2x1pr2f76-cowsay-3.04/bin'$PATH PATH=${PATH#':'} PATH=${PATH%':'} export PATH exec -a "$0" "/nix/store/xrz4cv51nd8n1bawfw5i6vd4yizzmajb-program-1.0/bin/.myshellscript.sh-wrapped" "$@" This code is a bit complicated but what it basically does is add the cowsay binary directory at the beginning of the path, and then execute the shell file that has been moved to $out/bin/.myshellscript.sh-wrapped by the wrapProgram tool. It is time to test it now: $ ./result/bin/myshellscript.sh ___________________ < My first wrapper! > ------------------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || Good! 
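The generated wrapper boils down to a small pattern you can reproduce by hand. This sketch (all /nix/store paths are invented stand-ins) lays out a wrapper and a hidden .…-wrapped target the same way wrapProgram does:

```shell
#!/usr/bin/env bash
# Hand-rolled wrapper mimicking wrapProgram's layout (paths are made up).
mkdir -p demo/bin

# the "real" program, moved aside under a dotted name
cat > demo/bin/.hello-wrapped <<'EOF'
#!/usr/bin/env bash
echo "PATH starts with: ${PATH%%:*}"
EOF

# the wrapper that takes the program's place
cat > demo/bin/hello <<'EOF'
#!/usr/bin/env bash
PATH="/nix/store/fake-cowsay-3.04/bin:$PATH"
export PATH
exec -a "$0" "$(dirname "$0")/.hello-wrapped" "$@"
EOF

chmod +x demo/bin/.hello-wrapped demo/bin/hello
./demo/bin/hello
```

Running ./demo/bin/hello shows that the wrapped program sees the prepended store path first in $PATH, which is exactly how the wrapped myshellscript.sh finds cowsay.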
Note that you can find the various options to modify the wrappers here. Even shorter derivations thanks to trivial builders Sometimes it can be a bit annoying to write a stdenv.mkDerivation with the install phase, the wrappers etc… so trivial builders have been created to wrap stdenv.mkDerivation into a simpler function. They are documented here in the manual. We won't go through all of them (there are some to create new files, scripts, merge derivations…) but we will use one to simplify our code running cowsay. This way we can simply use this derivation: { lib, stdenv, cowsay, writeShellApplication }: writeShellApplication { name = "mycowsay"; runtimeInputs = [ cowsay ]; text = '' cowsay "My first wrapper!" ''; } and it will create a bash script in $out/bin/mycowsay with the appropriate $PATH based on runtimeInputs. If you prefer instead to write the script in an external file as before, you can do instead: text = builtins.readFile ./myshellscript.sh; How to package python scripts TODO, but see the related topics: for a single script, you can use: https://stackoverflow.com/questions/43837691/how-to-package-a-single-python-script-with-nix for a whole python application and/or module, you certainly want to use buildPythonPackage (which is based on mkDerivation with a few extras), and write a proper setup.py file, see e.g. the entry I added in the wiki. if you also package bash scripts that might call python, you will also want to wrap them like: postFixup = '' wrapProgram "$out/bin/mssql-cli" \ --prefix PYTHONPATH : "$PYTHONPATH" ''; Inside a buildPythonPackage, binaries (but not scripts) should (to be verified) be automatically wrapped. (I'm not a big fan of this, but wrapPythonPrograms only works for binary files I think, so I created this issue) How to package GTK applications TODO How to package QT applications TODO
How to package my software in nix or write my own package derivation for nixpkgs
1,282,094,234,000
It's all very confusing. There are different examples out there, e.g.: <package-name>_<epoch>:<upstream-version>-<debian.version>-<architecture>.deb source: debian package file names Is section 5.6.12 Version of the Debian Policy Manual also related to the actual package filename? Or only to the fields in the control file? In this wiki topic about repository formats it doesn't really say anything about conventions, and neither does the developers' best practices guide. Maybe I'm just looking for the wrong thing, please help me and tell me where to find the Debian package name conventions. I'm especially curious where to put the Debian codename. I want to do something like this: <package-name>_<version>.<revision>-<debiancodename>_<architecture>.deb where <debiancodename> is just squeeze or wheezy.
My understanding is that you want to distribute/deploy a package to multiple Debian based distributions. In the Debian/Ubuntu world, you should not provide individual .deb files to download and install. Instead you should provide an APT repository. (in the Fedora/Red Hat/CentOS world I would give similar advice: provide a YUM repository). Not only does this solve the issue of how to name the deb file, but a repository is an effective way to provide newer versions of your package, including bug-fix and security updates. Creating an APT repository is beyond the purpose of this page/question, just search for "how to setup an apt repository" Now back to your question: "package naming convention": When you generate the package with dpkg-buildpackage, the package will be named in a standard way. Quoting dpkg-name manpage: A full package name consists of package_version_architecture.package-type as specified in the control file of the package. package_version_architecture.package-type The Debian Policy is the right place to know the syntax of the control files: name (for both Source and binary packages), version, architecture, package-type. There is no provision to state the distribution, because this is not the way things work. If you need to compile the same version of a package for multiple distributions, you will change the version field (in the debian/changelog and debian/control file). Some people use the distribution name in the version field. For example openssl: 0.9.8o-4squeeze14 1.0.1e-2+deb7u14 1.0.1k-1 If that's what you want to do, make sure to read debian-policy about debian_revision in version.
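dpkg --compare-versions is the authority on how such version strings order, but for the openssl examples above GNU sort -V happens to give the same result, which makes it easy to see how distribution-flavoured revisions interleave (a sketch, not a substitute for dpkg's own comparison):

```shell
# Ordering the openssl version strings quoted above; sort -V
# approximates dpkg's ordering well enough for this illustration.
printf '%s\n' '1.0.1e-2+deb7u14' '0.9.8o-4squeeze14' '1.0.1k-1' \
    | sort -V > ordered.txt
cat ordered.txt
```

The oldstable/stable/unstable versions come out in ascending order, so a distribution-specific revision like 4squeeze14 still upgrades cleanly to a later upstream.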
Debian package naming convention?
1,282,094,234,000
Examining a buildlog from a failed build, what does the following error mean, fatal error: ac_nonexistent.h: No such file or directory #include <ac_nonexistent.h> Here is some context. configure:6614: $? = 0 configure:6627: result: none needed configure:6648: checking how to run the C preprocessor configure:6679: gcc -E -Wdate-time -D_FORTIFY_SOURCE=2 conftest.c configure:6679: $? = 0 configure:6693: gcc -E -Wdate-time -D_FORTIFY_SOURCE=2 conftest.c conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory #include <ac_nonexistent.h> ^ compilation terminated. configure:6693: $? = 1 configure: failed program was: | /* confdefs.h */ What is ac_nonexistent.h? What should I do when I encounter this error?
That’s a sanity check, to ensure that the configuration script is correctly able to determine whether a header file is present or not: it asks the compiler to use a non-existent header, and checks that the compiler (correctly) fails. Note that your build goes on after that “error”... To figure out the cause of a build failure, you should generally work up from the end of the build log. In this instance the important part of the log is configure:47489: checking for the Wayland protocols configure:47492: $PKG_CONFIG --exists --print-errors "wayland-protocols >= 1.4" Package wayland-protocols was not found in the pkg-config search path. Perhaps you should add the directory containing `wayland-protocols.pc' to the PKG_CONFIG_PATH environment variable No package 'wayland-protocols' found
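You can reproduce the sanity check by hand, assuming gcc is installed; the preprocessor is expected to fail on the missing header, which is exactly the outcome configure is probing for:

```shell
# Re-create configure's negative test: preprocessing a file that
# includes a header which cannot exist.
cat > conftest.c <<'EOF'
#include <ac_nonexistent.h>
EOF
if gcc -E conftest.c >/dev/null 2>&1; then
    echo "unexpected: the preprocessor accepted the missing header"
else
    echo "preprocessor failed as expected"
fi
```

A non-zero exit status here is a pass from configure's point of view; only if the compiler somehow accepted the bogus include would configure's header detection be unreliable.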
What is ac_nonexistent.h?
1,282,094,234,000
When showcasing applications, Windows and Mac mostly talk about features. Linux applications, on the other hand, have more details about what language was used to write it (and accompanying libraries) rather than features. Why is that? I could understand knowing the difference between GTK+ versus QT making a difference just because of the desktop integration requirements, but C vs C++ vs Python vs Assembly vs etc.? Really? For instance: foo is a simple blah blah written in C/GTK+.
I think the traditional Linux user (a geeky tinkerer who actually installed the system by themselves) does care about such information (what technology is behind this tool?). I am also one of those geeky guys who would, for example, refrain from installing and using a package just because it uses some technology I don't like. Some call this sort of behavior religion of course. Silly, isn't it? Anyway, I can think of two reasons: The packagers are at least as geeky as those Linux users, so they found it a good idea to add such info. I think when these packagers put such info in their package descriptions, they are likely doing it as some form of promotion. It works at times (it worked on me quite a few times). This is just a guess of course.
Why do Linux applications often put the language it was written with in the summary?
1,282,094,234,000
I want to get details about binary packages and how to run them on Linux. I am running a Debian-based (Ubuntu/Linux Mint) Linux OS. How do I build a binary package from source? And can I directly download binary packages for applications (like firefox, etc.) and games (like boswars, etc.)? I run some packages directly which are in "xyz.linux.run" format. What are these packages? Are they independent of dependencies, or are they pre-built binary packages? How do I build packages which can be run on a Linux operating system directly as "xyz.linux.run"? What is the difference between a binary package and a deb package?
In a strict sense a binary file is one which is not character encoded as human readable text. More colloquially, a "binary" refers to a file that is compiled, executable code, although the file itself may not be executable (referring not so much to permissions as to the capacity to be run alone; some binary code files such as libraries are compiled, but regardless of permissions, they cannot be executed all by themselves). A binary which runs as a standalone executable is an "executable", although not all executable files are binaries (and this is about permissions: executable text files which invoke an interpreter via a shebang such as #!/bin/sh are executables too). What is a binary package? A binary package in a linux context is an application package which contains (pre-built) executables, as opposed to source code. Note that this does not mean a package file is itself an executable. A package file is an archive (sort of like a .zip) which contains other files, and a "binary" package file is one which specifically contains executables (although again, executables are not necessarily truly binaries, and in fact binary packages may be used for compiled libraries which are binary code, but not executables). However, the package must be unpacked in order for you to access these files. Usually that is taken care of for you by a package management system (e.g. apt/dpkg) which downloads the package and unpacks and installs the binaries inside for you. What is the difference between a binary package and a deb package? There isn't -- .deb packages are binary packages, although there are .debs which contain source instead; these usually have -src appended to their name. I run some packages directly which are in "xyz.linux.run" format. What are these packages? Those are generally self-extracting binary packages; they work by embedding a binary payload into a shell script. 
"Self-extracting" means you don't have to invoke another application (such as a package manager) in order to unpack and use them. However, since they do not work with a package manager, resolving their dependencies may be more of a crapshoot and hence some such packages use statically linked executables (they have all necessary libraries built into them) which wastes a bit of memory when they are used.
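The mechanics behind such a .run file can be sketched in a few lines (real installers are generated by tools such as makeself; every name below is invented): a shell header locates a marker line in its own file and pipes everything after it into tar:

```shell
#!/usr/bin/env bash
# Build a miniature self-extracting "installer" with an embedded tarball.
mkdir -p payload
echo "hello from the payload" > payload/msg.txt
tar czf payload.tgz payload

cat > toy.run <<'EOF'
#!/bin/sh
# find the first line after the marker, then unpack the appended archive
line=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$line" "$0" | tar xzf -
cat payload/msg.txt
exit 0
__ARCHIVE__
EOF
cat payload.tgz >> toy.run
chmod +x toy.run

rm -rf payload          # prove the extraction really happens
./toy.run
```

The exit 0 before the marker keeps the shell from ever reading the binary tail as commands; everything after __ARCHIVE__ is opaque payload.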
What is Binary package? How to build them?
1,282,094,234,000
To be more specific, I would like to do the equivalent of adding the --purge flag to the following command sudo apt-get autoremove --purge [package name] to packages that are no longer on the system. Preferably, I would like to know how to do it to specific packages and to every uninstalled package in the system.
The following should do what you want: aptitude purge \~c This purges all packages with the c (package removed, configuration files still present) state flag. Flag documentation is here.
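The same set of packages can be found with plain dpkg -l, whose first column shows the desired/actual state; entries flagged rc (removed, config-files remain) are exactly what ~c matches. Below the dpkg -l output is simulated with a here-document so the filter can be shown in isolation (the common recipe pipes the real thing, e.g. dpkg -l | awk '$1=="rc"{print $2}' | xargs sudo dpkg --purge):

```shell
# Simulated `dpkg -l` output; only the "rc" entry should survive the filter.
cat > dpkg-l.txt <<'EOF'
ii  bash           5.1-2    amd64  GNU Bourne Again SHell
rc  zsh            5.8-6    amd64  shell with lots of features
ii  coreutils      8.32-4   amd64  GNU core utilities
EOF
awk '$1 == "rc" { print $2 }' dpkg-l.txt > purgeable.txt
cat purgeable.txt
```

Only zsh is selected, matching what aptitude purge ~c would operate on in this scenario.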
In Debian based systems, how do we purge configuration files of packages that have already been uninstalled?
1,282,094,234,000
I love (the way) how Linux & Co. lets users install many packages from different repositories. AFAIK, they come also with source-packages, so you can compile them by yourself. But why even bother to "keep/offer" pre-compiled packages, when you could just compile them yourself? What are the intentions of keeping/offering them? Is it possible to configure Linux, to only download source packages and let the OS do the rest? (Just like a pre-compiled package installation?) Thank you for your answers.
It’s a trade-off: distributions which provide pre-built packages spend the time building them once (in all the configurations they support), and their users can then install them without spending the time to build them. The users accept the distributions’ binaries as-is. If you consider the number of package installations for some of the larger distributions, the time saved by not requiring recompilation everywhere is considerable. There are some distributions which ship source and the infrastructure required to build it, and rely on users to build everything locally; see for example Gentoo. This allows users to control exactly how their packages are built. If you go down this path, even with the time savings you can get by simplifying package builds, you should be ready to spend a lot of time building packages. I don’t maintain the most complex packages in Debian, but one of my packages takes over two hours to build on 64-bit x86 builders, and over twelve hours on slower architectures!
Why are there pre-compiled packages in repositories?
1,282,094,234,000
CONTEXT With a local package repository, I'm able to provide my APT instances with a set of software packages and configurations from a server which I control, allowing any client to install this software using just the normal apt install command (provided the repository is added to their /etc/apt/sources.list{,.d/}). For my attempt at creating a local package repository, I followed this tutorial on bipmedia.com, which roughly consists of: Generate the .deb Store the .deb on an Apache2 web server Generate a Package.gz file My Attempt Generating the binary package file To generate the .deb, the software files are required, a DEBIAN folder with metadata is generated and the following command compiles the code and assembles the package: dpkg-deb --build [source code tree with DEBIAN directory] Serve repository files with Apache2 server I'm skipping this part as it's unrelated to the problem I'm seeking to solve with this question. Generating a Packages.gz file (repository metadata) With an open shell instance whose working directory is the Apache server root folder containing the .deb file from above, I called: dpkg-scanpackages debian /dev/null | gzip -9c >debian/Packages.gz PROBLEM Calling apt update on the client machine, it complains with: W: The repository 'http://example.com packages/ Release' does not have a Release file. This necessary file is missing in my local repository. It seems to be a register of package checksums, but after searching on the Internet, my very limited understanding of the topic kept me from being able to find out how to generate it. Note: My /etc/apt/sources.list file does have the following line: deb http://example.com packages/ QUESTION How do I generate the Release file for a local APT package repository?
There are a number of ways of going about this; I use apt-ftparchive. Create an aptftp.conf file in the root of your repository: APT::FTPArchive::Release { Origin "Your origin"; Label "Your label"; Suite "unstable"; Codename "sid"; Architectures "amd64 i386 source"; Components "main"; Description "Your description"; }; with the appropriate values (change “Origin”, “Label”, “Description” at least, and adjust “Architectures” to match the binaries you host). Create a matching aptgenerate.conf file alongside: Dir::ArchiveDir "."; Dir::CacheDir "."; TreeDefault::Directory "pool/"; TreeDefault::SrcDirectory "pool/"; Default::Packages::Extensions ".deb"; Default::Packages::Compress ". gzip bzip2"; Default::Sources::Compress ". gzip bzip2"; Default::Contents::Compress "gzip bzip2"; BinDirectory "dists/unstable/main/binary-amd64" { Packages "dists/unstable/main/binary-amd64/Packages"; Contents "dists/unstable/Contents-amd64"; SrcPackages "dists/unstable/main/source/Sources"; }; BinDirectory "dists/unstable/main/binary-i386" { Packages "dists/unstable/main/binary-i386/Packages"; Contents "dists/unstable/Contents-i386"; SrcPackages "dists/unstable/main/source/Sources"; }; Tree "dists/unstable" { Sections "main"; # contrib non-free"; Architectures "amd64 i386 source"; }; (removing i386 if you don’t need that). In your repository, clear the database: rm -f packages-i386.db packages-amd64.db Generate the package catalogs: apt-ftparchive generate -c=aptftp.conf aptgenerate.conf Generate the Release file: apt-ftparchive release -c=aptftp.conf dists/unstable >dists/unstable/Release Sign it: gpg -u yourkeyid -bao dists/unstable/Release.gpg dists/unstable/Release gpg -u yourkeyid --clear-sign --output dists/unstable/InRelease dists/unstable/Release (with the appropriate id instead of yourkeyid). Whenever you make a change to the repository, you need to run steps 3 to 6 again.
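apt-ftparchive release essentially walks the dists tree and emits size/checksum stanzas; a stripped-down imitation using only coreutils shows the shape of the file it produces (field widths and most metadata fields are simplified here, so treat this as a sketch of the format, not a replacement for the tool):

```shell
# Miniature repository tree plus a hand-built Release-style checksum block.
mkdir -p dists/unstable/main/binary-amd64
printf 'Package: demo\nVersion: 1.0\n' > dists/unstable/main/binary-amd64/Packages
gzip -kf dists/unstable/main/binary-amd64/Packages

(
  cd dists/unstable
  echo "Suite: unstable"
  echo "SHA256:"
  find main -type f | LC_ALL=C sort | while read -r f; do
      printf ' %s %d %s\n' "$(sha256sum "$f" | cut -d' ' -f1)" \
             "$(wc -c < "$f")" "$f"
  done
) > dists/unstable/Release
grep -c '^ ' dists/unstable/Release
```

Each indented line is checksum, size, and path relative to the Release file — the structure apt verifies on the client during apt update.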
How to generate the `Release` file on a local package repository?
1,282,094,234,000
The established structure of the .deb file name is package_version_architecture.deb. According to this paragraph: Some packages don't follow the name structure package_version_architecture.deb. Packages renamed by dpkg-name will follow this structure. Generally this will have no impact on how packages are installed by dselect/dpkg, but other installation tools might depend on this naming structure. Question: However, are there any real situations when renaming the .deb package file is highly unrecommended? Is it a normal practice to provide a custom .deb file name for my software? Example: My Program for Linux v1.0.0 (Pro).deb — the custom naming my-program_1.0.0-1_amd64.deb — the proper official naming Note: I'm not planning to create a repo, I'm just hosting the .deb package of my software on my website for direct download.
Over the years, I’ve accumulated a large number of .deb packages with non-standard names, and I don’t remember running into any problems. “Famous” packages with non-standard names that people might come across nowadays include google-chrome-stable_current_amd64.deb and steam.deb. (In both cases, the fixed, versionless name ensures that a stable URL can be used for downloads, and a stable name for installation instructions.) However I don’t remember running across any with spaces in their names; that shouldn’t cause issues with tools either, but it might cause confusion for your users (since they’ll need to quote the filename or escape the spaces if they’re using shell-based tools). Another point to note is that using a non-standard name which isn’t the same as your package name (as stored in the control file) could also cause confusion, e.g. when attempting to remove the package (since the package name won’t be the same as the name used to install it). As a result of all this, if you don’t want to stick to the canonical name I would recommend something like my-program.deb or my-program_amd64.deb (depending on whether you want to support multiple architectures). You can make that a symlink to the versioned filename too if you want to allow older versions to be downloaded.
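Since the canonical layout is package_version_architecture.deb, splitting on the underscores is straightforward shell; this sketch assumes a well-formed name (package names may not contain underscores, so the first and last underscore are unambiguous):

```shell
# Split a conventional .deb filename into its three fields.
f="my-program_1.0.0-1_amd64.deb"
base=${f%.deb}
pkg=${base%%_*}          # up to the first underscore
arch=${base##*_}         # after the last underscore
ver=${base#"${pkg}_"}    # strip the package name...
ver=${ver%"_${arch}"}    # ...and the architecture
printf '%s %s %s\n' "$pkg" "$ver" "$arch" > parsed.txt
cat parsed.txt
```

A non-standard name like steam.deb simply fails this split, which is one concrete way "other installation tools might depend on this naming structure".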
Is it safe to rename .deb file named by the standards?
1,282,094,234,000
I want to build a Debian package with git-buildpackage (gbp). I passed all steps, and at last, when I entered gbp buildpackage, this error appeared. What does it mean? And what should I do? gbp:error: upstream/1.5.13 is not a valid treeish
The current tag/branch you are in is not a Debian source tree: it doesn't contain the debian/ directory in its root. This is evident because you are using an "upstream/" branch, a name used to track the pristine source tree in git repositories. Try using the branch stable, testing or unstable, any branch that starts with debian, or a commit tagged using the Debian versioning scheme.
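You can check what gbp is complaining about directly with git: a "treeish" is anything git rev-parse can resolve to a tree. In a throwaway repository (assuming git is installed; names below are invented):

```shell
# A tag named upstream/1.0 resolves to a tree; a missing
# upstream/1.5.13 does not.
git init -q treeish-demo
git -C treeish-demo -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init
git -C treeish-demo tag upstream/1.0

git -C treeish-demo rev-parse --verify -q 'upstream/1.0^{tree}' >/dev/null \
    && echo "upstream/1.0: valid treeish"
git -C treeish-demo rev-parse --verify -q 'upstream/1.5.13^{tree}' >/dev/null \
    || echo "upstream/1.5.13: not a valid treeish"
```

If rev-parse fails for the ref gbp mentions, the tag or branch simply doesn't exist in your repository — create or fetch it, or point gbp at a branch that does.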
what does " gbp:error: upstream/1.5.13 is not a valid treeish" mean?
1,282,094,234,000
I am trying to edit an Apache module on Debian (strictly, I'm doing this on Raspbian Jessie-Lite), and am broadly following the Debian build instructions: $ mkdir -p ~/src/debian; cd ~/src/debian $ apt-get source apache2-bin $ cd apache2-2.4.10 $ debuild -b -uc -us And the build process takes roughly one and a half hours on an olde original Pi. Which is fine. Once! But I believe the build process is performing a make clean and so after a minor edit of a single mod_*.c file, it wants to rebuild the entire thing, which is kind of slowing down my development! I have tried adding -dc to the debuild command, but then it didn't build anything. I even tried deleting the target mod_*.so file to "encourage" it into rebuilding it, but still no! UPDATE 2016-08-21: Adding -nc to the debuild command does not cause modules to be recompiled. Here's the output from that command: $ debuild -b -uc -us -nc dpkg-buildpackage -rfakeroot -D -us -uc -b -nc dpkg-buildpackage: source package apache2 dpkg-buildpackage: source version 2.4.10-10+deb8u5 dpkg-buildpackage: source distribution jessie-security dpkg-buildpackage: source changed by Salvatore Bonaccorso <[email protected]> dpkg-source --before-build apache2-2.4.10 dpkg-buildpackage: host architecture armhf debian/rules build dh build --parallel --with autotools_dev fakeroot debian/rules binary dh binary --parallel --with autotools_dev dpkg-genchanges -b >../apache2_2.4.10-10+deb8u5_armhf.changes dpkg-genchanges: binary-only upload (no source code included) dpkg-source --after-build apache2-2.4.10 dpkg-buildpackage: binary-only upload (no source included) Now running lintian... N: 16 tags overridden (1 error, 4 warnings, 11 info) Finished running lintian.
Add the -nc option to your debuild command line. This may expose problems in the build system or the packaging though, so be prepared. But for small fixes it usually works fine. However, as the apache2 source package uses debhelper (like many other packages), this alone is not enough, because debhelper also keeps its own journal of completed steps in separate log files for each binary package. These can be removed entirely by dh_clean. But to get debhelper redo no more than the necessary work, truncate only the relevant one by sed -i '/^dh_auto_build$/Q' debian/apache2-bin.debhelper.log before running debuild -nc.
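The effect of that sed command is easy to see on a fabricated journal (the real file lists every helper already run for the package; entries below are representative, not copied from an actual build):

```shell
# Fake debhelper journal, then truncate it at dh_auto_build: GNU sed's Q
# command quits without printing the matching line or anything after it.
cat > apache2-bin.debhelper.log <<'EOF'
dh_update_autotools_config
dh_auto_configure
dh_auto_build
dh_auto_test
EOF
sed -i '/^dh_auto_build$/Q' apache2-bin.debhelper.log
cat apache2-bin.debhelper.log
```

Only the entries before dh_auto_build survive, so debhelper considers the build step (and everything after it) not yet done and redoes just that work.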
How to prevent debuild from performing a clean?
1,282,094,234,000
I have a .deb debian package which essentially contains the binaries of the software as a /usr/share/bin folder in a compressed data file, and another metadata compressed file containing the checksums of the other files. My goal is to create a PKGBUILD to install such a .deb package correctly on archlinux. What's the proper way to do that? Is it enough to copy the contents of that /usr/share/bin directory into the pkg /usr/share/bin fakeroot environment using the build() function? Will the folder be copied to the true /usr/share/bin location when the built package is actually installed?
Yes, it will work in the same way as other PKGBUILDs with binary sources - extract it and copy files. The only thing which should be mentioned is that a deb archive consists of 3 files - debian-binary, control.tar.gz, data.tar.gz. makepkg will extract only the first-level archive and then you should manually extract data.tar.gz. prepare() { tar -zxvf data.tar.gz #tar -xvf data.tar.xz # if archives are .tar.xz instead of .tar.gz } package() { # copy files } Alternatively, you can place the deb archive in the noextract array and then manually extract only data.tar.gz: $ ar p source.deb data.tar.gz | tar zx
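The extraction command works because a .deb is just an ar archive with those three members; a toy archive assembled with binutils' ar (assuming it is installed, and with invented file names) behaves the same way:

```shell
# Assemble a .deb-shaped ar archive by hand, then pull out data.tar.gz.
mkdir -p root/usr/share/doc/toy
echo "hello" > root/usr/share/doc/toy/README

printf '2.0\n' > debian-binary
tar -C root -czf data.tar.gz usr
tar -czf control.tar.gz --files-from /dev/null   # empty control member

rm -f toy.deb
ar rc toy.deb debian-binary control.tar.gz data.tar.gz
ar p toy.deb data.tar.gz | tar tzf - > listing.txt
cat listing.txt
```

ar p streams a single member to stdout, so the pipe into tar unpacks (or here, lists) the payload without ever writing control.tar.gz or debian-binary to disk.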
Create PKGBUILD from .deb
1,282,094,234,000
The source code is not open or free, so compilation at installation is not an option. So far I have seen developers that: provide a tar.gz file, leaving it up to the user to uncompress it in a suitable location. provide a .tar.gz with an install.sh script to run a basic installer, possibly even prompting the user for install options. provide RPM and/or deb files, allowing the user to continue using the native package management tools they are familiar with to install/upgrade/uninstall. I would like to support the largest number of Linux distributions, make users' lives as easy as possible, and yet maintain as little build/packaging/installer infrastructure as possible. I'm looking for recommendations on how to package my software.
I see two ways to look at it. One is to target the most popular Linuxes, providing native packages for each, delivering packages in popularity order. A few years ago, that meant providing RPMs for Red Hat type Linuxes first, then as time permitted rebuilding the source RPM for each less-popular RPM-based Linux. This is why, say, the Mandriva RPM is often a bit older than the Red Hat or SuSE RPM. With Ubuntu being so popular these past few years, though, you might want to start with .deb and add RPM later. The other is to try to target all Linuxes at once, which is what those providing binary tarballs are attempting. I really dislike this option, as a sysadmin and end user. Such tarballs scatter files all over the system you unpack them on, and there's no option later for niceties like uninstall, package verification, intelligent upgrades, etc. You can try a mixed approach: native packages for the most popular Linuxes, plus binary tarballs for oddball Linuxes and old-school sysadmins who don't like package managers for whatever reason.
What installer types should commercial software use to support Linux?
1,282,094,234,000
I am running Debian stretch and following this guide for building a package from source for Debian. Sometimes the building process takes hours, and when I run dpkg-buildpackage -rfakeroot again, it starts building from scratch. dpkg-buildpackage --help does not show any option to resume. How can I resume package building?
To continue a build that was interrupted for some reason, you can call the appropriate targets of debian/rules directly: debian/rules build will compile the sources, then fakeroot debian/rules binary will run the installation and prepare the packages.
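Calling the targets directly resumes work because debian/rules is just a makefile, and make skips targets whose stamp files already exist. A toy rules file demonstrates the stamp pattern (one-line recipes are used here so no tab characters are needed; real rules files use tab-indented recipes):

```shell
# Stamp-file pattern used by many debian/rules files, in miniature.
cat > rules <<'EOF'
#!/usr/bin/make -f
build: build-stamp
build-stamp: ; @echo "compiling (slow)" && touch build-stamp
binary: build ; @echo "assembling package"
EOF
chmod +x rules

make -f rules build  > first.log    # does the slow compile
make -f rules binary > second.log   # compile already done: only packages
cat second.log
```

On the second invocation build-stamp already exists, so make goes straight to the packaging step — the same reason "debian/rules build" followed by "fakeroot debian/rules binary" does not restart a finished compile.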
How to resume package building in debian?
1,282,094,234,000
I have created a package of zsh 5.0.7 from sources and now I can install it successfully but when I try to remove it I get this: $ sudo dpkg -i zsh_5.0.7_amd64.deb Selecting previously unselected package zsh. (Reading database ... 177638 files and directories currently installed.) Preparing to unpack zsh_5.0.7_amd64.deb ... Unpacking zsh (5.0.7) ... Setting up zsh (5.0.7) ... Processing triggers for man-db (2.7.0.2-2) ... Reading package lists... Building dependency tree... $ sudo apt-get purge zsh Reading state information... The following packages will be REMOVED: zsh* 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 6473 kB disk space will be freed. Do you want to continue? [Y/n] (Reading database ... 178746 files and directories currently installed.) Removing zsh (5.0.7) ... dpkg: warning: while removing zsh, directory '/usr/local/bin' not empty so not removed dpkg: warning: while removing zsh, directory '/usr/local/lib' not empty so not removed dpkg: warning: while removing zsh, directory '/usr/local/share/man/man1' not empty so not removed Processing triggers for man-db (2.7.0.2-2) ... What can I change in packaging (debian/{control,rules} or other files) to make that warning go away ? debian/control Source: zsh Section: unknown Priority: optional Maintainer: Patryk <[email protected]> Build-Depends: debhelper (>= 8.0.0), autotools-dev Standards-Version: 3.9.4 Homepage: http://zsh.sourceforge.net/ Package: zsh Architecture: any Depends: ${shlibs:Depends}, ${misc:Depends}, libc6 Description: ZSH shell Zsh is a shell designed for interactive use, although it is also a powerful scripting language. Many of the useful features of bash, ksh, and tcsh were incorporated into zsh; many original features were added. debian/rules #!/usr/bin/make -f # -*- makefile -*- # Uncomment this to turn on verbose mode. 
#export DH_VERBOSE=1 %: dh $@ --with autotools-dev override_dh_auto_configure: ./configure override_dh_usrlocal: EDIT I have forked zsh sources and added debian directory for packaging: https://github.com/pmalek/zsh/tree/5.0.7-deb/debian
In general, this warning is completely harmless and normal. When dpkg is removing (or trying to remove) a package, it removes all files and directories which were created as part of that package installation. Now, suppose there are some files in a directory that is a candidate for removal in such a scenario, and dpkg doesn't know about these files. This could happen because they were machine generated, either during or after the install, or because they were created by a user. Then, unless instructed, dpkg will not remove those files. Since, by default, it will not remove a non-empty directory, in such a case, the directory containing these files will not be removed. So, in summary, after the package is removed, you may end up with a basically empty directory (or directories) with a few machine generated files or something. This is not a problem - you can just remove these manually. Note, the defaults above are all sensible defaults. There are no bugs here. In your case, you are installing files to /usr/local as part of your Debian binary package, which is a violation of the Filesystem Hierarchy Standard, and is wrong. Don't do this. User binaries should go into /usr/bin, for example, libraries should go into /usr/lib, etc. I assume your package creates /usr/local/bin, because dpkg, naturally, does not know about it already. (Since a Debian package containing files/directories in /usr/local is a violation of the FHS and therefore Debian Policy). Therefore it tries to remove that directory when it removes the package. Stop installing in /usr/local, and your problem will go away. Give us a little more context, perhaps? Why are you trying to build your own zsh Debian package rather than using the one your distribution ships, and what distribution are you using anyway? If you really want to do this, here is a simple tip. Check how your distribution (or even Debian) packages zsh, and just reuse the packaging. It should work fine. 
There is no reason to try writing your own, unless you are trying to learn how to package, which I assume is not the case here.
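For the debian/rules shown in the question, a likely fix is to let debhelper drive the configure step rather than calling ./configure bare (which defaults to a /usr/local prefix): dh_auto_configure passes --prefix=/usr among its defaults. A sketch (the --enable-some-feature argument is a hypothetical placeholder for any extra options you need):

```make
#!/usr/bin/make -f
%:
	dh $@ --with autotools-dev

# Keep the override only if extra configure arguments are needed, and go
# through dh_auto_configure so the /usr prefix is still applied:
override_dh_auto_configure:
	dh_auto_configure -- --enable-some-feature
```

With no extra arguments, the override can simply be deleted and the default dh_auto_configure behaviour used.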
After creating a .deb: dpkg:warning while removing, directory /usr/local/bin not empty so not removed
1,282,094,234,000
I want to build multiple .deb packages from the same source for different versions and distros. Even if the source code is exactly the same, some files in the debian folder cannot be shared because of different dependencies and distro names. So, I want to make multiple 'debian' directories, one per version/distro, and specify which one to use when building the package. Is it possible? For your information, I'm using the debuild command to build the .deb package.
Using different branches is one approach, and I can suggest edits for @mestia’s answer if it seems appropriate (but read on...). Another approach is to keep different files side-by-side; see Solaar for an example of this.

But both of these approaches have a significant shortcoming: they’re unsuitable for packages in Debian or Ubuntu (or probably other derivatives). If you intend on getting your package in a distribution some day, you should package it in such a way that the same set of files produces the correct result in the various distributions. For an example of this, have a look at the Debian packaging for Solaar (full disclosure: I did the packaging).

The general idea is to ask dpkg-vendor what the distribution is; so for Solaar, which has different dependencies in Debian and Ubuntu, debian/rules has

derives_from_ubuntu := $(shell (dpkg-vendor --derives-from Ubuntu && echo "yes") || echo "no")

and further down an override for dh_gencontrol to fill in “substvars” as appropriate:

override_dh_gencontrol:
ifeq ($(derives_from_ubuntu),yes)
	dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme-full | oxygen-icon-theme-complete' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme-full
else
	dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme | oxygen-icon-theme' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme
endif

This fills in the appropriate variables in debian/control:

Package: solaar
Architecture: all
Depends: ${misc:Depends}, ${debconf:Depends}, udev (>= 175), passwd | adduser, ${python:Depends}, python-pyudev (>= 0.13), python-gi (>= 3.2), gir1.2-gtk-3.0 (>= 3.4), ${solaar:Desktop-Icon-Theme}

and

Package: solaar-gnome3
Architecture: all
Section: gnome
Depends: ${misc:Depends}, solaar (= ${source:Version}), gir1.2-appindicator3-0.1, gnome-shell (>= 3.4) | unity (>= 5.10), ${solaar:Gnome-Icon-Theme}

You can use the test in debian/rules to control any action you can do in a makefile, which means you can combine this with alternative files and, for example,
link the appropriate files just before they’re used in the package build.
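The dpkg-vendor test above is the shell idiom `(cmd && echo yes) || echo no`. As a runnable sketch of the same pattern (derives_from_ubuntu here is a hypothetical helper that inspects an os-release-style file passed as an argument, instead of calling dpkg-vendor, so it works anywhere):

```shell
#!/bin/sh
# Hypothetical stand-in for the dpkg-vendor test in debian/rules: decide
# whether a system is (a derivative of) Ubuntu by inspecting an
# os-release-style file given as $1, printing "yes" or "no" just like
# the makefile variable above.
derives_from_ubuntu() {
    (grep -Eq '^(ID|ID_LIKE)=.*ubuntu' "$1" && echo yes) || echo no
}
```

In a real debian/rules, `dpkg-vendor --derives-from Ubuntu` replaces the grep; the `(... && echo yes) || echo no` wrapper is identical.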
Build the same source package for different Debian based distros
1,282,094,234,000
I know that apt-get source <package_name> gives you the source package. It contains a debian folder with a file called rules. If I understand it correctly, this file describes how the source package can be transformed into a .deb package, including which compiler flags should be used. Two questions: How do I get the compiler flags that are actually used? Is it necessary to run make -n (if this is even possible) or can I get them somehow by parsing the document(s)? In the case of a source package from an official repository: are the compiler flags 100% determined by the rules file, or do they depend on the system that the .deb creation is done on? Do I need to 'mirror' the official build system to get the same flags that were used in the official .deb building process? How can I do this? I learned here that Debian does not have an official policy on which compiler flags are used for the .deb-packaged binaries.
The compiler flags used are a function of the debian/rules file, the package's build files (since the upstream author may specify flags there too), the build system used (dh, cdbs etc.), the default compiler settings. To see the flags used you effectively need to at least compile the package: debian/rules build Trying things like debian/rules -n generally won't take you very far; for instance on a dh-based package it will just say dh build or something similar; asking dh to show what that would do (with --no-act) will produce dh_testdir dh_auto_configure dh_auto_build and so on. There is no fool-proof, easy-to-explain way to determine the build flags by reading debian/rules; you can get some idea by looking for flags set there, and also (where appropriate) by looking for options for dpkg-buildflags (such as DEB_BUILD_MAINT_OPTIONS) and running that. For many packages the easiest way to see what flags were used is to look at the build logs for the packages shipped in the archives, starting from https://buildd.debian.org. For example the logs for coreutils on i386 show that the flags used were -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security for compilation, and -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wl,--as-needed -Wl,-z,relro for linking (thanks to Faheem Mitha for pointing out the latter!).
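If you have a build log saved locally (e.g. downloaded from buildd.debian.org), a quick-and-dirty way to summarise the flags is to grep for them. A sketch; extract_flags is a hypothetical helper and the regex is an assumption about which flag families you care about, not an official tool:

```shell
#!/bin/sh
# Sketch: list the unique -W/-O/-g/-f/-D flags appearing in a saved
# build log ($1). A heuristic, not a substitute for reading the log.
extract_flags() {
    grep -oE '(^| )-(W|O|g|f|D)[^ ]*' "$1" | tr -d ' ' | sort -u
}
```

For example, `extract_flags coreutils_i386.log` would list each distinct flag once.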
How to get the compiler flags that are used to build the binaries in a (.deb) package?
There's a piece of software distributed as a .deb that doesn't have a .desktop file, so I created that file. I want to create another package for that software; when the user installs it, I want the .desktop file to be generated automatically and placed in /usr/share/applications/. How to do so?
Sounds like all you want to do is extract your .deb archive, add your .desktop file and then rebuild the package. This is a fairly simple process.

To extract:

dpkg-deb -R package.deb extract_dir

Note -R is a raw extract to get the control files as well. Next create /usr/share/applications/ if it doesn't already exist:

mkdir -p extract_dir/usr/share/applications/

Then just add your .desktop file (be careful the name isn't going to conflict with anything else you are likely to install) and rebuild:

cp desktop_file.desktop extract_dir/usr/share/applications/
dpkg-deb -b extract_dir package_new.deb

Note you can also use dpkg-deb -b extract_dir . to create the package with its canonical name, but you will probably have to move your original out of the way first or else it will be clobbered.

Sources:
www.debian.org/doc/debian-policy/ap-pkg-binarypkg.html
man dpkg-deb
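For reference, a minimal .desktop file to drop into extract_dir/usr/share/applications/ might look like this; every value here is a placeholder for your application, not something taken from the package in question:

```ini
[Desktop Entry]
Type=Application
Name=My Application
Comment=Short description shown in menus
Exec=/usr/bin/myapp
Icon=myapp
Terminal=false
Categories=Utility;
```

The Exec and Icon values must match what the .deb actually installs.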
How to automatically generate .desktop file?
I need to build a project which depends on a particular version of a third-party library:

➜ cat debian/control
Source: libhole-cpp
Priority: optional
Maintainer: Vitaly Isaev <[email protected]>
Build-Depends: debhelper (>= 9), cmake, flatbuffers (= 1.2.0-1), libboost-dev, libboost-system-dev, catch
Standards-Version: 3.9.5

However, there are several flatbuffers packages in our repo, including newer ones:

➜ apt-cache policy flatbuffers
flatbuffers:
  Installed: (none)
  Candidate: 1.4.0-17
  Version table:
     1.4.0-17 500
        500 http://repo12.mailbuild-2.embarce.ro xenial/local amd64 Packages
     1.2.0-1 500
        500 http://repo12.mailbuild-2.embarce.ro xenial/local amd64 Packages

Under these conditions mk-build-deps refuses to install the desired version of the package:

➜ sudo mk-build-deps --install debian/control
dh_testdir
dh_testroot
dh_prep
dh_testdir
dh_testroot
dh_install
dh_installdocs
dh_installchangelogs
dh_compress
dh_fixperms
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dpkg-deb: building package 'libhole-cpp-build-deps' in '../libhole-cpp-build-deps_1.0.1ubuntu1_all.deb'.

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!
Selecting previously unselected package libhole-cpp-build-deps.
(Reading database ... 68846 files and directories currently installed.)
Preparing to unpack libhole-cpp-build-deps_1.0.1ubuntu1_all.deb ...
Unpacking libhole-cpp-build-deps (1.0.1ubuntu1) ...
Reading package lists... Done
Building dependency tree
Reading state information...
Done
Correcting dependencies...Starting pkgProblemResolver with broken count: 1
Starting 2 pkgProblemResolver with broken count: 1
Investigating (0) libhole-cpp-build-deps [ amd64 ] < 1.0.1ubuntu1 > ( devel )
Broken libhole-cpp-build-deps:amd64 Depends on flatbuffers [ amd64 ] < none -> 1.4.0-17 > ( devel ) (= 1.2.0-1)
  Considering flatbuffers:amd64 0 as a solution to libhole-cpp-build-deps:amd64 -2
  Removing libhole-cpp-build-deps:amd64 rather than change flatbuffers:amd64
Done
Done
Starting pkgProblemResolver with broken count: 0
Starting 2 pkgProblemResolver with broken count: 0
Done
The following packages will be REMOVED:
  libhole-cpp-build-deps
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 9216 B disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 68850 files and directories currently installed.)
Removing libhole-cpp-build-deps (1.0.1ubuntu1) ...
mk-build-deps: Unable to install libhole-cpp-build-deps at /usr/bin/mk-build-deps line 402.
mk-build-deps: Unable to install all build-dep packages

➜ libhole-cpp git:(v12) ✗ sudo apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Could anyone please clarify what's wrong with my build toolchain? The OS is Ubuntu 16.04.
tldr; Delegate to the aspcud solver via apt-cudf-get:

mk-build-deps \
  --install \
  --remove \
  --tool \
  'apt-cudf-get --solver aspcud -o APT::Get::Assume-Yes=1 -o Debug::pkgProblemResolver=0 -o APT::Install-Recommends=0' \
  debian/control

Explanation

This solution differs from that of Johannes Schauer in that...

it uses apt-cudf-get instead of apt-get

it omits the following options (which didn't work in my testing):
-o APT::Solver::Strict-Pinning=false
-o APT::Solver::aspcud::Preferences="-new,-removed,-changed"

it adds the following options:
-o APT::Get::Assume-Yes=1 (to install packages non-interactively)
-o Debug::pkgProblemResolver=1 (mk-build-deps --tool default)
-o APT::Install-Recommends=0 (mk-build-deps --tool default)

References:

debian-devel mailing list thread index for "mk-build-deps cannot install particular version of Build-Depends packages": https://lists.debian.org/debian-devel/2016/08/threads.html#00442
Vitaly Isaev starting the thread: https://lists.debian.org/debian-devel/2016/08/msg00442.html
Johannes Schauer, proposing aspcud: https://lists.debian.org/debian-devel/2016/08/msg00446.html
aspcud homepage: https://potassco.org/aspcud/
source code repository: https://github.com/potassco/aspcud
apt-cudf Debian package: https://packages.debian.org/stable/apt-cudf
apt-cudf-get Debian man page: https://manpages.debian.org/stable/apt-cudf/apt-cudf-get.8.html
apt-cudf Debian man page: https://manpages.debian.org/stable/apt-cudf/apt-cudf.1.html
mk-build-deps Debian man page: https://manpages.debian.org/stable/devscripts/mk-build-deps.1.html
Debian packaging: mk-build-deps cannot install particular version of Build-Depends packages
I have tried creating these scripts; the install goes well. However, once the application version is bumped and I try to upgrade to, say, apx v2.0, nothing goes well.

This is my postinst script:

#!/bin/sh
set -e
chmod 755 /usr/bin/apx
chmod 755 /usr/lib/apx/apx.py
chmod -R 755 /usr/lib/apx/data/binaries
exit 0

This is my postrm script:

#!/bin/sh
set -e
U_HOME=$(getent passwd $SUDO_USER | cut -d: -f6)
LOG="/var/log/apx"
UHOME="$U_HOME/.apx"
if [ -d $LOG ]; then
  rm -rf $LOG
fi
if [ -d $UHOME ]; then
  rm -rf $UHOME
fi
rm -rf /usr/lib/apx
exit 0
You should delete them. Your postinst only sets file permissions; these are supposed to be set in the packaged contents, not in a post-installation script. Your postrm deletes log files, and files inside the uninstalling user’s home directory (assuming it’s uninstalled using sudo); both of these are definite no-nos: home directories are off-limits for maintainer scripts, and logs should be left behind on removal (and purge). Your script also deletes /usr/lib/apx, which is another no-no: dpkg is supposed to handle that. I strongly recommend you read the Debian New Maintainers’ Guide.
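If a maintainer script ever does become necessary, a minimal, policy-friendly skeleton looks like the sketch below (the dispatch is wrapped in a hypothetical postinst_main function so it is easy to exercise; in a real script the case statement sits at top level on "$1"):

```shell
#!/bin/sh
# Sketch of a minimal postinst: file permissions belong in the package
# contents, so there is nothing to chmod here; only genuine runtime
# setup (creating users, enabling services, ...) goes in "configure".
set -e

postinst_main() {
    case "$1" in
        configure)
            # runtime-only setup would go here
            ;;
        abort-upgrade|abort-remove|abort-deconfigure)
            ;;
        *)
            echo "postinst called with unknown argument: $1" >&2
            return 1
            ;;
    esac
    return 0
}

postinst_main configure
```

Note that it does nothing on the abort-* actions and rejects unknown arguments, which is the conventional shape of these scripts.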
How can I create a proper debian postinst and postrm script?
Flatpak and snapd packages are available on other distributions because their respective package managers are built for installation on multiple distros [1][2]. Is this also true for the Guix package manager? I remember hearing that Guix packages were (or will be) installable on Debian, but I can't find a reference. [1] http://flatpak.org/index.html#about [2] http://arstechnica.com/information-technology/2016/06/goodbye-apt-and-yum-ubuntus-snap-apps-are-coming-to-distros-everywhere/
I'm an occasional Guix contributor. Yes, you can run Guix packages on top of other distributions (GuixSD is a standalone distribution of Guix, whereas Guix itself is a package manager, so it can be used under any other distribution). The Binary installation section shows you how to easily set up Guix on top of another GNU/Linux distribution. You can also run Guix without splatting it over your root filesystem; see the "Running Guix Before It Is Installed" section. (There are other tutorials out there; I've even written my own, which you can search for if you so care.) So yes, Guix can be run as a userspace packaging system on top of a more "traditional" distribution. (You do need the daemon running as root, and the worker users, etc., but once you have that, different users can install packages for themselves without clobbering each other.) However, you might notice that maybe it's a bit more work than desirable to get Guix running. It would be much nicer if you could apt-get install guix, or install it from yum, pacman, etc. That would reduce some steps! Guix could be packaged for other distributions; Diane Trout was working on this for Debian. However, for good reasons (maybe too long to go into here?) Guix does not follow the Filesystem Hierarchy Standard, and for that reason alone will probably not be included in the main Debian repositories any time soon. Maybe some day this will change. Hope that helps!
Can Guix packages be delivered to other distros?
I know I can list the files in a package using the following commands, which first download the package (.deb) into /var/cache/apt/archives and then list its contents:

apt-get --download-only install <pkg>
dpkg --contents <pkg> (.deb)

Does apt-get support any way of listing the package contents without first downloading the package? Extra: furthermore, how can I download a package using apt-get --download-only ... without all of its dependencies?
I don't think that apt-get can do it, no, but apt-file can:

sudo apt install apt-file
sudo apt update

And then:

sudo apt-file list <pkg>

For example:

$ sudo apt-file list xterm
xterm: /etc/X11/app-defaults/KOI8RXTerm
xterm: /etc/X11/app-defaults/KOI8RXTerm-color
xterm: /etc/X11/app-defaults/UXTerm
xterm: /etc/X11/app-defaults/UXTerm-color
xterm: /etc/X11/app-defaults/XTerm
xterm: /etc/X11/app-defaults/XTerm-color
xterm: /usr/bin/koi8rxterm
xterm: /usr/bin/lxterm
xterm: /usr/bin/resize
xterm: /usr/bin/uxterm
xterm: /usr/bin/xterm
xterm: /usr/share/applications/debian-uxterm.desktop
xterm: /usr/share/applications/debian-xterm.desktop
xterm: /usr/share/doc-base/xterm-ctlseqs
xterm: /usr/share/doc-base/xterm-faq
xterm: /usr/share/doc/xterm/NEWS.Debian.gz
xterm: /usr/share/doc/xterm/README.Debian
xterm: /usr/share/doc/xterm/README.i18n.gz
xterm: /usr/share/doc/xterm/changelog.Debian.gz
xterm: /usr/share/doc/xterm/copyright
xterm: /usr/share/doc/xterm/ctlseqs.ms.gz
xterm: /usr/share/doc/xterm/ctlseqs.txt.gz
xterm: /usr/share/doc/xterm/xterm.faq.gz
xterm: /usr/share/doc/xterm/xterm.faq.html
xterm: /usr/share/doc/xterm/xterm.log.html
xterm: /usr/share/doc/xterm/xterm.termcap.gz
xterm: /usr/share/doc/xterm/xterm.terminfo.gz
xterm: /usr/share/icons/hicolor/48x48/apps/xterm-color.png
xterm: /usr/share/icons/hicolor/scalable/apps/xterm-color.svg
xterm: /usr/share/man/man1/koi8rxterm.1.gz
xterm: /usr/share/man/man1/lxterm.1.gz
xterm: /usr/share/man/man1/resize.1.gz
xterm: /usr/share/man/man1/uxterm.1.gz
xterm: /usr/share/man/man1/xterm.1.gz
xterm: /usr/share/pixmaps/filled-xterm_32x32.xpm
xterm: /usr/share/pixmaps/filled-xterm_48x48.xpm
xterm: /usr/share/pixmaps/mini.xterm_32x32.xpm
xterm: /usr/share/pixmaps/mini.xterm_48x48.xpm
xterm: /usr/share/pixmaps/xterm-color_32x32.xpm
xterm: /usr/share/pixmaps/xterm-color_48x48.xpm
xterm: /usr/share/pixmaps/xterm_32x32.xpm
xterm: /usr/share/pixmaps/xterm_48x48.xpm

As for downloading, that's what the download command is for: apt-get
download <pkg>

See man apt-get:

download
    download will download the given binary package into the current directory.
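Since apt-file prints one `package: /path` line per file, the listing is easy to post-process. A sketch (list_binaries is a hypothetical helper, and filtering on `/bin/` is just one choice) that keeps only the executables:

```shell
#!/bin/sh
# Sketch: read "pkg: /path" lines on stdin (apt-file list output) and
# print only the paths that live under a .../bin/ directory.
list_binaries() {
    awk -F': ' '$2 ~ /\/bin\//{print $2}'
}
```

Usage would be along the lines of `apt-file list xterm | list_binaries`.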
Is there any way to display the package contents using `apt-get` without first downloading the package? [duplicate]
I've seen an interesting pattern in RPM packaging. The main library package will include the shared library itself: /usr/lib64/libavcodec.so.54 The -devel package will provide headers and a symlink: /usr/include/libavcodec/libavcodec.h /usr/lib64/libavcodec.so -> /usr/lib64/libavcodec.so.54 Why is the libavcodec.so symlink provided by the devel package and not just included with the shared library package? What about the symlink has anything to do with something a developer would want? The headers make sense, but why the symlink to the shared object?
Software from the distribution is mechanically linked consistently, and expects to find libavcodec.so.54, so the unversioned name isn't required for any of the pre-built packages. If you're building software yourself, however, it's common to use -lavcodec or similar, which will find libavcodec.so unversioned. Similarly, build scripts may expect these names to exist. The unversioned names aren't required for the distribution packages, so they're not included by default, but as they're useful when building other software they're included in the -devel package. Other distributions make different delineations and include the .so link in the main package; both are reasonable choices.
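The layout the answer describes can be sketched in a scratch directory (not the real /usr/lib64); `-lavcodec` makes the linker look up exactly the unversioned name, which is why only the -devel package needs to ship it:

```shell
#!/bin/sh
# Sketch of the runtime/devel split, in a scratch directory.
mkdir -p demo/lib
touch demo/lib/libavcodec.so.54                  # shipped by the library package
ln -sf libavcodec.so.54 demo/lib/libavcodec.so   # shipped by the -devel package
# The unversioned name resolves to the versioned runtime object:
readlink demo/lib/libavcodec.so
# prints: libavcodec.so.54
```

Pre-built distribution binaries record the versioned SONAME (libavcodec.so.54) at link time, so they never need the symlink at run time.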
Why are .so packages provided by the devel packages?
I'm a little confused about RPM's in Red Hat and/or Fedora (and/or other distros?). I can certainly accept that 64-bit RPM's are needed for 64-bit OS'es and 32-bit for 32-bit OS'es but... If I have an RPM for ... OpenOffice.org, is that RPM valid for any of my RPM-accepting OS'es or do I need to seek out an RPM specifically tailored to the OS that I'm working with?
As usual: the answer depends.

RPMs (or basically any given binary package container) contain runnable code. Most of the time that code depends on certain libraries or programs, and the package specifies that it does, for example, depend on library libA in a version >= 1.0. Now take two different distributions which both use the RPM packaging format. Let's say one calls the package libA-1.0, so the RPM you have specifies that it depends on libA. The second binary distribution has a different naming scheme and prefixes the package with a language, so it's named language-libA. Even if the contents of both these libA packages are identical, the package manager cannot know that. You could of course force RPM to just install the package anyway without looking at the dependencies, but that's usually just asking for punishment.

The problem is less bad if both distributions are related or even based upon one another: Ubuntu, for example, is based on Debian and therefore shares many of the naming conventions and packages, so you can often transfer a package built for Debian to an Ubuntu box.

It also depends a lot on what language the package is written in: if you have something interpreted like Python, where the package is basically just a bunch of text files, taking a package from a different distribution is usually easy to handle, but if it's written in C++ and the two distributions use different versions of core libraries or compilers, you're basically out of luck.
Are RPMs valid across platforms?
I know that different distributions patch the packages that are available in the respective repositories but I've never understood why is there a need to do so. I would appreciate it if somebody could explain or point me to the relevant documentation online. Thanks.
It took a few tries, but I think I comprehend what you're asking now. There are several possible reasons for a distribution to patch given software before packaging. I'll try and give a non-exclusive list; I'm sure there are other possible reasons. For purposes of this discussion, "upstream" refers to the original source code from the official developers of the software Patches that upstream has not (or not yet) incorporated into their main branch for whatever reason or reasons. Usually because the distribution's package maintainer for that package believes that said patches are worthwhile, or because they're needed to keep continuity in the distribution (Suppose you've got a webserver and after a routine update to php several functions you've been relying on don't work anymore, or it's unable to read a config file from the old style) Distributions tend to like standardized patterns for their filesystem hierarchy in /etc/; every software developer may or may not have their own ideas for what constitutes proper standards. Therefore, one of the first thing a distribution package maintainer tends to do is patch the build scripts to configure and expect said configuration files in a hierarchy pattern that corresponds to the rest of the distribution. Continuing on the topic of configuration, one of the first "patches" tends to be a set of default configuration files that will work with the rest of the distribution "out of the box" so to speak, allowing the end user to get started immediately after installing rather than having to manually sort out a working configuration. That's just off the top of my head. There may very well be others, but I hope this gives you some idea.
Why do different Linux distributions need to patch packages?
How do I build a Debian package from a bash script and a systemd service? The systemd service will control the script by starting/stopping it, ready to use after the .deb is installed successfully. From web searches there are some easy examples of converting only a single file (a Python, shell, Ruby, ... script) to a .deb.
Here’s a minimal source package which will install a shell script and an associated service. The tree is as follows:

minpackage
├── debian
│   ├── changelog
│   ├── control
│   ├── install
│   ├── minpackage.service
│   ├── rules
│   └── source
│       └── format
└── script

script is your script, with permissions 755; debian/minpackage.service is your service. debian/changelog needs to look something like

minpackage (1.0) unstable; urgency=medium

  * Initial release.

 -- GAD3R <[email protected]>  Tue, 05 Jan 2021 21:08:35 +0100

debian/control should contain

Source: minpackage
Section: admin
Priority: optional
Maintainer: GAD3R <[email protected]>
Build-Depends: debhelper-compat (= 13)
Standards-Version: 4.5.1
Rules-Requires-Root: no

Package: minpackage
Architecture: all
Depends: ${misc:Depends}
Description: My super package

debian/rules should contain

#!/usr/bin/make -f
%:
	dh $@

(with a real Tab before dh). The remaining files can be created as follows:

mkdir -p debian/source
echo "3.0 (native)" > debian/source/format
echo script usr/bin > debian/install

To build the package, run dpkg-buildpackage -uc -us in the minpackage directory. This will create minpackage_1.0_all.deb in the parent directory. It will also take care of the systemd maintainer scripts for you, so the service will automatically be enabled when the package is installed, and support the various override mechanisms available in Debian.
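The boilerplate part of those steps can be scripted. A sketch (make_skeleton is a hypothetical helper; you still have to supply debian/changelog, debian/control and the .service file yourself):

```shell
#!/bin/sh
# Sketch: create the mechanical part of the skeleton above under "$1".
make_skeleton() {
    d="$1/debian"
    mkdir -p "$d/source"
    echo "3.0 (native)" > "$d/source/format"
    echo "script usr/bin" > "$d/install"
    # the %% and the literal tab reproduce the two-line rules file
    printf '#!/usr/bin/make -f\n%%:\n\tdh $@\n' > "$d/rules"
    chmod 755 "$d/rules"
}
```

For example, `make_skeleton minpackage` followed by adding the remaining files would give the tree shown above.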
How to create a debian package from a bash script and a systemd service?
Is it feasible to build an RPM package and then utilize alien to create the DEB package rather than investing time in building a DEB package? Or do certain pieces not translate well?
Alien is good in some cases, i.e. when you want to install a package fast and there is only a DEB or RPM for that package. From my experience Alien is not reliable for deploying a package in a distro, i.e. when you have created an RPM package from your project, now want to create a DEB package as well without spending time learning how deb packaging works, and just use Alien. (It might work well, but it has limitations; it depends on what package you throw at it.)

What I recommend: if you want to build packages for multiple Linux distributions and multiple architectures, the way to go is to use the openSUSE Build Service (OBS). Its philosophy is: "Maintain sources once, offer binaries for any platform". For an overview of what you can do with it watch http://www.youtube.com/watch?v=pjOUX0WFkkk , and also see the OBS Build Tutorial.
How well does alien work for converting packages?
Is there a standard for source packages to be able to build rpms, debs (and perhaps others) without too much customization? I'm talking mostly about Python, PyQt programs.
FPM can build debs/rpms from python packages on PyPI or from a local setup.py file. You can build a deb with fpm -s python -t deb $package-name-on-pypi or fpm -s python -t deb setup.py Building packages in other formats only requires you to change the -t (target type) parameter. To produce debs I can also recommend python-stdeb.
Creating deb and rpm from the same source
I tried to make && make install a package, but I get an error:

libX11.so.6 not found

Where can I get this library?
You need to install the libX11 package:

$ rpm -qf /usr/lib/libX11.so.6
libX11-1.3.1-3.fc13.i686

Just go

$ yum -y install libX11

One more thing though: if you don't know how to find and install a library package, care to share why you are trying to compile a piece of software that is officially packaged for Fedora 13 in the most recent version?

$ yum info gpicview
Available Packages
Name       : gpicview
Arch       : x86_64
Version    : 0.2.1
Release    : 3.fc13
Size       : 93 k
Repo       : fedora
Summary    : Simple and fast Image Viewer for X
URL        : http://lxde.sourceforge.net/gpicview/
License    : GPLv2+
Description: Gpicview is an simple and image viewer with a simple and intuitive interface.
           : It's extremely lightweight and fast with low memory usage. This makes it
           : very suitable as default image viewer of desktop system. Although it is
           : developed as the primary image viewer of LXDE, the Lightweight X11 Desktop
           : Environment, it only requires GTK+ and can be used in any desktop environment.
libX11.so.6 Not found
I'd like to know how bug fixing works exactly in Linux distributions. I mean, after all, a distro is made of open-source software written by external developers, and then packaged by the distro's maintainers. So why does every distro have its own bug tracker? Shouldn't these bugs be submitted to the original authors of the software?
(I'll refer to original authors or original software as upstream authors and upstream software because that's what I'm used to calling them.) From the end-user's perspective, it's nice to have a single place to report bugs, rather than having to sign up for accounts in various upstream bugtrackers for all the software they use. From an upstream author's perspective, it's nice to be shielded from a distribution's users' bug reports, for a couple of reasons: the distribution's maintainers may introduce bugs themselves (or bugs may occur because of interactions between a distribution's packages), it shouldn't be up to the upstream author to fix those; the distribution may have requirements that the upstream software author doesn't care about or can't handle (e.g. various hardware architectures). Note that this doesn't mean that bugs which are in the upstream software don't get forwarded; if a user files a bug in a distribution bug tracker, and the bug is upstream's responsibility, then the bug will be forwarded to the upstream bug tracker. But usually the distribution maintainer will take care of that. For complex bugs the user may well be instructed to follow up upstream though, to avoid a middle-man. Distribution bug trackers support this quite well, and will update a bug's status automatically as it changes in the upstream bug tracker. From a distribution maintainer's perspective, it's necessary to have some distribution-specific bug tracker to track work to be done in the distribution itself (library version changes, new toolchains, new architectures, new distribution tools...). In addition, in many cases distributions provide support for older versions of packages, where bugs may still exist even though they have already been fixed by the upstream author in newer versions of the software. 
In that situation, it's somewhat annoying for users to ask upstream authors to fix the bugs, since they're already fixed from upstream's perspective; if the bug is sufficiently annoying, it should be up to the distribution's maintainers to backport the fix. (This is debatable for security fixes in important packages; many upstream provide security fixes for older releases themselves.) A further factor to take into account is that there may no longer be an upstream for some pieces of software which are still important; this was the case for a long time for cron for example. If distributions didn't have their own bug trackers there would be nowhere for users to report bugs in such pieces of software. In most projects all this tends to happen quite naturally, in a friendly fashion: distribution maintainers help upstream fix bugs, and vice versa, and distribution maintainers share bug fixes with other distributions.
How bug fixing exactly works in a distro ? upstream vs downstream
I followed a tutorial to make a package for an application, but it only deals with the source; there are absolutely no other file types mentioned. How do I include data files so that I can access them from my application in a package? For example, in the makefile there's a $(DESTDIR) option, but I would never move the data files into $(DESTDIR)/usr/bin -- at least I think I'm not supposed to!
If you are including binary data (pictures) you will want to create a version 3.0 package. You put the additional files inside the debian/ directory and either install them from the debian/rules script using

install -D -m 644 debian/filename $(DESTDIR)/path/to/install/to

or use the debian/install file to list each file and the path to install it to, like

debian/filename path/to/install/to
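The install -D call can be tried outside a package build. A sketch in a scratch DESTDIR (all paths here are placeholders) showing that -D creates the leading directories for you:

```shell
#!/bin/sh
# Sketch: stage a data file the way debian/rules would, into a scratch
# DESTDIR instead of the real package staging area.
mkdir -p debian
echo "sample data" > debian/filename
DESTDIR=demo-root
# -D creates demo-root/usr/share/myapp/ on the fly; -m 644 sets the mode
install -D -m 644 debian/filename "$DESTDIR/usr/share/myapp/filename"
```

In a real build, DESTDIR points at debhelper's staging directory and the target path is wherever your application expects to read the data from (e.g. under /usr/share/<package>/).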
How to include data files (pictures, text files, ...) in a debian package