1,391,274,358,000
Using a common command line tool like sed or awk, is it possible to join all lines that end with a given character, like a backslash? For example, given the file:

foo bar \
bash \
baz dude \
happy

I would like to get this output:

foo bar bash baz dude happy
a shorter and simpler sed solution:

sed '
: again
/\\$/ {
N
s/\\\n//
t again
}
' textfile

or one-liner if using GNU sed:

sed ':x; /\\$/ { N; s/\\\n//; tx }' textfile
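A quick way to verify the GNU sed one-liner above is to try it on a here-doc copy of the sample file (the file name joinme.txt is illustrative):

```shell
cat > joinme.txt <<'EOF'
foo bar \
bash \
baz dude \
happy
EOF

# join every line that ends with a backslash with the next line
sed ':x; /\\$/ { N; s/\\\n//; tx }' joinme.txt
# prints: foo bar bash baz dude happy
```

The loop appends the Next line whenever the pattern space still ends in a backslash, deletes the backslash-newline pair, and branches back until the line no longer ends in a backslash.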
How can you combine all lines that end with a backslash character?
1,391,274,358,000
Consider a text file with the following entries:

aaa
bbb
ccc
ddd
eee
fff
ggg
hhh
iii

Given a pattern (e.g. fff), I would like to grep the file above to get in the output:

all_lines except (pattern_matching_lines U (B lines_before) U (A lines_after))

For example, if B = 2 and A = 1, the output with pattern = fff should be:

aaa
bbb
ccc
hhh
iii

How can I do this with grep or other command line tools? Note, when I try:

grep -v 'fff' -A1 -B2 file.txt

I don't get what I want. I instead get:

aaa
bbb
ccc
ddd
eee
fff
--
--
fff
ggg
hhh
iii
don's might be better in most cases, but just in case the file is really big, and you can't get sed to handle a script file that large (which can happen at around 5000+ lines of script), here it is with plain sed:

sed -ne:t -e"/\n.*$match/D" \
    -e'$!N;//D;/'"$match/{" \
    -e"s/\n/&/$A;t" \
    -e'$q;bt' -e\} \
    -e's/\n/&/'"$B;tP" \
    -e'$!bt' -e:P -e'P;D'

This is an example of what is called a sliding window on input. It works by building a look-ahead buffer of $B-count lines before ever attempting to print anything.

And actually, probably I should clarify my previous point: the primary performance limiter for both this solution and don's will be directly related to interval. This solution will slow with larger interval sizes, whereas don's will slow with larger interval frequencies. In other words, even if the input file is very large, if the actual interval occurrence is still very infrequent then his solution is probably the way to go. However, if the interval size is relatively manageable, and is likely to occur often, then this is the solution you should choose.

So here's the workflow:

If $match is found in pattern space preceded by a \newline, sed will recursively Delete every \newline that precedes it. I was clearing $match's pattern space out completely before - but to easily handle overlap, leaving a landmark seems to work far better. I also tried s/.*\n.*\($match\)/\1/ to try to get it in one go and dodge the loop, but when $A/$B are large, the Delete loop proves considerably faster.

Then we pull in the Next line of input preceded by a \newline delimiter and try once again to Delete a /\n.*$match/ by referring to our most recently used regular expression with //. If pattern space matches $match then it can only do so with $match at the head of the line - all $Before lines have been cleared. So we start looping over $After.
Each run of this loop we'll attempt to s///ubstitute for &itself the $Ath \newline character in pattern space, and, if successful, test will branch us - and our whole $After buffer - out of the script entirely to start the script over from the top with the next input line if any. If the test is not successful we'll branch back to the :top label and recurse for another line of input - possibly starting the loop over if $match occurs while gathering $After.

If we get past a $match function loop, then we'll try to print the $last line if this is it, and if !not try to s///ubstitute for &itself the $Bth \newline character in pattern space. We'll test this, too, and if it is successful we'll branch to the :Print label. If not we'll branch back to :top and get another input line appended to the buffer.

If we make it to :Print we'll Print then Delete up to the first \newline in pattern space and rerun the script from the top with what remains. And so this time, if we were doing

A=2 B=2 match=5; seq 5 | sed...

the pattern space for the first iteration at :Print would look like:

^1\n2\n3$

And that's how sed gathers its $Before buffer. And so sed prints to output $B-count lines behind the input it has gathered. This means that, given our previous example, sed would Print 1 to output, and then Delete that and send back to the top of the script a pattern space which looks like:

^2\n3$

...and at the top of the script the Next input line is retrieved and so the next iteration looks like:

^2\n3\n4$

And so when we find the first occurrence of 5 in input, the pattern space actually looks like:

^3\n4\n5$

Then the Delete loop kicks in and when it's through it looks like:

^5$

And when the Next input line is pulled sed hits EOF and quits. By that time it has only ever Printed lines 1 and 2.
Here's an example run:

A=8 B=7 match='[24689]0'
seq 100 | sed -ne:t -e"/\n.*$match/D" \
    -e'$!N;//D;/'"$match/{" \
    -e"s/\n/&/$A;t" \
    -e'$q;bt' -e\} \
    -e's/\n/&/'"$B;tP" \
    -e'$!bt' -e:P -e'P;D'

That prints:

1
2
3
4
5
6
7
8
9
10
11
12
29
30
31
32
49
50
51
52
69
70
71
72
99
100
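If the sed gymnastics above feel heavy for a one-off job, the same inverse-context filter can be sketched in awk (my own illustrative alternative, not the answer's method): buffer every line, mark each line within B before and A after a match, then print the unmarked lines in the END block. Note this holds the whole file in memory, unlike the sliding-window sed.

```shell
cat > data.txt <<'EOF'
aaa
bbb
ccc
ddd
eee
fff
ggg
hhh
iii
EOF

# drop matching lines plus B=2 lines before and A=1 line after
awk -v B=2 -v A=1 -v pat='fff' '
  { line[NR] = $0
    if ($0 ~ pat)                       # mark the match and its window
      for (i = NR - B; i <= NR + A; i++) skip[i] = 1 }
  END { for (i = 1; i <= NR; i++) if (!(i in skip)) print line[i] }
' data.txt
```

With the question's sample input this prints aaa, bbb, ccc, hhh, iii — the expected output for B=2, A=1, pattern fff.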
How to grep-inverse-match and exclude "before" and "after" lines
1,391,274,358,000
I want to print the odd-numbered and even-numbered lines from files. I found this shell script which makes use of echo.

#!/bin/bash
# Write a shell script that, given a file name as the argument will write
# the even numbered line to a file with name evenfile and odd numbered lines
# in a text file called oddfile.
# -------------------------------------------------------------------------
# Copyright (c) 2001 nixCraft project <http://cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# -------------------------------------------------------------------------
file=$1
counter=0
eout="evenfile.$$" # even file name
oout="oddfile.$$"  # odd file name

if [ $# -eq 0 ]
then
    echo "$(basename $0) file"
    exit 1
fi

if [ ! -f $file ]
then
    echo "$file not a file"
    exit 2
fi

while read line
do
    # find out odd or even line number
    isEvenNo=$( expr $counter % 2 )
    if [ $isEvenNo -ne 0 ]
    then
        # even match
        echo $line >> $eout
    else
        # odd match
        echo $line >> $oout
    fi
    # increase counter by 1
    (( counter ++ ))
done < $file
echo "Even file - $eout"
echo "Odd file - $oout"

But isn't there a way to do it in one line? Yes, use awk, I read.

Even-numbered lines:

awk 'NR % 2' filename

odd-numbered lines:

awk 'NR % 2 == 1' filename

But it doesn't work for me. Both produce the same output, according to diff. When compared to the original file, they are both indeed half as long, and they both contain the odd-numbered lines. Am I doing something wrong?
As you asked “in one line”:

awk '{print>sprintf("%sfile.%d",NR%2?"odd":"even",PROCINFO["pid"])}' filename

Note that most of the code is due to your fancy output filename choice. Otherwise the following code would be enough to put odd lines in “line-1” and even lines in “line-0”:

awk '{print>"line-"NR%2}' filename
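A quick sanity check of the short form (with parentheses around the redirection target, which avoids a parsing ambiguity in some awk implementations):

```shell
# NR%2 is 1 for odd-numbered lines, 0 for even-numbered lines,
# so odd lines land in line-1 and even lines in line-0
seq 6 | awk '{ print > ("line-" NR % 2) }'

cat line-1   # prints: 1 3 5 (one per line)
cat line-0   # prints: 2 4 6 (one per line)
```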
Print odd-numbered lines, print even-numbered lines
1,391,274,358,000
I have an embedded linux system using Busybox (OpenWRT) - so commands are limited. I have two files that look like:

first file

aaaaaa
bbbbbb
cccccc
mmmmmm
nnnnnn

second file

mmmmmm
nnnnnn
yyyyyy
zzzzzz

I need to merge these 2 lists into 1 file, and remove the duplicates. I don't have diff (space is limited) so we get to use the great awk, sed, and grep (or other tools that might be included in a standard Busybox instance). Going to a merge file like:

command1 > mylist.merge
command2 mylist.merge > originallist

is totally ok. It doesn't have to be a single-line command. Currently defined functions in the instance of Busybox that I am using (default OpenWRT): [, [[, arping, ash, awk, basename, brctl, bunzip2, bzcat, cat, chgrp, chmod, chown, chroot, clear, cmp, cp, crond, crontab, cut, date, dd, df, dirname, dmesg, du, echo, egrep, env, expr, false, fgrep, find, free, fsync, grep, gunzip, gzip, halt, head, hexdump, hostid, hwclock, id, ifconfig, init, insmod, kill, killall, klogd, less, ln, lock, logger, logread, ls, lsmod, md5sum, mkdir, mkfifo, mknod, mktemp, mount, mv, nc, netmsg, netstat, nice, nslookup, ntpd, passwd, pgrep, pidof, ping, ping6, pivot_root, pkill, poweroff, printf, ps, pwd, reboot, reset, rm, rmdir, rmmod, route, sed, seq, sh, sleep, sort, start-stop-daemon, strings, switch_root, sync, sysctl, syslogd, tail, tar, tee, telnet, telnetd, test, time, top, touch, tr, traceroute, true, udhcpc, umount, uname, uniq, uptime, vconfig, vi, watchdog, wc, wget, which, xargs, yes, zcat
I think

sort file1 file2 | uniq

will do what you want:

aaaaaa
bbbbbb
cccccc
mmmmmm
nnnnnn
yyyyyy
zzzzzz

Additional Documentation: uniq, sort
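A runnable sketch of the pipeline (file names are illustrative); sort -u does the same in one step, and both sort and uniq appear in the Busybox applet list above:

```shell
# recreate the two sample lists from the question
printf '%s\n' aaaaaa bbbbbb cccccc mmmmmm nnnnnn > file1
printf '%s\n' mmmmmm nnnnnn yyyyyy zzzzzz        > file2

sort file1 file2 | uniq    # merged, duplicates removed
sort -u file1 file2        # same result in one tool
```

uniq only collapses adjacent duplicate lines, which is why the input must be sorted first — sort -u simply folds that step in.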
Merge two lists while removing duplicates
1,391,274,358,000
I was reading this awk script:

awk -F"=" '{OFS="=";gsub(",",";",$2)}1'

I want to know what the function of the 1 at the end of it is.
An awk program is a series of condition-action pairs, conditions being outside of curly braces and actions being enclosed in them. A condition is considered false if it evaluates to zero or the empty string, anything else is true (uninitialized variables are zero or empty string, depending on context, so they are false). Either a condition or an action can be implied; braces without a condition (as yours begins) are considered to have a true condition and are always executed if they are hit, and any condition without an action will print the line if and only if the condition is met. The 1 at the end of your script is a condition (always true) with no action, so it executes the default action for every line, printing the line (which may have been modified by the previous action in braces).
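Seen on the script from the question (the sample input line is made up here), the trailing 1 is what actually prints each line after the action has modified it:

```shell
# gsub() rewrites $2 inside the braced action, which rebuilds $0 using OFS;
# the trailing 1 is an always-true condition whose default action is: print
echo 'path=a,b,c' | awk -F"=" '{OFS="=";gsub(",",";",$2)}1'
# prints: path=a;b;c
```

Drop the 1 and the script produces no output at all, since the braced action never prints anything itself.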
What is the meaning of '1' at the end of an awk script
1,391,274,358,000
I've got an extreme problem, and all of the solutions I can imagine are complicated. According to my UNIX/Linux experience there must be an easy way. I want to delete the first 31 bytes of each file in /foo/. Each file is long enough. Well, I'm sure somebody will deliver me a surprisingly easy solution I just can't imagine. Maybe awk?
for file in /foo/*
do
    if [ -f "$file" ]
    then
        dd if="$file" of="$file.truncated" bs=31 skip=1 &&
        mv "$file.truncated" "$file"
    fi
done

or the faster, thanks to Gilles' suggestion:

for file in /foo/*
do
    if [ -f $file ]
    then
        tail +32c $file > $file.truncated &&
        mv $file.truncated $file
    fi
done

Note: POSIX tail specifies "-c +32" instead of "+32c" but Solaris default tail doesn't like it:

$ /usr/bin/tail -c +32 /tmp/foo > /tmp/foo1
tail: cannot open input

/usr/xpg4/bin/tail is fine with both syntaxes. If you want to keep the original file permissions, replace

... && mv "$file.truncated" "$file"

by

... && cat "$file.truncated" > "$file" && rm "$file.truncated"
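A self-contained check of the tail variant, using the POSIX "-c +32" spelling (the file name sample.bin is illustrative):

```shell
# build a file: 31 bytes of padding followed by the part we want to keep
{ head -c 31 /dev/zero | tr '\000' 'x'; printf 'payload'; } > sample.bin

# start output at byte 32, i.e. drop the first 31 bytes
tail -c +32 sample.bin
# prints: payload
```

The "+32" is 1-based: byte 32 is the first byte that survives, so exactly 31 bytes are dropped.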
Delete the first n bytes of files
1,391,274,358,000
Is there a simple utility or script to columnate the output from one of my scripts? I have data in some form:

A aldkhasdfljhaf
B klajsdfhalsdfh
C salkjsdjkladdag
D lseuiorlhisnflkc
E sdjklfhnslkdfhn
F kjhnakjshddnaskjdh

but if this becomes too long, write the data in the following form (while still vertically ordered):

A aldkhasdfljhaf    D lseuiorlhisnflkc
B klajsdfhalsdfh    E sdjklfhnslkdfhn
C salkjsdjkladdag   F kjhnakjshddnaskjdh

From reading the manpage, I don't think that this is something column would be appropriate for, but I'm not sure. It's easy enough to split in the form A B C D E F by only printing \n every second line (what my current script does). Any ideas? Thanks!
column seems to be what you want:

$ cat file
A aldkhasdfljhaf
B klajsdfhalsdfh
C salkjsdjkladdag
D lseuiorlhisnflkc
E sdjklfhnslkdfhn
F kjhnakjshddnaskjdh
$ column file
A aldkhasdfljhaf    D lseuiorlhisnflkc
B klajsdfhalsdfh    E sdjklfhnslkdfhn
C salkjsdjkladdag   F kjhnakjshddnaskjdh
Split long output into two columns
1,391,274,358,000
How to print the line in case the first field starts with Linux1? For example:

echo Linux1_ver2 12542 kernel-update | awk '{if ($1 ~ Linux1 ) print $0;}'

The target is to print the line when the first field starts with Linux1. Example of lines:

Linux1-new 36352 Version:true
Linux1-1625543 9847
Linux1:16254 8467563

Remark - space or TAB could be before the first field.
One way: echo "Linux1_ver2 12542 kernel-update" | awk '$1 ~ /^ *Linux1/'
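Checking it against sample lines (adapted from the question, with one non-matching line added for contrast): awk's default field splitting ignores leading spaces and tabs, so $1 is already trimmed and the leading "^ *" in the answer's regex is harmless but not required.

```shell
cat > lines.txt <<'EOF'
   Linux1-new 36352 Version:true
Linux1-1625543 9847
other:16254 8467563
EOF

# print only lines whose first field starts with Linux1
awk '$1 ~ /^Linux1/' lines.txt
```

Only the first two lines are printed; the indented line still matches because the leading whitespace is not part of $1.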
awk + print line only if the first field start with string as Linux1
1,469,380,359,000
I typed help while I was in GDB but didn't find anything about step-into, step-over and step-out. I put a breakpoint in an Assembly program at _start (break _start). Afterwards I typed next and it finished the debugging. I guess it was because it finished _start and didn't step into as I wanted.
help running provides some hints: there are step and next instructions (and also nexti and stepi).

(gdb) help next
Step program, proceeding through subroutine calls.
Usage: next [N]
Unlike "step", if the current source line calls a subroutine,
this command does not enter the subroutine, but instead steps over
the call, in effect treating it as a single source line.

So we can see that step steps into subroutines, but next will step over subroutines. step and stepi (and next and nexti) are distinguished by "line" or "instruction" increments:

step -- Step program until it reaches a different source line
stepi -- Step one instruction exactly

Related is finish:

(gdb) help finish
Execute until selected stack frame returns.
Usage: finish
Upon return, the value returned is printed and put in the value history.

A lot more useful information is at https://sourceware.org/gdb/onlinedocs/gdb/Continuing-and-Stepping.html
How to step-into, step-over and step-out with GDB?
1,469,380,359,000
When I use the GDB command add-symbol-file to load the symbols, GDB always asks me 'y or n', like this:

gdb> add-symbol-file mydrv.ko 0xa0070000
add symbol table from file "mydrv.ko" at
        .text_addr = 0xa0070000
(y or n)

How to make it not ask and execute quietly?
gdb will ask you to confirm certain commands, if the value of the confirm setting is on. From Optional Warnings and Messages:

set confirm off
    Disables confirmation requests. Note that running GDB with the --batch option (see -batch) also automatically disables confirmation requests.
set confirm on
    Enables confirmation requests (the default).
show confirm
    Displays state of confirmation requests.

That's a single global setting for confirm. In case you want to disable confirmation just for the add-symbol-file command, you can define two hooks, which will run before and after the command:

(gdb) define hook-add-symbol-file
set confirm off
end
(gdb) define hookpost-add-symbol-file
set confirm on
end

If you want to disable confirmation just for a single invocation of a command, precede it with the server keyword, which is part of gdb's annotation system.
How to make gdb not ask me "y or n"?
1,469,380,359,000
I wrote a program that calls setuid(0) and execve("/bin/bash",NULL,NULL). Then I did chown root:root a.out && chmod +s a.out When I execute ./a.out I get a root shell. However when I do gdb a.out it starts the process as normal user, and launches a user shell. So... can I debug a setuid root program?
You can only debug a setuid or setgid program if the debugger is running as root. The kernel won't let you call ptrace on a program running with extra privileges. If it did, you would be able to make the program execute anything, which would effectively mean you could e.g. run a root shell by calling a debugger on /bin/su. If you run Gdb as root, you'll be able to run your program, but you'll only be observing its behavior when run by root. If you need to debug the program when it's not started by root, start the program outside Gdb, make it pause in some fashion before getting to the troublesome part, and attach to the process inside Gdb (attach 1234, where 1234 is the process ID).
Can gdb debug suid root programs?
1,469,380,359,000
I was reading the manpage for gdb and I came across the line: You can use GDB to debug programs written in C, C@t{++}, Fortran and Modula-2. The C@t{++} looks like a regex but I can't seem to decode it. What does it mean?
GNU hates man pages, so they usually write documentation in another format and generate a man page from that, without really caring if the result is usable. C@t{++} is some texinfo markup which didn't get translated. It wasn't intended to be part of the user-visible documentation. It should simply say C++ (possibly with some special font for the ++ to make it look nice).
What does C@t{++} mean in the gdb man page?
1,469,380,359,000
Is there a way to get a core dump (or something similar) for a process without actually killing the processes? I have a multithreaded python process running on an embedded system. And I want to be able to get a snapshot of the process under normal conditions (ie with the other processes required to be running), but I don't have enough memory to connect gdb (or run it under gdb) without the python process being the only one running. I hope this question makes sense.
The usual trick is to have something (possibly a signal like SIGUSR1) trigger the program to fork(), then the child calls abort() to make itself dump core.

import os

def onUSR1(sig, frame):
    if os.fork() == 0:
        os.abort()

and during initialization:

import signal
from wherever import onUSR1

signal.signal(signal.SIGUSR1, onUSR1)

Used this way, fork won't consume much extra memory because almost all of the address space will be shared (which is also why this works for generating the core dump). Once upon a time this trick was used with a program called undump to generate an executable from a core dump to save an image after complex initialization; emacs used to do this to generate a preloaded image from temacs.
Dump process core without killing the process
1,469,380,359,000
I'm debugging using core dumps, and note that gdb needs you to supply the executable as well as the core dump. Why is this? If the core dump contains all the memory that the process uses, isn't the executable contained within the core dump? Perhaps there's no guarantee that the whole exe is loaded into memory (individual executables are not usually that big though) or maybe the core dump doesn't contain all relevant memory after all? Is it for the symbols (perhaps they're not loaded into memory normally)?
The core dump is just the dump of your program's memory footprint; if you knew where everything was, you could just use that. You use the executable because it explains where (in terms of logical addresses) things are located in memory, i.e. in the core file.

If you use the command objdump it will dump the metadata about the executable object you are investigating. Using an executable object named a.out as an example:

objdump -h a.out

dumps the header information only; you will see sections named e.g. .data or .bss or .text (there are many more). These inform the kernel loader where in the object various sections can be found and where in the process address space the section should be loaded, and for some sections (e.g. .data, .text) what should be loaded. (The .bss section doesn't contain any data in the file; it refers to the amount of memory to reserve in the process for uninitialised data, and it is filled with zeros.) The layout of the executable object file conforms to a standard, ELF.

objdump -x a.out

dumps everything.

If the executable object still contains its symbol tables (it hasn't been stripped - man strip - and you compiled with gcc -g to generate debug information), then you can examine the core contents by symbol names. E.g. if you had a variable/buffer named inputLine in your source code, you could use that name in gdb to look at its content. That is, gdb would know the offset from the start of your program's initialised data segment where inputLine starts and the length of that variable.

Further reading: Article1, Article 2, and for the nitty gritty the Executable and Linking Format (ELF) specification.

Update after @mirabilos comment below.
But if using the symbol table, as in:

$ gdb --batch -s a.out -c core -q -ex "x buf1"

produces:

0x601060 <buf1>: 0x72617453

and then not using the symbol table and examining the address directly:

$ gdb --batch -c core -q -ex "x 0x601060"

produces:

0x601060: 0x72617453

I have examined memory directly without using the symbol table in the 2nd command. I also see that @user580082's answer adds further to the explanation, and will up-vote.
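Both the executable and the core file discussed above are ELF objects; even without binutils installed you can see the ELF magic bytes that identify them (using /bin/sh here as a stand-in executable):

```shell
# the first four bytes of any ELF object are 0x7f followed by 'E' 'L' 'F'
head -c 4 /bin/sh | od -An -c
```

With binutils available, objdump -h /bin/sh or readelf -h /bin/sh decodes the rest of the header that follows this magic.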
Why does GDB need the executable as well as the core dump?
1,469,380,359,000
I was thrown off guard today by gdb:

Program exited with code 0146.

gdb prints the return code in octal; looking into why I found: http://comments.gmane.org/gmane.comp.gdb.devel/30363 But that's not a particularly satisfying answer. Some quick googling did not reveal the history, so I was hoping someone on SO might know the back story. A somewhat related question: how would one even view the return code in octal? Perhaps older machines always printed the return code?

$ printf %o\\n $?

is pretty awkward :)
The octal representation eases the interpretation of the exit code for small values, which are the most commonly used. Should this number, which is a byte, have been printed in decimal, finding which signal interrupted a process would require a little bit of calculation, while in octal it can be read as it is:

- a process exits with status 5: gdb displays 05, which makes no difference
- a process exits because it got a SIGINT (Control+C): gdb displays 0202, which is easier to recognize as signal #2 than 130

Moreover, the exit status might also be a bit mask, and in such a case octal (at least when you are used to it, which was more common a couple of decades ago than these days) is easier to convert mentally into bits than decimal or even hexadecimal, just like for example chmod still accepts an octal number to represent file permissions:

0750 = 111 101 000 = rwx r-x ---
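Converting between the shell's decimal view and gdb's octal view of the same byte is a one-liner with the standard printf utility and shell arithmetic:

```shell
# the shell reports a SIGINT death as 130 (128 + signal 2);
# in octal the signal number is directly visible in the low digits
printf '%o\n' 130    # prints: 202

# and back: a leading 0 marks an octal constant in shell arithmetic
echo $((0202))       # prints: 130
```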
Unix History: return code octal?
1,469,380,359,000
I made an alias ff and sourced it from ~/.zsh/aliases.zsh. The alias runs well by itself:

$ alias ff
ff='firefox --safe-mode'

and it runs as expected. But when I try to run it under gdb I get:

> gdb ff
GNU gdb (Debian 7.12-6+b1) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
...
For help, type "help".
Type "apropos word" to search for commands related to "word"...
ff: No such file or directory.
(gdb) quit

I tried using gdb firefox --safe-mode but that wouldn't run. Can somebody identify what is wrong?
Aliases are a feature of the shell. Defining an alias creates a new shell command name. It's recognized only by the shell, and only when it appears as a command name. For example, if you type > ff at a shell prompt, it will invoke your alias, but if you type > echo ff the ff is just an argument, not a command. (At least in bash, you can play some tricks if the alias definition ends with a space. See Stéphane Chazelas's answer for a possible solution if you're determined to use shell aliases.) You typed > gdb ff so the shell invoked gdb, passing it the string ff as an argument. You can pass arguments to the debugged program via the gdb command line, but you have to use the --args option. For example: > gdb firefox --safe-mode tries (and fails) to treat --safe-mode as an option to gdb. To run the command with an argument, you can do it manually: > gdb firefox ... (gdb) run --safe-mode or, as thrig's answer reminds me, you can use --args: > gdb --args firefox --safe-mode ... (gdb) run (The first argument following --args is the command name; all remaining arguments are passed to the invoked command.) It's possible to extract the arguments from a shell alias, but I'd recommend just defining a separate alias: alias ff='firefox --safe-mode' alias gdbff='gdb --args firefox --safe-mode' Or, better, use shell functions, which are much more versatile. The bash manual says: For almost every purpose, shell functions are preferred over aliases.
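As the answer's closing point suggests, a shell function side-steps most of this: unlike an alias it works in scripts and can forward extra arguments anywhere. Here echo stands in for firefox so the sketch is runnable:

```shell
# a function instead of an alias; "$@" forwards any extra arguments
ff() { echo firefox --safe-mode "$@"; }

ff            # prints: firefox --safe-mode
ff -P work    # prints: firefox --safe-mode -P work
```

Note that a function still doesn't make gdb ff work — the shell only expands commands it runs itself — so for debugging you still want gdb --args firefox --safe-mode.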
why doesn't gdb like aliases [duplicate]
1,469,380,359,000
When I debug an executable program with arguments arg1 arg2 with gdb I perform the following sequence gdb file ./program run arg1 arg2 bt quit How can I do the same from one command line in shell script?
You can pass commands to gdb on the command line with option -ex. You need to repeat this for each command. This can be useful when your program needs to read stdin so you don't want to redirect it. E.g., for od -c:

echo abc | gdb -ex 'break main' -ex 'run -c' -ex bt -ex cont -ex quit od

So in particular for your question, you can use:

gdb -ex 'run arg1 arg2' -ex bt -ex quit ./program
gdb in one command
1,469,380,359,000
Sometimes you need to unmount a filesystem or detach a loop device but it is busy because of open file descriptors, perhaps because of a smb server process. To force the unmount, you can kill the offending process (or try kill -SIGTERM), but that would close the smb connection (even though some of the files it has open do not need to be closed). A hacky way to force a process to close a given file descriptor is described here using gdb to call close(fd). This seems dangerous, however. What if the closed descriptor is recycled? The process might use the old stored descriptor not realizing it now refers to a totally different file. I have an idea, but don't know what kind of flaws it has: using gdb, open /dev/null with O_WRONLY (edit: a comment suggested O_PATH as a better alternative), then dup2 to close the offending file descriptor and reuse its descriptor for /dev/null. This way any reads or writes to the file descriptor will fail. Like this:

sudo gdb -p 234532
(gdb) set $dummy_fd = open("/dev/null", 0x200000) // O_PATH
(gdb) p dup2($dummy_fd, offending_fd)
(gdb) p close($dummy_fd)
(gdb) detach
(gdb) quit

What could go wrong?
Fiddling with a process with gdb is almost never safe though may be necessary if there's some emergency and the process needs to stay open and all the risks and code involved is understood. Most often I would simply terminate the process, though some cases may be different and could depend on the environment, who owns the relevant systems and process involved, what the process is doing, whether there is documentation on "okay to kill it" or "no, contact so-and-so first", etc. These details may need to be worked out in a post-mortem meeting once the dust settles. If there is a planned migration it would be good in advance to check whether any processes have problematic file descriptors open so those can be dealt with in a non-emergency setting (cron jobs or other scheduled tasks that run only in the wee hours when migrations may be done are easily missed if you check only during daytime hours).

Write-only versus Read versus Read-Write

Your idea to reopen the file descriptor O_WRONLY is problematic as not all file descriptors are write-only. John Viega and Matt Messier take a more nuanced approach in the "Secure Programming Cookbook for C and C++" book and handle standard input differently than standard out and standard error (p. 25, "Managing File Descriptors Safely"):

static int open_devnull(int fd) {
  FILE *f = 0;

  if (!fd) f = freopen(_PATH_DEVNULL, "rb", stdin);
  else if (fd == 1) f = freopen(_PATH_DEVNULL, "wb", stdout);
  else if (fd == 2) f = freopen(_PATH_DEVNULL, "wb", stderr);
  return (f && fileno(f) == fd);
}

In the gdb case the descriptor (or also FILE * handle) would need to be checked whether it is read-only or read-write or write-only and an appropriate replacement opened on /dev/null. If not, a once read-only handle that is now write-only will cause needless errors should the process attempt to read from that.

What Could Go Wrong?
How exactly a process behaves when its file descriptors (and likely also FILE * handles) are fiddled behind the scenes will depend on the process and will vary from "no big deal" should that descriptor never be used to "nightmare mode" where there is now a corrupt file somewhere due to unflushed data, no file-was-properly-closed indicator, or some other unanticipated problem. For FILE * handles the addition of a fflush(3) call before closing the handle may help, or may cause double buffering or some other issue; this is one of the several hazards of making random calls in gdb without knowing exactly what the source code does and expects. Software may also have additional layers of complexity built on top of fd descriptors or the FILE * handles that may also need to be dealt with. Monkey patching the code could turn into a monkey wrench easily enough. Summary Sending a process a standard terminate signal should give it a chance to properly close out resources, same as when a system shuts down normally. Fiddling with a process with gdb will likely not properly close things out, and could make the situation very much worse.
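Before deciding whether to fiddle with a descriptor at all, it helps to see exactly which fds point into the busy filesystem. A Linux-specific sketch via /proc (the function name and paths are illustrative; lsof or fuser do this more thoroughly where available):

```shell
# list the open fds of process $1 whose targets resolve under prefix $2 —
# useful to see what is keeping a mount busy (Linux /proc layout assumed)
fds_under() {
  pid=$1 prefix=$2
  for fd in /proc/"$pid"/fd/*; do
    tgt=$(readlink "$fd" 2>/dev/null) || continue
    case $tgt in
      "$prefix"*) printf '%s -> %s\n' "$fd" "$tgt" ;;
    esac
  done
}

# example: what does this very shell hold open under /tmp?
fds_under $$ /tmp
```

Running it against the target process (as root, for another user's process) shows each candidate descriptor and its target before anything is closed or redirected.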
Safest way to force close a file descriptor
1,469,380,359,000
I am making a nice presentation of ARM assembly code execution and I would need GDB to step the code every 1 second infinitely long (well until I press CTRL+C). Has anyone got solution? I don't want to keep on standing next to the keyboard and stepping the program when visitors come visit my stall.
Gdb's CLI supports a while loop. There's no builtin sleep command, but you can either call out to the shell to run the sleep program, or use gdb's builtin python interpreter, if it has one. It's interruptible with Control-C.

Method 1:

(gdb) while (1)
 >step
 >shell sleep 1
 >end

Method 2:

(gdb) python import time
(gdb) while (1)
 >step
 >python time.sleep(1)
 >end

Method 3 (define a macro):

(gdb) define runslowly
Type commands for definition of "runslowly".
End with a line saying just "end".
>python import time
>while (1)
 >step
 >python time.sleep(1)
 >end
>end
(gdb) document runslowly
Type documentation for "runslowly".
End with a line saying just "end".
>step a line at a time, every 1 second
>end
(gdb) runslowly
GDB step in delays
1,469,380,359,000
(gdb) printf "Hello %d", 7
Hello 7
(gdb) set $MyVar = printf "Hello %d", 7   // ???

How to save the result of printf "Hello %d", 7 to $MyVar?
eval does a printf of its arguments and then runs it as a command. So you can take your printf argument list, insert set $MyVar = at the beginning, and eval it.

(gdb) eval "set $MyVar = \"Hello %d\"", 7
(gdb) print $MyVar
$2 = "Hello 7"
How to save the result of printf to a variable in GDB?
1,469,380,359,000
This is the result of looking at the virtual memory of a process in gdb; I have some questions regarding this:

1. Why are some parts of the virtual memory repeated? For example, our program (stack6) and the libc library are repeated 4 times; if they have been partitioned into different parts, then why? Why not just put them all together?
2. Is the top path (/opt/pro...) the instruction section (text section) of our virtual memory, and does it contain only the instructions?
3. Why are the sizes of the 4 libc's different?
4. What's the deal with the offset? If we already have the size and starting addr, then what is the offset for?
5. Where are the data, bss, kernel and heap sections, and why do some parts of the above picture have no info about them? Is there any better option in gdb that actually shows all the parts?
6. Is there any better program than gdb that shows the virtual memory of our process? I just want to have a good visual of an actual virtual memory; which debugging program provides the best result?

The sections that I mentioned:
There’s one important piece of information missing from gdb’s output: the pages’ permissions. (They’re shown on Solaris and FreeBSD, but not on Linux.) You can see those by looking at /proc/<pid>/maps; the maps for your Protostar example show:

$ cat /proc/.../maps
08048000-08049000 r-xp 00000000 00:0f 2925    /opt/protostar/bin/stack6
08049000-0804a000 rwxp 00000000 00:0f 2925    /opt/protostar/bin/stack6
b7e96000-b7e97000 rwxp 00000000 00:00 0
b7e97000-b7fd5000 r-xp 00000000 00:0f 759     /lib/libc-2.11.2.so
b7fd5000-b7fd6000 ---p 0013e000 00:0f 759     /lib/libc-2.11.2.so
b7fd6000-b7fd8000 r-xp 0013e000 00:0f 759     /lib/libc-2.11.2.so
b7fd8000-b7fd9000 rwxp 00140000 00:0f 759     /lib/libc-2.11.2.so
b7fd9000-b7fdc000 rwxp 00000000 00:00 0
b7fe0000-b7fe2000 rwxp 00000000 00:00 0
b7fe2000-b7fe3000 r-xp 00000000 00:00 0       [vdso]
b7fe3000-b7ffe000 r-xp 00000000 00:0f 741     /lib/ld-2.11.2.so
b7ffe000-b7fff000 r-xp 0001a000 00:0f 741     /lib/ld-2.11.2.so
b7fff000-b8000000 rwxp 0001b000 00:0f 741     /lib/ld-2.11.2.so
bffeb000-c0000000 rwxp 00000000 00:0f 0       [stack]

(The Protostar example runs in a VM which is easy to hack, presumably to make the exercises tractable: there’s no NX protection, and no ASLR.)

You’ll see above that what appears to be repeated mappings in gdb actually corresponds to different mappings with different permissions. The text segment is mapped read-only and executable; the data segment is mapped read-only; BSS and the heap are mapped read-write. Ideally, the data segment, BSS and heap are not executable, but this example lacks NX support so they are executable. Each shared library gets its own mapping for its text segment, data segment and BSS. The fourth mapping is a non-readable, non-writable, non-executable segment typically used to guard against buffer overflows (although given the age of the kernel and C library used here this might be something different).
The offset, when given, indicates the offset of the data within the file, which doesn’t necessarily have much to do with its position in the address space. When loaded, this is subject to alignment constraints; for example, libc-2.11.2.so’s program headers specify two “LOAD” headers: Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align LOAD 0x000000 0x00000000 0x00000000 0x13d2f4 0x13d2f4 R E 0x1000 LOAD 0x13e1cc 0x0013f1cc 0x0013f1cc 0x027b0 0x0577c RW 0x1000 (Use readelf -l to see this.) These can result in multiple mappings at the same offset, with different virtual addresses, if the sections mapped to the segments have different protection flags. In stack6’s case: Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align LOAD 0x000000 0x08048000 0x08048000 0x00604 0x00604 R E 0x1000 LOAD 0x000604 0x08049604 0x08049604 0x00114 0x00128 RW 0x1000 (This also explains the small size shown by proc info mappings for stack6: each header requests less than 4KiB, with a 4KiB alignment, so it gets two 4KiB mappings with the same offset at different addresses.) Blank mappings correspond to anonymous mappings; see man 5 proc for details. You’d need to break on mmap in gdb to determine what they correspond to. You can’t see the kernel mappings (apart from the legacy vsyscall on some architectures) because they don’t matter from the process’s perspective (they’re inaccessible). I don’t know of a better gdb option, I always use /proc/$$/maps. See How programs get run: ELF binaries for details of the ELF format as read by the kernel, and how it maps to memory allocations; it has pointers to lots more reference material.
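Since gdb's mapping display on Linux omits the permissions, a quick way to get an at-a-glance view is to summarize /proc/<pid>/maps yourself. A minimal sketch (shown against the current shell's own maps; substitute a target PID for `self`):

```shell
# Print address range, permissions, and backing file for each mapping.
# Assumes a Linux /proc filesystem; "self" can be replaced by any PID.
awk '{ printf "%-30s %s %s\n", $1, $2, $6 }' /proc/self/maps
```

Run against the Protostar process, this would show the same r-xp/rwxp/---p breakdown discussed above.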
Why do some libraries and other parts get repeated in the Linux virtual memory shown by gdb?
1,469,380,359,000
My computer runs Ubuntu 14.04. GDB seems to behave abnormally under different accounts. For example, here is a very simple test. I write a file at ~/test/test.c like this: #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv[]) { printf("hello,world"); return 0; } and build it with the command "gcc -g test.c -o test", which gives me a binary named test. Next step, run gdb to debug. Note that the current account is my own user. $gdb test (gdb)l //works well (gdb) b 6 //works well (gdb) r //error: Cannot exec /home/xxx/test/test -c exec /home/xxx/test/test . //Error: No such file or directory But if I switch to the root account with the command "su", gdb works fine. Why?
It appears that your SHELL variable is set to a file that doesn't exist. Try the following: export SHELL=/bin/sh gdb test ensuring that /bin/sh exists and is executable. This works via su not because of root permissions, but because su resets the SHELL environment variable. Per the su manual page: Note that the default behavior for the environment is the following: The $HOME, $SHELL, $USER, $LOGNAME, $PATH, and $IFS environment variables are reset. Further Discussion If you are asking yourself, "How was I supposed to know that from the following error message?": //error: Cannot exec /home/xxx/test/test -c exec /home/xxx/test/test . //Error: No such file or directory You are not alone. This appears to be poor error reporting from gdb. When gdb decides that it needs to execute your command using a shell, it constructs a command in the following format: /path/to/shell -c exec /path/to/executable However, when it prints the error message, it does this: save_errno = errno; fprintf_unfiltered (gdb_stderr, "Cannot exec %s", exec_file); for (i = 1; argv[i] != NULL; i++) fprintf_unfiltered (gdb_stderr, " %s", argv[i]); fprintf_unfiltered (gdb_stderr, ".\n"); fprintf_unfiltered (gdb_stderr, "Error: %s\n", safe_strerror (save_errno)); gdb_flush (gdb_stderr); Here exec_file is the expanded path of the file you passed on the command line. It prints exec_file first, followed by the elements of argv, starting at the 1st index. argv contains the arguments it passed to execvp. Unfortunately, the shell it is trying to use is in the 0th element of argv, which never gets printed. Thus, you never see the file that execvp couldn't find. Further, it then prints a trailing . which isn't actually one of the arguments that it passed to execvp and is presumably there to make the message a complete sentence. Finally, it prints the error we got back from our call to execvp, which is that the executable file could not be found.
This bug likely is caused by the fact that this error handling code is the same for the case where gdb attempts to exec the command directly and for the case when it uses the shell. In the former case, the constructed error message would look correct since exec_file and argv[0] would be the same.
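Building on the answer above, a defensive way to apply the workaround is to override SHELL only when the current value is unusable. A sketch (the /bin/sh fallback is an assumption; pick any shell that exists on your system):

```shell
# Fall back to /bin/sh only if $SHELL is unset or not an executable file,
# then launch gdb as usual.
if [ ! -x "${SHELL:-}" ]; then
    export SHELL=/bin/sh
fi
# gdb test   # (uncomment: run gdb with a now-valid SHELL)
```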
GDB cannot exec my test program
1,469,380,359,000
I am using a third party .NET Core application (a binary distribution used by a VS Code extension) that unfortunately has diagnostic logging enabled with no apparent way to disable it (I did already report this to the authors). The ideal solution (besides being able to disable it) would be if I could specify to systemd that it should not log anything for that particular program, but I have been unable to find any way to do so. Here is everything I tried so far: The first thing I tried was to redirect stdout and stderr to /dev/null: dotnet-app > /dev/null 2>&1. This indeed disabled any of the normal output, but the diagnostic logging was still being written to the systemd journal. I hoped that the application had a command line argument that allowed me to disable the diagnostic logging. It did have a verbosity argument, but after experimenting with it, it only seemed to have effect on the normal output, not the diagnostic logging. By using strace and looking for calls to connect, I found out that the application instead wrote the diagnostic logging directly to /dev/log. The path /dev/log is a symlink to /run/systemd/journal/dev-log, so to verify my finding, I changed the symlink to point to /dev/null instead. This indeed stopped the diagnostic logging from showing up in the systemd journal. I was told about LD_PRELOAD and made a library that replaced the standard connect with my own version that returned an error in the case it tried to connect to /dev/log. This worked correctly in my test program, but failed with the .NET Core application, failing with connect ENOENT /tmp/CoreFxPipe_1ddf2df2725f40a68990c92cb4d1ff1e. I experimented with my library, but even if all I did was directly pass the arguments to the standard connect function, it would still fail with the same error.
I then tried using Linux namespaces to make it so that /dev/log would point to /dev/null only for the .NET Core application: unshare --map-root-user --mount sh -c "mount --bind /dev/null /dev/log; dotnet-app $@". This too failed with the same error, even though it again worked for my test program. Even just using unshare --map-root-user --mount dotnet-app "$@" would fail with the error. Next I tried using gdb to close the file descriptor to /dev/log while the application was running. This worked, but it reopens it after some time has passed. I also tried changing the file descriptor to point to /dev/null, which also worked, but it too was reset to /dev/log after some time. My last attempt was to write my own UNIX socket that would filter out everything written to it by the .NET Core application. That actually worked, but I learned that the PID is sent along with what is written to UNIX sockets, so everything passed along to the systemd journal would be reported as coming from the PID of the program backing my UNIX socket. For now this solution is acceptable for me, because on my system almost nothing uses /dev/log, but I would welcome a better solution. For example, I read that it was possible to spoof certain things as root for UNIX sockets, but I was unable to find out more about it. Or does someone have any insights on why both LD_PRELOAD and unshare might fail for the .NET Core application, while they work fine for a simple C test program that writes to /dev/log?
In short, have your library loaded by LD_PRELOAD override syslog(3) rather than connect(3). The /dev/log Unix socket is used by the syslog(3) glibc function, which connects to it and writes to it. Overriding connect(3) probably doesn't work because the syslog(3) implementation inside glibc will execute the connect(2) system call rather than the library function, so an LD_PRELOAD hook will not trap the call from within syslog(3). There's a disconnect between strace, which shows you syscalls, and LD_PRELOAD, which can override library functions (in this case, functions from glibc.) The fact that there's a connect(3) glibc function and also a connect(2) system call also helps with this confusion. (It's possible that using ltrace would have helped here, showing calls to syslog(3) instead.) You can probably confirm that overriding connect(3) in LD_PRELOAD as you're doing won't work with syslog(3) by having your test program call syslog(3) directly rather than explicitly connecting to /dev/log, which I suspect is how the .NET Core application is behaving. Hooking into syslog(3) is also potentially more useful, because being at a higher level in the stack, you can use that hook to make decisions such as selectively forwarding some of the messages to syslog. (You can load the syslog function from glibc with dlsym(RTLD_NEXT, "syslog"), and then you can use that function pointer to call syslog(3) for the messages you do want to forward from your hook.) The approach of replacing /dev/log with a symlink to /dev/null is flawed in that /dev/null will not accept a connect(2) operation (only file operations such as open(2)), so syslog(3) will try to connect and get an error and somehow try to handle it (or maybe return it to the caller), in any case, this might have side effects. Hopefully using an LD_PRELOAD override of syslog(3) is all you need here.
How to prevent a process from writing to the systemd journal?
1,469,380,359,000
I would like to know if someone knows how to put the compilation warnings of GCC in a text file. For example: I deliberately wrote a call to an undefined function foo(). So gcc tells me: warning: implicit declaration of function 'foo' [-Wimplicit-function-declaration] How can I get this line written in a text file? I searched for an option in the man page of gcc: https://linux.die.net/man/1/gcc but didn't find anything about that. I tried the commands echo and tee, but those commands only catch the compilation line (e.g.: ... gcc -o test test.o main.o ...)
You can redirect the output, say your program is: main() { } When you compile, gcc will say something like: a.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int] main() ^~~~ That is shown in the terminal. What you want to do is redirect this to a file. In this case, you need to redirect the standard error output (in the terminal you see both: the standard output and the standard error output): $ cc a.c 2> output.txt 2> means send the error output (in this case warnings) to this file. A simple > would redirect the standard output.
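The same redirection works for any command that writes to standard error; here is a throwaway demonstration using ls as a stand-in for cc (file names are arbitrary):

```shell
# The error message goes to output.txt; normal output would still go to the terminal.
ls /no/such/directory 2> output.txt || true   # ls fails, but its message is captured
cat output.txt
# To capture both streams in one file: cc a.c > all.txt 2>&1
```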
How can I put the gcc warning messages in a text file? [duplicate]
1,469,380,359,000
Let me preface this question by saying that I've found a lot of answers for questions similar to my question but for 32-bit machines. However, I can't find anything for 64-bit machines. Please no answers with respect to 32-bit machines. According to many sources on Stack Exchange, /proc/kcore can be literally dumped (e.g., with dd) to a file in order to get a copy of physical memory... But this clearly does not work for a 64-bit machine, for which /proc/kcore is 128TB in size. As an aside, I note that it is possible to access only the first MB of memory through /dev/mem. This is for security reasons. Getting around this involves recompiling the kernel, which I don't want to do... nor can I for my purposes (I have to work with the running kernel). Ok... so, /proc/kcore is an ELF-core file dump of the physical memory and it can be viewed using gdb. For example, with: gdb /usr/[blah]/vmlinux /proc/kcore This I can do... but, this is not what I want to do. I would like to export the physical memory to a file for offline analysis. But I'm running into issues. For one thing, I can't just dump /proc/kcore to a file since it's 128TB. I want to dump all of physical memory, but I don't know where it is in /proc/kcore. I only see non-zero data up until byte 3600 and then it's all zeros for as far as I have looked (about 40GB). I think this may have to do with how the memory is mapped to /proc/kcore, but I don't understand the structure and need some guidance. More stuff I think I know: I know that only 48 bits are used for addressing, not 64 bits. This implies that there should be 2^48 = 256TB of memory available... but /proc/kcore is only 128TB, which I think is because addressing is further divided into a chunk from 0x0000000000000000 to 0x00007fffffffffff (128TB) and a chunk from 0xffff800000000000 to 0xffffffffffffffff (128TB). So, somehow this makes /proc/kcore 128TB... but is this because one of these chunks is mapped to /proc/kcore and one isn't? Or some other reason?
So, as an example, I can use gdb to analyze /proc/kcore and find, e.g., the location (?) of the sys_call_table: (gdb) p (unsigned long*) sys_call_table $1 = (unsigned long *) 0xffffffff811a4f20 <sys_read> Does this mean that the chunk of memory from 0xffff800000000000 to 0xffffffffffffffff is what is in /proc/kcore? And if so, how is this mapped to /proc/kcore? For example using dd if=/proc/kcore bs=1 skip=2128982200 count=100 | xxd shows only zeros (2128982200 is a little before 0xffffffffffffffff-0xffffffff811a4f20)... Furthermore, I know how to use gcore to dump the memory of a given process for analysis. And I also know that I can look in /proc/PID/maps to see what process memory looks like... but nevertheless I still have no idea how to dump the whole physical memory... and it's kind of driving me nuts. Please help me avoid going crazy... ;)
After a lot more searching I think I have convinced myself that there is no simple way to get what I want. So, what did I end up doing? I installed LiME from github (https://github.com/504ensicsLabs/LiME) git clone https://github.com/504ensicsLabs/LiME cd LiME/src make -C /lib/modules/`uname -r`/build M=$PWD modules The above commands create the lime.ko kernel module. A full dump of memory can be obtained by then running: insmod ./lime.ko "path=/root/temp/outputDump.bin format=raw dio=0" which just inserts the kernel module; the quoted string gives the parameters specifying the output file location and format... AND IT WORKED! YAY.
The structure of /proc/kcore on 64-bit machine and relation to physical memory
1,469,380,359,000
When I try stepping through a program, gdb throws this error std::ostream::operator<< (this=0x6013c0 <std::cout@@GLIBCXX_3.4>, __n=2) at /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream.tcc:110 110 /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream.tcc: No such file or directory. This is the program I am trying to debug. #include <iostream> int printPrime(int, int); int main() { int t, c; std::cin >> t; c = t; int m[t], n[t]; while (t--) { std::cin >> m[t] >> n[t]; } while (c--) { printPrime(m[c], n[c]); std::cout << std::endl; } return 0; } int printPrime(int m, int n) { do { int c = m; int lim = c>>2; if (c <= 1) continue; while (c-- && c>lim) { if (m%c == 0) { if (c == 1) { std::cout << m << std::endl; break; } break; } } } while(m++ && m<=n); } There is no problem with the program code as it runs correctly. I guess it is a problem with my install of GDB on Arch. The error is shown when it encounters cin or cout. This error didn't show when I tried running it in my Ubuntu VM.
I've filed a bug report against this issue: https://bugs.archlinux.org/task/47220 This happens because the ostream source file cannot be found. Workaround 1 You can strip the libstdc++ library: sudo strip /usr/lib/libstdc++.so.6 And then gdb will not try to open the source file and the error will not appear anymore. You can switch back to the unstripped version by reinstalling it with: sudo pacman -S gcc-libs Workaround 2 You can add a substitution rule in gdb: gdb tst (gdb) set substitute-path /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include /usr/include/c++/5.2.0
GDB throws error on Arch Linux
1,469,380,359,000
I'm running a wheezy:armhf chroot using qemu user emulation on my jessie:x86_64 system. Somehow, a git clone on a particular private repository will hang inside the chroot, while it succeeds natively. This might be a bug, who knows? To improve my karma, I want to find out what's going on! As a side-note: the hang I'm experiencing is occurring with git-2.0 inside a jessie:armel chroot as well... The hang does not occur inside a full-system-emulation. So I went on digging in the wheezy:armhf rabbithole, just because I had to choose one... I cannot test on a native machine... So. There is no git-dbg package, so I rolled my own. Inside the wheezy:armhf chroot: sudo apt-get install build-essential fakeroot sudo apt-get build-dep git apt-get source git && cd git-1.7.10.4 DEB_CFLAGS_APPEND="-fno-stack-protector" DEB_CXXFLAGS_APPEND="-fno-stack-protector" DEB_BUILD_MAINT_OPTIONS=hardening=-stackprotector,-fortify DEB_BUILD_OPTIONS="noopt nostrip nocheck" fakeroot dpkg-buildpackage -j`getconf _NPROCESSORS_ONLN` sudo dpkg -i ../git_1.7.10.4-1+wheezy1_armhf.deb (As far as I read the gcc documentation, setting DEB_CFLAGS_APPEND and DEB_CXXFLAGS_APPEND additionally with -fno-stack-protector is not needed, but anyhow, I want to be sure.) Then, using qemu's builtin gdb stub inside the chroot I'm doing: QEMU_GDB=1234 git clone /path/to/breaking/repo /tmp/bla Debugging inside qemu throws an unsupported syscall 26 error. Firing up gdb-multiarch outside the chroot, to connect: gdb-multiarch -q (gdb) set architecture arm # prevents "warning: Architecture rejected target-supplied description" (gdb) target remote localhost:1234 (gdb) set sysroot /opt/chroots/wheezy:armhf (gdb) file /opt/chroots/wheezy:armhf/usr/bin/git Reading symbols from /opt/chroots/wheezy:armhf/usr/bin/git...done. # good! has debug symbols! (gdb) list # works! code is not stripped (gdb) step Cannot find bounds of current function # meh... (gdb) backtrace #0 0xf67e0c90 in ?? () #1 0x00000000 in ?? () # wtf?
Giving a continue to let the clone happen will result in a hang, sending a ctrl-c is ignored. Generating a core-file and loading it into gdb (inside the chroot) will give me a corrupt stack: gdb -q /usr/bin/git qemu_git_20140514-160951_22373.core Reading symbols from /usr/bin/git...done. [New LWP 22373] Cannot access memory at address 0xf67fe948 Cannot access memory at address 0xf67fe944 (gdb) bt #0 0xf678b3e4 in ?? () #1 0xf678b3d4 in ?? () #2 0xf678b3d4 in ?? () Backtrace stopped: previous frame identical to this frame (corrupt stack?) Now I'm lost. Where is the problem? Did I miss some detail in the qemu-user-emulation? Do I have to use a completely emulated arm-machine? A misunderstanding in cross-debugging? gdb-multiarch limitations? The creation of debug-packages? Thanks for any suggestions, pointers, hints, tips, comments and what-not. My best guess in the moment is based on the fact that git does a clone (I can see two processes/threads), but the QEMU_GDB environment variable is unset by qemu after using. Hence only the initial process is going to gdb. See here for example. But still: I should be able to properly debug the parent process? I can easily cross-debug a hello-world MWE.
Turns out that this particular hang of "git clone" is a qemu-related problem... Other problems in the qemu-user-emulation prevail, so I have to fall back to full-system emulation... ;-( Using a qemu-user-static, compiled from their git (the qemu-2.0.0+dfsg-4+b1 currently in jessie has less fixes and won't work for my case...): git clone git://git.qemu-project.org/qemu.git $HOME/qemu.git && cd $HOME/qemu.git ./configure --static --disable-system --target-list=arm-linux-user --prefix=$HOME/qemu.install --disable-libssh2 make && make install sudo cp $HOME/qemu.install/bin/qemu-arm /opt/chroots/wheezy:armhf/usr/bin/qemu-arm-static so... However, I'm still not able to get backtrace of complex programs...
Backtrace of "git clone" running inside qemu-user-emulation based arm-chroot
1,469,380,359,000
I'm trying to debug the Linux kernel using qemu and gdb. The problem is that gdb won't stop at the breakpoint. I've searched around and found that turning kASLR off may help because kASLR confuses gdb. The instructions I found change this step:

- Install that kernel on the guest.
+ Install that kernel on the guest, turn off KASLR by adding "nokaslr" to the kernel command line.

Unfortunately, I don't know what it means to add nokaslr to the command line, nor how to do that. Any ideas would be appreciated.
Kernel boot parameters can be set temporarily per boot or always via some configuration file; how this is done depends on the bootloader which for current versions of Ubuntu is grub2; $ grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="quiet" $ sudo perl -i -pe 'm/quiet/ and s//quiet nokaslr/' /etc/default/grub $ grep quiet /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="quiet nokaslr" $ sudo update-grub and then reboot; confirm at the grub menu that the parameters appear as expected.
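If you prefer sed over perl, the same edit can be rehearsed on a throwaway copy first. A sketch (the sample content mirrors the stock Ubuntu default shown above):

```shell
# Rehearse the edit on a scratch file before touching /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > grub.test
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet nokaslr"/' grub.test
grep nokaslr grub.test
# After editing the real file: sudo update-grub, reboot, then confirm with:
#   grep -o nokaslr /proc/cmdline
```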
Turning off kASLR to debug linux kernel using qemu and gdb
1,469,380,359,000
Is there a way to measure elapsed time while running a program under gdb? Look at this: <------------bp----------------------------------> Assume that we are debugging a file and at some random place we set a breakpoint. Now in gdb we perform something and then we let the program continue execution using the gdb command line (run). My question is here. I want to measure the elapsed time from the bp until the program either ends successfully or some error occurs. My suggestion is to use the .gdbinit file, and in that file call some C function to start the timer after the run command, and at the end of the execution also call a gettime() C function. So, my pseudo code is a bit like this (.gdbinit file): break *0x8048452 (random place) //start time run //get time
The easiest way to do this (if your gdb has python support): break *0xaddress run # target process is now stopped at breakpoint python import time python starttime=time.time() continue python print (time.time()-starttime) If your gdb doesn't have python support but can run shell commands, then use this: shell echo set \$starttime=$(date +%s.%N) > ~/gdbtmp source ~/gdbtmp continue shell echo print $(date +%s.%N)-\$starttime > ~/gdbtmp source ~/gdbtmp Alternatively, you could make the target process call gettimeofday or clock_gettime, but that is a lot more tedious. These functions return the time by writing to variables in the process's address space, which you'd probably have to allocate by calling something like malloc, and that may not work if your breakpoint stopped the program in the middle of a call to another malloc or free. However, a slight problem with this solution is that the continue and print result lines need to be run right after each other, or else the timing will be inaccurate. We can solve this by putting the commands in a canned script through "define". If we run define checkTime, then gdb will prompt us to enter a list of commands. Just enter any of the command lists above (python/shell), and then you can call the script by just using the command checkTime. Then, the timing will be accurate. Additionally, you can put define checkTime and then the list of commands in the .gdbinit file so that you don't have to manually redefine it every time you execute a new program.
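For the python variant, the canned script described above would look something like this (a sketch for ~/.gdbinit; it assumes a python-enabled gdb and that a breakpoint has already been hit when checkTime is invoked):

```
define checkTime
  python import time
  python starttime = time.time()
  continue
  python print (time.time() - starttime)
end
```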
Elapsed time in gdb
1,469,380,359,000
When debugging it is often helpful to look at assembly, but on Debian 9 when I try layout asm I get: Undefined command: "layout". Try "help". According to some internet research it seems like I need to have TUI enabled, but I'm not sure how to enable or install it.
The default installation from Debian 9's netinst ISO doesn't include gdb or C or C++ compilers. A user would typically run apt install build-essential gdb to install them. In certain circumstances - I could reproduce this by using the netinst ISO and choosing to install KDE - the gdb-minimal package will be installed, which provides a gdb that doesn't include TUI (or python). mp@debian9$ apt-rdepends -r gdb-minimal gdb-minimal Reverse Depends: plasma-workspace (4:5.8.6-2.1+deb9u1) plasma-workspace Reverse Depends: kde-plasma-desktop (>= 5:92) ... kde-plasma-desktop Reverse Depends: kde-full (>= 5:92) Reverse Depends: kde-standard (>= 5:92) It looks like you have this. Running apt install gdb will remove gdb-minimal and install the full gdb.
How to enable TUI for gdb on Debian 9?
1,469,380,359,000
GDB is telling me, that the program compiled with gcc -m32 (i386 program) is incompatible with my shared libraries (i386:x86-64). Output of gdb: (gdb) r Starting program: /root/format warning: `/libexec/ld-elf.so.1': Shared library architecture i386:x86-64 is not compatible with target architecture i386. It would be nice if someone could explain how to fix this / how to get the libraries.
You don't tell us anything about your system so I'll just make the most likely guess. You are running a 64 bit system and have not installed any 32 bit libraries. The simplest method is to simply add them from the installer: bsdinstall ...and select lib32. You can run the installer at any time (not just at first install). That's it. What it does is fetch the lib32.txz tarball from somewhere like http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.1-RELEASE/ and unpack it to /usr/lib32. Remember to get the correct version (check with uname -a). You can do this manually as well if you prefer. Maybe you already have it available in /usr/freebsd-dist or on a DVD. If you do it manually then you may need to tell ldconfig that you have new libraries. Or if you have placed them in unusual locations you need to correct the paths. ldconfig -32 /usr/lib32/ /usr/local/lib32/compat/ ...or... ldconfig -v -m -R /usr/lib32 See ldconfig(8) You say very little about why you are trying to run a 32 bit binary on a 64 bit system. If it is because it was actually built on an older version then you might want one of the compatibility packages: misc/compat8x, misc/compat9x, misc/compat10x, misc/compat11x And just to be sure: You are aware that you would normally target amd64 (x86-64) on a recent system? i386 is only 32 bit and we have moved on to 64 bit. This might be trivial to you but with the lack of context in the question I just want to make sure that we are not trying to solve the wrong problem.
How can I install i386/x86 shared libraries on FreeBSD?
1,469,380,359,000
I'm trying to debug a code using GDB in a Fedora machine. It produces this message each time I run it. Missing separate debuginfos, use: debuginfo-install glibc-2.18-12.fc20.x86_64 libgcc-4.8.3-1.fc20.x86_64 libstdc++-4.8.3-1.fc20.x86_64 My questions: Should these packages be in GDB by default? What is the function of each of these packages? In real production environments should these packages be installed for GDB? Is it ok if I do not install these packages? What will be the effect?
1. No. gdb is packaged by one maintainer, glibc is packaged by another maintainer; gcc, libstdc++ and so on are all packaged by different maintainers. To package the debuginfo for these along with gdb would take considerable coordination. Each time one of the packages changed, the gdb maintainer would have to repackage and release. It would become quite cumbersome to manage. gdb can also debug other languages, for example Java, which wouldn't need the debuginfo for the libraries listed.
2. The debuginfo packages contain the source code and symbols stripped from the executable. They are only required during debugging, and are therefore redundant during normal use. They take up a fair amount of space, and are therefore stripped for production releases.
3. It depends. Most C code will use glibc etc. However, if you're debugging package X and don't need to delve into the internals of glibc, you could manage without installing it. If you want to follow the code in gdb all the way down into low-level glibc, or if you think there's a bug in the library itself, then you'll need to install it. On the other hand, some C code might be statically linked and should have everything needed within its own debuginfo package, or an application could be written in another language. Neither would need these installed.
4. Yes. The effect of not installing these packages is that you will not be able to debug effectively into the routines provided by them. As in 3 above, it all depends on whether you need to debug at that level or not.
Note: You'll find that many applications have been optimised (with the -O flag in the compiler) and don't debug that well even with debuginfo. A workaround is to recompile without any optimisation.
Missing separate debuginfos
1,469,380,359,000
I am trying to debug a kernel running on QEMU with GDB. The kernel has been compiled with these options: CONFIG_DEBUG_INFO=y CONFIG_GDB_SCRIPTS=y I launch the kernel in qemu with the following command: qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage In a separate terminal, I launch GDB from the same path and issue these commands in sequence: gdb ./vmlinux (gdb) target remote localhost:1234 (gdb) hbreak start_kernel (gdb) c I did not provide a rootfs, as I am not interested in a full working system as of now, just the kernel. I also tried combinations of hbreak/break. The kernel just boots and reaches a kernel panic as rootfs cannot be found... expected. I want it to stop at start_kernel and then step through the code. observation: if I set an immediate breakpoint, it works and stops, but not on start_kernel / startup_64 / main Is it possible that qemu is not calling all these functions, or is it being masked in some way? Kernel: 4.13.4 GDB: GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1 GCC: gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4 system: ubuntu 14.04 LTS NOTE: This exact same procedure worked with kernel 3.2.93, but does not work with 4.13.4, so I guess some more configurations are needed. I could not find resources online which enabled this debug procedure for kernel 4.0 and up, so as of now I am continuing with 3.2, any and all inputs on this are welcome.
I ran into the same problem and found the solution from the linux kernel newbies mailing list. You should disable KASLR in your kernel command line with nokaslr option, or disable kernel option "Randomize the kernel memory sections" inside "Processor type and features" when you build your kernel image.
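Note that the question boots the kernel directly with qemu's -kernel option, so no bootloader (and hence no grub configuration) is involved; in that setup the kernel command line is passed straight to qemu with -append. A sketch reusing the invocation from the question:

```
# When booting with -kernel there is no bootloader: pass the kernel
# command line via -append instead (other options copied from the question).
qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage -append "nokaslr"
```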
Hardware breakpoint in GDB +QEMU missing start_kernel
1,469,380,359,000
I've downloaded the GDB source: git clone git://sourceware.org/git/binutils-gdb.git now how do I generate the documentation from source as can be downloaded from: https://sourceware.org/gdb/current/onlinedocs/ ? I'm especially interested in the HTML documentation, especially if it is possible to build a single page version of it. I'm at GDB master f47998d69f8d290564c022b010e63d5886a1fd7d after gdb-8.2-release.
cd binutils-gdb/gdb ./configure cd doc make html MAKEINFO=makeinfo MAKEINFOFLAGS='--no-split' ls *.html This assumes that you have makeinfo installed; that should be something like apt-get install texinfo on debian-like systems.
How to build the GDB documentation from source?
1,469,380,359,000
I am trying to debug a segfault, without source code, in a binary at a nonstandard path (specifically in /frs/alg/alg/bin/), and I noticed that the decompiled code has fewer symbols than when debugging under gdb. I am assuming that the debug symbols are detached, but where should I look to find them?
Presumably the binary has detached debug information; if gdb is able to find this without any particular configuration, it should be in one of a build-id based file under /usr/lib/debug/.build-id; a .debug file alongside the binary; a .debug file in /frs/alg/alg/bin/.debug; a .debug file in /usr/lib/debug/frs/alg/alg/bin. The binary might have a debug link pointing at the detached information; look for a .gnu_debuglink section in the binary. To find its build-id (if any), look for a section named .note.gnu.build-id or something along those lines.
How do I find detached debug symbols for decompilation?
1,469,380,359,000
After a program is compiled and the binary file is generated, we can use objdump to disassemble the binary file and extract the assembly code along with a lot of other information. However, using -j .text with objdump, it will disassemble all functions (glibc, OS functions, etc.), which I do not want. I want to focus only on my own functions in the binary file. Using nm, it is possible to find only the user-defined functions. After extracting the names of these functions, I want to disassemble only these functions, without searching through the huge dump file that objdump generates to extract the disassembly related to my functions.

Assume we have the binary file for the basicmath program from the MiBench benchmark. Using nm, it is possible to find only the functions that are defined in the source code of this program. The command below will show the functions that I want (the user-defined functions):

nm -P tst.o | awk '$2 == "T" && $1 != "main" {print "b " $1}'

The result will be (considering the basicmath program):

b deg2rad
b rad2deg
b solveCubic
b usqrt

Now, I need a way to tell objdump to disassemble only these functions and write the result to a single file.
I don't see a way to tell the objdump program from binutils or elfutils to limit disassembly to specific functions. There are a couple of workarounds, though. Assume the list of functions we're interested in is in file list:

$ nm -P basicmath_small | awk '{ if ($2 == "T" && $1 != "main" && substr($1,1,1) != "_") print $1}' > list
$ cat list
deg2rad
rad2deg
SolveCubic
usqrt

Use awk to filter the big disassembly output from objdump. Each region of interest starts with a line that ends with <functionname>: and continues until an empty line (or end-of-file).

$ xargs < list | sed -e 's/^/<(/' -e 's/ /|/g' -e 's/$/)>:\$/' > rlist
$ cat rlist
<(deg2rad|rad2deg|SolveCubic|usqrt)>:$
$ objdump -d -j .text basicmath_small | awk -v rlist="$(cat rlist)" \
  '{ if ($0 ~ rlist) doprint=1; if ($0 == "") doprint=0; if (doprint) print }'
0000000000400fc0 <rad2deg>:
  400fc0: f2 0f 59 05 c8 4d 0a  mulsd 0xa4dc8(%rip),%xmm0  # 4a5d90 <c2+0x10>
  400fc7: 00
  400fc8: f2 0f 5e 05 b8 4d 0a  divsd 0xa4db8(%rip),%xmm0  # 4a5d88 <c2+0x8>
  400fcf: 00
  400fd0: c3                    retq
  400fd1: 0f 1f 44 00 00        nopl 0x0(%rax,%rax,1)
  400fd6: 66 2e 0f 1f 84 00 00  nopw %cs:0x0(%rax,%rax,1)
  400fdd: 00 00 00

0000000000400fe0 <deg2rad>:
  400fe0: f2 0f 59 05 a0 4d 0a  mulsd 0xa4da0(%rip),%xmm0  # 4a5d88 <c2+0x8>
  400fe7: 00
  400fe8: f2 0f 5e 05 a0 4d 0a  divsd 0xa4da0(%rip),%xmm0  # 4a5d90 <c2+0x10>
  400fef: 00
  400ff0: c3                    retq
  400ff1: 66 2e 0f 1f 84 00 00  nopw %cs:0x0(%rax,%rax,1)
  400ff8: 00 00 00
  400ffb: 0f 1f 44 00 00        nopl 0x0(%rax,%rax,1)

0000000000401000 <SolveCubic>:
  401000: f2 0f 5e c8           divsd %xmm0,%xmm1
  ...
Use the objdump from Go, which takes a -s regexp option:

$ xargs < list | sed -e 's/^/^(/' -e 's/ /|/g' -e 's/$/)\$/' > rlist
$ cat rlist
^(deg2rad|rad2deg|SolveCubic|usqrt)$
$ go tool objdump -s "$(cat rlist)" basicmath_small
TEXT rad2deg(SB)
  :0  0x400fc0  f20f5905c84d0a00  MULSD 0xa4dc8(IP), X0
  :0  0x400fc8  f20f5e05b84d0a00  DIVSD 0xa4db8(IP), X0
  :0  0x400fd0  c3                RET
TEXT deg2rad(SB)
  :0  0x400fe0  f20f5905a04d0a00  MULSD 0xa4da0(IP), X0
  :0  0x400fe8  f20f5e05a04d0a00  DIVSD 0xa4da0(IP), X0
  :0  0x400ff0  c3                RET
TEXT SolveCubic(SB)
  :0  0x401000  f20f5ec8          DIVSD X0, X1
  ...
How to disassembly multiple functions using Linux utility objdump?
1,469,380,359,000
I want to emulate an ARM processor for running the assembly programs using QEMU in RHEL. I have installed QEMU but I still have problems in running the assembly program. I got the assembly program, memory map and the makefile from this link. However, if I run the below command, qemu-system-arm -S -s -M versatilepb -daemonize -m 128M -d in_asm,cpu,exec -kernel hello_world.bin ; gdb-multiarch --batch --command=hello_world.gdb I get an error as "hello_world.bin - No such file or directory". I am not sure what is to be done to run the above command. So, I got an ARM image from this link. Instead of hello_world.bin, if I specify the kernel name as "zImage.Integrator", I am getting a QEMU console window. However, I am not able to type or do anything in that window. Can someone please let me know how to run an assembly program using QEMU for ARM?
Alright, I figured out what I was doing wrong. I should actually run the make command, which will create my object file and binary file. I got more information on running the command from this link. Now, I have to figure out how to install GDB to interact between ARM and QEMU.
QEMU for ARM programs with GDB [closed]
1,603,617,720,000
I know the basics of how to use gdb, but I would like to learn some advanced debugging techniques using gdb. What are the best resources - books, blogs, tutorials - that any of you use regularly? I did look at this question: Tips or resources for learning advanced debugging techniques GDB in xcode, but what I'm looking for is the GDB equivalent of the following:

http://www.dumpanalysis.org/
http://WinDbg.info
Memory Dump Analysis Anthology

I understand that this is a subjective question, but there are a lot of questions which are very similar in nature (e.g. The Definitive C++ Book Guide and List) and this question has not been asked here. If there is a duplicate that I missed, please cite it in the comments and close this question.
Norman Matloff's book on debugging, The Art of Debugging, is quite good, though I don't know if you would consider it advanced. There is also his online tutorial, Guide to Faster, Less Frustrating Debugging, which might be an earlier version of the book. There is also a tutorial, My debugging tutorial, linked from the page Norm Matloff's DDD Tutorial. Personally, I use print statements. :-) I've tried to use GDB in the past, but only with C++ (I don't use C). The problems I had were that, first, GDB was itself quite buggy, and second, it did not cope well with displaying complex C++ structures. This was some time ago, and the situation may have improved.
what are some of the best resources to learn advanced debugging techniques using gdb? [closed]
1,603,617,720,000
I'm building a continuous integration environment for a firmware codebase, programming an ARM Cortex M0 using a Segger JLink device and running tests on the target using gdb and Segger's RTT tool. I have three processes I need to start from "expect":

The gdb server. This listens for a connection from...
The gdb client.
Segger's RTT client, which logs output from the target to the host terminal so I can see how the tests are getting on.

I have "make" targets for each of these. When I run tests as a human, I have a terminal tab open for each. Run one by one in the terminal, they all run fine. However, when run by the following "expect" script, the gdb client will stop at the point at which it's supposed to be sending stuff to the gdb server. Why?

#!/usr/bin/expect
# Bike Tracker firmware/hardware test. For syntax, see "man expect".

# gdb server
spawn /Applications/SEGGER/JLink/JLinkGDBServer -device nrf51822 -if swd -speed 4000 -port 2331
expect {
    -ex "Waiting for GDB connection..."
}
sleep 1
send_user "\nexpect: gdb server running OK\n"

# gdb client
spawn ~/dev/gcc-arm-none-eabi-4_9-2015q1/bin/arm-none-eabi-gdb -x ./_build/.gdbinit ./_build/biketracker_app_s130.elf
sleep 2
set timeout 10
expect {
    -ex "(gdb)" {
        send "cont"
        send "cont"
    }
    -ex "Operation timed out" {
        send_user "expect: Timed out on gdb client. Did the server start OK?\n"
        exit 1
    }
    timeout {
        send_user "\nexpect: Timed out on gdb client output.\n"
        exit 1
    }
}
send_user "\nexpect: gdb client running OK\n"

# Segger RTT client
spawn /Applications/SEGGER/JLink/JLinkRTTClient -device nrf51822 -if swd -speed 4000
# If we do an RX operation on the modem, that takes 6s for the TX and about 30s for the RX.
set timeout 40
expect {
    -ex "END_OF_TEST" {
        exit 0
    }
    eof {
        exit 0
    }
    -ex "ASSERT" {
        send_user "\nexpect: A test failed. See RTT output for details.\n"
        exit 1
    }
    timeout {
        send_user "\nexpect: Timed out on RTT output.\n"
        exit 1
    }
}

Terminal output:

expect test.expect
spawn /Applications/SEGGER/JLink/JLinkGDBServer -device nrf51822 -if swd -speed 4000 -port 2331
SEGGER J-Link GDB Server V5.12f Command Line Version
JLinkARM.dll V5.12f (DLL compiled May 17 2016 16:04:43)

-----GDB Server start settings-----
GDBInit file:                  none
GDB Server Listening port:     2331
SWO raw output listening port: 2332
Terminal I/O port:             2333
Accept remote connection:      yes
Generate logfile:              off
Verify download:               off
Init regs on start:            off
Silent mode:                   off
Single run mode:               off
Target connection timeout:     0 ms
------J-Link related settings------
J-Link Host interface:         USB
J-Link script:                 none
J-Link settings file:          none
------Target related settings------
Target device:                 nrf51822
Target interface:              SWD
Target interface speed:        4000kHz
Target endian:                 little

Connecting to J-Link...
J-Link is connected.
Firmware: J-Link ARM V8 compiled Nov 28 2014 13:44:46
Hardware: V8.00
S/N: 268006243
OEM: SEGGER-EDU
Feature(s): FlashBP, GDB
Checking target voltage...
Target voltage: 3.04 V
Listening on TCP/IP port 2331
Connecting to target...Connected to target
Waiting for GDB connection...

expect: gdb server running OK
spawn ~/dev/gcc-arm-none-eabi-4_9-2015q1/bin/arm-none-eabi-gdb -x ./_build/.gdbinit ./_build/biketracker_app_s130.elf
GNU gdb (GNU Tools for ARM Embedded Processors) 7.8.0.20150304-cvs
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "--host=x86_64-apple-darwin10 --target=arm-none-eabi".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./_build/biketracker_app_s130.elf...done.
0x0002ecf2 in rx_done_event (bytes=1 '\001', p_data=0x20002cec <rx_buffer> ",") at /Users/Eliot/dev/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/uart/nrf_drv_uart.c:631
631     m_cb.handler(&event,m_cb.p_context);
Loading section .text, size 0x1fbec lma 0x1b000

expect: Timed out on gdb client output.
The problem is probably that the gdb server output is being blocked because no one reads it, so the gdb client is also blocked. You can get expect to read and ignore the rest of the output from gdb with

expect_background eof exit

which makes the last spawned command continue with its output until end-of-file is read.
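In the script from the question, that line would go right after the server's expect block, before the gdb client is spawned. A fragment, not a complete script:

```
# gdb server
spawn /Applications/SEGGER/JLink/JLinkGDBServer -device nrf51822 -if swd -speed 4000 -port 2331
expect {
    -ex "Waiting for GDB connection..."
}
# Keep draining the server's output in the background so it never
# blocks on a full pty buffer while we talk to the gdb client.
expect_background eof exit
```

Without this, expect stops reading from the server once it has matched "Waiting for GDB connection...", and the server eventually stalls on a write to its pty.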
Why does a gdb client fail to talk to its gdb server when started by this "expect" script?
1,603,617,720,000
I'm having lots of problems with GDB; it usually crashes and starts using 100% CPU until I kill the process using the Activity Monitor on the Mac (using Mavericks). How do I remove GDB from my machine (using GDB 7.6.1)? I plan to install an older version (GDB 6.x.x) after uninstalling this version.
A quick solution would be to simply remove the whole /usr/local in case you haven't installed anything else there (according to our little detour in comments, I think that is the case). So sudo mv /usr/local /usr/_local will get rid of whatever gunk is there (you can delete the directory later, when you're sure it contains nothing important). Then instead of following the path of building gdb from scratch, use homebrew to install gdb. If this doesn't work as expected either, then just mv /usr/local /usr/__local (or something similar, I think you can see the pattern emerging here) and try building an older version from source. I'd still recommend that you at least try the homebrew bottled version, because gdb 6 is quite aged already.
Uninstalling GDB on a Mac
1,603,617,720,000
I am running the following command:

gcore 56058

Output:

Missing separate debuginfo for /lib64/libdl.so.2
Try: zypper install -C "debuginfo(build-id)=dcca9c1f648bda0a7318a7c8844982c440e3e4a3"
Missing separate debuginfo for /lib64/librt.so.1
Try: zypper install -C "debuginfo(build-id)=a8648696e4118ee36ec41c9d75c0520c213ad6ea"
Missing separate debuginfo for /usr/lib64/libstdc++.so.6
Try: zypper install -C "debuginfo(build-id)=a6fb063da357832cfb5db486b331ab960937c906"
Missing separate debuginfo for /lib64/libm.so.6
Try: zypper install -C "debuginfo(build-id)=00ad299aa07655131d2732eee1b767b99cf9c85e"
Missing separate debuginfo for /lib64/libgcc_s.so.1
Try: zypper install -C "debuginfo(build-id)=9da24cf706b41e55ce5373bcb6253c1618b00abf"
Missing separate debuginfo for /lib64/libpthread.so.0
Try: zypper install -C "debuginfo(build-id)=a3bdfa74d39fa9e1c4252ecf5007f7e8c1fcb628"
[Thread debugging using libthread_db enabled]
[New Thread 0x20002407910 (LWP 56067)]
[New Thread 0x20001c07910 (LWP 56066)]
[New Thread 0x20001407910 (LWP 56065)]
[New Thread 0x20000c07910 (LWP 56064)]
[New Thread 0x20000037910 (LWP 56059)]
Missing separate debuginfo for /lib64/libc.so.6
Try: zypper install -C "debuginfo(build-id)=eeb7bc1f31ca2e17c31f5768901f653e47acd6d3"
Missing separate debuginfo for /lib/ld64.so.1

What are the debuginfos about, what are they caused by, and is there any reason for concern? Basically, what does it mean?
Compilers can be configured to generate extra information with the executable and/or libraries that aids debugging. With this extra information, your debugger can show the original source code and variable names, amongst other things. Unfortunately, this debugging information takes up a lot of space on the system. Considering that it is hardly ever used (if everything is working to plan), it is just redundant and takes up disk space. To get around this, many distros split the package into two - one contains everything that is needed to make that package run, and the second contains the debug information above. The latter are called debuginfo packages, and they need to be installed to successfully debug the main package. You're using SuSE and, as I don't use it, I can't really comment on how to install these packages on that distro, other than I believe you enable a repository and use zypper to install the same package with debuginfo in its name. On Fedora you enable a repository and use the debuginfo-install command to install these debuginfo packages. Your command gcore is creating a core dump of process 56058. With the debuginfo packages installed it could add far more useful debugging info in the core dump, which is why it's suggesting you install them.
"Missing separate debuginfo for ..." when running gcore
1,603,617,720,000
While doing post-mortem debugging of my x86-64 application, I've come across a strange symptom:

(gdb) p/x $xmm1
$8 = {v4_float = {<unavailable>, <unavailable>, <unavailable>, <unavailable>},
  v2_double = {<unavailable>, <unavailable>},
  v16_int8 = {<unavailable> <repeats 16 times>},
  v8_int16 = {<unavailable>, <unavailable>, <unavailable>, <unavailable>, <unavailable>, <unavailable>, <unavailable>, <unavailable>},
  v4_int32 = {<unavailable>, <unavailable>, <unavailable>, <unavailable>},
  v2_int64 = {<unavailable>, <unavailable>},
  uint128 = <unavailable>}

Puzzled, I've then tried:

(gdb) info all-registers
rax            0x7f4fb3286020   139980284911648
rbx            0x7fff90cbf720   140735622674208
rcx            0xffff0          1048560
rdx            0xffef0          1048304
rsi            0xfbeea0         16510624
rdi            0x7f4fb3386010   139980285960208
rbp            0x7fff90cbf6f0   0x7fff90cbf6f0
rsp            0x7fff90cad5e8   0x7fff90cad5e8
r8             0x7f4fb3386004   139980285960196
r9             0x4              4
r10            0x3              3
r11            0x246            582
r12            0xd466f0         13919984
r13            0xffff4          1048564
r14            0x7fff90cad620   140735622600224
r15            0x7fff90cad610   140735622600208
rip            0x7f4fc1c01728   0x7f4fc1c01728 <__memcpy_ssse3_back+7016>
eflags         0x10206          [ PF IF RF ]
cs             0x33             51
ss             0x2b             43
ds             0x0              0
es             0x0              0
fs             0x0              0
gs             0x0              0
st0            *value not available*
st1            *value not available*
st2            *value not available*
st3            *value not available*
st4            *value not available*
st5            *value not available*
st6            *value not available*
st7            *value not available*
fctrl          *value not available*
fstat          *value not available*
ftag           *value not available*
fiseg          *value not available*
fioff          *value not available*
foseg          *value not available*
---Type <return> to continue, or q <return> to quit---
fooff          *value not available*
fop            *value not available*
mxcsr          *value not available*
ymm0           *value not available*
ymm1           *value not available*
ymm2           *value not available*
ymm3           *value not available*
ymm4           *value not available*
ymm5           *value not available*
ymm6           *value not available*
ymm7           *value not available*
ymm8           *value not available*
ymm9           *value not available*
ymm10          *value not available*
ymm11          *value not available*
ymm12          *value not available*
ymm13          *value not available*
ymm14          *value not available*
ymm15          *value not available*

I take it to mean that core dumps don't save FPU and SSE/AVX state. Is it true? Or could it be a bug in GDB? How can I check whether the core file itself contains the values for these registers?

GDB is GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-64.el7. The same thing appears on Kubuntu 14.04 with the same executable and its core file, with GDB 7.11 compiled from sources.
It seems to be true that Linux doesn't save these registers for the crashed thread. I've tried eu-readelf --notes myapp.core and it only reported PRSTATUS and various signal-related info for the crash, but not FPREGSET. Amusingly, other threads do appear to have FPREGSET saved in the dump. So the file just lacks this info. I've found an LKML message about this posted in 2014, but there doesn't seem to have been any reply. I assume this is just a kernel bug, not something optional and disabled on my system.
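To run the same check on a core file yourself, dump its notes and look at which note types each thread got; binutils readelf works as well as eu-readelf here (myapp.core is the file name from the question, so substitute your own). A command fragment:

```
readelf -n myapp.core | grep -E 'PRSTATUS|FPREGSET|XSTATE'
```

A healthy dump shows an NT_PRSTATUS note plus NT_FPREGSET (and NT_X86_XSTATE for SSE/AVX) per thread; in the case described above, the crashed thread's FP notes are simply missing.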
Are FPU/SSE/AVX registers not saved in core dumps?
1,603,617,720,000
GDB seems to hang everytime when I try run command from gdb prompt. When I ran ps, there are two gdb processes that have been spawned and pstack reveals the following - 15:47:02:/home/stufs1/pmanjunath/a2/Asgn2_code$ uname -a SunOS compserv1 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-1500 15:44:04:/home/stufs1/pmanjunath/a2/Asgn2_code$ ps aux | grep gdb pmanjuna 13121 0.1 0.1 1216 968 pts/23 S 15:44:11 0:00 grep gdb pmanjuna 13077 0.0 0.1 7616 4392 pts/15 S 15:41:41 0:00 gdb client pmanjuna 13079 0.0 0.1 7616 4392 pts/15 T 15:41:51 0:00 gdb client 15:44:50:/home/stufs1/pmanjunath/a2/Asgn2_code$ pstack 13077 13077: gdb client fef42c30 vfork () 00065938 procfs_create_inferior (32ea10, 32d728, 317430, 1, 0, 657a8) + 190 0008c668 sol_thread_create_inferior (32ea10, 32d728, 317430, 1, 25e030, 0) + 18 000ffda0 find_default_create_inferior (32ea10, 32d728, 317430, 1, 405c, 4060) + 20 000d8690 run_command_1 (0, 1, 32ea10, 1, ffbff0f4, 316fd0) + 208 0007e344 do_cfunc (316fd0, 0, 1, 1, 0, 0) + c 0008016c cmd_func (316fd0, 0, 1, 0, 1, 0) + 30 0004c1d4 execute_command (316fd0, 1, 0, 4f00c, 1, 2dc800) + 390 000eb6a0 command_handler (2f4ee0, 0, 2f3800, 8acf, ff000000, ff0000) + 8c 000ebbcc command_line_handler (2f3800, 7200636c, 32d71c, 7200, 2dfc00, 2dfc00) + 2a4 0019b354 rl_callback_read_char (fef6b6f8, 0, 931d8, 0, fef68284, fef68284) + 340 000eafb4 rl_callback_read_char_wrapper (0, fef709b0, 0, 11, 0, eafb0) + 4 000eb590 stdin_event_handler (0, 0, 932b4, fef6fad4, 0, 1) + 60 000ea780 handle_file_event (1, 1084, 932f4, 4f00c, ff1f2000, 1000) + bc 000ea11c process_event (0, 0, ffffffff, 0, 2df9f8, 0) + 84 000ea9d4 gdb_do_one_event (1, 1, 0, 2f3158, ff1f2000, 2) + 108 000e7cd4 catch_errors (ea8cc, 0, 2473a8, 6, ffbff6f0, 1) + 5c 000907e8 tui_command_loop (0, 64, ffffffff, 0, 0, 2f6190) + e0 000e7fcc current_interp_command_loop (800000, ff400000, ffc00000, 800000, 0, 331b40) + 54 00045b80 captured_command_loop (1, 1, 0, fef33a54, ff1f2000, 2) + 4 000e7cd4 catch_errors 
(45b7c, 0, 22db20, 6, 2dc400, 0) + 5c 0004625c captured_main (2d1800, 2f4ae0, 0, 0, 0, 0) + 6a0 000e7cd4 catch_errors (45bbc, ffbffc18, 22db20, 6, 0, 0) + 5c 00046bb0 gdb_main (ffbffc18, 0, 0, 0, 0, 0) + 24 00045b6c main (2, ffbffc9c, ffbffca8, 2f45b8, ff1f0100, ff1f0140) + 28 000459dc _start (0, 0, 0, 0, 0, 0) + 5c 15:45:38:/home/stufs1/pmanjunath/a2/Asgn2_code$ pstack 13079 13079: gdb client fef4098c execve (ffbfffe6, ffbffc9c, ffbffca8) feec4a7c execlp (ffbffdc6, ffffffff, 289bc0, ffbfed18, 0, ffbfed10) + ac 0016e3e8 fork_inferior (32ea10, 32d728, 317430, 6567c, 653dc, 0) + 310 00065938 procfs_create_inferior (32ea10, 32d728, 317430, 1, 0, 657a8) + 190 0008c668 sol_thread_create_inferior (32ea10, 32d728, 317430, 1, 25e030, 0) + 18 000ffda0 find_default_create_inferior (32ea10, 32d728, 317430, 1, 405c, 4060) + 20 000d8690 run_command_1 (0, 1, 32ea10, 1, ffbff0f4, 316fd0) + 208 0007e344 do_cfunc (316fd0, 0, 1, 1, 0, 0) + c 0008016c cmd_func (316fd0, 0, 1, 0, 1, 0) + 30 0004c1d4 execute_command (316fd0, 1, 0, 4f00c, 1, 2dc800) + 390 000eb6a0 command_handler (2f4ee0, 0, 2f3800, 8acf, ff000000, ff0000) + 8c 000ebbcc command_line_handler (2f3800, 7200636c, 32d71c, 7200, 2dfc00, 2dfc00) + 2a4 0019b354 rl_callback_read_char (fef6b6f8, 0, 931d8, 0, fef68284, fef68284) + 340 000eafb4 rl_callback_read_char_wrapper (0, fef709b0, 0, 11, 0, eafb0) + 4 000eb590 stdin_event_handler (0, 0, 932b4, fef6fad4, 0, 1) + 60 000ea780 handle_file_event (1, 1084, 932f4, 4f00c, ff1f2000, 1000) + bc 000ea11c process_event (0, 0, ffffffff, 0, 2df9f8, 0) + 84 000ea9d4 gdb_do_one_event (1, 1, 0, 2f3158, ff1f2000, 2) + 108 000e7cd4 catch_errors (ea8cc, 0, 2473a8, 6, ffbff6f0, 1) + 5c 000907e8 tui_command_loop (0, 64, ffffffff, 0, 0, 2f6190) + e0 000e7fcc current_interp_command_loop (800000, ff400000, ffc00000, 800000, 0, 331b40) + 54 00045b80 captured_command_loop (1, 1, 0, fef33a54, ff1f2000, 2) + 4 000e7cd4 catch_errors (45b7c, 0, 22db20, 6, 2dc400, 0) + 5c 0004625c captured_main (2d1800, 
2f4ae0, 0, 0, 0, 0) + 6a0 000e7cd4 catch_errors (45bbc, ffbffc18, 22db20, 6, 0, 0) + 5c 00046bb0 gdb_main (ffbffc18, 0, 0, 0, 0, 0) + 24 00045b6c main (2, ffbffc9c, ffbffca8, 2f45b8, ff1f0100, ff1f0140) + 28 000459dc _start (0, 0, 0, 0, 0, 0) + 5c Why are these processes hanging in vfork and execve? This happens on my university machine where fellow students also have accounts. None of them have reported this problem. Seems to happen only to me. EDIT : With schily's help, I am able to corner the problem. When I log in, I am in csh by default. GDB works pretty fine here. Now, I run bash from csh to enter bash shell. Now GDB hangs. When I check the output of echo $SHELL, I see something strange $ echo $SHELL /bin/bash= There is an equal sign at the end of the output. I guess GDB is trying to spawn a bash shell I guess using the default shell variable and fails to find the binary cos of that equal sign. Now, the problem is to find out how that equal sign is getting into the shell path.
The process that calls vfork() hangs because it is the vfork() parent and the child borrowed the process image at that time, so it cannot run until the child finishes a call to _exit() or exec*(). So you need to find out why the exec*() hangs. A typical reason for a hang in exec*() is an NFS hang or a traversal through a non-existent automount point. Call truss -p 13079 to get the path for the hanging exec*().
GDB hangs forever on Solaris
1,603,617,720,000
I am trying a debug a peculiar performance behavior in the thumbnail-generating process for eog, specifically gdk-pixbuf. The minimal files to reproduce are here: https://github.com/nbeaver/gdk-pixbuf-bug The process tree looks like this: systemd,1 splash `-plasmashell,4366 `-konsole,6783 `-bash,6793 `-make,6949 reproduce `-eog,6973 /usr/share/doc/docutils-doc/docs/user/images `-bwrap,10071 --ro-bind /usr /usr --ro-bind /bin /bin --ro-bind /lib64 /lib64 --ro-bind /lib /lib --ro-bind /sbin /sbin --proc /proc --dev /dev --chdir / --setenv GIO_USE_VFS local --unshare-all --die-with-parent --bind /tmp/gnome-desktop-thumbnailer-2HUN5Z /tmp --ro-bind /usr/share/doc/docutils-doc/docs/user/images/s5-files.svg /tmp/gnome-desktop-file-to-thumbnail.svg --seccomp 11 /usr/bin/gdk-pixbuf-thumbnailer -s 128 file:///tmp/gnome-desktop-file-to-thumbnail.svg /tmp/gnome-desktop-thumbnailer.png `-bwrap,10074 --ro-bind /usr /usr --ro-bind /bin /bin --ro-bind /lib64 /lib64 --ro-bind /lib /lib --ro-bind /sbin /sbin --proc /proc --dev /dev --chdir / --setenv GIO_USE_VFS local --unshare-all --die-with-parent --bind /tmp/gnome-desktop-thumbnailer-2HUN5Z /tmp --ro-bind /usr/share/doc/docutils-doc/docs/user/images/s5-files.svg /tmp/gnome-desktop-file-to-thumbnail.svg --seccomp 11 /usr/bin/gdk-pixbuf-thumbnailer -s 128 file:///tmp/gnome-desktop-file-to-thumbnail.svg /tmp/gnome-desktop-thumbnailer.png `-gdk-pixbuf-thum,10075 -s 128 file:///tmp/gnome-desktop-file-to-thumbnail.svg /tmp/gnome-desktop-thumbnailer.png From the strace log, it looks like /usr/bin/gdk-pixbuf-thumbnailer is spending about 30 seconds looking at font files: 22:44:05 munmap(0x7fd491988000, 20930832) = 0 <0.000558> 22:44:05 openat(AT_FDCWD, "/usr/share/fonts/opentype/noto/NotoSansCJK-Bold.ttc", O_RDONLY) = 5 <0.000060> 22:44:05 fcntl(5, F_SETFD, FD_CLOEXEC) = 0 <0.000014> 22:44:05 fstat(5, {st_mode=S_IFREG|0644, st_size=20930832, ...}) = 0 <0.000013> 22:44:05 mmap(NULL, 20930832, PROT_READ, MAP_PRIVATE, 5, 0) = 
0x7fd491988000 <0.000021> 22:44:05 close(5) = 0 <0.000011> 22:44:06 munmap(0x7fd491988000, 20930832) = 0 <0.000525> 22:44:06 openat(AT_FDCWD, "/usr/share/fonts/opentype/noto/NotoSansCJK-Bold.ttc", O_RDONLY) = 5 <0.000076> 22:44:06 fcntl(5, F_SETFD, FD_CLOEXEC) = 0 <0.000013> 22:44:06 fstat(5, {st_mode=S_IFREG|0644, st_size=20930832, ...}) = 0 <0.000012> 22:44:06 mmap(NULL, 20930832, PROT_READ, MAP_PRIVATE, 5, 0) = 0x7fd491988000 <0.000023> 22:44:06 close(5) = 0 <0.000013> <snip> 22:44:31 stat("/usr/share/fonts/opentype/stix-word/STIXMath-Regular.otf", {st_mode=S_IFREG|0644, st_size=476872, ...}) = 0 <0.000024> 22:44:31 openat(AT_FDCWD, "/usr/share/fonts/opentype/stix-word/STIXMath-Regular.otf", O_RDONLY) = 5 <0.000026> 22:44:31 fcntl(5, F_SETFD, FD_CLOEXEC) = 0 <0.000014> 22:44:31 fstat(5, {st_mode=S_IFREG|0644, st_size=476872, ...}) = 0 <0.000013> 22:44:31 mmap(NULL, 476872, PROT_READ, MAP_PRIVATE, 5, 0) = 0x7fd49c26a000 <0.000023> 22:44:31 close(5) = 0 <0.000015> There is a particular SVG that triggers this behavior. However, it's not enough to just run eog or gdk-pixbuf-thumbnailer on the SVG. This behavior only happens when: running eog on a directory; there is a particular SVG in the directory that does not already have a thumbnail in ~/.cache/thumbnails/. (I use touch to update the timestamp of the SVG and make the thumbnailer run again every time.) there is at least one other image in the same directory; and the other image has a filename that collates before the SVG filename. (If the filename collates after the SVG filename, it generates the thumbnail in less than a second. Otherwise it takes around 30 seconds.) There are some other puzzles, too. In the strace log, the wall clock times don't seem to match the time spent in the system calls. I've run eog under strace with the -f flag: -f Trace child processes as they are created by currently traced processes as a result of the fork(2), vfork(2) and clone(2) system calls. 
and I've also tried the -ff flag: -ff If the -o filename option is in effect, each processes trace is written to filename.pid where pid is the numeric process id of each process. but in either case gdk-pixbuf-thumbnailer doesn't show up in the logfiles of child processes. I'm also having trouble running gdb on gdk-pixbuf-thumbnailer (something about "Target and debugger are in different PID namespaces"), so I can't tell where it's getting stuck. $ sudo gdb -p 20789 [sudo] password for nathaniel: <snip> Error while mapping shared library sections: Could not open `target:/lib/x86_64-linux-gnu/libbsd.so.0' as an executable file: No such file or directory warning: Unable to find dynamic linker breakpoint function. GDB will be unable to debug shared library initializers and track explicitly loaded dynamic code. warning: Target and debugger are in different PID namespaces; thread lists and other data are likely unreliable. Connect to gdbserver inside the container. (gdb) quit Detaching from program: target:/newroot/usr/bin/gdk-pixbuf-thumbnailer, process 20789 I'm guessing this has to do with the bwrap container. Version information: $ apt-cache policy libgdk-pixbuf2.0-bin eog libgdk-pixbuf2.0-bin: Installed: 2.36.11-2 Candidate: 2.36.11-2 Version table: *** 2.36.11-2 500 500 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 Packages 100 /var/lib/dpkg/status eog: Installed: 3.28.1-1 Candidate: 3.28.1-1 Version table: *** 3.28.1-1 500 500 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 Packages 100 /var/lib/dpkg/status My questions are: Is this bug reproducible on other machines and other versions? (I happen to be using Ubuntu 18.04, but I want to know if this happens on other distributions.) Why isn't strace -f picking up /usr/bin/gdk-pixbuf-thumbnailer as a child process of eog? Does eog use an unusual method to create child processes? How can I use gdb to attach to the /usr/bin/gdk-pixbuf-thumbnailer process and see what function it's spending time in? 
What might be causing this behavior?
After hitting upon the right combination of web search keywords, I am 90% sure this is a duplicate of this bug from December 15, 2018: Slow thumbnail generation due to font issues So I was investigating a slowdown in eog while auto-reloading SVG files, and it seems the problem was in the thumbnail generation, which was taking ~10s. (For a tiny SVG, mind you.) More specifically, gdk-pixbuf-thumbnailer complained about not finding a font config and spent a lot of time looking at fonts. Adding --ro-bind /var/cache/fontconfig /var/cache/fontconfig to the arguments for bwrap fixed the issue and the time is down to ~0.2s. https://gitlab.gnome.org/GNOME/gnome-desktop/issues/90 It's mentioned here: ...and we have also the huge slowdown, see https://gitlab.gnome.org/GNOME/gnome-desktop/issues/90 https://bugs.launchpad.net/ubuntu/+source/gnome-desktop3/+bug/1795668 The fix is a patch in gnome-desktop3. thumbnail: Fix slow thumbnailer due to missing font cache On some distributions, the font cache doesn't live in /usr but in /var, which we don't allow access to when sandboxing the thumbnailers. Bind mount the fontconfig cache directory read-only if it lives outside /usr, to speed up thumbnailer startup. https://gitlab.gnome.org/GNOME/gnome-desktop/merge_requests/25/diffs It looks like the fix is in gnome-desktop3 version 3.30 and later, so as of July 19, 2019, that is only Ubuntu 19.10 (Eoan Ermine, unreleased) and 19.04 (Disco Dingo, end of life January 2020). 
https://launchpad.net/ubuntu/+source/gnome-desktop3 https://launchpad.net/ubuntu/+source/gnome-desktop3/+publishinghistory Version information for my machine: $ apt-cache policy libgnome-desktop-3-17 libgnome-desktop-3-17: Installed: 3.28.2-0ubuntu1.5 Candidate: 3.28.2-0ubuntu1.5 Version table: *** 3.28.2-0ubuntu1.5 500 500 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages 100 /var/lib/dpkg/status 3.28.2-0ubuntu1.3 500 500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages 3.28.1-1ubuntu1 500 500 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
Debugging a slow thumbnailer process
1,603,617,720,000
This is somewhat related to gdb set overwrite logging on should overwrite gdb.txt correct? . Let's say I'm running a session of some application. For example purposes, let me take the example of qbittorrent again. As shared before, this is how a run happens:

$ gdb qbittorrent
(gdb) set logging overwrite on
(gdb) set logging on
(gdb) set pagination 0
(gdb) run

One way I know is exiting the application gracefully, but sometimes the application hangs/takes too much time or simply doesn't respond. Then the only option which remains with me is using CTRL+C, which, if I understand correctly, kills the underlying application (in our example qbittorrent), and then I am able to quit gdb by means of

(gdb) quit

Is/would there be any other way of quitting the application and still let the gdb session keep running, or is the only way the crude way I mentioned above? As far as I know, killing an application process should be the last solution and not the first.
You can use signals for this. Before you start your program, set up USR1 or USR2 to break gdb without affecting the program: handle SIGUSR1 nopass Then you can run your program, and when you need to stop it, run kill -USR1 from another shell with the appropriate (child) pid. gdb will pause the application, and you can then add breakpoints, examine state etc., and if you want to, continue the execution with cont.
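A sketch of the whole workflow (program name and pid are placeholders); note that gdb's own kill command terminates only the inferior and leaves the session running, so you never have to abandon gdb itself:

```
$ gdb qbittorrent
(gdb) handle SIGUSR1 nopass
(gdb) run
# --- in a second shell, once the program hangs ---
$ kill -USR1 <pid-of-qbittorrent>
# --- back in gdb, which has now paused the program ---
(gdb) bt
(gdb) kill      # ends only the debugged process, not gdb
(gdb) run       # start a fresh instance in the same session
```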
How to quit an application running in gdb gracefully when the application isn't responding.
1,603,617,720,000
I have been trying to find the address of the SHELL environment variable in a program on a Ubuntu 12.04 machine. I have turned off ASLR. I found similar posts: SO question and blog post I have tried using the following in gdb (gdb) x/s *((char **)environ) but I get the message: (gdb) No symbol "environ" in current context Is this not valid in Ubuntu 12.04? Is there any other way to inspect the address of an environment variable in a process using gdb?
Has your binary been stripped of its symbols? If so, there will be no symbol table and you will have no hope of finding this symbol. You can find out with readelf - here my hello binary does have its symbol table: $ readelf -S hello | grep -i symtab [28] .symtab SYMTAB 0000000000000000 000018f8 $ Also when you run GDB, has your program actually started? It looks like this symbol is not resolvable until the symbol table has loaded. It won't be loaded when you first start GDB, but should be by the time you hit main(). You can simply put a breakpoint at main(), run the program, and then inspect the variable when you hit the main() breakpoint: Reading symbols from hello...(no debugging symbols found)...done. (gdb) x/s *((char **)environ) No symbol table is loaded. Use the "file" command. (gdb) b main Breakpoint 1 at 0x4005c8 (gdb) r Starting program: /home/ubuntu/hello Breakpoint 1, 0x00000000004005c8 in main () (gdb) x/s *((char **)environ) 0x7fffffffe38f: "XDG_VTNR=7" (gdb)
Using gdb to inspect environment variables
1,603,617,720,000
I want to inject a bit-flip fault into a running program. For this purpose, I'm using gdb to insert a breakpoint into the target program and then flipping a single bit in a randomly selected register. When I perform this instruction under gdb in Ubuntu, I get this error when trying to manipulate $eip: (gdb) info r ... eip 0x804af59 0x804af59 <main+37> ... (gdb) p/a $eip $4 = 0x804af59 <main+37> (gdb) set $eip = $eip ^ 0x800 argument to arithmetic operation not a number or boolean (gdb) set $eax = $eax ^ 0x1 (gdb) I can't tell whether this is a bug in GDB or a syntax error. The error only occurs when I try to change the following registers: %eip, %esp, and %ebp. As we can see from the listing, no problem occurs when I change the content of the eax register. More information... In safety-critical systems that work in harsh environments, the system is more susceptible to soft errors, i.e., Single Event Upsets (SEUs) like bit-flips. In this context, researchers have developed several techniques to detect such errors and keep the system reliable, i.e., fault-tolerance techniques. In order to evaluate such techniques, the most powerful method is fault injection. You should inject faults into the most critical parts of the architecture at run-time and then monitor the hardened system to assess the fault coverage of the adopted fault-tolerance technique. Generally speaking, we should imitate soft errors using fault injection. I understand quite well what the eip register's job is and how sensitive it is to the control flow of the program. The version of GDB used is as follows: gdb --version GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1 Copyright (C) 2016 Free Software Foundation, Inc.
Some output from the gdb session: (gdb) p $eip $1 = (void (*)()) 0x804af34 <main> (gdb) ptype $eip type = void (*)() (gdb) In addition, when I cast the void to int, it works without any error, but the result, I think, is not sound, since I only try to toggle single bit by xoring the content of eip with 0x1 !! (gdb) set $eip=*(int *) $eip ^ 0x1 (gdb) p $eip $2 = (void (*)()) 0x4244c8c the value of eip is 0x804af34. So, if we perform a bitwise operation with 0x1, the result should be equal to 0x804AF35 not 0x4244c8c ?! (gdb) p $eip $8 = (void (*)()) 0x804af34 <main> (gdb) p *(int *) ($eip) $9 = 69487757 (gdb) p *(int *) $eip ^0x1 $10 = 69487756 (gdb) p/a *(int *) $eip ^0x1 $11 = 0x4244c8c (gdb)
You should be using (int) to coerce the pointer to an int. And in your later tests you should not be using * to de-reference the pointer; you are fetching the memory that $eip points to. (gdb) p/x (int)$eip $4 = 0xf7eb9810 (gdb) p/x (int)$eip^1 $5 = 0xf7eb9811 (gdb) set $eip = (int)$eip^1 (gdb) p/x (int)$eip $6 = 0xf7eb9811 (gdb) set $eip = (int)$eip^0x800 (gdb) p/x (int)$eip $7 = 0xf7eb9011
How can we perform an arithmetic operation on a register using GDB? [closed]
1,603,617,720,000
This starts partially from why doesn't gdb like aliases Now I put the following arguments - gdb firefox-esr (gdb) set logging file my-firefox-esr-1802018.log (gdb) set pagination 0 (gdb) show logging Future logs will be written to firefox-esr-020818.log. Logs will be appended to the log file. Output will be logged and displayed. (gdb) run --safe-mode when it crashed I did - (gdb) bt (gdb) thread apply all bt When it finished showing all the threads and the outputs therein I put (gdb) quit But now when I am in /home/shirish I don't see that log file. Should I have given the whole path ?
By default the directive set logging file in gdb writes relative to gdb's current working directory — the directory gdb was started from, not necessarily where firefox-esr is located — provided the user running gdb has write rights on that directory. So the answer is yes: to be sure the log file lands in your home directory, give the whole path to set logging file. See gdb backtrace to file for an interesting hack to accomplish your actions: alias bt='echo 0 | gdb -batch-silent -ex "run" -ex "set logging overwrite on" -ex "set logging file gdb.bt" -ex "set logging on" -ex "set pagination off" -ex "handle SIG33 pass nostop noprint" -ex "echo backtrace:\n" -ex "backtrace full" -ex "echo \n\nregisters:\n" -ex "info registers" -ex "echo \n\ncurrent instructions:\n" -ex "x/16i \$pc" -ex "echo \n\nthreads backtrace:\n" -ex "thread apply all backtrace" -ex "set logging off" -ex "quit" --args' bt $crashing_application See also Backtraces with Gentoo
where does gdb put the log file?
1,603,617,720,000
I have a CentOS 7 system. I need to attach my GDB to an already running application, but get the (apparently usual) "ptrace: Operation not permitted." error. Running GDB as root prevents the error, but I would rather not resort to this. I have researched the issue and did find multiple answers stating that you simply needed to either modify /proc/sys/kernel/yama/ptrace_scope with the value 0 or go for a permanent fix regarding the file /etc/sysctl.d/10-ptrace.conf... Well, apparently everyone assumes you are using YAMA, which seems not to be the case over here. Yet, I have not been able to find what to do in my situation yet. I have checked, and it seems my system is configured with SELinux, but it is not enabled. My Kernel boot settings include the flag selinux=0, and the command grep CONFIG_SECURITY /boot/config-`uname -r` reads # CONFIG_SECURITY_DMESG_RESTRICT is not set CONFIG_SECURITY=y CONFIG_SECURITYFS=y CONFIG_SECURITY_NETWORK=y CONFIG_SECURITY_NETWORK_XFRM=y # CONFIG_SECURITY_PATH is not set CONFIG_SECURITY_SECURELEVEL=y CONFIG_SECURITY_SELINUX=y CONFIG_SECURITY_SELINUX_BOOTPARAM=y CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1 CONFIG_SECURITY_SELINUX_DISABLE=y CONFIG_SECURITY_SELINUX_DEVELOP=y CONFIG_SECURITY_SELINUX_AVC_STATS=y CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1 # CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set # CONFIG_SECURITY_SMACK is not set # CONFIG_SECURITY_TOMOYO is not set # CONFIG_SECURITY_APPARMOR is not set # CONFIG_SECURITY_YAMA is not set Finally, getsebool deny_ptrace returns getsebool: SELinux is disabled. From my understanding, no LSM is currently enabled on my system, yet I still get the ptrace limitations. I am here clueless about where to look next, or what even causes the ptrace limitation at this point. Is the fact that the setuid bit is set on my executable file potentially causing this issue? 
Both gdb and the application are themselves launched using the same user, without any super-user privileges specifically added to either. ps -eouid,comm also shows both as having the same uid. Only the application is run using the setuid bit, and the file belongs to root:root.
Debugging a program that has privileges effectively gives the debugger the same privileges. Therefore, regardless of any security settings, debugging a program that has extra privileges must require the debugger to have at least all of these privileges. For example, a setuid program has the privileges of both the original and the target user, so the debugger has to have the privileges of both users. In practice, this means that the debugger must be root. On Linux, it's enough to give the debugger the capability CAP_SYS_PTRACE (this doesn't reduce the debugger's effective privileges, but it means that the debugger won't e.g. accidentally overwrite files of another user). It's generally more convenient to debug the program while running without extra privileges. Adjust file permissions, paths and so on accordingly. If you need to debug the program in real conditions with the privileges then the debugger needs to run as root. On Linux, this can be root in a user namespace that contains the two users involved.
Debug a setuid binary as non-root
1,603,617,720,000
I have this program in C. #include <stdio.h> #include <string.h> char * pwd = "pwd0"; void print_my_pwd() { printf("your pwd is: %s\n", pwd); } int check_pwd(char * uname, char * upwd) { char name[8]; strcpy(name, uname); if (strcmp(pwd, upwd)) { printf("non authorized\n"); return 1; } printf("authorized\n"); return 0; } int main(int argc, char ** argv) { check_pwd(argv[1], argv[2]); return 0; } I build it and examine it for buffer overflow. $ make gcc -O0 -ggdb -o main main.c -fno-stack-protector $ gdb main GNU gdb (Ubuntu 8.2-0ubuntu1~18.04) 8.2 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from main...done. (gdb) b check_pwd Breakpoint 1 at 0x76c: file main.c, line 12. (gdb) run joe f00b4r42 Starting program: /home/developer/main joe f00b4r42 [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Breakpoint 1, check_pwd (uname=0x7fffffffdc01 "joe", upwd=0x7fffffffdc05 "f00b4r42") at main.c:12 12 strcpy(name, uname); (gdb) info frame Stack level 0, frame at 0x7fffffffd6d0: rip = 0x55555555476c in check_pwd (main.c:12); saved rip = 0x5555555547ef called by frame at 0x7fffffffd6f0 source language c. 
Arglist at 0x7fffffffd6c0, args: uname=0x7fffffffdc01 "joe", upwd=0x7fffffffdc05 "f00b4r42" Locals at 0x7fffffffd6c0, Previous frame's sp is 0x7fffffffd6d0 Saved registers: rbp at 0x7fffffffd6c0, rip at 0x7fffffffd6c8 (gdb) p &name $1 = (char (*)[8]) 0x7fffffffd6b8 (gdb) p &print_my_pwd $2 = (void (*)()) 0x55555555473a <print_my_pwd> (gdb) Quit A debugging session is active. Inferior 1 [process 21935] will be killed. Quit anyway? (y or n) y $ ./main $(python -c "print 'AAAAAAAAAAAAAAAA:GUUUU'") B non authorized your pwd is: pwd0 Segmentation fault (core dumped) $ So it was possible to leak the secret from the program but what could I do if the address would have been 0x55555555003a instead of 0x55555555473a ? Then I would not know how to pass the zero because it would be represented as a null byte and the shell would not interpret that.
So your program does the following. The check_pwd function allocates a buffer on the stack that you overflow so the return address is corrupted. You try and choose this corruption so that it points to another function, print_my_pwd, the :GUUUU string, if interpreted as a 64bit value on a little endian machine is 0x??0055555555473A with the first 8 bits being undefined. If the 8 bits are zero then you have the address of print_my_pwd. The 16 A characters are there to fill the 8 byte name array and then overwrite the stored frame pointer. So your question is "If I need a 0 byte as part of the return address, how do I specify it on the command line?", to which the answer is "it doesn't matter, as you are using strcpy to do the overflow and that will stop at the NUL byte so the attack will fail." In general the answer to your question is that the interface to the kernel is based around C strings, so even if your program used memcpy rather than strcpy you still would not be able to do what you want. Real buffer overflow attacks jump to their own code, which frequently constructs the value to be stored in memory and doesn't need to worry about NUL characters.
How to pass zeroes in the argument to the program [duplicate]
1,603,617,720,000
I use Conque-GDB as a plugin in Vim. Here is how my Vim looks now: As you can see, I also use NERDTree and I can easily change its size: https://codeyarns.com/2014/05/08/how-to-change-size-of-nerdtree-window/ But I don't know how to change the size of Conque-GDB.
The ConqueGDB is a split in vim, so you can always resize it using vim commands, for example: :resize +20 :res -20 Where +20 and -20 is the number of lines added or subtracted from the current split height. The same way you can increase/decrease the size of the NERDTree, where the count is in columns: :vertical resize +20 Not sure if there is any way to specify the default split size for ConqueGDB on start, but you can always map the above commands for quicker resizing after ConqueGDB is launched. More on how to resize vim splits quickly.
Conque-GDB in vim: how to set size
1,603,617,720,000
So I was wondering how to stop gdb every time my variable (test_v) changes. I know about watch test_v Do I do watch test_v stop to stop the program every time the variable test_v changes?
You don't need to use stop to make the program stop when the variable changes. Just watch test_v is enough. The stop command is not for stopping the program; it is only meant to be hooked so you can execute some commands automatically when the program stops. Example usage from the gdb manual: define hook-stop handle SIGALRM nopass end define hook-run handle SIGALRM pass end define hook-continue handle SIGALRM pass end
gdb: stop the program when the variable changes
1,603,617,720,000
I have installed gdb and added the -g option to my compilation command, but when I try (gdb) s or (gdb) n it says: The program is not being run. It only works when I try (gdb) r, and then it runs and stops where my program stops because of its error (which I could already see without gdb on the command line). How can I trace through my code line by line?
You need to define a breakpoint, for example break main Then run and gdb will start your program and stop when it enters main.
How to trace C/C++ code line by line with "gdb"? [closed]
1,603,617,720,000
In gdb, the usual instructions given for debugging are - gdb $package set logging on set pagination 0 handle SIG33 pass nostop noprint run and of course then collecting backtraces and all. Of the above, what does handle SIG33 pass nostop noprint do, and where should it be used and where not?
handle SIG33 tells gdb how to handle signal 33; in the version you give, pass means to pass the signal on, nostop tells the debugger not to stop when the signal is emitted, and noprint not to print anything. This kind of directive is useful when debugging runtimes which use signals internally. Signal 33 is used on Android, by Bionic (for back-traces); if you don’t ignore it there you’ll end up stopping all the time. You’d see similar instructions with Flash (with signals 32 and 33 at least, IIRC).
What does `handle SIG33 pass nostop noprint` do when used in gdb
1,603,617,720,000
I have a simple clock program that uses math.h functions. I am currently on Kubuntu 21.10, the GCC version is (Ubuntu 12.2.0-3ubuntu1) 12.2.0, and the GDB version is (Ubuntu 12.1-3ubuntu2) 12.1. The program source code (although it might not be needed): #include <stdio.h> #include <time.h> #include <math.h> #include <string.h> #include <unistd.h> #include "conio.h" #include <sys/ioctl.h> #include <stdlib.h> #define PI 3.14159265358979323846264338327950288419716939937510 #define RAD_90 1.570796 // precomputed value of 90 degrees in radians #define RAD_30 0.523599 // precomputed value of 30 degrees in radians #define RAD_6 0.104720 // precomputed value of 6 degree in radians #define RAD_1 0.017453 // precomputed value of 1 degree in radians #define X 0 // x co-ordinate in array #define Y 1 // y co-ordinate in array int COLUMNS, ROWS; #define CLOCK_RADIUS (COLUMNS/2)-1 #define FPS 24 #define MOVE_TO_HOME() (printf("\033[H")) #define CLEAR_TERMINAL_SCREEN() (printf("\033[2J")) #define cls() (CLEAR_TERMINAL_SCREEN()) void die(const char *s) { cls(); printf("clock: error: %s: ", s); fflush(stdout); perror(NULL); fflush(stderr); exit(1); } char **output/*[ROWS][COLUMNS*2]*/; struct tm *t = NULL; void get_window_size(int *rows, int *cols) {; struct winsize ws; if(ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1 || ws.ws_col == 0) { if(write(STDOUT_FILENO, "\x1b[999C\x1b[999B", 12) != 12) die("write"); fflush(stdout); char buf[32]; unsigned int i = 0; if(write(STDOUT_FILENO, "\x1b[6n", 4) != 4) die("write"); printf("\r\n"); fflush(stdout); while(i < (sizeof(buf)-1)) { if(read(STDIN_FILENO, &buf[i], 1) != 1) die("read"); if(buf[i] == 'R') break; i++; } buf[i] = '\0'; if((buf[0] != '\x1b') || (buf[1] != '[')) die("\\x1b[6n read failure"); if(sscanf(&buf[2], "%d;%d", rows, cols) != 2) die("sscanf(146)"); cls(); } else { *cols = ws.ws_col; *rows = ws.ws_row; } } void print_char(char c, int x, int y) { if((x >= 0) && (y >= 0) && (x < COLUMNS) && (y < ROWS)) { output[y][x*2] = c; } } 
double deg_to_rad(int deg) { return deg*PI/180; } void clear_buffer() { for(int i = 0;i < ROWS;i++) { memset(output[i], ' ', COLUMNS*2); } output[ROWS-1][COLUMNS*2] = '\0'; } void print_buffer() { for(int i = 0;i < ROWS;i++) { puts(output[i]); } } void print_circle(char body, int r, int center[]) { if(r == 0) { print_char(body, center[X], center[Y]); return; } int offset[2], prev_offset[2] = {-1, -1}; double ang = 0, ang_leap; ang_leap = deg_to_rad((1*360)/(2*PI*r)); if(ang_leap > RAD_1) { ang_leap = RAD_1; } else if(ang_leap == 0) { ang_leap = 0.0001; } while(ang <= RAD_90) { offset[X] = round(sin(ang)*r); offset[Y] = round(cos(ang)*r); if((offset[X] == prev_offset[X]) && (offset[Y] == prev_offset[Y])) { ang += ang_leap; continue; } print_char(body, center[X]+offset[X], center[Y]+offset[Y]); // 1st quadrant print_char(body, center[X]-offset[X], center[Y]+offset[Y]); // 2nd quadrant print_char(body, center[X]-offset[X], center[Y]-offset[Y]); // 3rd quadrant print_char(body, center[X]+offset[X], center[Y]-offset[Y]); // 4th quadrant prev_offset[X] = offset[X]; prev_offset[Y] = offset[Y]; ang += ang_leap; } } void print_numbers(int r, int center[]) { /* * deg_to_rad(360/NUM_OF_NUMBERS) = ang * => deg_to_rad(360/12) = ang * => ang = deg_to_rad(30) * * * sin(ang) = P/H * = sin(ang)*H = P * * => offset_x = sin(ang)*r * offset_y = cos(ang)*r */ int offset[2]; for(int i = 1;i <= 12;i++) { offset[X] = round(sin(RAD_30*i)*r); offset[Y] = round(cos(RAD_30*i)*r); if(i >= 10) { print_char((i/10)+'0', center[X]+offset[X], center[Y]-offset[Y]); print_char((i%10)+'0', center[X]+offset[X]+1, center[Y]-offset[Y]); } else { print_char(i+'0', center[X]+offset[X], center[Y]-offset[Y]); } } } void print_hands(int r, int center[], struct tm t) { int len, offset[2]; double ang, sin_value, cos_value; char body; // second hand body = '.'; len = (r*80)/100; ang = t.tm_sec*RAD_6; sin_value = sin(ang); cos_value = cos(ang); for(int i = 0;i <= len;i++) { offset[X] = round(sin_value*i); 
offset[Y] = round(cos_value*i); print_char(body, center[X]+offset[X], center[Y]-offset[Y]); } // minute hand body = '*'; len = (r*65)/100; ang = deg_to_rad((t.tm_min*6)/*+(t.tm_sec/10)*/); // seconds adjustement causes confusion sin_value = sin(ang); cos_value = cos(ang); for(int i = 0;i <= len;i++) { offset[X] = round(sin_value*i); offset[Y] = round(cos_value*i); print_char(body, center[X]+offset[X], center[Y]-offset[Y]); } // hour hand body = '@'; len = (r*40)/100; ang = deg_to_rad((t.tm_hour*30)+(t.tm_min/2)+(t.tm_sec/120)); sin_value = sin(ang); cos_value = cos(ang); for(int i = 0;i <= len;i++) { offset[X] = round(sin_value*i); offset[Y] = round(cos_value*i); print_char(body, center[X]+offset[X], center[Y]-offset[Y]); } } struct tm *get_time() { time_t seconds = time(NULL); if(seconds == -1) { perror("error while calling function time()"); return NULL; } struct tm *tm = localtime(&seconds); if(tm == NULL) { perror("error while calling function localtime()"); return NULL; } return tm; } int print_clock() { int center[] = {COLUMNS/2, ROWS/2}; print_circle('.', CLOCK_RADIUS, center); print_numbers(CLOCK_RADIUS, center); t = get_time(); if(t == NULL) { return 1; } print_hands(CLOCK_RADIUS, center, *t); print_char('@', center[X], center[Y]); return 0; } void print_centered(int col_size, char *str) { int str_len = strlen(str); int start_pos = col_size-(str_len/2); for(int i = 0;i < start_pos;i++) { printf(" "); } printf("%s", str); } int main() { get_window_size(&ROWS, &COLUMNS); if(ROWS > COLUMNS/2) { COLUMNS -= 2; COLUMNS /= 2; ROWS = COLUMNS; } else if(COLUMNS/2 > ROWS) { ROWS -= 2; COLUMNS = ROWS; } output = malloc(sizeof(char*)*ROWS); for(int i = 0;i < ROWS;i++) { output[i] = malloc(sizeof(char)*((COLUMNS*2)+1)); } CLEAR_TERMINAL_SCREEN(); while(!kbhit()) { MOVE_TO_HOME(); clear_buffer(); if(print_clock()) { return 1; } print_buffer(); print_centered(COLUMNS, asctime(t)); usleep((1000*1000)/FPS); } for(int i = 0;i < ROWS;i++) { free(output[i]); } free(output); 
return 0; } When I compile the program with gcc clock.c -lm -g, and run it with gdb ./a.out, I allow gdb to download debug info from https://debuginfod.ubuntu.com. I set breakpoint at line 175 (which uses sin function), then enter step and I see this error: Breakpoint 1, print_hands (r=17, center=0x7fffffffda20, t=...) at clock.c:175 175 sin_value = sin(ang); (gdb) step __sin_fma (x=0.83775999999999995) at ../sysdeps/ieee754/dbl-64/s_sin.c:201 Download failed: Invalid argument. Continuing without source file ./math/../sysdeps/ieee754/dbl-64/s_sin.c. 201 ../sysdeps/ieee754/dbl-64/s_sin.c: No such file or directory. As I see, it fails to download debug info for sin function here. I tried searching the internet for similar questions, but couldn't find anything similar. What is the problem here with my gdb and how can I correct it?
The info you need is not in the math.h header file, it's in the source code of the C standard library. Unfortunately, as noted in the Service - Debuginfod documentation, the Ubuntu Debuginfod service currently does not provide that: Currently, the service only provides DWARF information. There are plans to make it also index and serve source-code in the future. You can however download the source code to a local directory, and point gdb to that via the dir command (or its -d command line equivalent). Ex. given: #include <stdio.h> #include <stdlib.h> #include <math.h> int main(int argc, char *argv[]) { double x = atof(argv[1]); double y = sin(x); printf("sin(%.3f) = %.3f\n", x, y); return(0); } then mkdir -p src && cd src apt-get source libc6 cd .. gcc -g -o myprog myprog.c -lm DEBUGINFOD_URLS="https://debuginfod.ubuntu.com" gdb -d ./src/glibc-2.35 myprog results in the following interactive session GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1 [copyright header skipped] For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from myprog... (gdb) b 8 Breakpoint 1 at 0x11b8: file myprog.c, line 8. (gdb) r 3.14 Starting program: /home/steeldriver/myprog 3.14 This GDB supports auto-downloading debuginfo from the following URLs: https://debuginfod.ubuntu.com Enable debuginfod for this session? (y or [n]) y Debuginfod has been enabled. To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Breakpoint 1, main (argc=2, argv=0x7fffffffdf98) at myprog.c:8 8 double y = sin(x); (gdb) s __sin_avx (x=3.1400000000000001) at ../sysdeps/ieee754/dbl-64/s_sin.c:201 201 { (gdb)
GDB fails to download debuginfo for math.h
1,603,617,720,000
Actually, I was learning about the buffer overflow attack. So, can we figure out the buffer's address? (I am using a buffer variable in my program so that, while writing to the buffer, I will make changes in the stack.)
Yes, as long as your variable isn’t optimised away. For example, using ls with debug symbols: gdb ls >>> break main >>> run >>> print argv $1 = (char **) 0x7fffffffdd78 In this case, argv is a pointer itself. If you want the address of a non-pointer variable, or the address of a pointer, use & as you would in C; gdb will give you the address, as above, or tell you if the variable isn’t stored in memory: >>> print &argc Address requested for identifier "argc" which is in register $rdi On x86, the contents of SP will tell you where the stack is: >>> i r sp sp 0x7fffffffdc98 0x7fffffffdc98
Can we get address of a variable in a C program using GDB?
1,603,617,720,000
For the gdb debugger: (gdb) p &buffer Does this command print the contents of the start of the buffer (stack), or print its address? If it prints the contents, how do I print the address?
It depends on what type the buffer is. Most likely buffer is a pointer to the start of the buffer. The C-style declaration for it might be struct stackElement *buffer; or something similar (note the asterisk!). In that case: p &buffer prints the address where the pointer itself is stored (i.e. "the address of the address of the buffer") p buffer should print the value of the buffer pointer variable, which is the address of the buffer. p *buffer should print the contents of the buffer. If bufferis some structure type, and not a pointer (example C declaration might be struct stackElement buffer; with no asterisk), then: p &buffer prints the address where the structure is, i.e. the address of the buffer p buffer prints the contents of this structure (= if this is a stack, probably the first stack element) p *buffer is an error.
GDB command to print the address of starting of buffer (stack)
1,603,617,720,000
From the documentation link here: https://www.gnu.org/software/emacs/manual/html_node/emacs/Other-GDB-Buffers.html When gdb-many-windows is non-nil, the locals buffer shares its window with the registers buffer, just like breakpoints and threads buffers. To switch from one to the other, click with mouse-1 on the relevant button in the header line. I am in the pure text-based terminal and don't have a mouse. Is there a built-in keyboard shortcut to switch between those two buffers (without having to add any extra configurations in init file)? Either keyboard shortcut or command (like M-x ...) is preferred.
The answer seems to be a simple tab. The source gdb-mi.el has a defvar for gdb-locals-mode-map, which seems to be in place in the locals buffer. Look for uses of the symbol header-line, which is what the feature is called, and then the symbols gdb-locals-buffer and gdb-locals-mode.
Switching between shared buffers in emacs gdb mode without mouse (in text-terminal)
1,603,617,720,000
Is it possible to dump the assembly language code of a binary using GDB? I tried to use the "l" command but it says No symbol table is loaded. Use the "file" command.. I used the file command and it said the symbols were loaded, but when I try the "l" command again I see the same message. All I need is the whole assembly language code from that binary.
First off, don't apologize for a question. If you did prior research then you are fine. If you didn't take the time to google it, then do that first. If you want the assembly of a whole program, then gdb might not be the right tool; instead try objdump. However, if you want to view the assembly while debugging, use the gdb command disassemble once you've stopped at a given frame.
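For the full dump, objdump is the straightforward route; a sketch (the binary name is a placeholder):

```shell
# disassemble all executable sections of the binary into a file
objdump -d ./mybinary > mybinary.asm

# Intel syntax instead of the AT&T default, if preferred
objdump -d -M intel ./mybinary
```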
Obtaining a code dump from a binary
1,603,617,720,000
I am porting the driver for a USB device to Rocky Linux 9.3. Once the module is inserted, new logins by ssh are unresponsive. Blacklisting the module and rebooting restores normal functionality. https://github.com/izot/lon-driver With the module inserted, lsmod|grep u50 "Used By" goes from 0 to 1 about every 7 secs. Then, when an SSH login is attempted, Used By goes between 3 and 2. Stop the SSH login, modprobe -r u50, try again... Now SSH gets motd but no prompt and says "PTY allocation request failed on channel 0." ssh SITE "/bin/bash -i" (login succeeds) (same with module inserted) modinfo u50 filename: /lib/modules/5.14.0-362.18.1.el9_3.x86_64/kernel/drivers/lon/u50.ko description: U50 SMIP Driver 1.4 L2 alias: tty-ldisc-28 license: GPL rhelversion: 9.3 srcversion: 311B898EC0CC268466EA85B depends: retpoline: Y name: u50 vermagic: 5.14.0-362.18.1.el9_3.x86_64 SMP preempt mod_unload modversions With the module removed, journalctl shows "error: openpty: Cannot allocate memory" and "error: session_pty_req: session 0 alloc failed." With the module inserted again and an SSH attempt, journalctl shows "NetworkManager ... manager: (lon10): new Generic device (/org/freedesktop/NetworkManager/Devices/743)" and "NetworkManager ... manager: (lon11): new Generic device (/org/freedesktop/NetworkManager/Devices/744)." When the SSH attempt is canceled, journalctl shows "Removing a network device that was not added" twice. I finally have gdb setup to remotely debug. I copied the src to the host running gdb. I can break at a function in the loadable module but that is too late. I need to break when the module is loaded and it kills new ssh logins. This module is for USB and is not related to ssh. I can break at do_init_module() and step until exit_to_user_mode_loop() then it says "Cannot find bounds of current function." Setting a breakpoint at module_init() for a future load does not break.
I didn't port this line properly and it failed to set the .num field in struct tty_ldisc_ops. //FIXME err = tty_register_ldisc(N_U50, &u50_ldisc); err = tty_register_ldisc(&u50_ldisc); The other part of my question (breaking at module_init()) was solved by setting a breakpoint in rtnl_link_register().
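For reference, a sketch of what the corrected registration looks like against the 5.14 tty API, with the driver's callbacks elided; the field names follow the answer's description of struct tty_ldisc_ops:

```c
/* 5.14+ API: the line-discipline number is a field of the ops struct,
   no longer an argument to tty_register_ldisc() */
static struct tty_ldisc_ops u50_ldisc = {
    .owner = THIS_MODULE,
    .num   = N_U50,   /* forgetting this leaves the ldisc number unset */
    .name  = "u50",
    /* ... open/close/read/write callbacks ... */
};

static int __init u50_init(void)
{
    return tty_register_ldisc(&u50_ldisc);
}
```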
How can I break at module_init()? This loadable kernel module is preventing SSH logins
1,603,617,720,000
Every time I start gdb, the following information is displayed: GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1) 9.2 Copyright (C) 2020 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: --Type <RET> for more, q to quit, c to continue without paging-- <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". I have to press Enter to proceed. This prevents me from automating gdb script execution. Is there a way to skip the help information?
Yes, add the --silent option (aka -q, -quiet, --quiet, -silent and abbreviations thereof): $ gdb --silent >>>
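For fully unattended runs you can go one step further and combine quiet mode with batch mode, which also makes gdb exit after executing the given commands instead of waiting at a prompt (commands.gdb and ./myprog are placeholder names):

```
$ gdb -q -batch -x commands.gdb ./myprog
$ gdb -q -batch -ex run -ex bt ./myprog
```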
Start GDB with No Help Information
1,656,615,160,000
I would like to find out which method my 32-bit QEMU guest is using for system calls. There's an excellent article explaining linux-gate.dso (http://www.trilithium.com/johan/2005/08/linux-gate/). However, I can't seem to get any of the commands to work on my newer system. It appears that current security features don't allow me to dump the virtual DSO:

[root@qemu ~]# dd if=/proc/self/mem of=linux-gate.dso bs=4096 skip=1048574 count=1
dd: ‘/proc/self/mem’: cannot skip to specified offset
dd: error reading ‘/proc/self/mem’: Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0332587 s, 0.0 kB/s
In order to read /proc/[pid]/mem, a process must now PTRACE_ATTACH to it. A commonly available utility that does this is gdb Pick a running process (in my case I just opened cat in another window), then attach gdb to that process: [root@qemu ~]# gdb --pid 423 #MORE OUTPUT 0xb771dbac in __kernel_vsyscall () As part of its output while loading symbols, gdb should output the line I included above. If it doesn't, you can search the symbol table for it: (gdb) info functions vsyscall All functions matching regular expression "vsyscall": Non-debugging symbols: 0xb771db9c __kernel_vsyscall Now that we have the address of __kernel_vsyscall, we can either use gdb to inspect the system call method used: (gdb) disassemble 0xb771db9c Dump of assembler code for function __kernel_vsyscall: 0xb771db9c <+0>: push %ecx 0xb771db9d <+1>: push %edx 0xb771db9e <+2>: push %ebp 0xb771db9f <+3>: mov %esp,%ebp 0xb771dba1 <+5>: sysenter 0xb771dba3 <+7>: nop 0xb771dba4 <+8>: nop 0xb771dba5 <+9>: nop 0xb771dba6 <+10>: nop 0xb771dba7 <+11>: nop 0xb771dba8 <+12>: nop 0xb771dba9 <+13>: nop 0xb771dbaa <+14>: int $0x80 => 0xb771dbac <+16>: pop %ebp 0xb771dbad <+17>: pop %edx 0xb771dbae <+18>: pop %ecx 0xb771dbaf <+19>: ret End of assembler dump. or we can dump linux-gate.dso as originally requested: (gdb) dump memory ./linux-gate.dso 0xb771d000 0xb771e000 Basically, we know that linux-gate.dso takes one full page. Since this system has a page size of 4K=0x1000, I just rounded down from the address of __kernel_vsyscall and added 0x1000 to get the end. 
Outside of gdb, we can see that the file is recognized as a shared library: [root@qemu ~]# objdump -T ./linux-gate.dso |grep syscall 00000b9c g DF .text 00000014 LINUX_2.5 __kernel_vsyscall and we can find sysenter again: [root@arch-qemu ~]# objdump -d --start-address=0x00000b9c --stop-address=0x00000bac linux-gate.dso linux-gate.dso: file format elf32-i386 Disassembly of section .text: 00000b9c <__kernel_vsyscall>: b9c: 51 push %ecx b9d: 52 push %edx b9e: 55 push %ebp b9f: 89 e5 mov %esp,%ebp ba1: 0f 34 sysenter ba3: 90 nop ba4: 90 nop ba5: 90 nop ba6: 90 nop ba7: 90 nop ba8: 90 nop ba9: 90 nop baa: cd 80 int $0x80
How do I get linux-gate.dso on a newer linux system?
1,656,615,160,000
I have these lines in my .vimrc file: :map <F9> :exe ':!gdbset bp "%:'.line(".").'"'<CR><CR> :map <F8> :exe ':!gdbset clear bp "%:'.line(".").'"'<CR><CR> They work great for adding and removing break points in gdb! Only one problem (that I know of)... for some reason line numbers in the 80's don't work. If I put my cursor on line 85 and press F9 then it should put a breakpoint on line 85. If I put my cursor on line 75 and press F9 it should put a breakpoint on line 75. The resulting breakpoints look like this: b myfile.cc:5 b myfile.cc:75 Line 85 did not work. I've tested the 80's. None of them work. All other lines seem to work. Why? I'm sure one of you VIM experts can explain this to me. It's almost like the ":8" are being interpreted as some other command or something.
%:8 is a valid filename-modifier, so it is being interpreted by Vim as a part of the :! command. You can use expand('%') to manually expand %, and then properly quote it with shellescape(…,1): :map <F9> :exe '!gdbset bp' shellescape(expand('%').':'.line('.'),1)<CR><CR> :map <F8> :exe '!gdbset clear bp' shellescape(expand('%').':'.line('.'),1)<CR><CR>
vimrc mapping line numbers
1,656,615,160,000
I'm encountering a weird behaviour with my endeavourOS system. For context, it seems to have started happening after a malformed svg file made inkscape and my system crash, after which I needed to hard-reboot it. With several apps (to list the last few I've tested : flameshot, keepassxc, quiterss) I've had a Bus error (core dumped) message in the terminal and nothing more. With a bit of googling, I understood I need to use gdb but I cannot make sense of its output : $ gdb flameshot [...] For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from flameshot... This GDB supports auto-downloading debuginfo from the following URLs: <https://debuginfod.archlinux.org> Enable debuginfod for this session? (y or [n]) y Debuginfod has been enabled. To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit. Reading symbols from /home/user/.cache/debuginfod_client/b8258803335f21d12df1003c59200a5afb4dc585/debuginfo... (gdb) run flameshot Starting program: /usr/bin/flameshot flamshot Downloading separate debug info for system-supplied DSO at 0x7ffff7fc6000 Download failed: Connection reset by peer. Continuing without separate debug info for system-supplied DSO at 0x7ffff7fc6000. Program received signal SIGBUS, Bus error. 
memset () at ../sysdeps/x86_64/multiarch/../multiarch/memset-vec-unaligned-erms.S:244 244 VMOVU %VMM(0), (%rdi) (gdb) backtrace #0 memset () at ../sysdeps/x86_64/multiarch/../multiarch/memset-vec-unaligned-erms.S:244 #1 0x00007ffff7fcf524 in _dl_map_segments (loader=0x7fffffffd300, has_holes=<optimized out>, maplength=<optimized out>, nloadcmds=<optimized out>, loadcmds=<optimized out>, type=<optimized out>, header=0x8, fd=<optimized out>, l=0x7ffff7f89530) at ./dl-map-segments.h:176 #2 _dl_map_object_from_fd (name=name@entry=0x555555569361 "libQt5Widgets.so.5", origname=origname@entry=0x0, fd=<optimized out>, fbp=fbp@entry=0x7fffffffd3a0, realname=<optimized out>, loader=loader@entry=0x7ffff7ffe2e0, l_type=<optimized out>, mode=<optimized out>, stack_endp=<optimized out>, nsid=<optimized out>) at dl-load.c:1258 #3 0x00007ffff7fd0b01 in _dl_map_object (loader=<optimized out>, name=0x555555569361 "libQt5Widgets.so.5", type=1, trace_mode=<optimized out>, mode=0, nsid=<optimized out>) at dl-load.c:2249 #4 0x00007ffff7fca865 in openaux (a=a@entry=0x7fffffffd950) at dl-deps.c:64 #5 0x00007ffff7fc94e1 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffd930, operate=operate@entry=0x7ffff7fca830 <openaux>, args=args@entry=0x7fffffffd950) at dl-catch.c:237 #6 0x00007ffff7fcacc5 in _dl_map_object_deps (map=map@entry=0x7ffff7ffe2e0, preloads=<optimized out>, npreloads=npreloads@entry=0, trace_mode=<optimized out>, open_mode=open_mode@entry=0) at dl-deps.c:232 #7 0x00007ffff7fe695e in dl_main (phdr=<optimized out>, phnum=<optimized out>, user_entry=<optimized out>, auxv=<optimized out>) at rtld.c:1965 #8 0x00007ffff7fe3583 in _dl_sysdep_start (start_argptr=start_argptr@entry=0x7fffffffe180, dl_main=dl_main@entry=0x7ffff7fe5040 <dl_main>) at ../sysdeps/unix/sysv/linux/dl-sysdep.c:140 #9 0x00007ffff7fe4d6e in _dl_start_final (arg=0x7fffffffe180) at rtld.c:494 #10 _dl_start (arg=0x7fffffffe180) at rtld.c:581 #11 0x00007ffff7fe3b68 in _start () from 
/lib64/ld-linux-x86-64.so.2
#12 0x0000000000000002 in ?? ()
#13 0x00007fffffffe58e in ?? ()
#14 0x00007fffffffe5a1 in ?? ()
#15 0x0000000000000000 in ?? ()
(gdb)

Edit: full backtrace: https://pastebin.com/grMUQiWH

Where do I look now to solve this issue?
Fixed it. The lines

Program received signal SIGBUS, Bus error.
memset () at ../sysdeps/x86_64/multiarch/../multiarch/memset-vec-unaligned-erms.S:244

led me to this github repo. I reinstalled glibc with:

sudo pacman -Syu glibc --overwrite "*"

and now everything is back to normal.
Edit: command modified thanks to killertofus' comment.
Bus error (core dumped) on several apps
1,656,615,160,000
I'm implementing some CTF challenges. The flags are in some text files that get read by the programs. To protect the flags I have changed the owner of the files, but have set the setuid bit on the executables to be able to read the files. It works when I run my programs outside gdb, and the flags are read, but inside gdb I get Permission denied. I'm running the exercises inside a Linux virtual machine in VirtualBox. I have created a normal user that is not in the sudoers file, and the flag files belong to root.

-rwsr-xr-x 1 root user 15260 Mar 13 13:22  exercise6
-rw-r--r-- 1 user user  3270 Mar 13 06:10 'Exercise 6.c'
-rwsr-xr-x 1 root user 15700 Mar 14 03:28  exercise7
-rw-r--r-- 1 user user  4372 Mar 13 06:10 'Exercise 7.c'
-rwS------ 1 root root    28 Mar 13 06:10  admin_flag.txt
-rwS------ 1 root root    20 Mar 13 06:24  exercise1.txt
-rwS------ 1 root root    27 Mar 13 06:24  exercise2.txt
-rws------ 1 root user    18 Mar 13 10:34  exercise3.txt
-rwS------ 1 root root    22 Mar 13 06:24  exercise4.txt
-rwS------ 1 root root    19 Mar 13 06:10  user_flag.txt
The security contract of setuid¹ is that it grants the executable program extra privileges. Those privileges are only granted to the program. They must not allow the invoking user to do anything that the program won't do. This makes setuid incompatible with tracing (the ptrace system call on most Unix variants). If the invoking user can observe the internal operation of the program, that gives them access to any confidential data that the program has access to. But this may be confidential data that the user shouldn't be allowed to see, which the program doesn't normally reveal. Perhaps more obviously, if the invoking user can change what the program is doing (also ptrace), this could allow them to completely have all the privileges of the setuid user, which completely defeats the objective of only granting permission to run one specific program. Tracing is the functionality that a debugger such as GDB uses to inspect and control the execution of the program. As an example, consider the program unix_chkpwd, whose job is to check the password of the invoking user. This program must be able to read confidential data (the password hash database). But the invoking user must not be allowed to read the whole database. The invoking user must only be able to query the entry for that user, and only with a yes/no answer (no extraction of the password hash itself), and with a rate limitation to prevent brute-force cracking. If you could run gdb unix_chkpwd and print out the content of the password hash database, it would completely break the security of that database. In order to maintain security: When a program is already being traced when it starts, the setuid bit is ignored. The executable starts with no extra privileges. Only root is allowed to trace a program that was started with extra privileges. ¹ This also applies to setgid, setcap or any other similar mechanism to run a program with elevated privileges.
Permission denied when opening a file in gdb
1,656,615,160,000
P.S. English is not my native language; please excuse typing errors. I've (maybe) understood the basic idea of symbols in an ELF file in dynamic linking. Referring to textbooks: if I need to dynamically link against a .so or something like that, then I need the link target's function name (say, if we only talk about functions). Then the loader does something to find the real location of the target, and something else to load it. The function's name is what we call a symbol. But, in debugging, the following confused me. I tried to install pwndbg (a plugin for GDB) on Arch Linux and got some problems. Following these instructions [1] [2], I've solved the problems, but didn't quite understand how the solution worked. The poster of the solution, also the plugin's author, said that Arch's glibc doesn't have "debugging symbols" so you need to install them manually, while Ubuntu's glibc has "debugging symbols" -- you don't need to install them manually. So here come some questions that really confused me. Why can I INSTALL symbols for a lib, such as glibc? If a .so (ELF) file didn't have symbols and you put symbols into it, wouldn't this destroy the ELF file format, since ELF is based on file-relative offsets? So what does the INSTALL actually do? OR what does "symbol" actually mean in such a context? What does gcc -g ("gcc -g generates debug information to be used by GDB debugger") actually generate? Are they (the thing I installed) the same? If I need to "generate debug information" then I must need the source code, is that right?
In a number of distributions (Debian, Ubuntu, Fedora, etc.; but not Arch, as far as I can tell from the corresponding wiki page), programs are built with debugging information (see below), but that debugging information is then detached into separate files. These separate files are shipped in debug packages and/or through a debug info server, and can be installed alongside the files they help debug. gcc -g stores information which essentially allows a debugger to go back from the binary code produced by the compiler, to the source code. Using this information, the debugger can translate a position in the executable or in memory to the corresponding source code: for example, a variable location can be linked to the relevant declaration, and a position in executable code can be linked to the relevant source line. Michael J. Eager’s Introduction to the DWARF Debugging Format gives a good explanation of the role of debugging information. See also What is the purpose of /usr/lib/.build-id/ dir?
What does debug symbol actually mean on Arch linux for gdb debugging?
1,656,615,160,000
I know that I can use CTRL+ALT+J in gdb to get vim keybindings but how do I get gdb to start in vi mode by default ?
Put

set editing-mode vi

in a .inputrc file in your home directory. bash, gdb, and other programs using readline will then be in vi mode by default. Note that zsh does not use readline as its line-editing library but zle, so there you will need bindkey -v or set -o vi in your ~/.zshrc (https://unix.stackexchange.com/questions/373322/make-zsh-use-readline-instead-of-zle).
How to have gdb start in vi mode by default?
1,656,615,160,000
I have a Java program that executes several shell files (one per iteration). Each shell file has only one command: start a cross-gdb with a path to a gdbinit file. The program works fine, but (judging from the NetBeans output window) the Java program finishes its work and exits, while the terminal window never closes. I have tested a lot of commands like confirm off and several quit commands in the sh file, but none of them have worked. This is the Java code:

ProcessBuilder pbuilder = new ProcessBuilder("/usr/bin/x-terminal-emulator", "-e", "/shell");
try {
    Process p = pbuilder.start();
    p.waitFor();
    p.destroy();
} catch (Exception e) {}

And the shell file is:

#! /bin/sh
export PATH=gcc-arm-8.2-2019.01-x86_64-arm-linux-gnueabi/bin:$PATH
arm-linux-gnueabi-gdb --command=/home/null/Desktop/Gem5/gem5/patch.gdbinit
#kill pgrep -f arm-linux-gnueabi-gdb
#$kill -9 $(pgrep -f x-terminal-emulator)
#killall /usr/bin/x-terminal-emulator

None of the commented-out # commands in the shell file have worked.
You can try this ProcessBuilder pbuilder = new ProcessBuilder("/shell"); and in the shell script #! /bin/sh export PATH=gcc-arm-8.2-2019.01-x86_64-arm-linux-gnueabi/bin:$PATH nohup arm-linux-gnueabi-gdb --command=/home/null/Desktop/Gem5/gem5/patch.gdbinit &>/dev/null What nohup does is it makes a command immune to hangups, with output to a non-tty.
How to kill an orphan Terminal process
1,656,615,160,000
I've added gdbserver in inetd.conf and /etc/services, yet when I attempt to connect as follows I immediately get "Remote communication error. Target disconnected.: Broken pipe."

(gdb) target extended-remote rtx5:8010
Remote debugging using rtx5:8010
Remote communication error.  Target disconnected.: Broken pipe.

8010 is the port I have configured gdbserver to run on. However, if I manually start gdbserver on the target with port 8011, I can get them communicating. I tried adding "--multi" and the port to the inetd.conf entry and reloaded inetd, to no avail. Is this possible?
I managed to get it working by doing the following: In inetd.conf "gdbserver --multi -" Using the dash apparently directs the server to use stdin and out. I am interested to know why exactly this works.
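For the record, here is roughly what the working configuration looks like (service name, user and the gdbserver path are examples; match them to your own /etc/services entry):

```
# /etc/services
gdbserver    8010/tcp

# /etc/inetd.conf
gdbserver stream tcp nowait root /usr/bin/gdbserver gdbserver --multi -
```

As for why the dash works: inetd accepts the TCP connection itself and hands the connected socket to the spawned program as its stdin/stdout. gdbserver cannot bind the port a second time, but its "-" (stdio) comm argument tells it to speak the remote serial protocol over stdin/stdout — that is, directly over the socket inetd already opened.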
gdbserver as an inetd daemon: broken pipe
1,656,615,160,000
I'm trying to get only the raw binary data from the gdb disassemble output. My current output is the following: $ gdb -batch -ex "disassemble/r btif_set_adapter_property" libbluetooth_qti.so | column -ts $'\t' Dump of assembler code for function _Z25btif_set_adapter_propertyPK13bt_property_t: 0x0011e8c1 <+0>: e9 f0 41 c8 b0 jmp 0xb0da2ab6 0x0011e8c6 <+5>: 04 46 add $0x46,%al 0x0011e8c8 <+7>: 67 48 addr16 dec %eax 0x0011e8ca <+9>: c0 ef 50 shr $0x50,%bh 0x0011e8cd <+12>: 00 00 add %al,(%eax) 0x0011e8cf <+14>: 21 78 44 and %edi,0x44(%eax) 0x0011e8d2 <+17>: 07 pop %es 0x0011e8d3 <+18>: 68 38 68 47 90 push $0x90476838 0x0011e8d8 <+23>: 02 a8 40 f9 cd 0a add 0xacdf940(%eax),%ch 0x0011e8de <+29>: 01 60 08 add %esp,0x8(%eax) 0x0011e8e1 <+32>: a8 f9 test $0xf9,%al 0x0011e8e3 <+34>: 21 51 f1 and %edx,-0xf(%ecx) 0x0011e8e6 <+37>: 62 (bad) 0x0011e8e7 <+38>: ea 60 48 d4 e9 00 23 ljmp $0x2300,$0xe9d44860 The strange thing is, that it still doesn't delimit the columns by tabs but by whitespaces which looks like tabs. So I'm unable to use | awk '{print $2}' here. Next problem is that the raw binary data has different length and columns 2..8 might contain the raw binary data I need. Maybe I'm thinking too complicated and there is a built-in way in gdb but wasn't able to find any. 
So the output I want is this (everything in one line): e9 f0 41 c8 b0 04 46 67 48 c0 ef 50 00 00 21 78 44 07 68 38 68 47 90 02 a8 40 f9 cd 0a 01 60 08 a8 f9 21 51 f1 62 ea 60 48 d4 e9 00 23 EDIT: Output of gdb itself: $ gdb -batch -ex "disassemble/r btif_set_adapter_property" libbluetooth_qti.so Dump of assembler code for function _Z25btif_set_adapter_propertyPK13bt_property_t: 0x0011e8c1 <+0>: e9 f0 41 c8 b0 jmp 0xb0da2ab6 0x0011e8c6 <+5>: 04 46 add $0x46,%al 0x0011e8c8 <+7>: 67 48 addr16 dec %eax 0x0011e8ca <+9>: c0 ef 50 shr $0x50,%bh 0x0011e8cd <+12>: 00 00 add %al,(%eax) 0x0011e8cf <+14>: 21 78 44 and %edi,0x44(%eax) 0x0011e8d2 <+17>: 07 pop %es 0x0011e8d3 <+18>: 68 38 68 47 90 push $0x90476838 0x0011e8d8 <+23>: 02 a8 40 f9 cd 0a add 0xacdf940(%eax),%ch 0x0011e8de <+29>: 01 60 08 add %esp,0x8(%eax) 0x0011e8e1 <+32>: a8 f9 test $0xf9,%al 0x0011e8e3 <+34>: 21 51 f1 and %edx,-0xf(%ecx) 0x0011e8e6 <+37>: 62 (bad) 0x0011e8e7 <+38>: ea 60 48 d4 e9 00 23 ljmp $0x2300,$0xe9d44860
You may be lucky with piping the GDB output through the following awk program: awk '{for (i=1;i<=NF;i++) if ($i~/^[a-f0-9]{2}$/) printf("%s%s",$i,OFS)} END{print ""}' This will check all "words" (space-separated chunks of text) of the incoming lines of GDB output and check if they are two-digit hex numbers. If so, it will print them. If not, it will do nothing. The printed hex numbers thus found will be separated by the output field separator OFS, a space by default. A newline is only printed at end-of-input (the print statement automatically appends the "output record separator", by default a newline, so printing "nothing" is equivalent to outputting a newline), so all these hex numbers will appear in one large space-separated stream. It worked for the example GDB output you provided, but beware that if there are "stray" two-digit hex numbers elsewhere, they will also end up in the output.
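As a quick sanity check, here is the program run over two sample lines of disassembly, with the {2} interval spelled out as [0-9a-f][0-9a-f] (some awk implementations, e.g. mawk, don't support {n} repetition):

```shell
printf '0x0011e8c1 <+0>:\te9 f0 41 c8 b0\tjmp 0xb0da2ab6\n0x0011e8c6 <+5>:\t04 46\tadd $0x46,%%al\n' |
  awk '{for (i=1;i<=NF;i++) if ($i~/^[0-9a-f][0-9a-f]$/) printf("%s%s",$i,OFS)} END{print ""}'
# → e9 f0 41 c8 b0 04 46   (each byte followed by OFS, so there is a trailing space)
```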
GDB Disassemble: Print only raw binary data (using column and awk)
1,656,615,160,000
ERROR: type should be string, got "\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/modules/delay_execution/delay_execution.c\nhttps://stackoverflow.com/questions/11967440/stepping-into-specific-function-in-gdb\nI loaded the module delay_execution.\nthen gdb -p $proc\nquite new to gdb. can I let gdb execute directly up to the beginning of delay_execution_planner?\nthere are many steps, press step by step seems not so good.\n"
That's what breakpoints are for. A simple

break delay_execution_planner
continue

does what you need.
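A minimal session against the attached backend might look like this (frame details elided; execution only stops once the planner hook actually runs):

```
(gdb) break delay_execution_planner
Breakpoint 1 at ...: file delay_execution.c.
(gdb) continue
Continuing.
   ... run a query in the attached backend's psql session ...
Breakpoint 1, delay_execution_planner (...) at delay_execution.c:...
(gdb) next        # now single-step from the top of the function
```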
gdb execute through to a specific function
1,656,615,160,000
I have a binary that I usually run as follows: $ xvfb-run ./bin --param1 foo However, now that I need to debug it using GDB, I'm not able to do: $ gdb --args xvfb-run ./bin --param1 foo because "/usr/bin/xvfb-run": not in executable format: file format not recognized. Is there a way to do this? For example, by using Xvfb? TIA!
xvfb-run is a shell script! Hence, if you wanted to run all of xvfb-run itself in gdb, you'd

gdb --args sh $(which xvfb-run) ./bin --param1 foo

But that's probably not what you want! It doesn't sound like you're interested in debugging the Xvfb X server itself; more likely you're interested in debugging ./bin. You would rather

xvfb-run gdbserver localhost:9999 ./bin --param1 foo

which starts a gdbserver with your program ./bin loaded; you can then attach to it with gdb:

$ gdb ./bin
(gdb) target remote localhost:9999
(gdb) continue

(With plain target remote, the program has already been started under gdbserver, so you resume it with continue rather than run.)
How to debug (gdb) a binary that is invoked with xvfb-run?
1,656,615,160,000
I searched all over the internet but couldn't find proper steps to debug linux module remotely using gdb. I am tring qemu but facing many issues there. Is there any other tool that I can use or if not then can you provide me proper steps to debug linux module remotely?
Shouldn't be that hard. From the official kernel documentation (don't search "all over the internet" — search the official documentation and you'll find less bad information):

1. Have a kernel with KGDB enabled, and also make sure that during building the config option CONFIG_GDB_SCRIPTS is on. (Refer to the documentation for CentOS on how to build a kernel package; that's the easiest way.) Run make scripts_gdb.
2. Copy that kernel (vmlinux) to your host system, so that the kernel symbols are available locally.
3. Run the Linux distro of your choice in QEMU with QEMU's GDB stub enabled, listening on some port, i.e., run qemu with -gdb tcp::$SOMEPORT, where $SOMEPORT is the port number you want to use (should be > 1024, < 2¹⁶). Alternatively, run with -s, which is identical to -gdb tcp::1234. Make sure that QEMU doesn't boot the machine instantly by supplying the -S option.
4. On the host, run gdb /path/to/the/kernel/vmlinux.
5. In gdb, attach to your QEMU stub: target remote :$SOMEPORT.
6. You can now run continue to boot the VM.
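Putting the host side together, a typical session looks like this (the kernel path, append string and port are placeholders; nokaslr keeps symbol addresses stable, and hbreak is used because hardware-assisted breakpoints tend to be more reliable very early in boot under QEMU):

```
$ qemu-system-x86_64 -kernel /path/to/bzImage -append "nokaslr console=ttyS0" -s -S ...
$ gdb vmlinux
(gdb) target remote :1234
(gdb) hbreak start_kernel
(gdb) continue
```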
How can I remotely debug linux module using GDB?
1,656,615,160,000
Please see what does `handle SIG33 pass nostop noprint` does when used in gdb . I am guessing from the answer shared by Stephen Kitt, that info. about signals is in the source code somewhere. If I download the source code of a particular app, say leafpad http://tarot.freeshell.org/leafpad/ how can I search for which signals are present. The idea is to do better debugging.
To find the signals that a given application handles, on its own, look for sigaction and signal calls in the source code. Libraries can also set up signal handlers, so you really need to look at those too... Without looking at the source code, you can look for those using strace which has specific support for signal-related syscalls: strace -e trace=signal ... This will run your program and dump details of all signal-related syscalls. From that you will be able to determine which signals are used.
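For the source-code side of the question, a recursive grep over the unpacked sources is usually enough to locate the handler registrations. A self-contained illustration (the sample tree and file are made up for the demo):

```shell
# create a throwaway "source tree" with one handler registration
mkdir -p sigdemo
cat > sigdemo/main.c <<'EOF'
#include <signal.h>
static void on_int(int sig) { (void)sig; }
int main(void) { signal(SIGINT, on_int); return 0; }
EOF

# find sigaction/signal calls; prints file, line number and the matching line
grep -rnE 'sigaction|signal[[:space:]]*\(' sigdemo
```

Note that the #include <signal.h> line is not reported, because the regex requires an opening parenthesis after "signal".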
is there a way to know if signals are present in your application and which signals are there? [closed]
1,656,615,160,000
I earlier had RDP access to a remote machine (a typical physical desktop PC) using which I launched sudo apt dist-upgrade inside a GUI gnome-terminal. Since then I have lost the RDP connection and only have SSH [1]. Since there was no 'assume yes' in the apt command, inspecting cat /var/log/dist-upgrade/screenlog.0 over SSH has revealed that the upgrade, still running, is stuck on a prompt [2], waiting for the user to hit Enter or select yes (and then hit Enter). Note that at this point dist-upgrade has been running for about an hour (excluding the wait since) and has installed lots of packages. The objective now is to pass Enter to that running dist-upgrade (or terminate the upgrade altogether, but this may corrupt the system). An LLM has suggested using gdb -p pid /usr/libexec/gnome-terminal-server followed by call (int)write(0, "\n", 1), where pid and the executable are obtained from ps -ex | grep terminal. However that errs with No symbol "write" in current context. (since there are no debugging symbols in /usr/libexec/gnome-terminal-server). Another approach is to use xdotool to generate a programmatic keypress. Alas, it's not already installed on the remote, and installing it now isn't possible since an upgrade is running. This is true of all programs: at this point we have to work with what we've got (standard bash) and nothing new can be installed. What to do?

[1]: Actually 'remote:xrdp:SSHTunnel:Remmina:client' based RDP access still works, except that we are dropped into a black screen and nothing apparently can be done to wake the remote's display.

[2]: The prompt in question is simply

# tail /var/log/dist-upgrade/screenlog.0
Package configuration

Upgrade to the firefox snap

Starting in Ubuntu 22.04, all new releases of firefox are only available to Ubuntu users through the snap package. This package update will transition your system over to the snap by installing it. It is recommended to close all open firefox windows before proceeding to the upgrade.
<-- 0:jammy -- time-stamp -- Apr/01/24 15:45:57 --
I'm not entirely familiar with apt, but the name of that logfile (screenlog.0) makes me wonder if the process is already running under the control of screen? If that were the case, screen -ls should show an active session, and screen -R should re-attach to it. If the log filename is a red herring, you may be able to reptyr, which is a tool that allows you to attach an existing program to a new terminal.
Pass Enter key to dist-upgrade's prompt, running inside a GUI terminal on a remote machine, accessible henceforth only over SSH
1,656,615,160,000
source: https://git.postgresql.org/cgit/postgresql.git/tree/src/backend/optimizer/plan/planner.c

(gdb) n
3556        if (root->group_pathkeys)
(gdb) s
3558        else if (root->window_pathkeys)
(gdb) print root->group_pathkeys==NULL
No symbol "NULL" in current context.
(gdb) s
3559        root->query_pathkeys = root->window_pathkeys;
(gdb) s
query_planner (root=root@entry=0x55ffb53fdb70, qp_callback=qp_callback@entry=0x55ffb3299ee0 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffc5db45260) at ../../Desktop/pg_sources/main/postgres/src/backend/optimizer/plan/planmain.c:219
219         fix_placeholder_input_needed_levels(root);

Based on the above I can guess that root->group_pathkeys == NULL and root->query_pathkeys != NULL. Is it possible to evaluate such an expression with print? Something like print (root->group_pathkeys==NULL) returning 1.
print root->group_pathkeys is normally enough to see whether it is zero. NULL is normally not known to the debugger because it is not a declared symbol but a preprocessor definition (it may be #define NULL 0, or with a cast to void as jian said). (This may change with size-expensive compilation flags.) So you can replace NULL by 0. (Moreover, the debugger does not care much about the types.) The above holds if == is supposed to be just the normal ==; otherwise one would need to write something rather unusual.
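Concretely, any of the following works in a session like the one in the question (the printed values are illustrative):

```
(gdb) print root->group_pathkeys
$1 = (List *) 0x0
(gdb) print root->group_pathkeys == 0
$2 = 1
(gdb) print !root->group_pathkeys
$3 = 1
```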
gdb print out a pointer is null or not
1,656,615,160,000
This is similar to the issue posted here and here. I want to reverse engineer a binary called gpslogger but before debugging it using GDB, I wish to simply emulate it using QEMU (qemu-aarch64) since when I run file gpslogger I get gpslogger: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-aarch64.so.1, not stripped. I start by downloading the exact interpreter file and pasting it in my Ubuntu 16.04 x86_64 /lib folder and then other errors show up asking for other .so files, e.g., libgps.so. I then download those .so files for the AARCH64 architecture and paste them in the /lib folder of my Ubuntu. Once all the .so errors, i.e., no such file or directory are gone, I'm left with Error relocating /lib/libgps.so: __strdup: symbol not found Error relocating /lib/libgps.so: __fdelt_chk: symbol not found Error relocating /lib/libgps.so: __fprintf_chk: symbol not found Error relocating /lib/libgps.so: __snprintf_chk: symbol not found Error relocating /lib/libgps.so: __isnan: symbol not found Error relocating /lib/libgps.so: __syslog_chk: symbol not found Error relocating /lib/libgps.so: __vsnprintf_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __snprintf_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __vsnprintf_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __strncpy_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __vfprintf_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __fprintf_chk: symbol not found Error relocating /lib/libdbus-1.so.3: __vsprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: __sprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: reallocarray: symbol not found Error relocating /lib/libsystemd.so.0: __register_atfork: symbol not found Error relocating /lib/libsystemd.so.0: __memcpy_chk: symbol not found Error relocating /lib/libsystemd.so.0: __snprintf_chk: symbol not found Error relocating 
/lib/libsystemd.so.0: __vsnprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: __strncpy_chk: symbol not found Error relocating /lib/libsystemd.so.0: __vasprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: __open64_2: symbol not found Error relocating /lib/libsystemd.so.0: __asprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: __fprintf_chk: symbol not found Error relocating /lib/libsystemd.so.0: __ppoll_chk: symbol not found Error relocating /lib/libsystemd.so.0: fcntl64: symbol not found Error relocating /lib/libsystemd.so.0: __explicit_bzero_chk: symbol not found Error relocating /lib/libsystemd.so.0: parse_printf_format: symbol not found Error relocating /lib/libsystemd.so.0: __openat64_2: symbol not found Error relocating /lib/libgcrypt.so.20: __memcpy_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __snprintf_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __fdelt_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __vfprintf_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __memset_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __fprintf_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __read_chk: symbol not found Error relocating /lib/libgcrypt.so.20: __syslog_chk: symbol not found Error relocating /lib/libgpg-error.so.0: __sprintf_chk: symbol not found Error relocating /lib/libgpg-error.so.0: __fdelt_chk: symbol not found Error relocating /lib/libgpg-error.so.0: __vfprintf_chk: symbol not found Error relocating /lib/libgpg-error.so.0: __memset_chk: symbol not found Error relocating /lib/libgpg-error.so.0: __fprintf_chk: symbol not found Error relocating gpslogger: GPSNMEA: symbol not found Except for the last relocation error, I believe all the other functions should be implemented in glibc. Therefore, I simply downloaded the libc-2.32.so file from here for the AARCH64 architecture and pasted it in the /lib folder of my Ubuntu. 
However, the errors didn't go away. Please let me know if more information is needed. I appreciate any help on the issue. Edit: readelf -d gpslogger | grep 'NEEDED' returns: 0x0000000000000001 (NEEDED) Shared library: [libgps.so] 0x0000000000000001 (NEEDED) Shared library: [libc.musl-aarch64.so.1] Does this mean that the libc is coming from musl and is not glibc?
The “interpreter /lib/ld-musl-aarch64.so.1” line in file’s output indicates that gpslogger was built with musl. This means that you need not only the musl dynamic linker (ld-musl-aarch64.so.1), but also musl variants of every single library used by gpslogger. The missing symbols you list indicate that the libraries you’ve installed were built for glibc.
Emulating an AARCH64 Binary calling libgps on x86_64 Ubuntu using QEMU gives "Error relocating: symbol not found" Errors
1,656,615,160,000
Can you recommend the OS mentioned in the Shellcoder's Handbook? I'm having frequent issues running the ELF files mentioned there (see the errors below). I know that to overcome those errors I have to enter commands or arguments, but I did that too and I'm still not getting the same output as in the book, e.g. at the assembly level. I'm running one file as a demonstration on Ubuntu 4.15.0-106-generic (the testing environment I'm using), and a lot of things at the assembly level are different. The following dissimilarity will help you understand my problem. The code below, from the book, is focused on the int 0x80 instruction.
CODE:
main()
{
exit(0);
}
This is the output from the book:
[slap@0day root] gdb exit
GNU gdb Red Hat Linux (5.3post-0.20021129.18rh)
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type “show copying” to see the conditions. There is absolutely no warranty for GDB. Type “show warranty” for details.
This GDB was configured as “i386-redhat-linux-gnu”...
(gdb) disas _exit
Dump of assembler code for function _exit:
0x0804d9bc <_exit+0>: mov 0x4(%esp,1),%ebx
0x0804d9c0 <_exit+4>: mov $0xfc,%eax
0x0804d9c5 <_exit+9>: int $0x80
0x0804d9c7 <_exit+11>: mov $0x1,%eax
0x0804d9cc <_exit+16>: int $0x80
0x0804d9ce <_exit+18>: hlt
0x0804d9cf <_exit+19>: nop
End of assembler dump.
This is the output from my testing environment (Ubuntu 4.15.0-106-generic 16.04.1):
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
This GDB was configured as "i686-linux-gnu"
gdb-peda$ disas exit
Dump of assembler code for function exit@plt:
0x080482e0 <+0>: jmp DWORD PTR ds:0x804a00c
0x080482e6 <+6>: push 0x0
0x080482eb <+11>: jmp 0x80482d0
End of assembler dump.
As you can see, there is no int 0x80 instruction in my testing environment, unlike in the book.
Errors: stack-smashing detected --- to overcome this error I used -fno-stack-protector, and it only works sometimes.
or also: Segmentation fault (core dumped) --- I'm getting this error when it's not even mentioned in the book.
I know it's the Linux version I'm using, which must be patched against the things from the book. So can you recommend the environment/OS mentioned in the book, or is there any way to compile the binaries mentioned in the book to run in my testing environment (Linux 4.15.0-106-generic #107~16.04.1-Ubuntu)?
EDIT: command used to compile the ELF file:
gcc -m32 -fno-stack-protector exit.c -o exit
I also tried this:
gcc -static -m32 -fno-stack-protector exit.c -o exit
Adding -static gave this in assembly:
gdb-peda$ disas exit
Dump of assembler code for function exit:
0x0804e440 <+0>: sub esp,0x10
0x0804e443 <+3>: push 0x1
0x0804e445 <+5>: push 0x80eb070
0x0804e44a <+10>: push DWORD PTR [esp+0x1c]
0x0804e44e <+14>: call 0x804e320 <__run_exit_handlers>
End of assembler dump.
In the book output, you show that they disassemble _exit:
This GDB was configured as “i386-redhat-linux-gnu”...
(gdb) disas _exit
But in your experiment, you disassemble exit (notice the missing leading underscore):
This GDB was configured as "i686-linux-gnu"
gdb-peda$ disas exit
Those are two separate functions, so make sure you're using _exit. This answer explains the difference between the two: https://unix.stackexchange.com/a/5375/90691
Also, in your output I noticed exit@plt; that "plt" stands for "Procedure Linkage Table", and it's part of resolving dynamically-linked symbols. If you compile with -static, that'll cause the compiler to statically link (instead of dynamically link) your program, so you won't end up with that level of indirection. This answer provides a more detailed explanation: https://unix.stackexchange.com/a/256852/90691
If you don't compile with -static and try to disassemble the program from the book, you might see:
(gdb) disassemble _exit
No symbol "_exit" in current context.
That's because nothing in your program referenced the symbol _exit. Compiling with -static may resolve that problem. If not, you could modify the program to call _exit instead of exit.
Finally, i386-redhat-linux-gnu vs i686-linux-gnu: the former is for a 386 processor; the latter is for a 686 processor. Both are 32-bit, so with any luck you should be fine using the 686 toolchain.
Red hat vs Ubuntu compile and assembly Problem(Book reference)
1,656,615,160,000
Typing apt-get upgrade returns the following error:
dpkg: error processing package gdb (--configure):
package is in a very bad inconsistent state; you should reinstall it before attempting configuration
Errors were encountered while processing:
gdb
E: Sub-process /usr/bin/dpkg returned an error code (1)
I tried a lot of solutions, but the error still occurs. Please help me get out of this problem.
The error message gives some indication of what’s going on and how to fix it:
package is in a very bad inconsistent state; you should reinstall it before attempting configuration
The problem is that the package state as described in dpkg’s “database” (the files under /var/lib/dpkg/info) doesn’t match what’s on the system. This can happen because the files under /var/lib/dpkg/info got corrupted, or because the files installed by the package were changed without involving dpkg. The appropriate fix is to reinstall gdb:
sudo apt --reinstall install gdb
This replaces the files on the system, including the dpkg database files, with the files in the package, if necessary downloading it again. As a result, the database ends up in sync with the file system again (at least, as far as gdb’s files are concerned).
Error in dpkg when executing apt-get upgrade ( most of the commands )
1,656,615,160,000
If I had several hundred core dumps in a directory and want to filter it down to just ones generated by a specific signal without having to manually open each one in GDB one at a time, is there a way to do that? GDB does allow you to pass in commands via -ex flag, but GDB's output doesn't go the console, so i can't just run that on all the files and grep the results.
Partial answer: I note you are using a conditional clause, so if the core dumps are not already generated, the easiest way is to include the signal in the name when they are generated. See man 5 core for details. If you already have them, have a look at the details of the core format (see e.g. here). I'd assume the signal number is in the various siginfo_t note entries (but didn't verify this), so extract them in whatever way is fast enough for you (custom C program if necessary), and filter for the signals you want.
Filter hundreds of coredumps by signal
1,656,615,160,000
I'm debugging with gdb and need to define some helper commands. Basically I want my customized command to operate differently depending on the number of args given. So I have to test whether $arg* is given; see the code below:
define pgdir
set $pgdir = $arg0
if ($arg1) {
// show the corresponding PDE
} else {
// show the whole page directory
}
end
Is it possible to test whether a variable is void?
You can use the convenience function $_isvoid(). It returns 1 if the variable is void. (gdb) set $v = 1 (gdb) print $_isvoid($v) $1 = 0 (gdb) print $_isvoid($v2) $2 = 1
gdb-customize command, how to test whether a variable is set?
1,656,615,160,000
Running pstack on a process sometimes causes gdb to attach to that process on one of my Linux servers. Why would pstack launch gdb, and how can I prevent that? Details: gdb is running as: /user/bin/gdb --quiet -nx /proc/1234/exe 1234 the parent process of gdb is: /bin/sh /user/bin/pstack 1234
Recent versions of pstack are standalone, but older versions (e.g. pstack-gdb, or the version of pstack in RHEL 5) are wrappers around gdb. Presumably “one of your servers” has an older distribution and its version of pstack is one of the gdb wrappers. To prevent that, you’d have to install a newer version of pstack.
Why does pstack launch gdb (and how to prevent it)?
1,386,277,472,000
I have been trying to parallelize the following script, specifically each of the three FOR loop instances, using GNU Parallel but haven't been able to. The 4 commands contained within the FOR loop run in series, each loop taking around 10 minutes. #!/bin/bash kar='KAR5' runList='run2 run3 run4' mkdir normFunc for run in $runList do fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat" fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear rm -f *.mat done
Why don't you just fork (aka. background) them? foo () { local run=$1 fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat" fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear } for run in $runList; do foo "$run" & done In case that's not clear, the significant part is here: for run in $runList; do foo "$run" & done ^ Causing the function to be executed in a forked shell in the background. That's parallel.
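A self-contained sketch of the same pattern — the foo body here is a placeholder for the real FSL commands, and the final wait (an addition to the answer's snippet) makes the script block until every forked job is done:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for "script1.sh $1; script2.sh $1".
foo() {
    echo "processing $1"
}

runList='run2 run3 run4'
for run in $runList; do
    foo "$run" &     # fork each run into the background
done
wait                 # block here until every forked job has finished
echo "all runs complete"
```

Without the trailing wait, the script would exit while the backgrounded jobs are still running.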
Parallelize a Bash FOR Loop
1,386,277,472,000
I'm confused about the difference or advantage (if any) of running a set of tasks in a .sh script using GNU parallel E.g. Ole Tange's answer: parallel ./pngout -s0 {} R{} ::: *.png rather than say looping through them putting them in the background &. E.g. frostschutz's answer: #copied from the link for illustration for stuff in things do ( something with stuff ) & done wait # for all the something with stuff In short are they just syntactically or practically different? And if practically different when should I use each?
Putting multiple jobs in the background is a good way of using the multiple cores of a single machine. parallel, however, allows you to spread jobs across multiple servers of your network. From man parallel:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables.
Even when running on a single computer, parallel gives you far greater control over how your jobs are parallelized. Take this example from the manpage:
To convert *.wav to *.mp3 using LAME running one process per CPU core run:
parallel lame {} -o {.}.mp3 ::: *.wav
OK, you could do the same with
for i in *wav; do lame "$i" -o "${i%.wav}.mp3" & done
However, that is longer and more cumbersome and, more importantly, will launch as many jobs as there are .wav files. If you run this on a few thousand files, it is likely to bring a normal laptop to its knees. parallel, on the other hand, will launch one job per CPU core and keep everything nice and tidy.
Basically, parallel offers you the ability to fine-tune how your jobs are run and how much of the available resources they should use. If you really want to see the power of this tool, go through its manual or, at the very least, the examples it offers. Simple backgrounding really has nowhere near the level of sophistication to be compared to parallel.
As for how parallel differs from xargs, the GNU crowd give a nice breakdown here. Some of the more salient points are:
- xargs deals badly with special characters (such as space, ' and ").
- xargs can run a given number of jobs in parallel, but has no support for running number-of-cpu-cores jobs in parallel.
- xargs has no support for grouping the output, therefore output may run together, e.g. the first half of a line is from one process and the last half of the line is from another process.
- xargs has no support for keeping the order of the output, therefore if running jobs in parallel using xargs the output of the second job cannot be postponed till the first job is done.
- xargs has no support for running jobs on remote computers.
- xargs has no support for context replace, so you will have to create the arguments.
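To make the "fixed -P number" point above concrete, here is a minimal sketch with plain xargs; the job itself (echoing the argument via sh -c) is just a placeholder:

```shell
# Run one job per argument, at most 2 at a time, with plain xargs.
# Unlike GNU parallel there is no portable "one job per CPU core" here;
# you pick a fixed number yourself with -P.
printf '1\n2\n3\n4\n' | xargs -n1 -P2 sh -c 'echo "job $0"'
```

Note the output order is not guaranteed: with -P2 the jobs run concurrently, which is exactly the output-ordering caveat listed above.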
GNU parallel vs & (I mean background) vs xargs -P
1,386,277,472,000
I have been using an rsync script to synchronize data on one host with the data on another host. The data has numerous small files that contribute to almost 1.2 TB.
In order to sync those files, I have been using the rsync command as follows:
rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/
The contents of proj.lst are as follows:
+ proj1
+ proj1/*
+ proj1/*/*
+ proj1/*/*/*.tar
+ proj1/*/*/*.pdf
+ proj2
+ proj2/*
+ proj2/*/*
+ proj2/*/*/*.tar
+ proj2/*/*/*.pdf
...
...
...
- *
As a test, I picked two of those projects (8.5 GB of data) and executed the command above. Being a sequential process, it took 14 minutes 58 seconds to complete. So, for 1.2 TB of data, it would take several hours.
If I could run multiple rsync processes in parallel (using &, xargs or parallel), it would save me time.
I tried the command below with parallel (after cding to the source directory) and it took 12 minutes 37 seconds to execute:
parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: .
This should have taken 5 times less time, but it didn't. I think I'm going wrong somewhere.
How can I run multiple rsync processes in order to reduce the execution time?
The following steps did the job for me:
Run rsync --dry-run first in order to get the list of files that would be affected:
$ rsync -avzm --stats --safe-links --ignore-existing --dry-run \
--human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log
I fed the output of cat transfer.log to parallel in order to run 5 rsyncs in parallel, as follows:
$ cat /tmp/transfer.log | \
parallel --will-cite -j 5 rsync -avzm --relative \
--stats --safe-links --ignore-existing \
--human-readable {} REMOTE-HOST:/data/ > result.log
Here, the --relative option (link) ensured that the directory structure for the affected files, at the source and destination, remains the same (inside the /data/ directory), so the command must be run in the source folder (in this example, /data/projects).
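A hedged sketch of a chunked variant of these steps: split the dry-run file list into fixed-size batches, then run one transfer per batch. rsync is replaced by echo so the sketch runs anywhere; --files-from is a real rsync option, but the host and paths here are placeholders:

```shell
tmp=$(mktemp -d) && cd "$tmp"
# A fake dry-run file list: 10 paths, as transfer.log would hold.
printf 'proj/%s\n' a b c d e f g h i j > transfer.log
split -l 2 transfer.log chunk-      # 10 paths -> 5 chunks of 2 each
for c in chunk-*; do
    # Real command would be something like:
    #   rsync -a --relative --files-from="$c" /data/ REMOTE-HOST:/data/
    echo "would sync batch $c ($(wc -l < "$c") files)" &
done
wait
```

Batching like this amortizes rsync's per-connection startup over many files, instead of paying it once per file.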
Parallelise rsync using GNU Parallel
1,386,277,472,000
echo 'echo "hello, world!";sleep 3;' | parallel
This command does not output anything until it has completed. Parallel's man page claims:
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially.
I guess the devil is in the phrasing: you get the same output as if you had run it normally, but not at the same time as if you had run it normally. I've looked for an option that will do this, for example --results /dev/stdout, but that does not work.
My use-case is seeing real-time progress output from the commands that I'm running. It's not about how many tasks have completed, which parallel could display for me, but about the progress output of each individual command that I want to see.
I would use a bash loop (for i in $x; do cmd & done;) but I want to be able to stop all tasks with a single Ctrl+C, which parallel allows me to do.
Is it possible to do this in parallel? If not, is there another tool?
I think you're looking for --ungroup. The man page says: --group Group output. Output from each jobs is grouped together and is only printed when the command is finished. --group is the default. Can be reversed with -u. -u of course is a synonym for --ungroup.
Can GNU parallel output stdout before the program has exited?
1,386,277,472,000
I have a shell scripting problem where I'm given a directory full of input files (each file containing many input lines), and I need to process them individually, redirecting each of their outputs to a unique file (aka, file_1.input needs to be captured in file_1.output, and so on).
Pre-parallel, I would just iterate over each file in the directory and perform my command, while doing some sort of timer/counting technique to not overwhelm the processors (assuming that each process had a constant runtime). However, I know that won't always be the case, so using a "parallel"-like solution seems the best way to get shell-script multi-threading without writing custom code.
While I have thought of some ways to whip up parallel to process each of these files (and allowing me to manage my cores efficiently), they all seem hacky. I have what I think is a pretty easy use case, so would prefer to keep it as clean as possible (and nothing in the parallel examples seems to jump out as being my problem). Any help would be appreciated!
Input directory example:
> ls -l input_files/
total 13355
location1.txt
location2.txt
location3.txt
location4.txt
location5.txt
Script:
> cat proces_script.sh
#!/bin/sh
customScript -c 33 -I -file [inputFile] -a -v 55 > [outputFile]
Update: After reading Ole's answer below, I was able to put together the missing pieces for my own parallel implementation. While his answer is great, here is the additional research and notes I took:
Instead of running my full process, I figured I'd start with a proof-of-concept command to prove out his solution in my environment. See my two different implementations (and notes):
find /home/me/input_files -type f -name *.txt | parallel cat /home/me/input_files/{} '>' /home/me/output_files/{.}.out
This uses find (not ls, which can cause issues) to find all applicable files within my input-files directory, and then redirects their contents to a separate directory and file.
My issue from above was reading and redirecting (the actual script was simple), so replacing the script with cat was a fine proof of concept.
parallel cat '>' /home/me/output_files/{.}.out ::: /home/me/input_files/*
This second solution uses parallel's input variable paradigm to read the files in; however, for a novice, this was much more confusing. For me, using find and a pipe met my needs just fine.
GNU Parallel is designed for this kind of task:
parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output ::: *.input
or:
ls | parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output
It will run one job per CPU core. You can install GNU Parallel simply by:
wget https://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
using parallel to process unique input files to unique output files
1,386,277,472,000
In a larger script to post-process some simulation data I had the following line:
parallel bnzip2 -- *.bz2
Which, if I understand parallel correctly (and I may not), should run n-core threads of the program over all files with the listed extension. You may notice that I misspelled the command bunzip2. I would expect a warning or error message here, but it fails silently. Is this intended? How do I not get bit by this in the future?
Update: It is possible that I have a different parallel installed than I think I do:
> parallel --version
parallel: invalid option -- '-'
parallel [OPTIONS] command -- arguments
for each argument, run command with argument, in parallel
parallel [OPTIONS] -- commands
run specified commands in parallel
A man page of parallel on my system gives:
parallel(1) parallel(1)
NAME
parallel - run programs in parallel
....
AUTHOR
Tollef Fog Heen
Which suggests this is not the GNU version.
You have been hit by the confusion with Tollef's parallel from moreutils. See https://www.gnu.org/software/parallel/history.html You can install GNU Parallel simply by: wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel chmod 755 parallel cp parallel sem Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Why does (GNU?) parallel fail silently, and how do I fix it?
1,386,277,472,000
I know that GNU Parallel buffers stdout/stderr because it doesn't want the jobs' output to be mangled, but if I run my jobs with parallel do_something ::: task_1 task_2 task_3, is there any way for task_1's output to be displayed immediately, then, after task_1 finishes, task_2's up to its current output, etc.? If Parallel cannot solve this problem, is there any other similar program that could?
From version 20160422 you can do: parallel -k --lb do_something ::: task_1 task_2 task_3
GNU Parallel: immediately display job stderr/stdout one-at-a-time by jobs order
1,386,277,472,000
So I have a while loop: cat live_hosts | while read host; do \ sortstuff.sh -a "$host" > sortedstuff-"$host"; done But this can take a long time. How would I use GNU Parallel for this while loop?
You don't use a while loop. parallel "sortstuff.sh -a {} > sortedstuff-{}" <live_hosts Note that this won't work if you have paths in your live_hosts (e.g. /some/dir/file) as it would expand to sortstuff.sh -a /some/dir/file > sortedstuff-/some/dir/file (resulting in no such file or directory); for those cases use {//} and {/} (see gnu-parallel manual for details): parallel "sortstuff.sh -a {} > {//}/sortedstuff-{/}" <live_hosts
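In plain-shell terms, {//} and {/} correspond to dirname and basename; a small sketch (the path is made up, and the command is only echoed, not run):

```shell
# What the second parallel command above would build for one input line:
host=/some/dir/file
echo "sortstuff.sh -a $host > $(dirname "$host")/sortedstuff-$(basename "$host")"
# prints: sortstuff.sh -a /some/dir/file > /some/dir/sortedstuff-file
```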
How would I use GNU Parallel for this while loop?
1,386,277,472,000
% echo -e '1\n2' | parallel "bash -c 'echo :\$1' '' {}" :1 :2 % echo -e '1\n2' | parallel bash -c 'echo :\$1' '' {} % I'd expect the second line to act the same.
parallel runs the command in a shell already (which shell is determined by parallel using heuristics, the intention being to invoke the same shell as the one parallel was invoked from; you can set the $PARALLEL_SHELL variable to fix the shell). It's not a command you're passing to parallel like you would for the env or xargs command, but a shell command line (like you would for the eval command).
Like for eval, in parallel arg1 arg2, parallel is concatenating those arguments with spaces in between (so it becomes arg1 arg2) and that string is passed to <the-shell> -c. For the arguments that are passed on parallel's stdin, parallel quotes them in the format expected by that particular shell (a difficult and error-prone task, which is why you'll find there have been a lot of bugs fixed around that in parallel's Changelog; some are still not fixed as of 2017-03-06) and appends them to that command line.
So for instance, if called from within bash,
echo "foo'bar" | parallel echo foo
would have parallel call bash -c with echo foo foo\'bar as the command line. And if called from within rc (or with PARALLEL_SHELL=rc), rc -c with echo foo foo''''bar.
In your:
parallel bash -c 'echo :\$1' '' {}
parallel concatenates those arguments, which gives:
bash -c echo :$1 {}
And with the {} expanded and quoted in the right format for the shell you're calling parallel from, passes that to <that-shell> -c, which will call bash -c echo with :$1 in $0 and the current argument in $1. That's not how parallel works. Here, you'd probably want:
printf '1\n2\n' | PARALLEL_SHELL=bash parallel 'echo :{}'
To see what parallel does, you can run it under strace -fe execve (or the equivalent on your system if not Linux).
Here, you could use GNU xargs instead of parallel to get simpler processing, closer to what you're expecting:
printf '1\n2\n' | xargs -rn1 -P4 bash -c 'echo ":$1"' ''
See also the discussion at https://lists.gnu.org/archive/html/bug-parallel/2015-05/msg00005.html
Note that in bash -c 'echo foo' '' foo, you're making $0 the empty string for that inline script. I would avoid that, as $0 is also used in error messages. Compare:
$ bash -c 'echo x > "$1"' '' /
: /: Is a directory
with:
$ bash -c 'echo x > "$1"' bash /
bash: /: Is a directory
Also note that leaving variables unquoted has a very special meaning in bash and that echo can generally not be used for arbitrary data.
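The xargs variant above, runnable as-is (with bash as $0, per the note about error messages, and sort added only to make the concurrent output order deterministic):

```shell
# Each input line arrives as $1 of a fresh bash -c, so no
# parallel-style re-quoting layer is involved.
printf '1\n2\n' | xargs -rn1 -P4 bash -c 'echo ":$1"' bash | sort
# prints :1 then :2
```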
Why doesn't GNU parallel work with "bash -c"?
1,386,277,472,000
I have a script that uses gnu parallel. I want to pass two parameters for each "iteration" in serial run I have something like: for (( i=0; i<=10; i++ )) do a = tmp1[$i] b = tmp2[$i] done And I want to make this parallel as func pf() { a=$1 b=$2 } export -f pf parallel --jobs 5 --linebuffer pf ::: <what to write here?>
Omitting your other parallel flags just to stay focused...
parallel --link pf ::: A B ::: C D
This will run your function first with a=A, b=C, followed by a=B, b=D; or:
a=A b=C
a=B b=D
Without --link you get the full combination, like this:
a=A b=C
a=A b=D
a=B b=C
a=B b=D
Update: As Ole Tange mentioned in a comment [since deleted - Ed.], there is another way to do this: use the :::+ operator. However, there is an important difference between the two alternatives if the number of arguments is not the same in each param position. An example will illustrate.
parallel --link pf ::: A B ::: C D E
output:
a=A b=C
a=B b=D
a=A b=E
parallel pf ::: A B :::+ C D E
output:
a=A b=C
a=B b=D
So --link will "wrap" such that all arguments are consumed, while :::+ will ignore the extra argument. (In the general case I prefer --link, since the alternative is in some sense silently ignoring input. YMMV.)
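For comparison, here's a plain-bash sketch of the --link pairing — the array names and the pf body are illustrative only, and each pair is backgrounded to mimic the concurrency:

```shell
# Walk two argument lists in lockstep, like parallel --link.
a_vals=(A B)
b_vals=(C D)
pf() { echo "a=$1 b=$2"; }

for i in "${!a_vals[@]}"; do
    pf "${a_vals[$i]}" "${b_vals[$i]}" &   # one backgrounded job per pair
done
wait
```

The output lines may appear in any order because the jobs run concurrently.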
GNU parallel - two parameters from array as parameter
1,386,277,472,000
I have a script I'd always like to run in 'x' parallel instances. The code looks like this:
for A in
do
for B in
do
(script1.sh $A $B;script2.sh $A $B) &
done #B
done #A
The scripts themselves run DB queries, so they would benefit from running in parallel. The problems are:
1) 'wait' doesn't work, because it waits for all background jobs to finish before starting new ones (even if I include a thread counter), which wastes lots of time.
2) I couldn't figure out how to get parallel to do that. I only found examples where the same script gets run multiple times, but not with different parameters.
3) The alternative solution would be:
for A in
do
for B in
do
while threadcount>X
do
sleep 60
done
(script1.sh $A $B;script2.sh $A $B) &
done #B
done #A
But I didn't really figure out how to get the thread count reliably.
Some hints in the right direction are very much welcome. I'd love to use parallel, but the thing just doesn't work as the documentation tells me. I do parallel echo ::: A B C ::: D E F (from the doc) and it tells me:
parallel: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit.
and that is just the simplest example from the man pages.
Using GNU Parallel it looks like this:
parallel script1.sh {}';' script2.sh {} ::: a b c ::: d e f
It will spawn one job per CPU. GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
How to run x instances of a script parallel?
1,386,277,472,000
I am having an issue trying to use parallel command on Ubuntu 10.04. I looked up the parallel documentation and few of the commands seem to run. In all cases I just get the command prompt back without any action being taken. e.g. I was trying to compress a bunch of files using bzip2 17:32 farhat HarshaNaveen$ parallel bzip2 ::: *fastq 17:33 farhat HarshaNaveen$ ls *fastq|parallel bzip2 {} Neither of these commands worked. Nor was there any error. The example given in the man file worked fine though. 18:58farhat HarshaNaveen$ parallel sh -c "echo hi; sleep 2; echo bye" -- 1 2 3 hi hi hi bye bye bye 18:58farhat HarshaNaveen$ What am I doing wrong?
Your first try is closest to being correct, but why the :::? If you change ::: to --, it will do what you want. parallel has a specific, unusual structure to its command line. In the first half, you give it the command you want to run multiple times, and the part of the command line that will be the same every time. In the second half, you give it the parts that will be different each time the command is run. These halves are separated by --. Some experimentation shows that if parallel doesn't find the second half, it doesn't actually run any commands. It's probably worth re-reading the man page carefully. Man pages have a terse, information-dense style that can take some getting used to. Also try reading some pages for commands you're already familiar with.
Using parallel on Ubuntu
1,386,277,472,000
I have a words.txt with 10000 words (one per line). I have 5,000 documents. I want to see which documents contain which of those words (with a regex pattern around the word). I have a script.sh that greps the documents and outputs hits. I want to (1) split my input file into smaller files, (2) feed each of the files to script.sh as a parameter, and (3) run all of this in parallel.
My attempt based on the tutorial is hitting errors:
$ parallel ./script.sh ::: split words.txt
# ./script.sh: line 22: split: No such file or directory
My script.sh looks like this:
#!/usr/bin/env bash    (line 1)
while read line        (line 2)
do
some stuff             (line 22)
done < $1
I guess I could output split to a directory, loop through the files in the directory launching grep commands -- but how can I do this elegantly and concisely (using parallel)?
You can use the split tool: split -l 1000 words.txt words- will split your words.txt file into files with no more than 1000 lines each named words-aa words-ab words-ac ... words-ba words-bb ... If you omit the prefix (words- in the above example), split uses x as the default prefix. For using the generated files with parallel you can make use of a glob: split -l 1000 words.txt words- parallel ./script.sh ::: words-[a-z][a-z]
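A quick way to see the naming and sizing behaviour — a sketch using throwaway input (2500 generated lines standing in for your real words.txt):

```shell
tmp=$(mktemp -d) && cd "$tmp"
seq 2500 | sed 's/^/word/' > words.txt   # 2500 dummy "words", one per line
split -l 1000 words.txt words-           # -> words-aa words-ab words-ac
wc -l words-*
```

With 2500 input lines and -l 1000, the first two chunks hold 1000 lines each and the last holds the remaining 500 — and the words-[a-z][a-z] glob from the answer matches exactly these chunk names.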
split a file, pass each piece as a param to a script, run each script in parallel
1,386,277,472,000
GNU Parallel quotes replacement strings by default so that they are not expanded by the shell. But in certain cases you really want the replacement string to be interpreted by the shell. E.g. $ cat variables.txt --var1 0.1 --var2 0.2 --var1 0.11 --var3 0.03 Here I want GNU Parallel to run: myprogram --var1 0.1 --var2 0.2 myprogram --var1 0.11 --var3 0.03 How is that done? How is it done, if only some of the replacement strings should be interpreted: E.g. $ ls My file1.txt My file2.txt And I want this run: myprogram --var1 0.1 --var2 0.2 'My file1.txt' myprogram --var1 0.11 --var3 0.03 'My file1.txt' myprogram --var1 0.1 --var2 0.2 'My file2.txt' myprogram --var1 0.11 --var3 0.03 'My file2.txt'
From version 20190722 you can use uq() in a perl replacement string to make that replacement unquoted:
parallel myprogram '{=1 uq(); =}' {2} :::: variables.txt ::: My*.txt
This cannot be done in earlier versions. You can, however, unquote the full command with eval. This solves the first problem, but not the second:
parallel eval myprogram {} :::: variables.txt
If you prefer all replacement strings to be unquoted, you can do that by redefining them:
parallel --rpl '{} uq()' echo {} ::: '*'
You can put the --rpl in ~/.parallel/config to make them active by default (these are simply the definitions in the source code with uq() added):
--rpl '{} uq()'
--rpl '{#} 1 $_=$job->seq(); uq()'
--rpl '{%} 1 $_=$job->slot(); uq()'
--rpl '{/} s:.*/::; uq()'
--rpl '{//} $Global::use{"File::Basename"} ||= eval "use File::Basename; 1;"; $_ = dirname($_); uq()'
--rpl '{/.} s:.*/::; s:\.[^/.]*$::; uq()'
--rpl '{.} s:\.[^/.]*$::; uq()'
How do I tell GNU Parallel to not quote the replacement string
1,386,277,472,000
I'm pulling VIN specifications from the National Highway Traffic Safety Administration API for approximately 25,000,000 VIN numbers. This is a great deal of data, and as I'm not transforming the data in any way, curl seemed like a more efficient and lightweight way of accomplishing the task than Python (seeing as Python's GIL makes parallel processing a bit of a pain).

In the below code, vins.csv is a file containing a large sample of the 25M VINs, broken into chunks of 100 VINs. These are being passed to GNU Parallel, which is using 4 cores. Everything is dumped into nhtsa_vin_data.csv at the end.

$ cat vins.csv | parallel -j10% curl -s --data "format=csv" \
   --data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ \
   >> /nas/BIGDATA/kemri/nhtsa_vin_data.csv

This process was writing about 3,000 VINs a minute at the beginning and has been getting progressively slower with time (currently around 1,200/minute).

My questions
Is there anything in my command that would be subject to increasing overhead as nhtsa_vin_data.csv grows in size? Is this related to how Linux handles >> operations?
UPDATE #1 - SOLUTIONS First solution per @slm - use parallel's tmp file options to write each curl output to its own .par file, combine at the end: $ cat vins.csv | parallel \ --tmpdir /home/kemri/vin_scraper/temp_files \ --files \ -j10% curl -s \ --data "format=csv" \ --data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ > /dev/null cat <(head -1 $(ls *.par|head -1)) <(tail -q -n +2 *.par) > all_data.csv Second solution per @oletange - use --line-buffer to buffer output to memory instead of disk: $ cat test_new_mthd_vins.csv | parallel \ --line-buffer \ -j10% curl -s \ --data "format=csv" \ --data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ \ >> /home/kemri/vin_scraper/temp_files/nhtsa_vin_data.csv Performance considerations I find both the solutions suggested here very useful and interesting and will definitely be using both versions more in the future (both for comparing performance and additional API work). Hopefully I'll be able to run some tests to see which one performs better for my use case. Additionally, running some sort of throughput test like @oletange and @slm suggested would be wise, seeing as the likelihood of the NHTSA being the bottleneck here is non-negligible.
My suspicion is that the >> is causing you contention on the file nhtsa_vin_data.csv among the curl commands that parallel is forking off to collect the API data. I would adjust your application like this:

$ cat p.bash
#!/bin/bash

cat vins.csv | parallel --will-cite -j10% --progress --tmpdir . --files \
   curl -s --data "format=csv" \
   --data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/

This will give your curl commands their own isolated file to write their data to.

Example
I took these 3 VINs, 1HGCR3F95FA017875;1HGCR3F83HA034135;3FA6P0T93GR335818;, that you provided me and put them into a file called vins.csv. I then replicated them a bunch of times so that this file ended up having these characteristics:

VINs per line
$ tail -1 vins.csv | grep -o ';' | wc -l
26

Number of lines
$ wc -l vins.csv
15 vins.csv

I then ran my script using this data:

$ ./p.bash
Computers / CPU cores / Max jobs to run
1:local / 1 / 1

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:1/0/100%/0.0s ./pard9QD3.par
local:1/1/100%/10.0s ./paruwK9L.par
local:1/2/100%/8.5s ./parT6rCS.par
local:1/3/100%/7.3s ./pardzT2g.par
local:1/4/100%/6.8s ./parDAsaO.par
local:1/5/100%/6.8s ./par9X2Na.par
local:1/6/100%/6.7s ./par6aRla.par
local:1/7/100%/6.7s ./parNR_r4.par
local:1/8/100%/6.4s ./parVoa9k.par
local:1/9/100%/6.1s ./parXJQTc.par
local:1/10/100%/6.0s ./parDZZrp.par
local:1/11/100%/6.0s ./part0tlA.par
local:1/12/100%/5.9s ./parydQlI.par
local:1/13/100%/5.8s ./par4hkSL.par
local:1/14/100%/5.8s ./parbGwA2.par
local:0/15/100%/5.4s

Putting things together
When the above is done running, you can then cat all the files together to get a single .csv file:

$ cat *.par > all_data.csv

Use care when doing this, since every file has its own header row for the CSV data contained within.
To deal with taking the headers out of the result files:

$ cat <(head -1 $(ls *.par|head -1)) <(tail -q -n +2 *.par) > all_data.csv

Your slowing performance
In my testing it does look like the DOT website is throttling queries as they continue to access their API. The throughput I saw in my experiments, though the sample was small, decreased as each successive query was sent to the API's website.

My performance on my laptop was as follows:

$ seq 5 | parallel --will-cite --line-buffer 'yes {} | head -c 1G' | pv >> /dev/null
   5GiB 0:00:51 [99.4MiB/s] [    <=>      ]

NOTE: The above was borrowed from Ole Tange's answer and modified. It writes 5GB of data through parallel and pipes it to pv >> /dev/null. pv is used so we can monitor the throughput through the pipe and arrive at a MB/s type of measurement.

My laptop was able to muster ~100MB/s of throughput.

FAQ for NHTSA API
API For the 'Decode VIN (flat format) in a Batch' is there a sample on making this query by URL, similar to the other actions?

For this particular API you just have to put a set of VINs within the box that are separated by a ";". You can also indicate the model year prior to the ";" separated by a ",". There is an upper limit on the number of VINs you can put through this service. Example in the box is the sample: 5UXWX7C5*BA,2011; 5YJSA3DS*EF

Source: https://vpic.nhtsa.dot.gov/MfrPortal/home/faq searched for "rate"

The above mentions that there's an upper limit when using the API:

There is an upper limit on the number of VINs you can put through this service.

References
GNU Parallel man page
GNU Parallel Tutorial
Print a file skipping first X lines in Bash
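The header-stripping pipeline above can be checked without touching the API at all — a minimal sketch with two tiny hand-made .par-style files (the file names and CSV columns here are made up for the demo):

```shell
cd "$(mktemp -d)"

# Two fake per-job result files, each carrying its own CSV header
printf 'vin,make\nA1,Honda\n' > part1.par
printf 'vin,make\nB2,Ford\n'  > part2.par

# Keep the header from the first file only, then every file's data rows;
# tail -q -n +2 skips line 1 of each file, -q suppresses the "==> file <==" banners
cat <(head -1 "$(ls *.par | head -1)") <(tail -q -n +2 *.par) > all_data.csv

cat all_data.csv   # one header line followed by both data rows
```

The process substitutions `<(...)` require bash; in a plain POSIX shell a brace group `{ head -1 ...; tail -q -n +2 *.par; } > all_data.csv` produces the same result.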
Standard Out Append to File Size Limitations
1,386,277,472,000
I have a bash program that I run via command line (Ubuntu) like this:

./extract_field.sh ABC001

where ABC001 is the field ID that I want to extract from a given shapefile. To run this script with multiple IDs, I first save one ID per line in a list.txt file:

ABC001
ABC014
ABC213
ABC427

and then invoke the script using parallel:

parallel -a list.txt ./extract_field.sh

So far so good. However, I plan to change extract_field.sh so it takes two arguments rather than only one. Will the above workflow still work if I just change my text file to accommodate two arguments per line like this?

ABC001 arg2a
ABC014 arg2b
ABC213 arg2c
ABC427 arg2d

With this change, I would expect parallel -a list.txt ./extract_field.sh to behave like

./extract_field.sh ABC001 arg2a
./extract_field.sh ABC014 arg2b

and so on. Is that right? I could just test it before asking, but I decided to ask first since this change in the script will probably take me a couple of hours to finish (though it sounds like a simple change).
You can provide multiple arguments to a single command with parallel by specifying a column delimiter in your command syntax. To use your example:

parallel --colsep ' ' -a list.txt ./extract_field.sh {1} {2}

will produce the result of

./extract_field.sh ABC001 arg2a
./extract_field.sh ABC014 arg2b

given that your file list.txt contains

ABC001 arg2a
ABC014 arg2b

You can test this with cp or mv, since these both require multiple positional parameters.

Useful bit of parallel's manpage
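If you want to see the column-splitting behavior without running parallel itself, a plain while-read loop does the same splitting sequentially — a sketch where echo merely stands in for the questioner's extract_field.sh:

```shell
cd "$(mktemp -d)"

# Two whitespace-separated columns per line, as in the question
printf 'ABC001 arg2a\nABC014 arg2b\n' > list.txt

# Sequential equivalent of:
#   parallel --colsep ' ' -a list.txt ./extract_field.sh {1} {2}
# read splits each line on whitespace into the two variables
while read -r id arg; do
    echo "./extract_field.sh $id $arg"
done < list.txt
# prints one command line per input row
```

This loses the parallelism, of course; it only demonstrates that {1} and {2} map to the first and second whitespace-separated columns of each input line.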
Running a program with multiple arguments from list using parallel
1,386,277,472,000
Some time ago I learned about the software parallel: https://forums.servethehome.com/index.php?threads/mdadm-create-raid-0-quick-format.41161/#post-389210

Now I have almost the same goal as before, but I want to try to do it differently. For setup I have:

Dell SFF desktop with single port SAS2 HBA
Total of 14 NetApp SAS3 disk shelves
Each disk shelf has 24x 960GB 12G SAS SSDs mounted
The drives run 520 sector size

The goal is to run sg_format --format -e /dev/sgXX on all the drives, preferably using parallel. It could be done one by one, but I'd like to do it using parallel to learn how it functions and potentially get this done faster for all future projects like this.

I use Ubuntu 23.10, added the universe repository, used apt update and apt install parallel. Parallel is now installed, as well as sg3-utils and smartmontools.

My HBA can handle 128 devices, where device 1 (or 0) is the HBA itself, so I have a total of 5 disk shelves running in a chain, which means I have 120 drives attached. They start at /dev/sg2 and end at /dev/sg126.

The command I used is:

parallel sg_format --format -e ::: /dev/sg[2-126]

When running it, the terminal freezes up for a few seconds, which is to be expected I guess. However, only /dev/sg2 and /dev/sg6 are actually running, see image below. All the drives are exactly the same, but only 2 of them want to actually start the format. I get no errors or any reason why all of them are not starting.

I have tried to start the drives "manually", just using the sg_format command, and all of them start with no issues here.

If I run the command

parallel sg_format --format -e ::: /dev/sg[2-10]

it runs sg_format on /dev/sg2 and then on /dev/sg0, only those two. Surely I have a problem when entering multiple devices. Does anyone know?
That's because you are using /dev/sg[2-126]. Unfortunately, bracket ranges don't handle multi-digit numbers: they are character ranges, and 126 is not a character. So while you can have [0-9], you cannot have [0-10], since that would mean "the characters from 0 to 1, and then 0".

Your [2-126] is therefore parsed as "the characters from 2 to 1" — an empty range, since 2 comes after 1 — plus the individual characters 2 and 6. This is why you are only matching sg2 and sg6, so your first command, parallel sg_format --format -e ::: /dev/sg[2-126], runs only on those two devices. Likewise your second command, parallel sg_format --format -e ::: /dev/sg[2-10], runs on /dev/sg2 and /dev/sg0, because [2-1] collapses to the single character 2 (there are no other characters between 2 and 1) and the trailing 0 is taken as itself.

What you want is to use brace expansion instead:

parallel sg_format --format -e ::: /dev/sg{2..126}

Or, to run as many jobs as possible at once, use:

parallel -j0 sg_format --format -e ::: /dev/sg{2..126}
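The difference is easy to see with echo, since brace expansion happens in the shell before any command runs — a minimal sketch run under bash, where brace expansion is available:

```shell
# Brace expansion generates a numeric sequence...
echo /dev/sg{2..5}
# -> /dev/sg2 /dev/sg3 /dev/sg4 /dev/sg5

# ...so {2..126} yields exactly the 125 device names sg2 through sg126
printf '%s\n' /dev/sg{2..126} | wc -l
# -> 125
```

A bracket expression like [2-126], by contrast, is a pathname glob: it only matches files that already exist, and it matches a single character per bracket, which is why it silently picked out just sg2 and sg6.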
Using parallel to sg_format 300+ drives
1,386,277,472,000
What I'm really trying to do is run X number of jobs, with X amount in parallel, for testing an API race condition. I've come up with this

echo {1..10} | xargs -n1 | parallel -m 'echo "{}"';

which prints

7 8 9 10
4 5 6
1 2 3

but what I really want to see is (note order doesn't actually matter):

1
2
3
4
5
6
7
8
9
10

and those would be processed in parallel, 4 at a time (or whatever number of cpus/cores I have, e.g. --jobs 4), for a total of 10 separate executions. I tried this

echo {1..10} | xargs -n1 | parallel --semaphore --block 3 -m 'echo -n "{} "';

but it only ever seems to print once.

Bonus points if your solution doesn't need xargs, which seems like a hack around the idea that the default record separator is a newline; I haven't been able to get a space to work like I want either.

10 is a reasonably small number, but let's say it's much larger: 1000.

echo {1..1000} | xargs -n1 | parallel -j1000

prints

parallel: Warning: Only enough file handles to run 60 jobs in parallel.
parallel: Warning: Running 'parallel -j0 -N 60 --pipe parallel -j0' or
parallel: Warning: raising 'ulimit -n' or 'nofile' in /etc/security/limits.conf
parallel: Warning: or /proc/sys/fs/file-max may help.

I don't actually want 1000 processes; I want 4 processes at a time, each processing 1 record, so that by the time I'm done it will have executed 1000 times.
I want 4 processes at a time, each process should process 1 record

parallel -j4 -k --no-notice 'echo "{}"' ::: {1..10}

-j4 - number of jobslots; run up to 4 jobs in parallel
-k  - keep the sequence of output the same as the order of input (normally the output of a job is printed as soon as the job completes)
::: - arguments

The output:

1
2
3
4
5
6
7
8
9
10
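If GNU parallel isn't available, the same one-record-per-process, four-at-a-time pattern can be sketched with xargs -P; note that xargs does not preserve output order the way parallel -k does, so this demo sorts its output for display:

```shell
# Up to 4 concurrent processes, each handed exactly one record (-n 1);
# sort -n restores numeric order since completion order is nondeterministic
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 | xargs -P 4 -n 1 echo | sort -n
```

This also scales to the 1000-record case from the question without hitting parallel's file-handle warning, since -P caps the process count at 4 regardless of how many records stream in.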
How can I run GNU parallel in record per job, with 1 process per core