1,387,949,048,000
I would like to monitor one process's memory / cpu usage in real time. Similar to top but targeted at only one process, preferably with a history graph of some sort.
On Linux, top actually supports focusing on a single process, although it naturally doesn't have a history graph:

top -p PID

This is also available on Mac OS X with a different syntax:

top -pid PID
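Since top -p keeps no history, one rough substitute is sampling ps in a loop; a sketch assuming a procps-style ps on Linux (the sample count and interval are arbitrary, and the PID defaults to the current shell so the demo is self-contained):

```shell
# Sample one process's CPU/memory once a second: a poor man's history.
pid=${1:-$$}
samples=2                       # arbitrary demo length
i=0
while [ "$i" -lt "$samples" ] && kill -0 "$pid" 2>/dev/null; do
    line=$(ps -o %cpu=,%mem= -p "$pid")   # %CPU and %MEM, no header
    printf '%s %s\n' "$(date +%T)" "$line"
    i=$((i + 1))
    sleep 1
done
```

Redirecting the output to a file gives you a crude time series you can plot later.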
How to monitor CPU/memory usage of a single process?
I would like to avoid doing this by launching the process from a monitoring app.
On Linux with the ps from procps(-ng) (and most other systems, since this is specified by POSIX):

ps -o etime= -p "$$"

where $$ is the PID of the process you want to check. This will return the elapsed time in the format [[dd-]hh:]mm:ss.

Using -o etime tells ps that you just want the elapsed time field, and the = at the end suppresses the header (without it, you get a line that says ELAPSED and then the time on the next line; with it, you get just one line with the time).

Or, with newer versions of the procps-ng tool suite (3.3.0 or above) on Linux, or on FreeBSD 9.0 or above (and possibly others), use:

ps -o etimes= -p "$$"

(with an added s) to get the time formatted just as seconds, which is more useful in scripts.

On Linux, the ps program gets this from /proc/$$/stat, where one of the fields (see man proc) is the process start time. This is, unfortunately, specified to be the time in jiffies (an arbitrary time counter used in the Linux kernel) since the system boot. So you have to determine the time at which the system booted (from /proc/stat), the number of jiffies per second on this system, and then do the math to get the elapsed time in a useful format.

It turns out to be ridiculously complicated to find the value of HZ (that is, jiffies per second). From comments in sysinfo.c in the procps package, one can A) include the kernel header file and recompile if a different kernel is used, B) use the POSIX sysconf() function, which, sadly, uses a hard-coded value compiled into the C library, or C) ask the kernel, but there's no official interface for doing that. So, the ps code includes a series of kludges by which it determines the correct value. Wow.

So it's convenient that ps does all that for you. :)

(Note: stat -c%X /proc/$$ does not work. See this answer from Stéphane Chazelas to a related question.)
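A quick demonstration on the current shell (etimes assumes procps-ng 3.3.0+ or FreeBSD 9.0+, as noted above):

```shell
# Elapsed time of the current shell: human-readable and in seconds.
etime=$(ps -o etime= -p "$$" | tr -d ' ')
secs=$(ps -o etimes= -p "$$" | tr -d ' ')   # needs procps-ng >= 3.3.0
echo "running for $etime ($secs seconds)"
```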
How to check how long a process has been running?
What command(s) can one use to find out the current working directory (CWD) of a running process? These would be commands you could use externally from the process.
There are 3 methods that I'm aware of:

pwdx
$ pwdx <PID>

lsof
$ lsof -p <PID> | grep cwd

/proc
$ readlink -e /proc/<PID>/cwd

Examples

Say we have this process:

$ pgrep nautilus
12136

Then if we use pwdx:

$ pwdx 12136
12136: /home/saml

Or you can use lsof:

$ lsof -p 12136 | grep cwd
nautilus 12136 saml cwd DIR 253,2 32768 10354689 /home/saml

Or you can poke directly into /proc:

$ readlink -e /proc/12136/cwd/
/home/saml
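The /proc route can be sanity-checked against the shell itself: its cwd link must resolve to the directory you are in:

```shell
# Sanity check: the shell's /proc cwd link resolves to where we are.
cwd=$(readlink -e "/proc/$$/cwd")
echo "shell $$ is in $cwd"
```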
Find out current working directory of a running process?
In ps xf:

26395 pts/78   Ss     0:00  \_ bash
27016 pts/78   Sl+    0:04  |   \_ unicorn_rails master -c config/unicorn.rb
27042 pts/78   Sl+    0:00  |       \_ unicorn_rails worker[0] -c config/unicorn.rb

In htop, the same processes show up many more times (screenshot omitted). Why does htop show more processes than ps?
By default, htop lists each thread of a process separately, while ps doesn't. To turn off the display of threads, press H, or use the "Setup / Display options" menu, "Hide userland threads". This puts the following line in your ~/.htoprc or ~/.config/htop/htoprc (you can alternatively put it there manually):

hide_userland_threads=1

(Also hide_kernel_threads=1, toggled by pressing K, but it's 1 by default.)

Another useful option is "Display threads in a different color" in the same menu (highlight_threads=1 in .htoprc), which causes threads to be shown in a different color (green in the default theme).

In the first line of the htop display, there's a line like "Tasks: 377, 842 thr, 161 kthr; 2 running". This shows the total number of processes, userland threads, kernel threads, and threads in a runnable state. The numbers don't change when you filter the display, but the indications "thr" and "kthr" disappear when you turn off the inclusion of user/kernel threads respectively.

When you see multiple processes that have all characteristics in common except the PID and CPU-related fields (nice value, CPU%, TIME+, ...), it's highly likely that they're threads of the same process.
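The per-process thread count that htop aggregates is also available directly from /proc: the Threads: field of /proc/PID/status (Linux-specific), demonstrated here on the current shell:

```shell
# htop's per-process thread count, straight from /proc (Linux).
pid=$$                       # a plain shell normally has one thread
threads=$(awk '/^Threads:/ {print $2}' "/proc/$pid/status")
echo "process $pid has $threads thread(s)"
```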
Why does `htop` show more processes than `ps`
I get the message There are stopped jobs. when I try to exit a bash shell sometimes. Here is a reproducible scenario in python 2.x:

ctrl+c is handled by the interpreter as an exception.
ctrl+z 'stops' the process.
ctrl+d exits python for reals.

Here is some real-world terminal output:

example_user@example_server:~$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> ctrl+z
[1]+  Stopped                 python
example_user@example_server:~$ exit
logout
There are stopped jobs.

Bash did not exit; I must exit again to exit the bash shell.

Q: What is a 'stopped job', or what does this mean?
Q: Can a stopped process be resumed?
Q: Does the first exit kill the stopped jobs?
Q: Is there a way to exit the shell the first time? (without entering exit twice)
A stopped job is one that has been temporarily put into the background and is no longer running, but is still using resources (i.e. system memory). Because that job is not attached to the current terminal, it cannot produce output and is not receiving input from the user.

You can see jobs you have running using the jobs builtin command in bash, and probably other shells as well. Example:

user@mysystem:~$ jobs
[1] + Stopped   python
user@mysystem:~$

You can resume a stopped job by using the fg (foreground) bash built-in command. If you have multiple commands that have been stopped, you must specify which one to resume by passing the jobspec number on the command line with fg. If only one program is stopped, you may use fg alone:

user@mysystem:~$ fg 1
python

At this point you are back in the python interpreter and may exit by using control-D.

Conversely, you may kill the command with either its jobspec or PID. For instance:

user@mysystem:~$ ps
  PID TTY          TIME CMD
16174 pts/3    00:00:00 bash
17781 pts/3    00:00:00 python
18276 pts/3    00:00:00 ps
user@mysystem:~$ kill 17781
[1]+  Killed                  python
user@mysystem:~$

To use the jobspec, precede the number with the percent sign (%):

user@mysystem:~$ kill %1
[1]+  Terminated              python

If you issue an exit command with stopped jobs, the warning you saw will be given and the jobs will be left running, for safety. That's to make sure you are aware you are attempting to kill jobs you might have forgotten you stopped. The second time you use the exit command, the jobs are terminated and the shell exits. This may cause problems for some programs that aren't intended to be killed in this fashion.

In bash it seems you can use the logout command, which will kill stopped processes and exit. This may cause unwanted results. Also note that some programs may not exit when terminated in this way, and your system could end up with a lot of orphaned processes using up resources if you make a habit of doing that.
Note that you can create background processes that will stop if they require user input:

user@mysystem:~$ python &
[1] 19028
user@mysystem:~$ jobs
[1]+  Stopped                 python

You can resume and kill these jobs in the same way you would jobs that you stopped with the Ctrl-Z interrupt.
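Outside an interactive shell the same stopped/resumed states can be reproduced with plain signals; a sketch (Linux, procps-style ps assumed, with sleep standing in for python):

```shell
# Driving job-control states with signals instead of Ctrl-Z / fg:
# SIGSTOP parks a process (state 'T' in ps), SIGCONT resumes it.
sleep 60 &                          # stand-in for the stopped python
pid=$!
kill -STOP "$pid"
sleep 0.1                           # give the signal time to land
stopped=$(ps -o stat= -p "$pid")    # should start with 'T' (stopped)
kill -CONT "$pid"
sleep 0.1
running=$(ps -o stat= -p "$pid")    # back to 'S' (sleeping)
echo "while stopped: $stopped, after CONT: $running"
kill "$pid"; wait "$pid" 2>/dev/null  # clean up
```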
There are stopped jobs (on bash exit)
The Linux proc(5) man page tells me that /proc/$pid/mem "can be used to access the pages of a process's memory". But a straightforward attempt to use it only gives me:

$ cat /proc/$$/mem /proc/self/mem
cat: /proc/3065/mem: No such process
cat: /proc/self/mem: Input/output error

Why isn't cat able to print its own memory (/proc/self/mem)? And what is this strange "no such process" error when I try to print the shell's memory (/proc/$$/mem; obviously the process exists)? How can I read from /proc/$pid/mem, then?
/proc/$pid/maps

/proc/$pid/mem shows the contents of $pid's memory mapped the same way as in the process, i.e., the byte at offset x in the pseudo-file is the same as the byte at address x in the process. If an address is unmapped in the process, reading from the corresponding offset in the file returns EIO (Input/output error). For example, since the first page in a process is never mapped (so that dereferencing a NULL pointer fails cleanly rather than inadvertently accessing actual memory), reading the first byte of /proc/$pid/mem always yields an I/O error.

The way to find out what parts of the process memory are mapped is to read /proc/$pid/maps. This file contains one line per mapped region, looking like this:

08048000-08054000 r-xp 00000000 08:01 828061     /bin/cat
08c9b000-08cbc000 rw-p 00000000 00:00 0          [heap]

The first two numbers are the boundaries of the region (addresses of the first byte and the byte after the last, in hexadecimal). The next column contains the permissions, then there's some information about the file (offset, device, inode and name) if this is a file mapping. See the proc(5) man page or Understanding Linux /proc/id/maps for more information.

Here's a proof-of-concept script that dumps the contents of its own memory.

#! /usr/bin/env python
import re

maps_file = open("/proc/self/maps", 'r')
mem_file = open("/proc/self/mem", 'rb', 0)
output_file = open("self.dump", 'wb')
for line in maps_file.readlines():  # for each mapped region
    m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line)
    if m.group(3) == 'r':  # if this is a readable region
        start = int(m.group(1), 16)
        end = int(m.group(2), 16)
        mem_file.seek(start)  # seek to region start
        chunk = mem_file.read(end - start)  # read region contents
        output_file.write(chunk)  # dump contents to the output file
maps_file.close()
mem_file.close()
output_file.close()

/proc/$pid/mem

[The following is for historical interest. It does not apply to current kernels.]
Since version 3.3 of the kernel, you can access /proc/$pid/mem normally, as long as you access it only at mapped offsets and you have permission to trace the process (the same permissions as ptrace for read-only access). But in older kernels, there were some additional complications.

If you try to read from the mem pseudo-file of another process, it doesn't work: you get an ESRCH (No such process) error. The permissions on /proc/$pid/mem (r--------) are more liberal than what should be the case. For example, you shouldn't be able to read a setuid process's memory. Furthermore, trying to read a process's memory while the process is modifying it could give the reader an inconsistent view of the memory, and worse, there were race conditions that could trip older versions of the Linux kernel (according to this lkml thread, though I don't know the details). So additional checks are needed:

The process that wants to read from /proc/$pid/mem must attach to the process using ptrace with the PTRACE_ATTACH flag. This is what debuggers do when they start debugging a process; it's also what strace does to trace a process's system calls. Once the reader has finished reading from /proc/$pid/mem, it should detach by calling ptrace with the PTRACE_DETACH flag.

The observed process must not be running. Normally calling ptrace(PTRACE_ATTACH, …) will stop the target process (it sends a STOP signal), but there is a race condition (signal delivery is asynchronous), so the tracer should call wait (as documented in ptrace(2)).

A process running as root can read any process's memory, without needing to call ptrace, but the observed process must be stopped, or the read will still return ESRCH.

In the Linux kernel source, the code providing per-process entries in /proc is in fs/proc/base.c, and the function to read from /proc/$pid/mem is mem_read. The additional check is performed by check_mem_permission.
Here's some sample C code to attach to a process and read a chunk of its mem file (error checking omitted):

sprintf(mem_file_name, "/proc/%d/mem", pid);
mem_fd = open(mem_file_name, O_RDONLY);
ptrace(PTRACE_ATTACH, pid, NULL, NULL);
waitpid(pid, NULL, 0);
lseek(mem_fd, offset, SEEK_SET);
read(mem_fd, buf, _SC_PAGE_SIZE);
ptrace(PTRACE_DETACH, pid, NULL, NULL);

I've already posted a proof-of-concept script for dumping /proc/$pid/mem on another thread.
How do I read from /proc/$pid/mem under Linux?
I accidentally "stopped" my telnet process. Now I can neither "switch back" into it, nor can I kill it (it won't respond to kill 92929, where 92929 is the process ID). So, my question is: if you have a stopped process on the Linux command line, how do you switch back into it, or kill it, without having to resort to kill -9?
The easiest way is to run fg to bring it to the foreground:

$ help fg
fg: fg [job_spec]
    Move job to the foreground.

    Place the job identified by JOB_SPEC in the foreground, making it the
    current job.  If JOB_SPEC is not present, the shell's notion of the
    current job is used.

    Exit Status:
    Status of command placed in foreground, or failure if an error occurs.

Alternatively, you can run bg to have it continue in the background:

$ help bg
bg: bg [job_spec ...]
    Move jobs to the background.

    Place the jobs identified by each JOB_SPEC in the background, as if they
    had been started with `&'.  If JOB_SPEC is not present, the shell's
    notion of the current job is used.

    Exit Status:
    Returns success unless job control is not enabled or an error occurs.

If you have just hit Ctrl-Z, then to bring the job back just run fg with no arguments.
If you ^Z from a process, it gets "stopped". How do you switch back in?
How does one find large files that have been deleted but are still open in an application? How can one remove such a file, even though a process has it open? The situation is that we are running a process that is filling up a log file at a terrific rate. I know the reason, and I can fix it. Until then, I would like to rm or empty the log file without shutting down the process. Simply doing rm output.log removes only references to the file, but it continues to occupy space on disk until the process is terminated. Worse: after rming I now have no way to find where the file is or how big it is! Is there any way to find the file, and possibly empty it, even though it is still open in another process? I specifically refer to Linux-based operating systems such as Debian or RHEL.
If you can't kill your application, you can truncate the log file instead of deleting it to reclaim the space. If the file was not open in append mode (with O_APPEND), then the file will appear as big as before the next time the application writes to it (though with the leading part sparse and looking as if it contained NUL bytes), but the space will have been reclaimed (that does not apply to HFS+ file systems on Apple OS X, which don't support sparse files, though).

To truncate it:

: > /path/to/the/file.log

If it was already deleted, on Linux, you can still truncate it by doing:

: > "/proc/$pid/fd/$fd"

where $pid is the process ID of the process that has the file open, and $fd a file descriptor it has it open under (which you can check with lsof -p "$pid").

If you don't know the pid, and are looking for deleted files, you can do:

lsof -nP | grep '(deleted)'

lsof -nP +L1, as mentioned by @user75021, is an even better (more reliable and more portable) option (list files that have fewer than 1 link).

Or (on Linux):

find /proc/*/fd -ls | grep '(deleted)'

Or to find the large ones with zsh:

ls -ld /proc/*/fd/*(-.LM+1l0)

An alternative, if the application is dynamically linked, is to attach a debugger to it and make it call close(fd) followed by a new open("the-file", ....).
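The truncate-through-/proc trick can be demonstrated end to end in a few lines; fd 3 and a temp file are stand-ins for the application's descriptor and its log (Linux only):

```shell
# Reclaim space from an already-deleted file via /proc.
log=$(mktemp)
exec 3> "$log"                      # "application" opens its log
echo 'lots of log data' >&3
rm "$log"                           # deleted, but fd 3 keeps it alive
before=$(wc -c < "/proc/$$/fd/3")   # still occupies space
: > "/proc/$$/fd/3"                 # truncate through /proc
after=$(wc -c < "/proc/$$/fd/3")    # space reclaimed
echo "bytes before: $before, after: $after"
exec 3>&-
```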
Find and remove large files that are open but have been deleted
I have detached a process from my terminal, like this: $ process & That terminal is now long closed, but process is still running, and I want to send some commands to that process's stdin. Is that possible?
Yes, it is. First, create a pipe:

mkfifo /tmp/fifo

Use gdb to attach to the process:

gdb -p PID

Then close stdin:

call close (0)

and open it again, read-only this time (stdin is read from; flag 0 is O_RDONLY, and since descriptor 0 was just freed, the newly opened FIFO gets it):

call open ("/tmp/fifo", 0)

Finally, write away (from a different terminal, as gdb will probably hang):

echo blah > /tmp/fifo
How do I attach a terminal to a detached process?
I want to see the list of processes created by a specific user or group of users in Linux. Can I do it using the ps command, or is there another command to achieve this?
To view only the processes owned by a specific user, use the following command:

top -U [username]

Replace [username] with the required username.

If you want to use ps, then:

ps -u [username]

or:

ps -ef | grep <username>

or, for the extended listing:

ps -efl | grep <username>

Check out the man ps page for options.

Another alternative is to use pstree, which prints the process tree of the user:

pstree <username or pid>
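A small sketch using ps's -u selector, which matches on the user and avoids the false positives of grepping ps -ef output (grep also matches other users' command lines that happen to contain the name):

```shell
# Processes owned by the invoking user, selected by ps itself.
user=$(id -un)
ps -u "$user" -o pid=,comm= | head -5
count=$(ps -u "$user" -o pid= | wc -l)
echo "$user owns $count process(es)"
```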
How to see process created by specific user in Unix/linux
For Windows, I think Process Explorer shows you all the threads under a process. Is there a similar command line utility for Linux that can show me details about all the threads a particular process is spawning? I think I should have made myself more clear: I do not want to see the process hierarchy, but a list of all the threads spawned by a particular process. (See this screenshot.) How can this be achieved in Linux? Thanks!
The classical tool top shows processes by default but can be told to show threads with the H key press or -H command line option. There is also htop, which is similar to top but has scrolling and colors; it shows all threads by default (but this can be turned off). ps also has a few options to show threads, especially H and -L. There are also GUI tools that can show information about threads, for example qps (a simple GUI wrapper around ps) or conky (a system monitor with lots of configuration options). For each process, a lot of information is available in /proc/12345 where 12345 is the process ID. Information on each thread is available in /proc/12345/task/67890 where 67890 is the kernel thread ID. This is where ps, top and other tools get their information.
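For scripts, the ps options mentioned above can produce the same per-thread view; -L adds one line per thread with its LWP (kernel thread ID), demonstrated here on the current shell:

```shell
# One line per thread: -L adds the LWP (kernel thread ID) column.
pid=$$                                  # demo on the current shell
ps -L -p "$pid" -o pid=,lwp=,comm=
nthreads=$(ps -L -p "$pid" -o lwp= | wc -l)
echo "$nthreads thread(s)"
```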
Is there a way to see details of all the threads that a process has in Linux?
Sometimes, I would like to unmount a usb device with umount /run/media/theDrive, but I get a drive is busy error. How do I find out which processes or programs are accessing the device?
Use lsof | grep /media/whatever to find out what is using the mount. Also, consider umount -l (lazy umount) to prevent new processes from using the drive while you clean up.
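If lsof isn't available, /proc can answer part of the question directly: scanning every process's cwd symlink finds processes sitting inside the mount (open files and mappings would additionally need /proc/*/fd and /proc/*/maps). A sketch, with a temp directory standing in for the mount point:

```shell
# Who is keeping a directory busy, without lsof: scan /proc/*/cwd.
mnt=$(mktemp -d)
( cd "$mnt" && exec sleep 5 ) &         # a process "busy" in the dir
busy=$!
sleep 0.2
found=
for d in /proc/[0-9]*; do
    case $(readlink "$d/cwd" 2>/dev/null) in
        "$mnt"|"$mnt"/*) found="$found ${d#/proc/}" ;;
    esac
done
echo "processes with cwd under $mnt:$found"
kill "$busy" 2>/dev/null
```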
How do I find out which processes are preventing unmounting of a device?
Are there any relatively straightforward options with top to track a specific process? Ideally by identifying the process by a human readable value, e.g. chrome or java. In other words, I want to view all the typical information top provides, but with the results filtered to the parameters provided, e.g. 'chrome' or 'java'.
You can simply use grep:

NAME
    grep, egrep, fgrep, rgrep - print lines matching a pattern
SYNOPSIS
    grep [OPTIONS] PATTERN [FILE...]
    grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...]
DESCRIPTION
    grep searches the named input FILEs (or standard input if no files are
    named, or if a single hyphen-minus (-) is given as file name) for lines
    containing a match to the given PATTERN. By default, grep prints the
    matching lines.

Run the following command to get the output which you want (e.g. for chrome):

top -b | grep chrome

Here top and grep run in a pipeline: top's output is fed to grep as input, and grep prints only the lines matching chrome, continuously, until top is stopped. The -b (batch mode) flag makes top write plain text suitable for piping instead of redrawing the screen.
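An alternative that avoids grep entirely is top's own -p flag, fed by pgrep (-d, joins the PIDs with commas); a sketch using sleep processes as a harmless stand-in for chrome or java:

```shell
# Filter top by process name without grep: pgrep resolves the name
# to PIDs, and top -p takes a comma-separated PID list.
sleep 30 & sleep 30 &
pids=$(pgrep -d, -x sleep)              # e.g. "1234,1235"
echo "matching PIDs: $pids"
if command -v top >/dev/null 2>&1; then
    top -b -n 1 -p "$pids" | tail -n 4  # one batch-mode snapshot
fi
kill $(pgrep -x sleep) 2>/dev/null      # clean up the demo sleeps
```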
How to view a specific process in top
In Unix whenever we want to create a new process, we fork the current process, creating a new child process which is exactly the same as the parent process; then we do an exec system call to replace all the data from the parent process with that for the new process. Why do we create a copy of the parent process in the first place and not create a new process directly?
The short answer is, fork is in Unix because it was easy to fit into the existing system at the time, and because a predecessor system at Berkeley had used the concept of forks. From The Evolution of the Unix Time-sharing System (relevant text has been highlighted): Process control in its modern form was designed and implemented within a couple of days. It is astonishing how easily it fitted into the existing system; at the same time it is easy to see how some of the slightly unusual features of the design are present precisely because they represented small, easily-coded changes to what existed. A good example is the separation of the fork and exec functions. The most common model for the creation of new processes involves specifying a program for the process to execute; in Unix, a forked process continues to run the same program as its parent until it performs an explicit exec. The separation of the functions is certainly not unique to Unix, and in fact it was present in the Berkeley time-sharing system, which was well-known to Thompson. Still, it seems reasonable to suppose that it exists in Unix mainly because of the ease with which fork could be implemented without changing much else. The system already handled multiple (i.e. two) processes; there was a process table, and the processes were swapped between main memory and the disk. The initial implementation of fork required only 1) Expansion of the process table 2) Addition of a fork call that copied the current process to the disk swap area, using the already existing swap IO primitives, and made some adjustments to the process table. In fact, the PDP-7's fork call required precisely 27 lines of assembly code. Of course, other changes in the operating system and user programs were required, and some of them were rather interesting and unexpected. 
But a combined fork-exec would have been considerably more complicated, if only because exec as such did not exist; its function was already performed, using explicit IO, by the shell. Since that paper, Unix has evolved. fork followed by exec is no longer the only way to run a program. vfork was created to be a more efficient fork for the case where the new process intends to do an exec right after the fork. After doing a vfork, the parent and child processes share the same data space, and the parent process is suspended until the child process either execs a program or exits. posix_spawn creates a new process and executes a file in a single system call. It takes a bunch of parameters that let you selectively share the caller's open files and copy its signal disposition and other attributes to the new process.
Why do we need to fork to create new processes?
How can I get the command arguments or the whole command line from a running process, using its process name? For example this process:

# ps
PID   USER     TIME   COMMAND
1452  root     0:00   /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid

And what I want is /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid or the arguments. I know the process name and want its arguments. I'm using Busybox on SliTaz.
You could use the -o switch to specify your output format:

$ ps -eo args

From the man page:

Command with all its arguments as a string. Modifications to the arguments may be shown. [...]

You may also use the -p switch to select a specific PID:

$ ps -p [PID] -o args

pidof may also be used to switch from process name to PID, hence allowing the use of -p with a name:

$ ps -p $(pidof dhcpcd) -o args

Of course, you may also use grep for this (in which case, you must add the -e switch):

$ ps -eo args | grep dhcpcd | head -n -1

GNU ps will also allow you to remove the headers (of course, this is unnecessary when using grep):

$ ps -p $(pidof dhcpcd) -o args --no-headers

On other systems, you may pipe to AWK or sed:

$ ps -p $(pidof dhcpcd) -o args | awk 'NR > 1'
$ ps -p $(pidof dhcpcd) -o args | sed 1d

Edit: if you want to catch this line into a variable, just use $(...) as usual:

$ CMDLINE=$(ps -p $(pidof dhcpcd) -o args --no-headers)

or, with grep:

$ CMDLINE=$(ps -eo args | grep dhcpcd | head -n -1)
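On a minimal system like Busybox, /proc gives the same answer with no ps formatting at all: /proc/PID/cmdline holds the argument vector, NUL-separated (demonstrated on the current shell):

```shell
# The kernel's record of the argument vector: /proc/PID/cmdline.
pid=$$                                   # demo on the current shell
cmdline=$(tr '\0' ' ' < "/proc/$pid/cmdline")
echo "$cmdline"
```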
How to get whole command line from a process?
Suppose I have a thousand or more instances of any process (for example, vi) running. How do I kill them all in one single shot/one line command/one command?
What's wrong with the good old:

for pid in $(ps -ef | grep "some search" | awk '{print $2}'); do kill -9 $pid; done

There are ways to make that more efficient:

for pid in $(ps -ef | awk '/some search/ {print $2}'); do kill -9 $pid; done

and other variations, but at the basic level, it's always worked for me.
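pkill (from the pgrep suite) does the match-and-signal in one step; a sketch using harmless sleep processes instead of vi, and plain SIGTERM rather than -9:

```shell
# Kill every instance by name in one shot. -x matches the exact
# process name; SIGTERM (the default) is gentler than -9.
sleep 300 & sleep 300 & sleep 300 &     # three victims instead of vi
before=$(pgrep -cx sleep)
pkill -x sleep
sleep 0.2                               # let the signals land
after=$(pgrep -cx sleep)
echo "sleep instances: $before before, ${after:-0} after"
```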
Kill many instances of a running process with one command
I'm looking for something like what top is to CPU usage, but for memory. Is there a command line argument for top that does this? Currently, my memory is so full that even 'man top' fails with out of memory :)
From inside top you can try the following:

Press SHIFT+f
Press the letter corresponding to %MEM
Press ENTER

(In many versions of top, simply pressing SHIFT+m sorts by memory usage directly.)

You might also try:

$ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5

This will give the top 5 processes by memory usage.
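If even spawning sort is a problem on a memory-starved box, GNU ps can do the sorting itself with --sort, avoiding the extra pipeline stages:

```shell
# Top memory consumers without top: GNU ps sorts by %MEM itself.
ps -eo pmem,rss,pid,comm --sort=-pmem | head -6   # header + top 5
top5=$(ps -eo pid= --sort=-pmem | head -5 | wc -l)
echo "listed $top5 process(es)"
```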
How to find which processes are taking all the memory?
I recently came across this in a shell script. if ! kill -0 $(cat /path/to/file.pid); then ... do something ... fi What does kill -0 ... do?
This one is a little hard to glean, but if you look in the following two man pages you'll see the following notes:

kill(1)

$ man 1 kill
...
If sig is 0, then no signal is sent, but error checking is still performed.
...

kill(2)

$ man 2 kill
...
If sig is 0, then no signal is sent, but error checking is still performed;
this can be used to check for the existence of a process ID or process
group ID.
...

So signal 0 will not actually send anything to your process's PID, but will check whether you have permissions to do so.

Where might this be useful? One obvious place would be if you were trying to determine if you had permissions to send signals to a running process via kill. You could check prior to sending the actual kill signal that you want, by wrapping a check to make sure that kill -0 <PID> was first allowed.

Example

Say a process was being run by root as follows:

$ sudo sleep 2500 &
[1] 15693

Now in another window, if we run this command we can confirm that that PID is running:

$ pgrep sleep
15693

Now let's try this command to see if we have access to send that PID signals via kill:

$ if ! kill -0 $(pgrep sleep); then echo "You're weak!"; fi
bash: kill: (15693) - Operation not permitted
You're weak!

So it works, but the output is leaking a message from the kill command that we don't have permissions. Not a big deal; simply catch STDERR and send it to /dev/null:

$ if ! kill -0 $(pgrep sleep) 2>/dev/null; then echo "You're weak!"; fi
You're weak!

Complete example

So then we could do something like this, killer.bash:

#!/bin/bash

PID=$(pgrep sleep)
if ! kill -0 $PID 2>/dev/null; then
  echo "you don't have permissions to kill PID:$PID"
  exit 1
fi
kill -9 $PID

Now when I run the above as a non-root user:

$ ~/killer.bash
you don't have permissions to kill PID:15693
$ echo $?
1

However when it's run as root:

$ sudo ~/killer.bash
$ echo $?
0
$ pgrep sleep
$
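Besides the permission check, kill -0 is commonly used as a liveness probe, which is exactly what the PID-file snippet in the question is doing; a sketch:

```shell
# kill -0 as a liveness probe: exit status 0 while the PID exists
# (and we may signal it), non-zero once it is gone.
sleep 60 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
kill "$pid"
wait "$pid" 2>/dev/null                 # reap, so the PID really is gone
if kill -0 "$pid" 2>/dev/null; then gone=no; else gone=yes; fi
echo "alive=$alive gone=$gone"
```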
What does `kill -0` do?
Given an X11 window ID, is there a way to find the ID of the process that created it? Of course this isn't always possible, for example if the window came over a TCP connection. For that case I'd like the IP and port associated with the remote end. The question was asked before on Stack Overflow, and a proposed method was to use the _NET_WM_PID property. But that's set by the application. Is there a way to do it if the application doesn't play nice?
Unless your X-server supports XResQueryClientIds from the X-Resource v1.2 extension, I know of no easy way to reliably request the process ID. There are other ways, however.

If you just have a window in front of you and don't know its ID yet, it's easy to find out: just open a terminal next to the window in question, run xwininfo there and click on that window. xwininfo will show you the window ID.

So let's assume you know a window ID, e.g. 0x1600045, and want to find out which process owns it.

The easiest way to check who that window belongs to is to run XKillClient for it, i.e.:

xkill -id 0x1600045

and see which process just died. But only if you don't mind killing it, of course!

Another easy but unreliable way is to check its _NET_WM_PID and WM_CLIENT_MACHINE properties:

xprop -id 0x1600045

That's what tools like xlsclients and xrestop do. Unfortunately this information may be incorrect, not only because the process was evil and changed those properties, but also because it was buggy. For example, after some firefox crash/restart I've seen orphaned windows (from the flash plugin, I guess) with _NET_WM_PID pointing to a process that died a long time ago.

An alternative way is to run xwininfo -root -tree and check the properties of parents of the window in question. That may also give you some hints about the window's origins.

But! While you may not find which process created that window, there's still a way to find where that process connected to the X-server from. And that way is for real hackers. :)

The window ID you know, 0x1600045, with the lower bits zeroed (i.e. 0x1600000), is a "client base". All resource IDs allocated for that client are "based" on it (0x1600001, 0x1600002, 0x1600003, etc). The X-server stores information about its clients in the clients[] array, and for each client its "base" is stored in the clients[i]->clientAsMask variable.
To find the X-socket corresponding to that client, you need to attach to the X-server with gdb, walk over the clients[] array, find the client with that clientAsMask, and print its socket descriptor, stored in ((OsCommPtr)(clients[i]->osPrivate))->fd.

There may be many X-clients connected, so in order not to check them all manually, let's use a gdb function:

define findclient
  set $ii = 0
  while ($ii < currentMaxClients)
    if (clients[$ii] != 0 && clients[$ii]->clientAsMask == $arg0 && clients[$ii]->osPrivate != 0)
      print ((OsCommPtr)(clients[$ii]->osPrivate))->fd
    end
    set $ii = $ii + 1
  end
end

When you find the socket, you can check who's connected to it, and finally find the process.

WARNING: Do NOT attach gdb to the X-server from INSIDE the X-server. gdb suspends the process it attaches to, so if you attach to it from inside your X-session, you'll freeze your X-server and won't be able to interact with gdb. You must either switch to a text terminal (Ctrl+Alt+F2) or connect to your machine over ssh.

Example:

Find the PID of your X-server:

$ ps ax | grep X
1237 tty1     Ssl+  11:36 /usr/bin/X :0 vt1 -nr -nolisten tcp -auth /var/run/kdm/A:0-h6syCa

The window ID is 0x1600045, so the client base is 0x1600000. Attach to the X-server and find the client socket descriptor for that client base. You'll need debug information installed for the X-server (the -debuginfo package for rpm distributions or the -dbg package for debs).

$ sudo gdb
(gdb) define findclient
Type commands for definition of "findclient".
End with a line saying just "end".
> set $ii = 0
> while ($ii < currentMaxClients)
>   if (clients[$ii] != 0 && clients[$ii]->clientAsMask == $arg0 && clients[$ii]->osPrivate != 0)
>     print ((OsCommPtr)(clients[$ii]->osPrivate))->fd
>   end
>   set $ii = $ii + 1
> end
> end
(gdb) attach 1237
(gdb) findclient 0x1600000
$1 = 31
(gdb) detach
(gdb) quit

Now you know that the client is connected to server socket 31.
Use lsof to find what that socket is:

    $ sudo lsof -n | grep 1237 | grep 31
    X  1237  root  31u  unix 0xffff810008339340  8512422  socket

(Here "X" is the process name, "1237" is its pid, "root" is the user it's running as, and "31u" is the socket descriptor.)

There you may see that the client is connected over TCP; then you can go to the machine it's connected from and check netstat -nap there to find the process. But most probably you'll see a unix socket there, as shown above, which means it's a local client.

To find the pair for that unix socket you can use MvG's technique (you'll also need debug information for your kernel installed):

    $ sudo gdb -c /proc/kcore
    (gdb) print ((struct unix_sock*)0xffff810008339340)->peer
    $1 = (struct sock *) 0xffff810008339600
    (gdb) quit

Now that you know the client socket, use lsof to find the PID holding it:

    $ sudo lsof -n | grep 0xffff810008339600
    firefox  7725  username  146u  unix 0xffff810008339600  8512421  socket

That's it. The process keeping that window open is "firefox" with process id 7725.

2017 Edit: There are more options now, as seen at Who's got the other end of this unix socketpair?. With Linux 3.3 or above and lsof 4.89 or above, you can replace points 3 to 5 above with:

    lsof +E -a -p 1237 -d 31

to find out who's at the other end of the socket on fd 31 of the X-server process with ID 1237.
What process created this X11 window?
1,387,949,048,000
I'm writing an application. It has the ability to spawn various external processes. When the application closes, I want any processes it has spawned to be killed.

Sounds easy enough, right? Look up my PID, and recursively walk the process tree, killing everything in sight, bottom-up style.

Except that this doesn't work. In one specific case, I spawn foo, but foo just spawns bar and then immediately exits, leaving bar running. There is now no record of the fact that bar was once part of the application's process tree. And hence, the application has no way of knowing that it should kill bar.

I'm pretty sure I can't be the first person on Earth to try to do this. So what's the standard solution? I guess really I'm looking for some way to "tag" a process in such a way that any process it spawns will unconditionally inherit the same tag.

(So far, the best I can come up with is running the application as a different user. That way, you can just indiscriminately kill all processes belonging to that user. But this has all sorts of access-permission problems...)
Update

This is one of those ones where I clearly should have read the question more carefully (though seemingly this is the case with most answers to this question). I have left the original answer intact because it gives some good information, even though it clearly misses the point of the question.

Using SID

I think the most general, robust approach here (at least for Linux) is to use the SID (session ID) rather than the PPID or PGID. It is much less likely to be changed by child processes and, in the case of a shell script, the setsid command can be used to start a new session. Outside of the shell, the setsid system call can be used.

For a shell that is a session leader, you can kill all the other processes in the session by doing (the shell won't kill itself):

    kill $(ps -s $$ -o pid=)

Note: the trailing equals sign in the argument pid= removes the PID column header.

Otherwise, using system calls, calling getsid for each process seems like the only way.

Using a PID namespace

This is the most robust approach, but the downsides are that it is Linux only and that it needs root privileges. Also the shell tools (if used) are very new and not widely available. For a more detailed discussion of PID namespaces, please see this question - Reliable way to jail child processes using `nsenter`.

The basic approach here is that you can create a new PID namespace by using the CLONE_NEWPID flag with the clone system call (or via the unshare command). When a process in a PID namespace is orphaned (i.e. when its parent process finishes), it is re-parented to the top-level process of the PID namespace rather than to init. This means that you can always identify all the descendants of the top-level process by walking the process tree. In the case of a shell script, the PPID approach below would then reliably kill all descendants.
Further reading on PID namespaces:

    Namespaces in operation, part 3: PID namespaces
    Namespaces in operation, part 4: more on PID namespaces

Original Answer

Killing child processes

The easy way to do this in a shell script, provided pkill is available, is:

    pkill -P $$

This kills all children of the current process ($$ expands to the PID of the current shell).

Killing all descendant processes

Another situation is that you may want to kill all the descendants of the current shell process, not just its direct children. In this case you can use the recursive shell function below to list all the descendant PIDs, before passing them as arguments to kill:

    list_descendants () {
      local children=$(ps -o pid= --ppid "$1")

      for pid in $children
      do
        list_descendants "$pid"
      done

      echo "$children"
    }

    kill $(list_descendants $$)

Double forks

One thing to beware of, which might prevent the above from working as expected, is the double fork() technique. This is commonly used when daemonising a process. As the name suggests, the process to be started runs in the second fork of the original process. Once that process is started, the first fork exits, meaning that the process becomes orphaned. It then becomes a child of the init process instead of the original process it was started from. There is no robust way to identify which process was the original parent, so in this case you can't expect to be able to kill it without some other means of identification (a PID file, for example). However, if this technique has been used, you shouldn't try to kill the process without good reason.

Further reading:

    Why fork() twice
    What is the reason for performing a double fork when creating a daemon?
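To watch the recursion at work without killing anything, here is a runnable sketch that only prints the descendant PIDs (it assumes a procps-style ps with the --ppid option, as used above):

```shell
# Recursively list all descendant PIDs of the given process.
list_descendants () {
    local children
    children=$(ps -o pid= --ppid "$1")
    for pid in $children; do
        list_descendants "$pid"
    done
    echo $children
}

# Spawn a throwaway child so there is something to list.
sleep 2 &
child=$!

list_descendants $$    # the sleep's PID appears in this output
kill "$child"
```

Note that the output also picks up short-lived helpers such as the command-substitution subshell and ps itself, which is harmless for inspection but one more reason to double-check a PID list before feeding it to kill.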
Kill all descendant processes [duplicate]
1,387,949,048,000
Is it possible to block the (outgoing) network access of a single process?
With Linux 2.6.24+ (considered experimental until 2.6.29), you can use network namespaces for that. You need to have 'network namespaces' enabled in your kernel (CONFIG_NET_NS=y) and util-linux with the unshare tool.

Then, starting a process without network access is as simple as:

    unshare -n program ...

This creates an empty network namespace for the process. That is, it is run with no network interfaces, including no loopback. In the example below we add -r to run the program only after the current effective user and group IDs have been mapped to the superuser ones (avoiding sudo):

    $ unshare -r -n ping 127.0.0.1
    connect: Network is unreachable

If your app needs a network interface you can set a new one up:

    $ unshare -n -- sh -c 'ip link set dev lo up; ping 127.0.0.1'
    PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
    64 bytes from 127.0.0.1: icmp_seq=1 ttl=32 time=0.066 ms

Note that this will create a new, local loopback. That is, the spawned process won't be able to access open ports of the host's 127.0.0.1.

If you need to gain access to the original networking inside the namespace, you can use nsenter to enter the other namespace. The following example runs ping with the network namespace that is used by PID 1 (specified through -t 1):

    $ nsenter -n -t 1 -- ping -c4 example.com
    PING example.com (93.184.216.119) 56(84) bytes of data.
    64 bytes from 93.184.216.119: icmp_seq=1 ttl=50 time=134 ms
    64 bytes from 93.184.216.119: icmp_seq=2 ttl=50 time=134 ms
    64 bytes from 93.184.216.119: icmp_seq=3 ttl=50 time=134 ms
    64 bytes from 93.184.216.119: icmp_seq=4 ttl=50 time=139 ms

    --- example.com ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 134.621/136.028/139.848/2.252 ms
Block network access of a process?
1,387,949,048,000
I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell. However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)." If this is the case, why do we need export? What am I missing? Are shell variables not part of the environment by default? What is the difference?
Your assumption is that shell variables are in the environment. This is incorrect. The export command is what defines a name to be in the environment at all. Thus:

    a=1
    b=2
    export b

results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell).

Some useful tools:

    set: useful for viewing the current shell's parameters, exported or not
    set -k: sets assigned args in the environment. Consider f() { set -k; env; }; f a=1
    set -a: tells the shell to put any name that gets set into the environment. Like putting export before every assignment. Useful for .env files, as in set -a; . .env; set +a.
    export: tells the shell to put a name in the environment. Export and assignment are two entirely different operations.
    env: as an external command, env can only tell you about the inherited environment; thus, it's useful for sanity checking.
    env -i: useful for clearing the environment before starting a subprocess.

Alternatives to export:

    name=val command       # Assignment before a command exports that name to the command.
    declare/local -x name  # Exports name; particularly useful in shell functions when you
                           # want to avoid exposing the name to outside scope.
    set -a                 # Exports every following assignment.

Motivation

So why do shells need to have their own variables and an environment that is different? I'm sure there are some historical reasons, but I think the main reason is scoping. The environment is for subprocesses, but there are lots of operations you can do in the shell without forking a subprocess. Suppose you loop:

    for i in {0..50}; do
        somecommand
    done

Why waste memory for somecommand by including i, making its environment any bigger than it needs to be? What if the variable name you chose in the shell just happens to mean something unintended to the program? (Personal favorites of mine include DEBUG and VERBOSE.
Those names are used everywhere and are rarely namespaced adequately.)

What is the environment, if not the shell?

Sometimes to understand Unix behavior you have to look at the syscalls, the basic API for interacting with the kernel and OS. Here, we're looking at the exec family of calls, which is what the shell uses when it creates a subprocess. Here's a quote from the manpage for exec(3) (emphasis mine):

    The execle() and execvpe() functions allow the caller to specify the environment of the executed program via the argument envp. The envp argument is an array of pointers to null-terminated strings and must be terminated by a NULL pointer. The other functions take the environment for the new process image from the external variable environ in the calling process.

So writing export somename in the shell would be equivalent to copying the name into the global dictionary environ in C. But assigning somename without exporting it would be just like assigning it in C, without copying it into the environ variable.
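The distinction is easy to verify empirically; a quick sketch (the variable names are arbitrary):

```shell
# An assigned-but-unexported name is invisible to child processes;
# an exported one is inherited through the environment.
demo_unexported=1
demo_exported=2
export demo_exported

# The child shell sees only the exported name.
sh -c 'echo "${demo_unexported-unset} ${demo_exported-unset}"'
# → unset 2
```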
If processes inherit the parent's environment, why do we need export?
1,387,949,048,000
What are session leaders, as in ps -d which selects all processes except session leaders?
In Linux, every process has several IDs associated with it, including:

    Process ID (PID): an arbitrary number identifying the process. Every process has a unique ID, but after the process exits and the parent process has retrieved the exit status, the process ID is freed to be reused by a new process.

    Parent Process ID (PPID): the PID of the process that started the process in question. If the parent process exits before the child does, the child's PPID is changed to another process (usually PID 1).

    Process Group ID (PGID): the PID of the process group leader. If PID == PGID, then this process is a process group leader.

    Session ID (SID): the PID of the session leader. If PID == SID, then this process is a session leader.

Sessions and process groups are just ways to treat a number of related processes as a unit. All the members of a process group always belong to the same session, but a session may have multiple process groups. Normally, a shell will be a session leader, and every pipeline executed by that shell will be a process group. This is to make it easy to kill the children of a shell when it exits. (See exit(3) for the gory details.)

I don't think there is a special term for a member of a session or process group that isn't the leader.
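All four IDs can be inspected directly with ps; a small sketch for the current shell:

```shell
# Print the process, parent, process-group and session IDs of this shell.
ps -o pid,ppid,pgid,sid -p $$

# The shell is a session leader exactly when PID == SID.
sid=$(ps -o sid= -p $$ | tr -d ' ')
if [ "$$" = "$sid" ]; then
    echo "this shell is a session leader"
else
    echo "session leader is PID $sid"
fi
```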
What are "session leaders" in `ps`?
1,387,949,048,000
How can I verify whether a running process will catch a signal, or ignore it, or block it? Ideally I'd like to see a list of signals, or at least not have to actually send the signal to check.
Under Linux, you can find the PID of your process, then look at /proc/$PID/status. It contains lines describing which signals are blocked (SigBlk), ignored (SigIgn), or caught (SigCgt).

    # cat /proc/1/status
    ...
    SigBlk: 0000000000000000
    SigIgn: fffffffe57f0d8fc
    SigCgt: 00000000280b2603
    ...

The number to the right is a bitmask. If you convert it from hex to binary, each 1-bit represents a caught signal, counting from right to left starting with 1. So by interpreting the SigCgt line, we can see that my init process is catching the following signals:

    00000000280b2603 ==> 101000000010110010011000000011
                         | |       | ||  |  ||       |`-> 1 = SIGHUP
                         | |       | ||  |  ||       `--> 2 = SIGINT
                         | |       | ||  |  |`----------> 10 = SIGUSR1
                         | |       | ||  |  `-----------> 11 = SIGSEGV
                         | |       | ||  `--------------> 14 = SIGALRM
                         | |       | |`-----------------> 17 = SIGCHLD
                         | |       | `------------------> 18 = SIGCONT
                         | |       `--------------------> 20 = SIGTSTP
                         | `----------------------------> 28 = SIGWINCH
                         `------------------------------> 30 = SIGPWR

(I found the number-to-name mapping by running kill -l from bash.)

EDIT: And by popular demand, a script, in POSIX sh.

    sigparse () {
        i=0
        # bits="$(printf "16i 2o %X p" "0x$1" | dc)" # variant for busybox
        bits="$(printf "ibase=16; obase=2; %X\n" "0x$1" | bc)"
        while [ -n "$bits" ] ; do
            i="$(expr "$i" + 1)"
            case "$bits" in
                *1) printf " %s(%s)" "$(kill -l "$i")" "$i" ;;
            esac
            bits="${bits%?}"
        done
    }

    grep "^Sig...:" "/proc/$1/status" | while read a b ; do
        printf "%s%s\n" "$a" "$(sigparse "$b")"
    done # | fmt -t  # uncomment for pretty-printing
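If bc and dc are unavailable, the same decoding can be done with plain shell arithmetic; a sketch (the function name is mine):

```shell
# Print the signal numbers set in a SigCgt-style hex mask,
# testing each bit with shell arithmetic instead of bc/dc.
mask_signals() {
    mask=$1
    out=''
    i=1
    while [ "$i" -le 64 ]; do
        if [ $(( (0x$mask >> (i - 1)) & 1 )) -ne 0 ]; then
            out="$out $i"
        fi
        i=$((i + 1))
    done
    printf '%s\n' "${out# }"
}

mask_signals 00000000280b2603
# → 1 2 10 11 14 17 18 20 28 30
```

This reproduces the hand-decoded list in the diagram above; mapping numbers to names is then a matter of kill -l N.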
How can I check what signals a process is listening to?
1,387,949,048,000
How to limit process to one cpu core ? Something similar to ulimit or cpulimit would be nice. (Just to ensure: I do NOT want to limit percentage usage or time of execution. I want to force app (with all it's children, processes (threads)) to use one cpu core (or 'n' cpu cores)).
Under Linux, execute the sched_setaffinity system call. The affinity of a process is the set of processors on which it can run. There's a standard shell wrapper: taskset. For example, to pin a process to CPU #0 (you need to choose a specific CPU):

    taskset -c 0 mycommand --option   # start a command with the given affinity
    taskset -c -pa 0 1234             # set the affinity of a running process

There are third-party modules for both Perl (Sys::CpuAffinity) and Python (affinity) to set a process's affinity. Both of these work on both Linux and Windows (Windows may require other third-party modules with Sys::CpuAffinity); Sys::CpuAffinity also works on several other unix variants.

If you want to set a process's affinity from the time of its birth, set the current process's affinity immediately before calling execve. Here's a trivial wrapper that forces a process to execute on CPU 0:

    #!/usr/bin/env perl
    use POSIX;
    use Sys::CpuAffinity;
    Sys::CpuAffinity::setAffinity(getpid(), [0]);
    exec { $ARGV[0] } @ARGV;
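On Linux you can double-check the resulting affinity without any extra tools by reading /proc; a small sketch:

```shell
# Show which CPUs the current shell may run on.
# After `taskset -c 0 <command>`, this would report just "0" in that command.
grep Cpus_allowed_list /proc/self/status
```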
How to limit a process to one CPU core in Linux? [duplicate]
1,387,949,048,000
It often baffles me that, although I have been working professionally with computers for several decades and Linux for a decade, I actually treat most of the OS' functionality as a black box, not unlike magic. Today I thought about the kill command, and while I use it multiple times per day (both in its "normal" and -9 flavor) I must admit that I have absolutely no idea how it works behind the scenes. From my viewpoint, if a running process is "hung", I call kill on its PID, and then it suddenly isn't running anymore. Magic! What really happens there? Manpages talk about "signals" but surely that's just an abstraction. Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off. How does Linux stop the process from continuing to take up CPU time? Is it removed from scheduling? Does it disconnect the process from its open file handles? How is the process' virtual memory released? Is there something like a global table in memory, where Linux keeps references to all resources taken up by a process, and when I "kill" a process, Linux simply goes through that table and frees the resources one by one? I'd really like to know all that!
Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off.

You're presuming that because some signals can be caught and ignored, they all involve cooperation. But as per man 2 signal, "the signals SIGKILL and SIGSTOP cannot be caught or ignored". SIGTERM can be caught, which is why plain kill is not always effective – generally this means something in the process's handler has gone awry.1

If a process doesn't (or can't) define a handler for a given signal, the kernel performs a default action. In the case of SIGTERM and SIGKILL, this is to terminate the process (unless its PID is 1; the kernel will not terminate init),2 meaning its file handles are closed, its memory is returned to the system pool, its parent receives SIGCHLD, its orphaned children are inherited by init, etc., just as if it had called exit (see man 2 exit). The process no longer exists – unless it ends up as a zombie, in which case it is still listed in the kernel's process table with some information; that happens when its parent does not wait and deal with this information properly. However, zombie processes no longer have any memory allocated to them and hence cannot continue to execute.

Is there something like a global table in memory where Linux keeps references to all resources taken up by a process and when I "kill" a process Linux simply goes through that table and frees the resources one by one?

I think that's accurate enough. Physical memory is tracked by page (one page usually equalling a 4 KB chunk) and those pages are taken from and returned to a global pool. It's a little more complicated in that some freed pages are cached in case the data they contain is required again (that is, data which was read from a still-existing file).

Manpages talk about "signals" but surely that's just an abstraction.

Sure, all signals are an abstraction. They're conceptual, just like "processes".
I'm playing semantics a bit, but if you mean SIGKILL is qualitatively different than SIGTERM, then yes and no. Yes in the sense that it can't be caught, but no in the sense that they are both signals. By analogy, an apple is not an orange, but apples and oranges are, according to a preconceived definition, both fruit. SIGKILL seems more abstract since you can't catch it, but it is still a signal. Here's an example of SIGTERM handling; I'm sure you've seen these before:

    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>
    #include <string.h>

    void sighandler (int signum, siginfo_t *info, void *context) {
        fprintf (
            stderr,
            "Received %d from pid %u, uid %u.\n",
            info->si_signo,
            info->si_pid,
            info->si_uid
        );
    }

    int main (void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));

        sa.sa_sigaction = sighandler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGTERM, &sa, NULL);

        while (1) sleep(10);

        return 0;
    }

This process will just sleep forever. You can run it in a terminal and send it SIGTERM with kill. It spits out stuff like:

    Received 15 from pid 25331, uid 1066.

1066 is my UID. The PID will be that of the shell from which kill is executed, or the PID of kill if you fork it (kill 25309 & echo $?). Again, there's no point in setting a handler for SIGKILL because it won't work.3 If I kill -9 25309 the process will terminate. But that's still a signal; the kernel has the information about who sent the signal, what kind of signal it is, etc.

1. If you haven't looked at the list of possible signals, see kill -l.

2. Another exception, as Tim Post mentions below, applies to processes in uninterruptible sleep. These can't be woken up until the underlying issue is resolved, and so have ALL signals (including SIGKILL) deferred for the duration. A process can't create that situation on purpose, however.

3. This doesn't mean using kill -9 is a better thing to do in practice. My example handler is a bad one in the sense that it doesn't lead to exit().
The real purpose of a SIGTERM handler is to give the process a chance to do things like clean up temporary files, then exit voluntarily. If you use kill -9, it doesn't get this chance, so only do that if the "exit voluntarily" part seems to have failed.
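The same contrast shows up in shell scripts with trap: a sketch in which a background worker writes a marker file only if its SIGTERM handler actually runs (the marker path and timings here are arbitrary):

```shell
# A worker that cleans up on SIGTERM; SIGKILL would skip the handler.
marker=$(mktemp -u)    # path for the cleanup marker (created by the trap)

sh -c "trap 'echo cleaned > $marker; exit 0' TERM
       while :; do sleep 0.2; done" &
pid=$!

sleep 0.5        # give the worker time to install its trap
kill "$pid"      # plain kill: SIGTERM, so the cleanup handler runs
wait "$pid" 2>/dev/null

cat "$marker"
# → cleaned
rm -f "$marker"
```

Had the worker been sent SIGKILL instead, the marker file would never appear, because the trap never gets a chance to execute.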
How does Linux "kill" a process?
1,387,949,048,000
I'm going through this book, Advanced Linux Programming by Mark Mitchell, Jeffrey Oldham, and Alex Samuel. It's from 2001, so a bit old. But I find it quite good anyhow.

However, I got to a point where it diverges from what my Linux produces in the shell output. On page 92 (116 in the viewer), the chapter 4.5 GNU/Linux Thread Implementation begins with the paragraph containing this statement:

    The implementation of POSIX threads on GNU/Linux differs from the thread implementation on many other UNIX-like systems in an important way: on GNU/Linux, threads are implemented as processes.

This seems like a key point and is later illustrated with C code. The output in the book is:

    main thread pid is 14608
    child thread pid is 14610

And in my Ubuntu 16.04 it is:

    main thread pid is 3615
    child thread pid is 3615

ps output supports this. I guess something must have changed between 2001 and now.

The next subchapter on the next page, 4.5.1 Signal Handling, builds on the previous statement:

    The behavior of the interaction between signals and threads varies from one UNIX-like system to another. In GNU/Linux, the behavior is dictated by the fact that threads are implemented as processes.

And it looks like this will be even more important later on in the book. Could someone explain what's going on here? I've seen this one: Are Linux kernel threads really kernel processes?, but it doesn't help much. I'm confused.

This is the C code:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    void* thread_function (void* arg)
    {
      fprintf (stderr, "child thread pid is %d\n", (int) getpid ());
      /* Spin forever. */
      while (1);
      return NULL;
    }

    int main ()
    {
      pthread_t thread;
      fprintf (stderr, "main thread pid is %d\n", (int) getpid ());
      pthread_create (&thread, NULL, &thread_function, NULL);
      /* Spin forever. */
      while (1);
      return 0;
    }
I think this part of the clone(2) man page may clear up the difference re. the PID:

    CLONE_THREAD (since Linux 2.4.0-test8)
        If CLONE_THREAD is set, the child is placed in the same thread group as the calling process.

    Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller.

The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID(*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes.

Signal handling was another problematic area with the old implementation; this is described in more detail in the paper @FooF refers to in their answer.

As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print.
Are threads implemented as processes on Linux?
1,387,949,048,000
I know that pkill has more filtering rules than killall. My question is, what is the difference between: pkill [signal] name and killall [signal] name I've read that killall is more effective and kill all processes and subprocesses (and recursively) that match with name program. pkill doesn't do this too?
The pgrep and pkill utilities were introduced in Sun's Solaris 7 and, as g33klord noted, they take a pattern as argument which is matched against the names of running processes. While pgrep merely prints a list of matching processes, pkill will send the specified signal (or SIGTERM by default) to the processes. The common options and semantics between pgrep and pkill come in handy when you want to be careful: first review the list of matching processes with pgrep, then proceed to kill them with pkill. pgrep and pkill are provided by the procps package, which also provides other /proc file system utilities, such as ps, top, free and uptime, among others.

The killall command is provided by the psmisc package, and differs from pkill in that, by default, it matches the argument name exactly (up to the first 15 characters) when determining the processes signals will be sent to. The -e, --exact option can be specified to also require exact matches for names longer than 15 characters. This makes killall somewhat safer to use compared to pkill. If the specified argument contains slash (/) characters, the argument is interpreted as a file name, and processes running that particular file will be selected as signal recipients. killall also supports regular-expression matching of process names, via the -r, --regexp option.

There are other differences as well. The killall command, for instance, has options for matching processes by age (-o, --older-than and -y, --younger-than), while pkill can be told to only kill processes on a specific terminal (via the -t option). Clearly then, the two commands have specific niches.

Note that the killall command on systems descended from Unix System V (notably Sun's Solaris, IBM's AIX and HP's HP-UX) kills all processes killable by a particular user, effectively shutting down the system if run by root.
The Linux psmisc utilities have been ported to BSD (and in extension Mac OS X), hence killall there follows the "kill processes by name" semantics.
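The 15-character cut-off comes from the kernel's comm field, and is easy to observe on Linux; a sketch using a throwaway copy of sleep under a deliberately long, made-up name:

```shell
# Names in /proc/<pid>/comm are truncated to 15 characters, which is
# why killall matches only the first 15 characters by default.
cp "$(command -v sleep)" /tmp/a_very_long_program_name
/tmp/a_very_long_program_name 5 &
pid=$!

cat "/proc/$pid/comm"
# → a_very_long_pro

kill "$pid"
rm -f /tmp/a_very_long_program_name
```

This is exactly the situation where killall -e (or pkill -f against the full command line) becomes necessary.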
What's the difference between pkill and killall?
1,387,949,048,000
Most shells provide functions like && and ; to chain the execution of commands in certain ways. But what if a command is already running, can I still somehow add another command to be executed depending on the result of the first one? Say I ran $ /bin/myprog some output... but I really wanted /bin/myprog && /usr/bin/mycleanup. I can't kill myprog and restart everything because too much time would be lost. I can Ctrl+Z it and fg/bg if necessary. Does this allow me to chain in another command? I'm mostly interested in bash, but answers for all common shells are welcome!
You should be able to do this in the same shell you're in with the wait command:

    $ sleep 30 &
    [1] 17440
    $ wait 17440 && echo hi
    ...30 seconds later...
    [1]+  Done                    sleep 30
    hi

Excerpt from the Bash man page:

    wait [n ...]
        Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
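A self-contained version of the same chaining, using $! to capture the background PID instead of typing it:

```shell
# Start a job in the background, then chain a follow-up on success.
sleep 1 &
pid=$!

wait "$pid" && echo "cleanup runs here"
# → cleanup runs here
```

Note that wait only works on children of the current shell; you cannot wait on an arbitrary PID from an unrelated shell.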
Can I somehow add a "&& prog2" to an already running prog1?
1,387,949,048,000
I have been using this command successfully, which changes a variable in a config file and then executes a Python script within a loop: for((i=114;i<=255;i+=1)); do echo $i > numbers.txt; python DoMyScript.py; done As each DoMyScript.py instance takes about 30 seconds to run before terminating, I'd like to relegate them to the background while the next one can be spawned. I have tried what I am familiar with, by adding in an ampersand as below: for((i=114;i<=255;i+=1)); do echo $i > numbers.txt; python DoMyScript.py &; done However, this results in the below error: -bash: syntax error near unexpected token `;'
Drop the ; after &. This is a syntactic requirement:

    for ((i=114; i<=255; i+=1)); do
        echo $i > numbers.txt
        python DoMyScript.py &
    done
Use & (ampersand) in single line bash loop
1,387,949,048,000
I want to run multiple commands (processes) on a single shell. All of them have own continuous output and don't stop. Running them in the background breaks Ctrl-C. I would like to run them as a single process (subshell, maybe?) to be able to stop all of them with Ctrl-C. To be specific, I want to run unit tests with mocha (watch mode), run server and run some file preprocessing (watch mode) and see output of each in one terminal window. Basically I want to avoid using some task runner. I can realize it by running processes in the background (&), but then I have to put them into the foreground to stop them. I would like to have a process to wrap them and when I stop the process it stops its 'children'.
To run commands concurrently you can use the & command separator.

    ~$ command1 & command2 & command3

This will start command1, then run it in the background. The same with command2. Then it starts command3 normally.

The output of all commands will be garbled together, but if that is not a problem for you, that would be the solution.

If you want to have a separate look at the output later, you can pipe the output of each command into tee, which lets you specify a file to mirror the output to.

    ~$ command1 | tee 1.log & command2 | tee 2.log & command3 | tee 3.log

The output will probably be very messy. To counter that, you could give the output of every command a prefix using sed.

    ~$ echo 'Output of command 1' | sed -e 's/^/[Command1] /'
    [Command1] Output of command 1

So if we put all of that together we get:

    ~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /'
    [Command1] Starting command1
    [Command2] Starting command2
    [Command1] Finished
    [Command3] Starting command3

This is a highly idealized version of what you are probably going to see. But it's the best I can think of right now.

If you want to stop all of them at once, you can use the built-in trap.

    ~$ trap 'kill %1; kill %2' SIGINT
    ~$ command1 & command2 & command3

This will execute command1 and command2 in the background and command3 in the foreground, which lets you kill it with Ctrl+C.

When you kill the last process with Ctrl+C, the kill %1; kill %2 commands are executed, because we connected their execution with the reception of an INTerrupt SIGnal, the thing sent by pressing Ctrl+C. They respectively kill the 1st and 2nd background process (your command1 and command2). Don't forget to remove the trap after you're finished with your commands, using trap - SIGINT.
Complete monster of a command:

    ~$ trap 'kill %1; kill %2' SIGINT
    ~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /'

You could, of course, have a look at screen. It lets you split your console into as many separate consoles as you want. So you can monitor all commands separately, but at the same time.
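If you'd rather not enumerate job numbers in the trap, another approach is to give the whole job its own process group and signal the entire group with kill 0; a sketch (setsid -w isolates the demo in a fresh group so nothing else is hit, and the self-kill stands in for Ctrl+C):

```shell
# Run workers in a dedicated process group, then take the whole
# group down with a single signal, as one Ctrl-C would.
setsid -w sh -c '
    trap "trap \"\" TERM; kill 0; exit 0" TERM   # forward TERM to the group
    sleep 100 &
    sleep 100 &
    sleep 0.3
    kill -TERM $$                                # stand-in for the interrupt
'
echo "all workers stopped"
# → all workers stopped
```

Inside the handler, TERM is first ignored so that kill 0 (which signals every process in the group, including the shell itself) only terminates the workers before the shell exits cleanly.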
Run multiple commands and kill them as one in bash
1,387,949,048,000
$ ps -Awwo pid,comm,args PID COMMAND COMMAND 1 init /sbin/init 2 kthreadd [kthreadd] 3 ksoftirqd/0 [ksoftirqd/0] 5 kworker/u:0 [kworker/u:0] 6 migration/0 [migration/0] 7 cpuset [cpuset] 8 khelper [khelper] 9 netns [netns] 10 sync_supers [sync_supers] 11 bdi-default [bdi-default] 12 kintegrityd [kintegrityd] 13 kblockd [kblockd] 14 kacpid [kacpid] 15 kacpi_notify [kacpi_notify] 16 kacpi_hotplug [kacpi_hotplug] 17 ata_sff [ata_sff] 18 khubd [khubd] What do the brackets mean? Does args always return the full path to the process command (e.g. /bin/cat)?
Brackets appear around command names when the arguments to that command cannot be located. The ps(1) man page on FreeBSD explains why this typically happens to system processes and kernel threads: If the arguments cannot be located (usually because it has not been set, as is the case of system processes and/or kernel threads) the command name is printed within square brackets. The ps(1) man page on Linux states similarly: Sometimes the process args will be unavailable; when this happens, ps will instead print the executable name in brackets.
What do the brackets around processes mean?
1,387,949,048,000
Using flock, several processes can have a shared lock at the same time, or be waiting to acquire a write lock. How do I get a list of these processes? That is, for a given file X, I would ideally like to find the process id of each process which either holds, or is waiting for, a lock on the file. It would be a very good start though just to get a count of the number of processes waiting for a lock.
lslocks, from the util-linux package, does exactly this. In the MODE column, processes waiting for a lock will be marked with a *.
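To see it in action you can hold a lock yourself. This sketch uses flock(1) from util-linux and an arbitrary path /tmp/demo.lock:

```shell
# Take an exclusive lock on fd 9; it is held until the fd is closed.
exec 9>/tmp/demo.lock
flock -x 9

# A second, non-blocking attempt from another process now fails,
# which is what lslocks (or a waiter's * in the MODE column) reflects.
flock -n /tmp/demo.lock -c true || echo "lock is held by PID $$"

exec 9>&-   # closing the fd releases the lock
```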
How to list processes locking file?
1,387,949,048,000
Given file path, how can I determine which process creates it (and/or reads/writes to it)?
The lsof command (already mentioned in several answers) will tell you what process has a file open at the time you run it. lsof is available for just about every unix variant. lsof /path/to/file lsof won't tell you about files that were opened two microseconds ago and closed one microsecond ago. If you need to watch a particular file and react when it is accessed, you need different tools. If you can plan a little in advance, you can put the file on a LoggedFS filesystem. LoggedFS is a FUSE stacked filesystem that logs all accesses to files in a hierarchy. The logging parameters are highly configurable. FUSE is available on all major unices. You'll want to log accesses to the directory where the file is created. Start with the provided sample configuration file and tweak it according to this guide. loggedfs -l /path/to/log_file -c /path/to/config.xml /path/to/directory tail -f /path/to/log_file Many unices offer other monitoring facilities. Under Linux, you can use the relatively new audit subsystem. There isn't much literature about it (but more than about loggedfs); you can start with this tutorial or a few examples or just with the auditctl man page. Here, it should be enough to make sure the daemon is started, then run auditctl: auditctl -w /path/to/file (I think older systems need auditctl -a exit,always -w /path/to/file) and watch the logs in /var/log/audit/audit.log.
How to determine which process is creating a file? [duplicate]
1,387,949,048,000
Given a shell process (e.g. sh) and its child process (e.g. cat), how can I simulate the behavior of Ctrl+C using the shell's process ID? This is what I've tried: Running sh and then cat: [user@host ~]$ sh sh-4.3$ cat test test Sending SIGINT to cat from another terminal: [user@host ~]$ kill -SIGINT $PID_OF_CAT cat received the signal and terminated (as expected). Sending the signal to the parent process does not seem to work. Why is the signal not propagated to cat when sent to its parent process sh? This does not work: [user@host ~]$ kill -SIGINT $PID_OF_SH
How CTRL+C works The first thing is to understand how CTRL+C works. When you press CTRL+C, your terminal emulator sends an ETX character (end-of-text / 0x03). The TTY is configured such that when it receives this character, it sends a SIGINT to the foreground process group of the terminal. This configuration can be viewed by doing stty -a and looking at intr = ^C;. The POSIX specification says that when INTR is received, it should send a SIGINT to the foreground process group of that terminal. What is the foreground process group? So, now the question is, how do you determine what the foreground process group is? The foreground process group is simply the group of processes which will receive any signals generated by the keyboard (SIGTSTP, SIGINT, etc). The simplest way to determine the process group ID is to use ps: ps ax -O tpgid The second column will be the process group ID. How do I send a signal to the process group? Now that we know what the process group ID is, we need to simulate the POSIX behavior of sending a signal to the entire group. This can be done with kill by putting a - in front of the group ID. For example, if your process group ID is 1234, you would use: kill -INT -1234 Simulate CTRL+C using the terminal number. So the above covers how to simulate CTRL+C as a manual process. But what if you know the TTY number, and you want to simulate CTRL+C for that terminal? This becomes very easy. Let's assume $tty is the terminal you want to target (you can get this by running tty | sed 's#^/dev/##' in the terminal). kill -INT -$(ps h -t $tty -o tpgid | uniq) This will send a SIGINT to whatever the foreground process group of $tty is.
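The same group-wide delivery can be reproduced from a plain bash script: with set -m each background job gets its own process group, and a negative PID argument to kill signals every member, just as the TTY driver does for ^C. A sketch, assuming bash:

```shell
set -m                       # job control: background jobs get their own PGID
sleep 100 | sleep 100 &      # a two-process job
pgid=$(jobs -p %+)           # PID of the group leader == the PGID
kill -s INT -- -"$pgid"      # deliver SIGINT to the whole group, like ^C
wait                         # both sleeps are gone
```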
Why is SIGINT not propagated to child process when sent to its parent process?
1,387,949,048,000
I'm looking for the process started in Linux which has process ID 0. I know init has PID 1, which is the first process in Linux; is there any process with PID 0?
From the wikipedia page titled: Process identifier: There are two tasks with specially distinguished process IDs: swapper or sched has process ID 0 and is responsible for paging, and is actually part of the kernel rather than a normal user-mode process. Process ID 1 is usually the init process primarily responsible for starting and shutting down the system. Originally, process ID 1 was not specifically reserved for init by any technical measures: it simply had this ID as a natural consequence of being the first process invoked by the kernel. More recent Unix systems typically have additional kernel components visible as 'processes', in which case PID 1 is actively reserved for the init process to maintain consistency with older systems. You can see the evidence of this if you look at the parent PIDs (PPID) of init and kthreadd: $ ps -eaf UID PID PPID C STIME TTY TIME CMD root 1 0 0 Jun24 ? 00:00:02 /sbin/init root 2 0 0 Jun24 ? 00:00:00 [kthreadd] kthreadd is the kernel thread daemon. All kthreads are forked from this thread. You can see evidence of this if you look at other processes using ps and seeing who their PPID is: $ ps -eaf root 3 2 0 Jun24 ? 00:00:57 [ksoftirqd/0] root 4 2 0 Jun24 ? 00:01:19 [migration/0] root 5 2 0 Jun24 ? 00:00:00 [watchdog/0] root 15 2 0 Jun24 ? 00:01:28 [events/0] root 19 2 0 Jun24 ? 00:00:00 [cpuset] root 20 2 0 Jun24 ? 00:00:00 [khelper] Notice they're all 2.
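You can confirm this without ps by reading /proc/1/stat directly: the field after the parenthesised command name and the state letter is the PPID, which is 0 for init. (Parsing after the last ")" avoids trouble with command names containing spaces.)

```shell
# Print the PPID of PID 1; it is 0, i.e. the in-kernel swapper/sched.
ppid=$(sed -n 's/^.*) [A-Za-z] \([0-9][0-9]*\).*/\1/p' /proc/1/stat)
echo "PPID of PID 1: $ppid"
```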
Which process has PID 0?
1,387,949,048,000
I found that pidstat would be a good tool to monitor processes. I want to calculate the average memory usage of a particular process. Here is some example output: 02:34:36 PM PID minflt/s majflt/s VSZ RSS %MEM Command 02:34:37 PM 7276 2.00 0.00 349212 210176 7.14 scalpel (This is part of the output from pidstat -r -p 7276.) Should I use the Resident Set Size (RSS) or Virtual Size (VSZ) information to calculate the average memory consumption? I have read a few thing on Wikipedia and on forums but I am not sure to fully understand the differences. Plus, it seems that none of them are reliable. So, how can I monitor a process to get its memory usage? Any help on this matter would be useful.
RSS is how much memory this process currently has in main memory (RAM). VSZ is how much virtual memory the process has in total. This includes all types of memory, both in RAM and swapped out. These numbers can get skewed because they also include shared libraries and other types of memory. You can have five hundred instances of bash running, and the total size of their memory footprint won't be the sum of their RSS or VSZ values. If you need to get a more detailed idea about the memory footprint of a process, you have some options. You can go through /proc/$PID/map and weed out the stuff you don't like. If it's shared libraries, the calculation could get complex depending on your needs. If you only care about the heap size of the process, you can always just parse the [heap] entry in the map file. The size the kernel has allocated for the process heap may or may not reflect the exact number of bytes the process has asked to be allocated. There are minute details, kernel internals and optimisations which can throw this off. In an ideal world, it'll be as much as your process needs, rounded up to the nearest multiple of the system page size (getconf PAGESIZE will tell you what it is — on PCs, it's probably 4,096 bytes). If you want to see how much memory a process has allocated, one of the best ways is to forgo the kernel-side metrics. Instead, you instrument the C library's heap memory (de)allocation functions with the LD_PRELOAD mechanism. Personally, I slightly abuse valgrind to get information about this sort of thing. (Note that applying the instrumentation will require restarting the process.) Please note, since you may also be benchmarking runtimes, that valgrind will make your programs very slightly slower (but probably within your tolerances).
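The same figures pidstat and ps report are exposed per process in /proc/PID/status on Linux, which is convenient for spot checks (values are in kB):

```shell
# VmSize corresponds to VSZ and VmRSS to RSS; inspect the current shell.
grep -E '^Vm(Size|RSS):' "/proc/$$/status"
```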
Need explanation on Resident Set Size/Virtual Size
1,387,949,048,000
I want to determine which process has the other end of a UNIX socket. Specifically, I'm asking about one that was created with socketpair(), though the problem is the same for any UNIX socket. I have a program parent which creates a socketpair(AF_UNIX, SOCK_STREAM, 0, fds), and fork()s. The parent process closes fds[1] and keeps fds[0] to communicate. The child does the opposite, close(fds[0]); s=fds[1]. Then the child exec()s another program, child1. The two can communicate back and forth via this socketpair. Now, let's say I know who parent is, but I want to figure out who child1 is. How do I do this? There are several tools at my disposal, but none can tell me which process is on the other end of the socket. I have tried: lsof -c progname lsof -c parent -c child1 ls -l /proc/$(pidof server)/fd cat /proc/net/unix Basically, I can see the two sockets, and everything about them, but cannot tell that they are connected. I am trying to determine which FD in the parent is communicating with which child process.
Since kernel 3.3, it is possible using ss or lsof-4.89 or above — see Stéphane Chazelas's answer. In older versions, according to the author of lsof, it was impossible to find this out: the Linux kernel does not expose this information. Source: 2003 thread on comp.unix.admin. The number shown in /proc/$pid/fd/$fd is the socket's inode number in the virtual socket filesystem. When you create a pipe or socket pair, each end successively receives an inode number. The numbers are assigned sequentially, so there is a high probability that the numbers differ by 1, but this is not guaranteed (either because the first socket was N and N+1 was already in use due to wrapping, or because some other thread was scheduled between the two inode allocations and that thread created some inodes too). I checked the definition of socketpair in kernel 2.6.39, and the two ends of the socket are not correlated except by the type-specific socketpair method. For unix sockets, that's unix_socketpair in net/unix/af_unix.c.
Who's got the other end of this unix socketpair?
1,387,949,048,000
In a VM on a cloud provider, I'm seeing a process with weird random name. It consumes significant network and CPU resources. Here's how the process looks like from pstree view: systemd(1)───eyshcjdmzg(37775)─┬─{eyshcjdmzg}(37782) ├─{eyshcjdmzg}(37783) └─{eyshcjdmzg}(37784) I attached to the process using strace -p PID. Here's the output I've got: https://gist.github.com/gmile/eb34d262012afeea82af1c21713b1be9. Killing the process does not work. It is somehow (via systemd?) resurrected. Here's how it looks from systemd point of view (note the weird IP address at the bottom): $ systemctl status 37775 ● session-60.scope - Session 60 of user root Loaded: loaded Transient: yes Drop-In: /run/systemd/system/session-60.scope.d └─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf, 50-TasksMax.conf Active: active (abandoned) since Tue 2018-03-06 10:42:51 EET; 1 day 1h ago Tasks: 14 Memory: 155.4M CPU: 18h 56min 4.266s CGroup: /user.slice/user-0.slice/session-60.scope ├─37775 cat resolv.conf ├─48798 cd /etc ├─48799 sh ├─48804 who ├─48806 ifconfig eth0 ├─48807 netstat -an ├─48825 cd /etc ├─48828 id ├─48831 ps -ef ├─48833 grep "A" └─48834 whoami Mar 06 10:42:51 k8s-master systemd[1]: Started Session 60 of user root. Mar 06 10:43:27 k8s-master sshd[37594]: Received disconnect from 23.27.74.92 port 59964:11: Mar 06 10:43:27 k8s-master sshd[37594]: Disconnected from 23.27.74.92 port 59964 Mar 06 10:43:27 k8s-master sshd[37594]: pam_unix(sshd:session): session closed for user root What is going on?!
eyshcjdmzg is a Linux DDoS trojan (easily found through a Google search). You've likely been hacked. Take that server off-line now. It's not yours any longer. Please read the following ServerFault Q/A carefully: How to deal with a compromised server. Note that depending on who you are and where you are, you may additionally be legally obliged to report this incident to authorities. This is the case if you are working at a government agency in Sweden (e.g. a university), for example. Related: How can I kill minerd malware on an AWS EC2 instance? (compromised server) Need help understanding suspicious SSH commands
Process with weird random name consuming significant network and CPU resources. Is someone hacking me?
1,387,949,048,000
I tried ps with different kinds of switches e.g. -A, aux, ef, and so forth but I cannot seem to find the right combination of switches that will tell me the Process ID (PID), Parent Process ID (PPID), Process Group ID (PGID), and the Session ID (SID) of a process in the same output.
Here you go: $ ps xao pid,ppid,pgid,sid | head PID PPID PGID SID 1 0 1 1 2 0 0 0 3 2 0 0 6 2 0 0 7 2 0 0 21 2 0 0 22 2 0 0 23 2 0 0 24 2 0 0 If you want to see the process' name as well, use this: $ ps xao pid,ppid,pgid,sid,comm | head PID PPID PGID SID COMMAND 1 0 1 1 init 2 0 0 0 kthreadd 3 2 0 0 ksoftirqd/0 6 2 0 0 migration/0 7 2 0 0 watchdog/0 21 2 0 0 cpuset 22 2 0 0 khelper 23 2 0 0 kdevtmpfs 24 2 0 0 netns
'ps' arguments to display PID, PPID, PGID, and SID collectively
1,387,949,048,000
What is the maximum value of the Process ID? Also, is it possible to change a Process ID?
On Linux, you can find the maximum PID value for your system with this: $ cat /proc/sys/kernel/pid_max This value can also be written using the same file; however, the value can only be extended up to a theoretical maximum of 32768 (2^15) for 32 bit systems or 4194304 (2^22) for 64 bit: $ echo 32768 > /proc/sys/kernel/pid_max It seems to be normative practice on most 64 bit systems to set this value to the same value as found on 32 bit systems, but this is by convention rather than a requirement. From man 5 proc: /proc/sys/kernel/pid_max This file (new in Linux 2.5) specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million). And no, you cannot change the PID of a running process. It gets assigned as a sequential number by the kernel at the time the process starts and that is its identifier from that time on. The only thing you could do to get a new one is have your code fork a new process and terminate the old one.
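For instance, a quick check of the current limit against the documented ceiling (4194304 being PID_MAX_LIMIT on 64-bit builds):

```shell
# Read the current limit and verify it is within the documented range.
max=$(cat /proc/sys/kernel/pid_max)
if [ "$max" -le 4194304 ]; then
    echo "pid_max=$max, within PID_MAX_LIMIT"
fi
```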
What is the maximum value of the Process ID?
1,432,100,072,000
I want to run multiple commands (processes) on a single shell. All of them have own continuous output and don't stop. Running them in the background breaks Ctrl-C. I would like to run them as a single process (subshell, maybe?) to be able to stop all of them with Ctrl-C. To be specific, I want to run unit tests with mocha (watch mode), run server and run some file preprocessing (watch mode) and see output of each in one terminal window. Basically I want to avoid using some task runner. I can realize it by running processes in the background (&), but then I have to put them into the foreground to stop them. I would like to have a process to wrap them and when I stop the process it stops its 'children'.
To run commands concurrently you can use the & command separator. ~$ command1 & command2 & command3 This will start command1, then runs it in the background. The same with command2. Then it starts command3 normally. The output of all commands will be garbled together, but if that is not a problem for you, that would be the solution. If you want to have a separate look at the output later, you can pipe the output of each command into tee, which lets you specify a file to mirror the output to. ~$ command1 | tee 1.log & command2 | tee 2.log & command3 | tee 3.log The output will probably be very messy. To counter that, you could give the output of every command a prefix using sed. ~$ echo 'Output of command 1' | sed -e 's/^/[Command1] /' [Command1] Output of command 1 So if we put all of that together we get: ~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /' [Command1] Starting command1 [Command2] Starting command2 [Command1] Finished [Command3] Starting command3 This is a highly idealized version of what you are probably going to see. But its the best I can think of right now. If you want to stop all of them at once, you can use the build in trap. ~$ trap 'kill %1; kill %2' SIGINT ~$ command1 & command2 & command3 This will execute command1 and command2 in the background and command3 in the foreground, which lets you kill it with Ctrl+C. When you kill the last process with Ctrl+C the kill %1; kill %2 commands are executed, because we connected their execution with the reception of an INTerupt SIGnal, the thing sent by pressing Ctrl+C. They respectively kill the 1st and 2nd background process (your command1 and command2). Don't forget to remove the trap, after you're finished with your commands using trap - SIGINT. 
Complete monster of a command: ~$ trap 'kill %1; kill %2' SIGINT ~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /' You could, of course, have a look at screen. It lets you split your console into as many separate consoles as you want. So you can monitor all commands separately, but at the same time.
Run multiple commands and kill them as one in bash
1,432,100,072,000
In the man page, it says: kill [ -s signal | -p ] [ -a ] [ -- ] pid ... pid... Specify the list of processes that kill should signal. Each pid can be one of five things: 0 All processes in the current process group are signaled And I tried like this in bash: $ man kill & [1] 15247 $ [1]+ Stopped man kill $ kill 0 $ ps 15247 pts/41 00:00:00 man Here 0 is used as pid. As I understood, kill 0 will kill all processes in the current process, which includes pid15247. However, it didn't do anything in this example. Does anyone have ideas about how to use it?
Like it says, it sends the signal to all the members of the process group of the caller. Process groups are used to implement job control in the shell (they can be used for other things, but interactive shell job control is the main reason for their existence). You'll notice that when you type Ctrl-C, all the processes of the current jobs are killed, not only the one that started them. Also, that doesn't kill the background jobs. That is achieved with process groups. A job is a group of processes started by a shell which the shell can put in background or foreground (set as the foreground process group of the terminal or not), and kill as a whole. You can find out about process group ids and session ids with ps -j (j for Job control). To kill the process group of PGID $x, you do: kill -- "-$x" kill 0 kills the process group of the caller. Note that if you do: /bin/kill 0, the shell will start a new job to execute that kill command, so kill will only kill itself. kill is usually a shell builtin though, so kill will kill the process group of the shell. However, when the shell is interactive, it is the process managing process groups, so typically there's no other process in the process group of the shell. All the processes started by the shell, are in other process groups: $ sleep 1000 & [1] 22746 $ ps -j PID PGID SID TTY TIME CMD 22735 22735 22735 pts/23 00:00:00 zsh 22746 22746 22735 pts/23 00:00:00 sleep 22749 22749 22735 pts/23 00:00:00 ps Above, sleep and ps are in two different process groups, one in background, one in foreground and they are different from the process group of the shell. You could do though: (man kill & sleep 1; ps -j; kill 0) The interactive shell would start a new process group for that subshell, and both the subshell and man (and the other commands started by man like your pager, groff...) would be in the same process group, so kill 0 would work there. 
(the sleep above is to give enough time for the pager to start so we can see it in the ps -j output before we kill it).
What does kill 0 do actually? [closed]
1,432,100,072,000
In "https://stackoverflow.com/questions/13038143/how-to-get-pids-in-one-process-group-in-linux-os" I see all answers mentioning ps and none mentioning /proc. "ps" seems to be not very portable (Android and Busybox versions expect different arguments), and I want to be able list pids with pgids with simple and portable tools. In /proc/.../status I see Tgid: (thread group ID), Gid: (group id for security, not for grouping processes together), but not PGid:... What are other (not using ps) ways of getting pgid from pid?
You can look at the 5th field in the output of /proc/[pid]/stat. $ ps -ejH | grep firefox 3043 2683 2683 ? 00:00:21 firefox $ < /proc/3043/stat sed -n '$s/.*) [^ ]* [^ ]* \([^ ]*\).*/\1/p' 2683 From man proc: /proc/[pid]/stat Status information about the process. This is used by ps(1). It is defined in /usr/src/linux/fs/proc/array.c. The fields, in order, with their proper scanf(3) format specifiers, are: pid %d The process ID. comm %s The filename of the executable, in parentheses. This is visible whether or not the executable is swapped out. state %c One character from the string "RSDZTW" where R is running, S is sleeping in an interruptible wait, D is waiting in uninterruptible disk sleep, Z is zombie, T is traced or stopped (on a signal), and W is paging. ppid %d The PID of the parent. pgrp %d The process group ID of the process. session %d The session ID of the process. Note that you cannot use: awk '{print $5}' because that file is not a blank-separated list: the second field (the process name) may contain blanks or even newline characters. For instance, most of the threads of firefox typically have space characters in their name. So you need to print the 3rd field after the last occurrence of a ) character in there.
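Wrapped up as a reusable helper (the name pgid_of is made up here), with the state letter and PPID skipped explicitly:

```shell
# PGID of a PID: the 5th field of /proc/PID/stat, parsed after the
# last ")" so command names with spaces or parentheses can't confuse it.
pgid_of() {
    sed -n 's/^.*) [A-Za-z] [0-9][0-9]* \([0-9][0-9]*\).*/\1/p' "/proc/$1/stat"
}

pgid_of "$$"    # prints the current shell's process group ID
```

The result should match what ps -o pgid= -p PID reports.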
Is it possible to get process group ID from /proc?
1,432,100,072,000
I have read that a session's ID is the same as the pid of the process that created the session through the setsid() system call, but I haven't found any information about how a process group ID is set. Is the process group ID the same as the pid of the process that created the process group?
In general, yes, the process group ID is equal to the process ID of the process that created the process group — and that process created the process group by putting itself in the group. You can find this information in the documentation of the setpgid system call, and of its variant setpgrp. The details have historically varied between BSD and System V. The most common use cases are: A process puts itself into its own process group, and the new PGID is equal to the PID. This can be done with SysV setpgrp() or with setpgid(0, 0) in which either 0 can be replaced by an explicit getpid(). Note that while the process is putting itself into the group, in practice, this is often done by a launcher (shell, or daemon monitor) before executing the program, i.e. it is done by code in the launcher between fork and execve in the child process. A process puts itself into an existing process group in the same session. Shells do this for pipelines: to run foo | bar in its own process group, a shell typically does something like this: Set up a pipe. Fork a process. The child puts itself in its own process group G, closes the read end of the pipe and moves the write end to stdout, then executes foo. Fork a process. The child puts itself into the existing process group G, closes the write end of the pipe and moves the read end to stdin, then executes bar. The call to setpgid may be performed in the parent process instead of or in addition to the child. Doing it in both avoids a race condition in case the second child's initialization overtakes the first child's. A shell with job control normally runs in its own process group. But before exiting or suspending, it returns to its original process group (i.e. it puts itself back into the process group that started it, assuming that group still exists). The POSIX specification for setpgid describes these use cases. It further explains that there isn't much else that is guaranteed to work.
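The pipeline case is easy to observe with ps from a bash script, provided job control is enabled with set -m so the job gets its own group:

```shell
set -m                          # each background job becomes a process group
sleep 100 | sleep 100 &         # the shell setpgid()s both members together
leader=$(jobs -p %+)            # first member's PID, which is also the PGID
ps -o pid,pgid,comm -p "$leader,$!"   # both lines show PGID == $leader
kill -s TERM -- -"$leader"      # clean up the whole group
```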
In particular, while old BSD systems allowed a process to join a process group in a different session or to make up a new PGID, this is not the case on most modern systems (including modern BSD).
How is a process group ID set?
1,432,100,072,000
So I keep reading everywhere that this command should terminate all child processes of the parent process: kill -- -$$ Using a negative ID with the kill command references a PGID, and from the examples I have seen it appears the PGID of child processes should be the PID of the parent, but that's not the case on my system. On my system the PGID of the child is the same as the PGID of the parent script, which turns out to be bash. What's going on here? Were the examples wrong or is my system set up differently? What I need to achieve is to terminate child processes without terminating the parent, so I don't want to send a kill signal to the PGID the parent is in.
When a process is forked, it inherits its PGID from its parent. The PGID changes when a process becomes a process group leader, then its PGID is copied from its PID. From then on, the new child processes it spawns, and their descendants, inherit that PGID (unless they start new process groups of their own). In a shell with job control, such as most interactive shells, each job is put in its own process group. If you run a shell script, the shell process running the script will be the group leader, and the PGID will equal its PID. In a shell without job control, such as most shells used to run scripts, commands are run in the shell's process group. The syntax kill -- -N kills all the processes in the group with PGID = N. You can't use it with an arbitrary PID, only the PID of a process group leader, since that's the PGID. This is essentially how the shell's kill %jobid syntax works -- it internally translates %jobid to the PGID of the job and sends the signal to that PGID. There's no simple way to run a script in its own process group from another shell script. See How to set process group of a shell script for some suggestions, though.
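One practical workaround, where util-linux is available, is to start the script with setsid(1): the command runs in a new session, so it becomes a process group leader, and kill -- -PID then targets only it and its descendants. A sketch:

```shell
# Run a command as its own session/group leader: PID == PGID == SID.
setsid sh -c 'ps -o pid,pgid,sid -p "$$"'
```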
Why is the PGID of my child processes not the PID of the parent?
1,432,100,072,000
What's the difference between a process group and a job? If I type pr * | lpr then is it both a process group as well a job? What exactly is the difference between a process group ID and a job ID? Edit: I know it appears similar to What is the difference between a job and a process?, but it is slightly different. Also, I didn't understand this concept from this thread.
A process group is a unix kernel concept. It doesn't come up very often. You can send a signal to all the processes in a group, by calling the kill system call or utility with a negative argument. When a process is created (with fork), it remains in the same process group as its parent. A process can move into another group by calling setpgid or setpgrp. This is normally performed by the shell when it starts an external process, before it executes execve to load the external program. The main use for process groups is that when you press Ctrl+C, Ctrl+Z or Ctrl+\ to kill or suspend programs in a terminal, the terminal sends a signal to a whole process group, the foreground process group. The details are fairly complex and mostly of interest to shell or kernel implementers; the General Terminal Interface chapter of the POSIX standard is a good presentation (you do need some unix programming background). Jobs are an internal concept to the shell. In the simple cases, each job in a shell corresponds to a process group in the kernel.
Difference between process group and job?
1,432,100,072,000
I would like to start a bash script from another bash script, but start it in its own process group just like when you run it from the terminal. There are a few similar questions, but I can't find an answer that matches my example. Take these two scripts $ cat main.sh #! /usr/bin/env bash set -e echo if this has been started in its own group the following will pass ps -o ppid,pgid,command $$ ps -o pgid | grep $$ echo passed $ cat wrapper.sh #! /usr/bin/env bash set -e ./main.sh When I run main.sh on its own, the terminal puts it in its own group: $ ./main.sh if this has been started in its own group the following will pass PPID PGID COMMAND 20553 1276 bash ./main.sh 1276 1276 1276 passed $ echo $? 0 but when I run main.sh from wrapper.sh my test fails (as expected) $ ./wrapper.sh if this has been started in its own group the following will pass PPID PGID COMMAND 2224 2224 bash ./main.sh $ echo $? 1 But what do I have to put into either wrapper.sh or main.sh to make it run main.sh in its own group the same way it would if it does when it is run straight from the terminal? (If it makes a difference, I am actually hoping to run ./main.sh & in wrapper.sh but it was easier to run it in the foreground for this experiment.) (I am using ubuntu 18)
Assuming Bash, the immediate answer would be to enable "monitor mode" (job control) with set -m, or -m on the command line: From the man page: -m Monitor mode. Job control is enabled. This option is on by default for interactive shells on systems that support it (see JOB CONTROL above). All processes run in a separate process group. When a background job completes, the shell prints a line containing its exit status. $ bash -m wrapper.sh if this has been started in its own group the following will pass PPID PGID COMMAND 12630 12631 bash ./main.sh 12631 12631 12631 passed However, the question is rather tight on the "why", and there might be reasons monitor mode is disabled by default in non-interactive shells, so YMMV.
how can I start a bash script in its own process group
1,432,100,072,000
(Re-posting in unix per the suggestion in https://stackoverflow.com/questions/13718394/what-should-interactive-shells-do-in-orphaned-process-groups) The short question is, what should a shell do if it is in an orphaned process group that doesn't own the tty? But I recommend reading the long question because it's amusing. Here is a fun and exciting way to turn your laptop into a portable space heater, using your favorite shell (unless you're one of those tcsh weirdos): #include <unistd.h> int main(void) { if (fork() == 0) { execl("/bin/bash", "/bin/bash", NULL); } return 0; } This causes bash to peg the CPU at 100%. zsh and fish do the same, while ksh and tcsh mumble something about job control and then keel over, which is a bit better, but not much. Oh, and it's a platform agnostic offender: OS X and Linux are both affected. My (potentially wrong) explanation is as follows: the child shell detects it is not in the foreground: tcgetpgrp(0) != getpgrp(). Therefore it tries to stop itself: killpg(getpgrp(), SIGTTIN). But its process group is orphaned, because its parent (the C program) was the leader and died, and SIGTTIN sent to an orphaned process group is just dropped (otherwise nothing could start it again). Therefore, the child shell is not stopped, but it's still in the background, so it does it all again, right away. Rinse and repeat. My question is, how can a command line shell detect this scenario, and what is the right thing for it to do? I have two solutions, neither of which is ideal: Try to signal the process whose pid matches our group ID. If that fails with ESRCH, it means we're probably orphaned. Try a non-blocking read of one byte from /dev/tty. If that fails with EIO, it means we're probably orphaned. (Our issue tracking this is https://github.com/fish-shell/fish-shell/issues/422 ) Thanks for your thoughts!
I agree with your analysis and I agree it sounds like you have to detect whether your process group is orphaned or not. tcsetattr is also meant to return EIO if the process group is orphaned (and we're not blocking/ignoring SIGTTOU). That might be a less intrusive way than a read on the terminal. Note that you can reproduce it with:

(bash<&1 &)

You need the redirection, otherwise stdin is redirected to /dev/null when running a command in the background.

(bash<&1 & sleep 2)

gives even weirder behaviour, because you end up with two shells reading from the terminal. They are ignoring SIGTTIN, and the new one, once it's started, does not detect that it is no longer in the foreground process group. ksh93's solution is not so bad: only go up to 20 times (instead of infinitely) through that loop before giving up.
What should interactive shells do in orphaned process groups?
1,432,100,072,000
In POSIX, processes are “related” to each other through two basic hierarchies: The hierarchy of parent and child processes. The hierarchy of sessions and process groups. User processes have a great deal of control over the latter, via setpgid and setsid, but they have very little control over the former—the parent process ID is set when a process is spawned and altered by the kernel when the parent exits (usually to PID 1), but otherwise it does not change. Reflecting on that, I’ve been wondering how important the parent–child relationship really is. Here’s a summary of my understanding so far: Parent–child relationships are clearly important from the perspective of the parent process, since various syscalls, like wait and setpgid, are only allowed on child processes. The session–group–process relationship is clearly important to all processes, both the session leader and other processes in the session, since syscalls like kill operate on entire process groups, setpgid can only be used to join a group in the same session, and all processes in a session’s foreground process group are sent SIGHUP if the session leader exits. What’s more, the two hierarchies are clearly related from the perspective of the parent, since setsid only affects new children and setpgid can only be used on children, but they seem essentially unrelated from the perspective of the child (since a parent process dying has no impact whatsoever on a process’s group or session). Conspicuously absent, however, is any reason for a child process to care what its current parent is. Therefore, I have the following question: does the current value of getppid() have any importance whatsoever from the perspective of the child process, besides perhaps identifying whether or not its spawning process has exited? 
To put the same question another way, imagine the same program is spawned twice, from the same parent, in two different ways: The first child is spawned in the usual way, by fork() followed shortly by exec(). The second child is spawned indirectly: the parent process calls fork(), and then the child also calls fork(), and it’s the grandchild process that calls exec(). The immediate child then exits, so the grandchild is orphaned, and its PPID is reassigned to PID 1. In this hypothetical scenario, assuming all else is equal, do any reasonable programs have any reason to behave any differently? So far, my conclusion seems to be “no,” since the session is left unchanged, as are the process’s inherited file descriptors… but I’m not sure. Note: I do not consider “acquiring the parent PID to communicate with it” to be a valid answer to that question, since orphaned programs cannot in general rely on their PPID to be set to 1 (some systems set orphaned processes’ PPID to some other value), so the only way to avoid a race condition is to acquire the parent process ID via a call to getpid() before forking, then to use that value in the child.
When I saw this question, I was pretty interested because I know I've seen getppid used before..but I couldn't remember where. So, I turned to one of the projects that I figured has probably used every Linux syscall and then some: systemd. One GitHub search later, and I found two uses that portray some more general use cases (there are a few other uses as well, but they're more specific to systemd): In sd-notify. For some context: systemd needs to know when a service has started so it can proceed to start any that depend on it. This is normally done from a C program via the sd_notify API, which is a way for daemons to tell systemd their status. Of course, if you're using a shell script as a service...calling C functions isn't exactly doable. Therefore, systemd comes with the systemd-notify command, which is a small wrapper over the sd_notify API. One problem: systemd also needs to know the PID that is sending the message. For systemd-notify, this would be its own PID, which would be a short-lived process ID that immediately goes away. Not useful. You probably already know where I'm headed: getppid is used by systemd-notify to grab the parent process's PID, since that's usually the actual service process. In short, getppid can be used by a short-lived CLI application to send a message on behalf of the parent process. Once I found this, another unix tool that might use getppid like this came to mind: polkit, which is a process authentication framework used to gate stuff like sending D-Bus messages or running privileged applications. (At minimum, I'd guess you've seen the GUI password prompts that are displayed by polkit's auth agents.) polkit includes an executable named pkexec that can be used a bit like sudo, except now polkit is used for authorization. Now, polkit needs to know the PID of the process asking for authorization...yeah you get the idea, pkexec uses getppid to find that. 
(While looking at that, I also found out that polkit's TTY auth agent uses it too.) This one's a bit less interesting but still notable: getppid is used to emulate PR_SET_PDEATHSIG if the parent had died by the time that flag was set. (The flag is just a way for a child to be automatically sent a signal like SIGKILL if the parent dies.)
Does a process’s parent have any significance from the perspective of its child?
1,432,100,072,000
kill -TERM -PID is supposed to kill PID and all its child processes, but this doesn't work on openSUSE: it always tells me "no such process -PID", no matter what PID I use. So if the negative PID option is not supported by this particular version of kill, what is the best way to kill a group of processes?

Background: I have a shell script running. Inside the script, I use wget to download things. So the script is the parent process and wget is the child process. I want to kill them both using kill -TERM -PID_OF_SCRIPT
Does it say "no such PID" or is there some other error? As in, does this work?

kill -TERM -- -PGID

Also note, as per (emphasis mine):

man 1: "[…] When an argument of the form '-n' is given, and it is meant to denote a process group […]"

man 2: "[…] If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid. […]"

man 3: "[…] If pid is negative, but not -1, sig shall be sent to all processes (excluding an unspecified set of system processes) whose process group ID is equal to the absolute value of pid, […]"

As in: not the PID, but the process group ID. Else perhaps you can have some fun with /proc/[pid]/stat:

ppid: awk '{gsub(/\([^)]+\)/,"_"); print $4}' /proc/3955/stat
pgrp: awk '{gsub(/\([^)]+\)/,"_"); print $5}' /proc/3955/stat

pkill -TERM -g PGRP
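As a hedged sketch (assumes a ps that supports -o pgid=, as procps does), one way to look up the actual process group ID of a running process before using kill's negative-PID form:

```shell
#!/bin/sh
# Sketch: look up the PGID of a process (here: ourselves) so it can be
# used with kill's negative-PID form to signal the whole group at once.
pgid=$(ps -o pgid= -p "$$" | tr -d ' ')   # strip ps's leading spaces

echo "our process group is $pgid"
# To terminate the whole group (don't run this against your own shell!):
#   kill -TERM -- "-$pgid"
```

The double dash matters: without it, kill would parse "-$pgid" as an option rather than a target.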
kill a group of processes with negative PID
1,432,100,072,000
Is there a way to change PID, PPID, SID of a running process? It would make sense for the answer to be no, but I'd like to make sure.
A process can set its own PGID and SID with the system calls setpgid and setsid. The target group/session can't be chosen arbitrarily: setpgid can only move to another process group in the same session, or create a new process group whose PGID is equal to the PID; setsid can only move the process to its own session, making the SID equal to the PID.

These calls are reserved to the process itself: a process cannot change another process's PGID or SID, with one exception: a process can change its children's PGID if they're still running the original process image (i.e. they haven't called execve to run a different program). Some systems may allow other behaviors, but I don't think any modern Unix system deviates fundamentally.

It is possible to indirectly change a process's PGID or SID by using a debugger to make the process call the setpgid or setsid system call (via ptrace). Since this requires ptrace permission, it must be done from another process running as root or as the same user, and there must not be any restriction on debugging (many modern Linux systems require the debugger to be an ancestor of the debuggee).

A process's PID never changes. A process's PPID can only change once, and only for one reason: when the parent dies, the PPID changes from the parent's PID to 1 (the process is adopted by init).

Note that in some systems, a process can have different PID values (and consequently also PPID/PGID/SID since they all start as the PID of some process) depending on how you look at it. For example, with Linux namespaces, each process has a potentially different PID in each namespace where it's visible.
Is there a way to change the process group of a running process?
1,432,100,072,000
I'm learning about the relationship between processes, process groups (and sessions) in Linux. I compiled the following program...

#include <iostream>
#include <ctime>
#include <unistd.h>

int main( int argc, char* argv[] )
{
    char buf[128];
    time_t now;
    struct tm* tm_now;

    while ( true ) {
        time( &now );
        tm_now = localtime( &now );
        strftime( buf, sizeof(buf), "%a, %d %b %Y %T %z", tm_now );
        std::cout << buf << std::endl;
        sleep(5);
    }

    return 0;
}

... to a.out and ran it as a background process like so...

a.out &

This website says the following...

Every process is member of a unique process group, identified by its process group ID. (When the process is created, it becomes a member of the process group of its parent.) By convention, the process group ID of a process group equals the process ID of the first member of the process group, called the process group leader.

Per my reading, the first sentence conflicts with the in-parentheses content: is a process a member of a unique process group, or is it a member of the process group of its parent? I tried to investigate with ps...

ps xao pid,ppid,pgid,sid,command | grep "PGID\|a.out"
  PID  PPID  PGID   SID COMMAND
24714 23890 24714 23890 ./a.out

This tells me my a.out process is pid 24714, spawned from parent pid 23890 and part of program group 24714. To begin with, I don't understand why this pgid matches the pid. Next, I tried to investigate the parent process...

ps xao pid,ppid,pgid,sid,command | grep "PGID\|23890"
  PID  PPID  PGID   SID COMMAND
23890 11892 23890 23890 bash
24714 23890 24714 23890 ./a.out

It makes sense to me that the parent process of my a.out is bash. At first I thought "bash's pid matches its pgid - that must be because it's the process group leader. Maybe that makes sense because bash is kind of the "first thing" that got run, from which I ran my process." But that reasoning doesn't make sense because a.out's pgid also matches its own pid. Why doesn't a.out's pgid equal bash's pgid?
That's what I would have expected, from my understanding of the quote. Can someone clarify the relationship between pids and pgids?
There is no conflict; a process will by default be in a unique process group which is the process group of its parent:

$ cat pg.c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork();
    printf("pid=%d pgid=%d\n", getpid(), getpgrp());
}
$ make pg
cc pg.c -o pg
$ ./pg
pid=12495 pgid=12495
pid=12496 pgid=12495
$

The fork splits our process into parent (12495) and child (12496), and the child belongs to the unique process group of the parent (12495).

bash departs from this because it issues additional system calls:

$ echo $$
12366
$

And then in another terminal we run:

$ strace -f -o blah -p 12366

And then back in the first terminal:

$ ./pg
pid=12676 pgid=12676
pid=12677 pgid=12676
$

And then we control+c the strace, and inspect the system calls:

$ egrep 'exec|pgid' blah
12366 setpgid(12676, 12676)             = 0
12676 setpgid(12676, 12676 <unfinished ...>
12676 <... setpgid resumed> )           = 0
12676 execve("./pg", ["./pg"], [/* 23 vars */]) = 0
12676 write(1, "pid=12676 pgid=12676\n", 21 <unfinished ...>
12677 write(1, "pid=12677 pgid=12676\n", 21 <unfinished ...>

bash has used the setpgid call to set the process group, thus placing our pg process into a process group unrelated to that of the shell. (setsid(2) would be another way to tweak the process group, if you're hunting for system calls.)
Why is process not part of expected process group?
1,432,100,072,000
Based on what I have learned so far, a terminal has only one session, and a session has one or more process groups, and a process group has one or more processes. The following image illustrates this: I have two questions: How to move a process from one process group to another? How to list the processes in each process group? Edit: I mean how to do these two things from the terminal and not programmatically.
From a user's or even a typical programmer's perspective, you don't move processes from one group to another. Organizing process groups is the job of the shell. When you run a job interactively, the shell puts it in its own group. The primary intent of doing that is to kill the whole group (e.g. all the processes in a pipeline) when the user presses Ctrl+C. More generally, the one thing that is made possible by process groups is to atomically kill a set of processes. If you try to list some processes and then kill them, one of them may have forked in between. When you kill a process group, that kills all the processes in the group, even if they're busy forking.

The one thing you may sometimes want to do as a user or application programmer is to run a new process in its own group. There's no user-level command to do just that. You can do it by starting an interactive shell. (See Timing out in a shell script for a complex example.) There are other commands, such as the timeout utility from GNU coreutils and the setsid utility from the util-linux suite, that create a new process group as part of their operation.

The system call to move a process to a different process group is setpgid. (There's also a partial alias called setpgrp.) There are restrictions: it may only be called by the process itself or its parent, and the target group must be in the same session as the original group. You can't arbitrarily move a process from one group to another.

There's no specific way to enumerate the processes in a group. All you can do is enumerate all the processes and select the ones in that specific group. You can list process groups in the ps output by including the pgid column (e.g. ps -e -o pid,ppid,pgid,args).
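As a small sketch of that last point (the awk filter is just one way to select on the pgid column), enumerating the members of one specific group:

```shell
#!/bin/sh
# List every process with its group, then pick out the members of one
# specific group (here: our own) by filtering on the PGID column.
my_pgid=$(ps -o pgid= -p "$$" | tr -d ' ')

ps -e -o pid,pgid,args |
    awk -v g="$my_pgid" 'NR == 1 || $2 == g'   # keep the header plus our group
```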
How to move a process from one process group to another, and how to list the processes in each process group?
1,432,100,072,000
I understand from the Informit article that sessions and process groups are used to terminate descendant processes on exit and send signals to related processes with job control. I believe this information can be extracted at any point using the PPID of every process. Do these concepts exist in place just to have a data structure that enables getting descendants of a process quickly? Do sessions and process groups get employed in things other than job control and termination of descendants? Do they store any context information? Any good references will be helpful.
Process groups exist primarily to determine which processes started from a terminal can access that terminal. Only processes in the foreground process group may read or write to their controlling terminal; background processes are stopped by a SIGTTIN or SIGTTOU signal. You can send a signal atomically to all the processes in a process group, by passing a negative PID argument to kill. This also happens when a signal is generated by the terminal driver in response to a special character (e.g. SIGINT for Ctrl+C). Sessions track which process groups are attached to a terminal. Only processes running in the same session as the controlling process are foreground or background processes. It is not possible to determine process groups or sessions from the PPID. You would have no way to know whether the parent of a process is in the same process group or a different one, and likewise for sessions.
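A small illustration (ps keywords as in procps; tpgid is the foreground process group on the controlling terminal, or -1 when there is no terminal) of how group, session, and terminal relate for one process:

```shell
#!/bin/sh
# Show this shell's process group, session, the foreground group on its
# terminal (tpgid), the terminal itself, and the command line.
ps -o pid,pgid,sid,tpgid,tty,args -p "$$"
```

When run from an interactive shell, pid, pgid, and tpgid typically all match, since the shell put this job in the foreground group of the terminal.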
What is the purpose of abstractions, session, session leader and process groups?
1,432,100,072,000
I have read that when you press Ctrl+C a SIGINT signal will be sent to the foreground process group. Can you give me an example of how I can have two or more processes in the foreground process group, because I want to see if all processes will terminate if I press Ctrl+C.
Since new processes all belong to the same process group, that of the parent process, have a process start a bunch of processes (fork), and then with appropriate logging and a delay, type Ctrl+C. They all eat a SIGINT.

$ perl -E 'fork for 1..2;say "ima $$"; $SIG{INT}=sub{die "woe $$\n"}; sleep 999'
ima 80920
ima 80922
ima 80921
ima 80923
^Cwoe 80920
woe 80922
woe 80921
woe 80923
$

(Add strace or sysdig or such to see the system calls or signals involved.)
Can Ctrl+C send the SIGINT signal to multiple processes?
1,432,100,072,000
I have a statusbar (lemonbar) to which I pipe the output of a couple of scripts (time, battery, volume, etc.). These scripts, and the statusbar itself, are all started in a single bash script statusbar. When the statusbar process is killed, it cleans up after itself by attempting to kill its children, like so:

trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT

This all works fine if I call statusbar in a terminal, and then quit it with a SIGTERM signal. However, when I start statusbar in my .xinitrc file like this: statusbar &, the statusbar script is not able to clean up after itself anymore. The reason for this is that it is in the same process group as the .xinitrc script, together with all the other processes that are started there. I discovered this by following this answer.

The question is: can I put the statusbar process and all its children in their own process group from .xinitrc, so that it can clean up after itself nicely? Alternatively, maybe there is a different way of killing all the children of statusbar?

P.S.: I realize that wanting to cleanly kill a statusbar is not very common. However, I would like to do it so that I can restart it easily and eventually change my colour theme dynamically, without having to exit X.
You can try using setsid (part of the util-linux package) in the .xinitrc to start the script in a new session: setsid statusbar but will it still receive your signals?
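A quick sketch of what setsid changes (util-linux setsid assumed; it only forks when the caller is already a group leader, so in a plain script $! refers to the command itself):

```shell
#!/bin/sh
# Start a command in a new session: its SID (and PGID) become its own PID,
# detaching it from the caller's process group.
setsid sleep 30 &
pid=$!            # setsid execs the command here, so $! is sleep's PID
sleep 1           # give it a moment to start

ps -o pid,pgid,sid -p "$pid"   # pid, pgid, and sid should all match
kill -TERM -- "-$pid"          # the new group can be signalled as a whole
```

Because the command now leads its own session and group, a cleanup trap like kill -- -$$ inside it will no longer take .xinitrc's whole group down with it.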
Start new process group in .xinitrc
1,432,100,072,000
I have read that when you press Ctrl+C, then a SIGINT signal will be sent to the foreground process group. Now the accepted answer in this question says: Basically, your signal is received by all foreground processes, ie the shell and the program, I have executed cat within bash, and noticed that the PGID for bash and cat are different, so they do not belong to the same process group. So when you press Ctrl+C, only cat will receive the SIGINT signal (and so the answer I quoted is wrong), am I correct?
That question is about a bash script. You're running bash interactively. This makes a difference for process groups: that's the whole reason why process groups were invented. The intent of a process group is to capture all the processes that are involved in one interactively-started task. So an interactive shell starts each job in a separate process group, whereas a shell running a script doesn't create new process groups.
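A small check (plain bash; -c mode runs non-interactively with job control off) that a shell running a script keeps its children in its own group:

```shell
#!/bin/sh
# In a non-interactive shell (as when running a script), a background child
# shares the shell's process group instead of getting a new one, so a
# terminal-generated SIGINT would reach both.
bash -c '
  sleep 30 &
  shell_pgid=$(ps -o pgid= -p "$$")
  child_pgid=$(ps -o pgid= -p "$!")
  kill "$!"
  echo "shell=$shell_pgid child=$child_pgid"   # same group in script mode
'
```

Run interactively instead (cat started at a prompt, as in the question), the job would be given its own group, which is why bash and cat showed different PGIDs.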
If the shell is running a program, will the shell also receive a SIGINT signal when Ctrl+C is pressed?
1,432,100,072,000
I have a script spawning two zombies. I can kill the group via kill -- -<parent-pid>, but when invoked by the PHP interpreter, that won’t work, although killing every single process manually will. The script is:

#!/bin/bash
sleep 1d&
sleep 1d

and the PHP file just invokes it:

<?php exec("./spawn") ?>

From the shell directly:

$ ./spawn&
[1] 19871
$ pstree -p 19871
spawn(19871)─┬─sleep(19872)
             └─sleep(19873)
$ kill -- -19871
$ pstree -p 19871
[1]+ Terminated ./spawn

... and via PHP:

$ php -f zomby.php &
[1] 19935
$ pstree -p 19935
php(19935)───sh(19936)───spawn(19937)─┬─sleep(19938)
                                      └─sleep(19939)
$ kill -- -19937
bash: kill: (-19937) - No matching process found
$ kill -- -19936
bash: kill: (-19936) - No matching process found
$ kill 19939 19938 19937
$
Terminated
[1]+ Fertig php -f zomby.php

only killing the PHP parent process will work:

$ php -f zomby.php &
[1] 20021
$ pstree -p 20021
php(20021)───sh(20022)───spawn(20023)─┬─sleep(20024)
                                      └─sleep(20025)
$ kill -- -20021
$ pstree -p 20021
[1]+ Terminated php -f zomby.php

Any ideas on that?
The kill command, when given a PID that is < -1, treats it as a process group ID (PGID), not as a process ID. This is documented in info kill:

‘PID < -1’ The process group whose identifier is −PID.

If we take your example again:

$ pstree -p 19935
php(19935)───sh(19936)───spawn(19937)─┬─sleep(19938)
                                      └─sleep(19939)

The PGID is the PID of the topmost parent process of the process tree, in this case 19935. However, you tried to kill the processes belonging to the process groups with IDs 19937 and 19936, neither of which are actually process group IDs. The PGID is 19935.

You can perhaps see this more clearly with ps. If I run the same commands on my system:

$ php -f ./zombie.php &
[2] 12882
$ ps -o pid,ppid,pgid,command | grep -E '[P]GID|[1]2882'
  PID  PPID  PGID COMMAND
12882  1133 12882 php -f ./zombie.php
12883 12882 12882 /bin/bash ./spawn
12884 12883 12882 sleep 1d
12885 12883 12882 sleep 1d

In the example above, the PGID of the group is 12882, so that's what I need to use if I want to kill everything in the group.

When you run the command from the shell directly, the topmost parent process is the PID of the shell script, so you can kill all processes in its tree by running kill -- -PID:

$ ./spawn &
[3] 14213
terdon@tpad foo $ ps -o pid,ppid,pgid,command | grep -E '[P]GID|[1]4213'
  PID  PPID  PGID COMMAND
14213  1133 14213 /bin/bash ./spawn
14214 14213 14213 sleep 1d
14215 14213 14213 sleep 1d

But that's because the PID of the shell script is the PGID of the group.
Cannot kill process group when invoked by PHP
1,432,100,072,000
I am trying to capture the PID of a function executed in the background, but I seem to get the wrong number. See the following script:

$ cat test1.sh
#!/bin/bash
set -x

child() {
    echo "Child thinks is $$"
    sleep 5m
}

child &
child_pid="$!"
echo "Parent thinks pid $child_pid"
sleep 3
kill -- -"$child_pid" # but it is wrong, get "No such process"
kill -- -"$$"
wait

I would expect the parent to terminate the child process of the function, but I get:

$ ./test1.sh
+ child_pid=44551
+ echo 'Parent thinks pid 44551'
Parent thinks pid 44551
+ sleep 3
+ child
+ echo 'Child thinks is 44550'
Child thinks is 44550
+ sleep 5m
+ kill -- -44551
./test1.sh: line 15: kill: (-44551) - No such process
+ kill -- -44550
Terminated

I have read this question Get PID of a function executed in the background, but the answers seem to contradict what I am observing. So how can I fix the above code, to get the correct PID of the function from the parent? After some testing it seems that the command without the minus works:

kill -- "$child_pid"

But this isn't sufficient for my need, because I want to terminate any subprocesses of child too when I kill it.
$! gives the correct value. $$ does not; use $BASHPID instead. See man bash:

BASHPID Expands to the process ID of the current bash process. This differs from $$ under certain circumstances, such as subshells that do not require bash to be re-initialized. Assignments to BASHPID have no effect.

Not sure why your kill -- -PID is not working; I cannot reproduce it. You could use pkill -P PID instead.
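A quick sanity check of the difference (bash-specific; $BASHPID is not a POSIX sh variable):

```shell
#!/usr/bin/env bash
# In the main shell, $$ and $BASHPID agree; inside a subshell, $$ keeps the
# main shell's PID while $BASHPID is the subshell's own PID.
echo "main: \$\$=$$ BASHPID=$BASHPID"
( echo "sub:  \$\$=$$ BASHPID=$BASHPID" )
```

This is exactly why the child function in the question printed a different number than the parent captured via $!.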
PID of background function "$!" gives wrong value
1,432,100,072,000
In The Linux Programming Interface To see why orphaned process groups are important, we need to view things from the perspective of shell job control. Consider the following scenario based on Figure 34-3: Before the parent process exits, the child was stopped (perhaps because the parent sent it a stop signal). When the parent process exits, the shell removes the parent’s process group from its list of jobs. The child is adopted by init and becomes a background process for the terminal. The process group containing the child is orphaned. At this point, there is no process that monitors the state of the stopped child via wait(). Since the shell did not create the child process, it is not aware of the child’s existence or that the child is part of the same process group as the deceased parent. Furthermore, the init process checks only for a terminated child, and then reaps the resulting zombie process. Consequently, the stopped child might languish forever, since no other process knows to send it a SIGCONT signal in order to cause it to resume execution. Even if a stopped process in an orphaned process group has a still-living parent in a different session, that parent is not guaranteed to be able to send SIGCONT to the stopped child. A process may send SIGCONT to any other process in the same session, but if the child is in a different session, the normal rules for sending signals apply (Section 20.5), so the parent may not be able to send a signal to the child if the child is a privileged process that has changed its credentials. To prevent scenarios such as the one described above, SUSv3 specifies that if a process group becomes orphaned and has any stopped members, then all members of the group are sent a SIGHUP signal, to inform them that they have become disconnected from their session, followed by a SIGCONT signal, to ensure that they resume execution. If the orphaned process group doesn’t have any stopped members, no signals are sent. 
The default action for SIGHUP is termination. So does the kernel implicitly sending SIGHUP to a process group that becomes orphaned and contains a stopped process mean that processes in the group without their own SIGHUP dispositions will be terminated? Will any stopped process in the group first be resumed by SIGCONT and then terminated by SIGHUP? To make the processes in the group survive, do they need their own SIGHUP dispositions, or to ignore SIGHUP? Thanks.
Please let me turn mosvy's comment into an answer: Yes & Yes. You can see the Notes in my answer here for link(s) to the linux source implementing that (the comments from the linux source and the actual code are much better than that prose or my glossing over it)
Does kernel sending SIGHUP to a process group that becomes orphaned and contains a stopped process terminate all the processes by default?
1,432,100,072,000
Is it possible that a past process group leader's PID gets reused by another process and this latter process starts a new process group? In this case the first created process group and the second one would have the same PGID, which I consider a situation that should be avoided. Does Linux avoid assigning a PID which is a valid PGID?
No, that's not possible. It's forbidden by the standard: The fork() function shall create a new process. The new process (child process) shall be an exact copy of the calling process (parent process) except as detailed below: The child process shall have a unique process ID. The child process ID also shall not match any active process group ID.
Process Group Leader's PID reused
1,432,100,072,000
From APUE: A process can set the process group ID of only itself or any of its children. Furthermore, it can’t change the process group ID of one of its children after that child has called one of the exec functions. Why can't it "change the process group ID of one of its children after that child has called one of the exec functions"? Thanks.
I do not know the "official" reason, but I would guess that the idea is that a process shall not have to expect that its PGID is suddenly changed. So this is allowed after a fork, so that shell pipelines can be set up, but after the execve() the new binary finds a certain state, and this shall be permanent (until the new binary decides to change it).
Why can't a process change the process group ID of one of its children after that child has called one of the exec functions?
1,432,100,072,000
I am looking at a scenario where I want to run a program / command with sudo as part of a software test. The commands are launched from a Python script based on the subprocess module. I am attempting to avoid having to run the entire test suite with super user privileges. Let's say for the purpose of this example, it's top.

My command starts a few sub-processes of its own and may run into a deadlock. After a timeout, I want to kill it (and its children). The obvious solution appears to be to make my command head of a new session / process group, allowing me to kill it and its children altogether at once. What I can NOT figure out is how to make this work with sudo. In my case, sudo is always password protected without exception, and I want to keep it this way ... if possible.

Works:

setsid top

Works, but does NOT spawn a new process group:

sudo setsid top

Problematic - hard to get the root password in, in a safe and sound manner:

setsid sudo top

I did not manage to make (3) work in a clean way. I messed around with SUDO_ASKPASS. What surprised me was the fact that (2) actually runs but does NOT give me the desired new process group.

systemd─┬─ ...
        ├─kdeinit5─┬─ ...
        │          └─yakuake─┬─2*[bash]
        │                    ├─bash───sudo───top
        │                    ├─bash───pstree
        ...
Scenario 2 can be fixed like this, without the use of setsid:

sudo -b command

This will create a new process group, directly below the system's init process, including the sudo command.

One word of advice, though: If one starts a process group like this with Python's subprocess.Popen, the resulting object's PID (subprocess.Popen(...).pid) can NOT be used for determining the PGID for eventual use in a pattern like kill -9 -- -{PGID} (it will kill the Python interpreter instead of the newly spawned process group). My workaround (requires psutil):

import os
import psutil
import subprocess

def __get_pid__(cmd_line_list):
    for pid in psutil.pids():
        proc = psutil.Process(pid)
        if cmd_line_list == proc.cmdline():
            return proc.pid
    return None

cmd = ['sudo', '-b', 'command']
cmd_proc = subprocess.Popen(cmd)

print('Wrong PGID: %d' % os.getpgid(cmd_proc.pid))
print('Right PGID: %d' % os.getpgid(__get_pid__(cmd)))
`sudo setsid command` does not spawn new process group?
1,432,100,072,000
When using setpgrp, vi (and other tty programs) work completely differently than if setpgrp is not used. Example:

perl -MIPC::Open3 -e '$pid= open3("<&STDIN", ">&STDOUT", ">&STDERR", qw(perl -e),q(exec qw(bash -c),qq(vi foo))); wait'

That works great and calls vi foo. But add setpgrp:

perl -MIPC::Open3 -e '$pid= open3("<&STDIN", ">&STDOUT", ">&STDERR", qw(perl -e),q(setpgrp;exec qw(bash -c),qq(vi foo))); wait'

and then it does not work so well. Tested on GNU/Linux (Mint), FreeBSD, OpenBSD, Solaris, HPUX, AIX, Dragonfly. All give similar behaviour. Why? Can I somehow create a process group and still spawn tty tools like vi?

Background

The above is part of a possible extension of GNU Parallel that will allow killing the process groups instead of the processes, and is thus a tiny corner of the full program. An answer to just run vi foo is thus not a useful answer.
From setpgrp man page from Darwin/MacOS (BSD-based): If the calling process is not already a session leader, setpgrp() sets the process group ID of the calling process to that of the calling process. Any new session that this creates will have no controlling terminal. There's your answer.
setpgrp causes tty gone
1,432,100,072,000
Suppose that app X is running in the foreground in tmux pane. I'd like to send a given signal, e.g. SIGUSR1, to app X. Can I configure a tmux keybinding to send a signal to the currently-selected pane's foreground process (or process group)?
On my Kubuntu, ps can give me the ID of the foreground process group on the terminal that the process is connected to. The keyword is tpgid. If I tell ps to query the process identified by tmux as #{pane_pid} then I will get the foreground process group ID in this pane. The following binding (in ~/.tmux.conf) will make prefix k send SIGUSR1 to the foreground process group (the default prefix is Ctrl+b):

bind-key k run-shell 'kill -s USR1 -- "-$(ps -o tpgid:1= -p #{pane_pid})"'

Notes:

- The dash (-) before $(…) is responsible for targeting the foreground process group. You can try without the dash to target one process only; it will be the foreground process group "leader". There is no guarantee the "leader" (still) exists though. Targeting the group is a sane approach; it's similar to Ctrl+c sending SIGINT to the group, although the mechanism is different.
- :1 is taken from this answer: Format ps command output without whitespace. Getting rid of leading spaces is crucial when we prepend the dash.
- There's a race condition: kill acts after ps and there is no guarantee the process group is still in the foreground (or exists at all). You may be unfortunate and hit prefix k around the time the process you want to target exits. This way you may inadvertently send SIGUSR1 to another process. It may be the shell. And then…
- The default action for SIGUSR1 is to terminate. In particular your interactive shell being in the foreground (i.e. awaiting a command) may exit upon prefix k. Bash does. You can prevent this by setting up a trap beforehand: trap '' USR1 will make the shell ignore the signal. In this case child processes will also ignore the signal, unless they explicitly choose to handle it (e.g. dd does this). trap : USR1 will make the shell "ignore" the signal (react by doing nothing), but this will not affect the behavior of child processes.
Send signal to process in tmux pane
1,432,100,072,000
How can a process become a member of a process group? My attempt at an answer: the process needs to be a child of the group's leader, or we need to use the system call setpgid(). Also, two more questions:

1) How can a process become the leader of a group? I can only think of creating a new process, which will automatically become a leader.

2) Can a group have many leaders? I think it is impossible, but I can't find any information about this.

Are my answers correct?
I can only think about creating a new process, which will automatically become a leader

False.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;
    pid = fork();
    printf("%d member of %d\n", getpid(), getpgrp());
    return 0;
}

The new process shares the group of the parent:

$ make leadership
cc -g leadership.c -o leadership
$ ./leadership
65617 member of 65617
65618 member of 65617
$

Only with setpgid(2) or setsid(2) or similar system calls will the group or leadership change.

2) Can group have many leaders?

False. Quoting from Stevens, "Advanced Programming in the UNIX Environment" (2nd ed.), chapter 9 section 4 (p. 243):

"Each process group can have a process group leader. The leader is identified by having its process group ID equal to its process ID."

Singular leader, and a very specific case for identifying said leader.
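To round this out, a sketch in Python of the setpgid(2) route: a fork()ed child merely inherits its parent's group, but it can make itself the leader of a new group by calling setpgrp()/setpgid() itself. Here preexec_fn runs os.setpgrp in the child between fork() and exec().

```python
import os
import subprocess
import sys

# The child calls setpgrp() before exec, so it becomes the leader of a
# brand-new process group (its PGID equals its own PID).
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpid() == os.getpgrp())"],
    preexec_fn=os.setpgrp,
    stdout=subprocess.PIPE,
)
assert os.getpgid(child.pid) == child.pid   # child leads its own group
out, _ = child.communicate()
assert out.strip() == b"True"               # the child agrees
```

(Without preexec_fn=os.setpgrp, os.getpgid(child.pid) would equal the parent's group ID instead.)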
How can process become a member of a process group?
1,432,100,072,000
I searched a lot but didn't find a solution, so it may be a silly question. The format of waitpid is

pid_t waitpid (pid_t pid, int *status, int options)

The pid parameter specifies exactly which process or processes to wait for. Its values fall into four camps:

< -1  Wait for any child process whose process group ID is equal to the absolute value of this value.
-1    Wait for any child process. This is the same behavior as wait().
0     Wait for any child process that belongs to the same process group as the calling process.
> 0   Wait for the child process whose pid is exactly the value provided.

Now the question is: what if the parent and child have different group IDs, and the group ID of the child is 1? How do I use waitpid for this specific child? We can't use -1, as that waits for any child.
You can only wait for children of your process. If the child changes its process group ID, the new process group ID can be used as a negative number with waitpid().

BTW: the function waitpid() has been deprecated since 1989. The modern function is waitid(), and it supports what you would like:

waitid(idtype, id, infop, opts)
    idtype_t idtype;
    id_t id;
    siginfo_t *infop;  /* Must be != NULL */
    int opts;

If you like to wait for a process group, use:

waitid(P_PGID, pgid, infop, opts);

So if you really have a process under process group ID 1, call:

waitid(P_PGID, 1, infop, opts);

But since init already uses this process group ID, you would need to be the init process in order to be able to have children under pgid 1.

This however will not work if you are on a platform that does not implement waitid() as a syscall but as an emulation on top of the outdated waitpid(). The advantages of waitid() are:

- It allows to cleanly specify what to wait for (e.g. P_PID, P_PGID, P_ALL).
- It returns all 32 bits from the exit(2) parameter in the child back to the parent process.
- It allows waiting with the flag WNOWAIT, which does not reap the child and keeps it for later in the process table.

BTW: The siginfo_t pointer in waitid() is identical to the second parameter of the signal handler function for SIGCHLD.
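A quick sketch of waitid() in action, via Python's thin os.waitid wrapper (Linux, Python 3.3+; os.P_PGID would take a process group ID where os.P_PID takes a process ID):

```python
import os

# Spawn a child that exits with status 7, then wait for it explicitly
# by PID using waitid(); the siginfo-style result carries the status.
pid = os.fork()
if pid == 0:
    os._exit(7)

info = os.waitid(os.P_PID, pid, os.WEXITED)
assert info.si_pid == pid
assert info.si_status == 7            # the exit status comes back intact
```

Note that, as the answer says, actually using P_PGID with group 1 would require your children to be in that group, which in practice only init can arrange.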
Use waitpid for child having groupid 1
1,610,233,766,000
On Lubuntu 18.04, I run a shell in lxterminal. Its controlling terminal is the current pseudoterminal slave:

$ tty
/dev/pts/2

I would like to know what the relations are between my current controlling terminal /dev/pts/2 and /dev/tty. /dev/tty acts like my current controlling terminal /dev/pts/2:

$ echo hello > /dev/tty
hello
$ cat < /dev/tty
world
world
^C

But they seem to be unrelated files, instead of one being a symlink or hardlink to the other:

$ ls -lai /dev/tty /dev/pts/2
 5 crw--w---- 1 t    tty 136, 2 May 31 16:38 /dev/pts/2
13 crw-rw-rw- 1 root tty   5, 0 May 31 16:36 /dev/tty

For different sessions with different controlling terminals, is /dev/tty guaranteed to be each session's controlling terminal? How can it be different controlling terminals, without being a symlink or hardlink? So what are their relations and differences? Any help is much appreciated!

This post originated from an earlier one: Do the output of command `tty` and the file `/dev/tty` both refer to the controlling terminal of the current bash process?
The tty manpage in section 4 claims the following:

The file /dev/tty is a character file with major number 5 and minor number 0, usually of mode 0666 and owner.group root.tty. It is a synonym for the controlling terminal of a process, if any.

In addition to the ioctl(2) requests supported by the device that tty refers to, the ioctl(2) request TIOCNOTTY is supported.

TIOCNOTTY
Detach the calling process from its controlling terminal. If the process is the session leader, then SIGHUP and SIGCONT signals are sent to the foreground process group and all processes in the current session lose their controlling tty. This ioctl(2) call works only on file descriptors connected to /dev/tty. It is used by daemon processes when they are invoked by a user at a terminal. The process attempts to open /dev/tty. If the open succeeds, it detaches itself from the terminal by using TIOCNOTTY, while if the open fails, it is obviously not attached to a terminal and does not need to detach itself.

This would explain in part why /dev/tty isn’t a symlink to the controlling terminal: it would support an additional ioctl, and there might not be a controlling terminal (but a process can always try to access /dev/tty). However the documentation is incorrect: the additional ioctl isn’t only accessible via /dev/tty (see mosvy’s answer, which also gives a more sensible explanation for the nature of /dev/tty).

/dev/tty can represent different controlling terminals, without being a link, because the driver which implements it determines what the calling process’ controlling terminal is, if any. You can think of this as /dev/tty being the controlling terminal, and thus offering functionality which only makes sense for a controlling terminal, whereas /dev/pts/2 etc. are plain terminals, one of which might happen to be the controlling terminal for a given process.
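The (5, 0) device number from the manpage quote is easy to check programmatically; a small sketch (Linux assumed):

```python
import os
import stat

# /dev/tty is its own character device with major 5, minor 0 -- not a
# symlink or hardlink to whatever the controlling terminal happens to be.
st = os.stat("/dev/tty")
assert stat.S_ISCHR(st.st_mode)       # a character device, not a link
assert os.major(st.st_rdev) == 5
assert os.minor(st.st_rdev) == 0
# Pseudo-terminal slaves such as /dev/pts/2 live on major 136 instead,
# which is the "unrelated files" observation in the question.
```

This matches the ls -lai output above: 136, 2 for the pts slave versus 5, 0 for /dev/tty.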
what relations are between my current controlling terminal and `/dev/tty`?
1,610,233,766,000
When the slave side of the pty is not opened, strace on the process which does read(master_fd, &byte, 1); shows this:

read(3,

So, when nobody is connected to the slave side of the pty, read() waits for data; it does not return with an error. But when the slave side of the pty is opened by a process and that process exits, the read() fails with this:

read(3, 0xbf8ba7f3, 1) = -1 EIO (Input/output error)

The pty is created with

master_fd = posix_openpt(O_RDWR|O_NOCTTY)

The slave side of the pty is opened with

comfd = open(COM_PORT, O_RDWR|O_NOCTTY)

Why does the read() fail when the process which opened the slave side of the pty exits? Where is this described?
On Linux, a read() on the master side of a pseudo-tty will return -1 and set ERRNO to EIO when all the handles to its slave side have been closed, but will either block or return EAGAIN before the slave has been first opened. The same thing will happen when trying to read from a slave with no master. For the master side, the condition is transient; re-opening the slave will cause a read() on the master side to work again. On *BSD and Solaris the behavior is similar, with the difference that the read() will return 0 instead of -1 + EIO. Also, on OpenBSD a read() will also return 0 before the slave is first opened. I don't know if there's any standard spec or rationale for this, but it allows to (crudely) detect when the other side was closed, and simplifies the logic of programs like script which are just creating a pty and running another program inside it. The solution in a program which manages the master part of a pty to which other unrelated programs can connect is to also open and keep open a handle to its slave side. See related answer: read(2) blocking behaviour changes when pts is closed resulting in read() returning error: -1 (EIO) Why the read() exits when process which opened slave side of the pty exits? When a process exits, all its file descriptors are automatically closed.
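The Linux behaviour described above, including its transience, can be reproduced in a few lines of Python (os.openpty opens both sides at once, so closing the slave fd leaves the "all slave handles closed" state):

```python
import errno
import fcntl
import os

master, slave = os.openpty()          # master and slave, both open
slave_name = os.ttyname(slave)
os.close(slave)                       # close the last slave handle

try:
    os.read(master, 1)
    raise AssertionError("expected the read to fail")
except OSError as e:
    first_errno = e.errno             # EIO: all slave handles are closed

# The condition is transient: re-opening the slave revives the master.
slave2 = os.open(slave_name, os.O_RDWR | os.O_NOCTTY)
fcntl.fcntl(master, fcntl.F_SETFL, os.O_NONBLOCK)
try:
    os.read(master, 1)
    raise AssertionError("expected the read to fail")
except OSError as e:
    second_errno = e.errno            # EAGAIN: "no data yet", not EIO

assert first_errno == errno.EIO
assert second_errno == errno.EAGAIN
os.close(slave2)
os.close(master)
```

The second read is made non-blocking only so the sketch can show "would block" (EAGAIN) instead of hanging; a blocking read would simply wait, as in the question's strace.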
Why blocking read() on a pty returns when process on the other end dies?
1,610,233,766,000
Linux has 7 virtual consoles, which correspond to 7 device files /dev/tty[n]. Is a virtual console running as a process, just like a terminal emulator? (I am not sure. It seems a virtual console is part of the kernel, and if that is correct, it can't be a process.) Is a virtual console implemented on top of a pseudoterminal, just like a terminal emulator? (I guess not. Otherwise, a virtual console's device file would be /dev/pts/[n] instead of /dev/tty[n].) Thanks.
That is incorrect. There's a terminal emulator program built into the Linux kernel. It doesn't manifest as a running process with open file handles. Nor does it require pseudo-terminal devices. It's layered on top of the framebuffer and the input event subsystem, which it uses internal kernel interfaces to access. It presents itself to application-mode systems as a series of 63 (not 7) kernel virtual terminal devices, /dev/tty1 to /dev/tty63.

User-space virtual terminals are implemented using pseudo-terminal devices. Pseudo-terminal devices, kernel virtual terminal devices, and real terminal devices layered on top of serial ports are the three types of terminal device (as far as applications programs are concerned) in Linux.

Because of a lack of coördination, Linux documentation is now quite bad on this subject. There has been for several years no manual page for kernel virtual terminal devices on several Linux operating systems, although there are pages for the other two types of terminal device. This manual page would have explained the correct number of devices and their device file names, and used to read:

A Linux system has up to 63 virtual consoles (character devices with major number 4 and minor number 1 to 63), usually called /dev/ttyn with 1 <= n <= 63. The current console is also addressed by /dev/console or /dev/tty0, the character device with major number 4 and minor number 0.

Debian people noticed that Debian was missing a console(4) manual page in 2014, and switched to installing the one from the Linux Manpages Project, only for people in that same project to delete their console(4) manual page a year and a bit later in 2016 because "Debian and derivatives don't install this page" and "Debian no longer carries it".

Further reading

- https://unix.stackexchange.com/a/177209/5132
- https://unix.stackexchange.com/a/333922/5132
- Linux: Difference between /dev/console , /dev/tty and /dev/tty0
- What are TTYs >12 used for?
- ttyS. Linux Programmers' Manual. Michael Kerrisk. 1992-12-19.
- pty. Linux Programmers' Manual. Michael Kerrisk. 2017-09-15.
- https://dyn.manpages.debian.org/jessie/manpages/console.4.html
- https://dyn.manpages.debian.org/stretch/manpages/console.4.html
- https://dyn.manpages.debian.org/testing/manpages/console.4.html
- http://manpages.ubuntu.com/manpages/trusty/en/man4/console.4.html
- http://manpages.ubuntu.com/manpages/artful/en/man4/console.4.html
- http://manpages.ubuntu.com/manpages/bionic/en/man4/console.4.html
- http://manpages.ubuntu.com/manpages/cosmic/en/man4/console.4.html
- Vincent Lefevre (2014-12-27). manpages: some man pages have references to console(4), which no longer exists. Debian bug #774022.
- Dr. Tobias Quathamer (2016-01-05). "console.4: Is now included in this package. (Closes: #774022)". manpages 4.04-0.1. changelog.
- Marko Myllynen (2016-01-07). console(4) is out of date. Kernel bug #110481.
- Michael Kerrisk (2016-03-15). "console.4: Remove outdated page". man-pages. kernel.org.
- Jonathan de Boyne Pollard (2016). "Terminals". nosh Guide. Softwares.
- Jonathan de Boyne Pollard (2018). Manual pages for Linux kernel virtual terminal devices. Proposals.
- Jonathan de Boyne Pollard (2018). console. Linux Programmers' Manual. Proposals.
- Jonathan de Boyne Pollard (2018). vt. Linux Programmers' Manual. Proposals.
Is a virtual console running as a process and implemented based on pseudoterminal?
1,610,233,766,000
After having opened the master part of a pseudo-terminal with

int fd_pseudo_term_master = open("/dev/ptmx", O_RDWR);

the file /dev/pts/[NUMBER] is created, representing the slave part of the pseudo-terminal. Ignorant persons like me might imagine that after having done

ptsname(fd_pseudo_term_master, filename_pseudo_term_slave, buflen);

one should be set to simply do

int fd_pseudo_term_slave = open(filename_pseudo_term_slave, O_RDWR);

and be good. However there must be a very important use case for "locked" pseudo-terminal slaves, since, instead of keeping things simple, it is made necessary to use man 3 unlockpt to "unlock it" before the open call can be made. I was not able to find out what this use case is. What is the need for the pseudo-terminal to be initially locked? What is achieved with code like this (taken from a libc)?

/* Unlock the slave pseudo terminal associated with the master pseudo
   terminal specified by FD.  */
int
unlockpt (int fd)
{
#ifdef TIOCSPTLCK
  int save_errno = errno;
  int unlock = 0;

  if (ioctl (fd, TIOCSPTLCK, &unlock))
    {
      if (errno == EINVAL)
        {
          errno = save_errno;
          return 0;
        }
      else
        return -1;
    }
#endif
  /* If we have no TIOCSPTLCK ioctl, all slave pseudo terminals are
     unlocked by default.  */
  return 0;
}

If possible, an answer would detail a use case, historical or current. A bonus part of the question would be: do current Linux kernels still rely on this functionality of "locked pseudo-terminal slaves"?

Idea: Is this an inefficient attempt to avoid race conditions? While waiting for an answer I have looked more into the Linux kernel source, without finding a good answer myself. However, it appears that one use that can be extracted from an initial lockdown of the pseudo-terminal is to give the pseudo-terminal master process some time to set up access rights on the file at /dev/pts/[NUMBER], so as to prevent some user from accessing the file in the first place. Can this be part of the answer?
Strangely, though, it appears that such an "initial lockdown" state is not really able to prevent multiple openings of the slave file anyway, at least given what I conceive to be guaranteed atomicity here.
The old AT&T System 5 mechanism for pseudo-terminal slave devices was that they were ordinary persistent character device nodes under /dev. There was a multiplexor master device at /dev/ptmx. The old 4.3BSD mechanism for pseudo-terminal devices had parallel pairs of ordinary persistent master and slave device nodes under /dev. In both cases, this meant that the slave device files retained their last ownership and permissions after last file descriptor closure. Hence the evolution of the grantpt() function to fix up the ownership and permissions of the slave device file after a (re-used) pseudo-terminal had been (re-)allocated. This in turn meant that there was a window when a program was setting up a re-used pseudo-terminal between the open() and the grantpt() where whoever had owned the slave device beforehand could sneak in and open it as well, potentially gaining access to someone else's terminal. Hence the idea of pseudo-terminal slave character devices starting in a locked state where they could not be opened and being unlocked by unlockpt() after the grantpt() had been successfully performed. Over the years, it turned out that this was unnecessary. Nowadays, the slave device files are not persistent, because the kernel makes and destroys things in /dev itself. The act of opening the master device either resets the slave device permissions and ownership, or outright creates the slave device file afresh (in the latter case with the slave device file disappearing again when all open file descriptors are closed), in either case atomically in the same system call. On OpenBSD, this is part of the PTMGET I/O control's functionality on the /dev/ptm device. /dev is still a disc volume, and the kernel internally issues the relevant calls to create new device nodes there and reset their ownerships and permissions. On FreeBSD, this is done by the posix_openpt() system call. /dev is not a disc volume at all. It is a devfs filesystem. 
It contains no "multiplexor" device nor master device files, because posix_openpt() is an outright system call, not a wrapped ioctl() on an open file descriptor. Slave devices appear in the devfs filesystem under its pts/ directory. The kernel thus ensures that they have the right permissions and ownership ab initio, and there is no window of opportunity where they have stale ones. Thus the grantpt() and unlockpt() library functions are essentially no-ops, whose sole remaining functionality is to check their passed file descriptor and set EINVAL if it isn't the master side of a pseudo-terminal, because programs might be doing daft things like passing non-pseudo-terminal file descriptors to these functions and expecting them to return errors.

For a while on Linux, pseudo-terminal slave devices were persistent device nodes. The GNU C library's grantpt() wasn't a system call. Rather, it forked and executed a set-UID helper program named pt_chown, much to the dismay of the no set-UID executables crowd. (grantpt() has to allow an unprivileged user to change the ownership and permissions of a special device file that it does not necessarily own, remember.) So there was still the window of opportunity, and Linux still had to maintain a lock for unlockpt().

Its "new" devpts filesystem (where "new" means introduced quite a few years ago, now) almost permits the same way of doing things as on FreeBSD with devfs, however. There are some differences. There is still a "multiplexor" device. In the older "new" devpts system, this was a ptmx device in a different devtmpfs filesystem, with the devpts filesystem containing only the automatically created/destroyed slave device files. Conventionally the setup was /dev/ptmx and an accompanying devpts mount at /dev/pts.
But Linux people wanted to have multiple wholly independent instances of the devpts filesystem, for containers and the like, and it turned out to be quite hard synchronizing the (correct) two filesystems when there were many devtmpfs and devpts filesystems. So in the newer "new" devpts system all of the devices, multiplexor and slave, are in the one filesystem. For backwards compatibility, the default was for the new ptmx node to be inaccessible unless one set a new ptmxmode mount option. In the even newer still "new" devpts the ptmx device file in the devpts filesystem is now the primary multiplexor, and the ptmx in the devtmpfs is either a shim provided by the kernel that tries to mimic a symbolic link, a bind mount, or a plain old actual symbolic link to pts/ptmx. The kernel does not always set up the ownership and permissions as grantpt() should. Setting the wrong mount options, either a gid other than the tty GID or a mode other than 0620, triggers fallback behaviour in the GNU C library. In order to reduce grantpt() to a no-operation in the GNU C library as desired, the kernel must not assign the group of the opening process (i.e. there must be an explicit gid setting), the group assigned must be the tty group, and the mode of newly created slave devices must be exactly 0620. Not switching on /dev/pts/ptmx by default and the GNU C library not wholly reducing grantpt() to a no-op are both because the kernel and the C library are not maintained in lockstep. Each had to operate with older versions of the other. Linux still had to provide an older /dev/ptmx. The GNU C library still has to fall back to running pt_chown if there's not a new devpts filesystem with the correct mount options in place. The window of opportunity thus still exists for unlockpt() to guard against on Linux, if the devpts mount options are wrong and the GNU C library consequently has to fall back to actually doing something in grantpt(). 
Further reading

- https://unix.stackexchange.com/a/470853/5132
- What would be the best way to work around this glibc problem?
- https://unix.stackexchange.com/a/214685/5132
- Documentation/filesystems/devpts.txt. Linux kernel.
- Daniel Berrange (2009-05-20). /dev/pts must use the 'newinstance' mount flag to avoid security problem with containers. RedHat bug #501718.
- Jonathan de Boyne Pollard (2018). open-controlling-tty. nosh Guide. Softwares.
- Jonathan de Boyne Pollard (2018). vc-get-tty. nosh Guide. Softwares.
- Jonathan de Boyne Pollard (2018). pty-get-tty. nosh Guide. Softwares.
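As a footnote to the mount-options point above, here is a small sketch of how one might check the devpts options that decide whether grantpt() can be a no-op (the helper devpts_options and the sample mount line are hypothetical; on a live system one would feed it the contents of /proc/self/mounts):

```python
# Parse devpts mount options out of /proc/mounts-style text.
# Field layout per line: device mountpoint fstype options dump pass
def devpts_options(mounts_text):
    """Return the option dict of the first devpts mount found."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[2] == "devpts":
            opts = {}
            for opt in fields[3].split(","):
                key, _, value = opt.partition("=")
                opts[key] = value
            return opts
    return {}

sample = "devpts /dev/pts devpts rw,nosuid,noexec,gid=5,mode=620,ptmxmode=666 0 0"
opts = devpts_options(sample)
# grantpt() reduces to a no-op only with mode=620 and an explicit
# gid= pointing at the tty group (often GID 5):
assert opts["mode"] == "620"
assert opts["gid"] == "5"
```

If mode or gid differ, the GNU C library falls back to actually doing something in grantpt(), which is the remaining case unlockpt() guards.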
Is the pseudo-terminal lock (unlockpt / TIOCSPTLCK) a security feature?
1,610,233,766,000
I'm trying to figure out how I can reliably loop a read on a pt master I have. I open the ptmx, grant and unlock it as per usual:

/* ptmx stuff */

/* get the master (ptmx) */
int32_t masterfd = open("/dev/ptmx", O_RDWR | O_NOCTTY);
if(masterfd < 0){
    perror("open");
    exit(EXIT_FAILURE);
};

/* grant access to the slave */
if(grantpt(masterfd) < 0){
    perror("grantpt");
    exit(EXIT_FAILURE);
};

/* unlock the slave */
if(unlockpt(masterfd) < 0){
    perror("unlockpt");
    exit(EXIT_FAILURE);
};

comms_in->ptmx = masterfd;

Next I save the slave's name (yes, I know sizeof(char) is always 1):

/* get the path to the slave */
char * slavepathPtr;
char * slavePath;
size_t slavepathLen;
if((slavepathPtr = ptsname(masterfd)) == NULL){
    perror("ptsname");
    exit(EXIT_FAILURE);
}else{
    slavepathLen = strlen(slavepathPtr);
    slavePath = (char *) malloc(sizeof(char) * (slavepathLen + 1));
    strcpy(slavePath, slavepathPtr);
};

I then create a predictably named symlink to the slave (/dev/pts/number) in /dev/custom/predictable (which was provided as an argument to this program using getopts) and verify that its permissions are safe using calls to access, lstat, readlink, symlink and confirm that the program can continue execution, otherwise it calls unlink on the symlink and terminates the thread.

Finally the program ends up in this loop:

ssize_t read_result;
ssize_t write_result;
while(1){
    if((read_result = read(comms_in->ptmx, ptmxio_read_buffer, sizeof ptmxio_read_buffer)) <= 0){
        {
            /** calls thread ender routine */
            pthread_mutex_lock(&COMMS_MUTEX);
            comms_in->thread_statuses[PTMXIO_THREAD] = THREAD_FAILED;
            pthread_mutex_unlock(&COMMS_MUTEX);
            pthread_cond_signal(&SIG_PROGRAM_FINISHED);
            pthread_exit((void *) comms_in);
        }
    }else if((write_result = write(STDOUT_FILENO, ptmxio_read_buffer, read_result)) != read_result){
        {
            /** same as above */
        }
    };
};

On the system, I can run this program and all is swell. The read blocks.
When the pts symlink is opened with cu or picocom, bytes are successfully read up to the buffer limits either on my end or the kernel's end, depending on who's lower. The problem comes when the slave is closed. At this point, the read returns -1 -> EIO with error text: Input/output error, and will continue to do so, consuming a lot of CPU time if I choose to not terminate the thread and loop. When cu or picocom or even just an echo -en "some text" > /dev/pts/number is run again, the read blocks again, until bytes are available. In the case of the redirection into the symlink, obviously if it fills less than a buffer, read just gets that one buffer and continues to return -1 -> EIO again.

What's going on? I need a method that doesn't consume a lot of CPU, as this runs on a slow embedded application processor, and allows me to re-establish reads without losing bytes.

I noticed a thread making a call to this:

ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...})

and can't make much sense of what the 3 options are as they're not in my Linux headers anywhere. Note that 3 is comms_in->ptmx / masterfd.

Here is an lstat on the symlink and some extra information; note that the st_mode is unchanged before and after successful and unsuccessful reads.

‘ptmxio_thread’ failed read (-1) on /dev/pts/13 /dev/pts/13: Input/output error
‘ptmxio_thread’ ptsNum (from ioctl) 13
‘ptmxio_thread’ st_dev: 6, st_ino: 451, st_mode: 0000A1FF, st_nlink: 1
‘ptmxio_thread’ st_uid: 000003E8, st_gid: 000003E8, st_rdev: 0, st_size: 11
‘ptmxio_thread’ st_blksize: 4096, st_blocks: 0, st_atime: 1540963806, st_mtime: 1540963798
‘ptmxio_thread’ st_ctime: 1540963798
It's very simple: you should open and keep open a handle to the slave side of the pty in the program handling the master side. After you got the name with ptsname(3), open(2) it.

I noticed a thread making a call to this: ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) and can't make much sense of what the 3 options are as they're not in my Linux headers anywhere.. Note that 3 is comms_in->ptmx / masterfd.

ioctl(TCGETS) is tcgetattr(3), which is also called from isatty(3) and ptsname(3). It's defined in /usr/include/asm-generic/ioctls.h. As to the SNDCTL* and SNDRV*, they're because of bugs in older versions of strace.

int32_t masterfd = open("/dev/ptmx", O_RDWR | O_NOCTTY);

There is no point in making your program needlessly unportable. Use posix_openpt(3) instead.

slavepathLen = strlen(slavepathPtr);
slavePath = (char *) malloc(sizeof(char) * (slavepathLen + 1));
strcpy(slavePath, slavepathPtr);

That's what strdup(3) is for ;-)

And you should also handle your read() being interrupted by a signal, unless you're absolutely sure you (and all the library functions you call) set all the signal handlers with SA_RESTART.
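A small Python sketch of the suggested fix (Linux assumed): because the master-side program keeps its own slave handle open, other programs can open, write to, and close the slave without ever driving the master's read into the EIO state.

```python
import errno
import fcntl
import os

master, held_slave = os.openpty()     # held_slave stays open throughout
name = os.ttyname(held_slave)

# Simulate an unrelated program: open the slave, write, and exit.
w = os.open(name, os.O_RDWR | os.O_NOCTTY)
os.write(w, b"hello\n")
os.close(w)

data = os.read(master, 64)            # the written bytes (the default
assert data.startswith(b"hello")      # ONLCR turns "\n" into "\r\n")

fcntl.fcntl(master, fcntl.F_SETFL, os.O_NONBLOCK)
try:
    os.read(master, 1)
    errno_after = None
except OSError as e:
    errno_after = e.errno

assert errno_after == errno.EAGAIN    # would be EIO without held_slave
```

The non-blocking probe at the end only makes "would block" observable; in the actual daemon the blocking read simply waits for the next opener, which is exactly the low-CPU behaviour wanted.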
read(2) blocking behaviour changes when pts is closed resulting in read() returning error: -1 (EIO)
1,610,233,766,000
I want to stream a Linux terminal to my own program, and as far as I understand this is done by opening /dev/ptmx to start a new pts. I have tested this and it does indeed work (it creates a new file in /dev/pts). But I am not sure how I am supposed to actually read and write to this terminal. Writing directly to /dev/pts/(pts number) I just get an input/output error. Also, am I supposed to open /dev/ptmx and /dev/pts/(pts number) at the same time with the same program? Am I supposed to somehow open a shell first? I find this stuff kinda confusing and I have not been able to find much information except for this man page: http://man7.org/linux/man-pages/man4/pts.4.html
There are two distinct parts to this sort of thing. As you have determined, you open the master side of the pseudo-terminal and this creates a slave-side device file that can be opened. The ptsname() library function allows something with an open file descriptor for the master side to determine this device name. The same or another program, in a different process, opens the slave side, treating it exactly as real and virtual terminals are treated by the login subsystem: setting that process as a session leader; setting the slave side as the controlling terminal for the session; and setting standard input, output, and error as open file descriptors to the slave side. It then chain loads whatever interactive program is appropriate, which can indeed be a shell amongst other things. The second part cannot proceed, on several operating systems, until the first part has called the grantpt() and unlockpt() library functions. Interlocks in the kernel prevent the slave side from being openable until these have happened on the master side. Interestingly, these functions (which date from AT&T Unix System 5 Release 4) have proven to be unnecessary. They result from implementations where the slave side device is created owned by the wrong user account and with the wrong permissions, or even older implementations where slave side devices are persistent character device nodes (whose permissions and ownership persist from what they were last set to) and are not created on the fly, resulting in windows of opportunity in various circumstances for an attacker program run by other users to gain access to the terminal. But nowadays some operating system kernels simply give the slave side devices the appropriate ownership and permissions right from the get-go, allowing these functions to essentially be no-operations as a consequence. FreeBSD and OpenBSD both work this way nowadays. 
Unfortunately, despite several rumblings to this effect by kernel developers over the years, Linux is not one such kernel. The second part is intentionally vague about exactly what different process this is. A common architecture is for the master-side process to fork(), call ptsname(), open the slave device, and close the master side file descriptor. This is what pty-run in the nosh toolset, out of which one can build tools such as ptybandage and ptyrun, does. It's how script, GNU Screen, tmux, and GUI terminal emulators such as XTerm work. But this is not a necessary thing. As long as it knows the filename to open, the slave side process does not need to be a fork()ed child of the master side process. Indeed, in not being it does away with any need for knowledge about the master side or that the terminal is a pseudo-terminal in particular. In my user-space virtual terminal subsystem, for example, the process running console-terminal-emulator creates a symbolic link with a known fixed name pointing to the slave device filename. Entirely separate service processes use the known fixed name to open the slave side device without need to know the exact /dev/pts/N name that the kernel happened to use each time. These service processes operate identically to similar service processes attached to kernel virtual terminals, and largely the same as service processes attached to real terminals. The slave side of a pseudo-terminal is, after all, designed to work just like the other two sorts of terminals. Further reading Jonathan de Boyne Pollard (2014). pty-get-tty. nosh Guide. Softwares. Jonathan de Boyne Pollard (2014). pty-run. nosh Guide. Softwares. Jonathan de Boyne Pollard (2014). console-terminal-emulator. nosh Guide. Softwares. Jonathan de Boyne Pollard (2018). The head0 user-space virtual terminal. nosh Guide. Softwares. grantpt(), unlockpt(), and ptsname(). §3. FreeBSD Library Functions Manual. 2008-08-20. 
- What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)?
- How do I come by this pty and what can I do with it?
- Mocking a pseudo tty (pts)
- Eric W. Biederman (2015-12-11). devpts: Sensible /dev/ptmx & force newinstance. Linux kernel mailing list.
- Eric W. Biederman (2016-04-08). devpts: Teach /dev/ptmx to find the associated devpts via path lookup. Linux kernel mailing list.
- Jonathan de Boyne Pollard (2016). "Daniel J. Bernstein's ptyget toolset". djbwares. Softwares.
- Jonathan de Boyne Pollard (2010). Daniel J. Bernstein on TTYs in UNIX. Frequently Given Answers.
documentation on ptmx and pts [closed]
1,610,233,766,000
In a diagram from APUE, where is the physical terminal device or virtual console that the terminal emulator reads from and writes to? Which process opens, reads from, and writes to the physical terminal device or virtual console? Is it the terminal emulator?
See What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)? for lots of useful context. The point of a terminal emulator is to emulate the physical terminals of old. None of the connections in the APUE diagram correspond to anything physical. When it starts a shell, the terminal emulator opens the PTY master, allocates a PTY slave, sets the appropriate line discipline (if necessary), and execs the shell with the corresponding file descriptors as standard input etc. The terminal emulator’s job then consists of emulating the behaviour of a physical terminal, implementing the display (typically using X or Wayland), and the user input (ditto).
How does a terminal emulator read and write with a physical terminal device?
1,610,233,766,000
From The Linux Programming Interface, under "data transfer" under "communication", we have "byte stream", "message" and "pseudoterminal". Does a pseudoterminal belong under byte stream instead, just as a pipe does? If not, why not?
Consider the various modes a pseudoterminal can be in: in raw mode, it would behave much like a byte stream, but in cooked mode, it becomes more message-like.
Does pseudoterminal transfer byte stream or message?
1,610,233,766,000
I'm trying to understand ssh's -t option:

    -t      Force pseudo-terminal allocation.  This can be used to execute
            arbitrary screen-based programs on a remote machine, which can
            be very useful, e.g. when implementing menu services.  Multiple
            -t options force tty allocation, even if ssh has no local tty.

So, a TTY is a device. A way to refer to a TTY is by descriptor (which is obtained by opening the TTY device). STDIN, STDOUT and STDERR are descriptors, but they do not necessarily refer to a TTY device. The -t option forces them to refer to a TTY device. Is this the correct way of reasoning in order to understand what this option does? And what is so special about a TTY that may not be achieved using ordinary STDIN, STDOUT and STDERR? An example use case of the -t option is welcome.

By which mechanism does ssh allocate that TTY? Does ssh create the new TTY on the server or on the client? How to check this? (A new node in /dev/ must appear, or something...) And how is this new TTY tied to the existing STDIN, STDOUT and STDERR?
> -t option forces [the standard file descriptors] to refer to a TTY device. Is this the correct way of reasoning in order to understand what this option does?

No. The -t option will run the command on the remote machine with its stdin/out/err connected to a pseudo-tty slave instead of a pair of pipes. Running it in a pty is the default if (a) no explicit command is given and (b) the stdin of the ssh client is itself a tty. You need a single -t to force tty allocation even when the (a) condition is not satisfied, and two of them (-tt) when (b) is not satisfied.

> By which mechanism does ssh allocate that TTY?

By some system-dependent mechanism. Nowadays, it's mostly the standard master_fd = posix_openpt() followed by slave_fd = open(ptsname(master_fd)) [1]

> Does ssh create new TTY on server or on client?

ssh always creates the new pseudo-tty on the server, i.e. on the remote machine. If a pseudo-tty is allocated on the server and the local (client's) stdin is a tty, the local tty will be set to raw mode.

> How to check this? (a new node in /dev/ must appear or something)

Not necessarily. But on a regular Linux machine of 2019, a new file will appear under /dev/pts for each new pseudo-tty.

> And how this new TTY is tied to existing STDIN, STDOUT and STDERR?

Just like any other file descriptor, with dup2(slave_fd, 0), dup2(slave_fd, 1), etc. dup2(newfd, oldfd) will close whatever file oldfd was referring to. If a pty was allocated, it will also be made the controlling tty of the remote session.

> An example of use case of -t option is welcome.

ssh -t /bin/bash on a system where your login shell is csh. If you leave out the -t, /bin/bash will run without prompt, job control, line editing capabilities, etc.

> And what is so special about TTY which may not be achieved using ordinary STDIN, STDOUT and STDERR?

See above ;-) And a pipe is no more of an "ordinary" stdin than a tty or some other kind of file.
[1] that itself has a lot of problems (with multiple devpts mounts and mount namespaces) that's why the TIOCGPTPEER ioctl was added to Linux, which returns a fd referring to the slave without going through the file system.
How TTY differs from an ordinary file? [duplicate]
1,610,233,766,000
The Linux Programming Interface says SIGHUP is sent to the controlling process of a terminal When a terminal window is closed on a workstation. This occurs because the last open file descriptor for the master side of the pseudoterminal associated with the terminal window is closed. My understanding is that a terminal window is created for a slave side, and a master side can have multiple slave sides. So when a terminal window is closed, it only means the last open file descriptor for the slave side of the pseudoterminal associated with the terminal window is closed. Why does the quote say the "master" side? Thanks.
> My understanding is that a terminal window is created for a slave side, and a master side can have multiple slave sides.

A pseudo terminal always has just one master side and one slave side. It's just a bidirectional pipe with some extra ops [1]. A terminal emulator which can open more than one window/tab will also handle more than one pseudo-tty master.

As I already explained in another answer, the terminal emulator can do its own thing when the user tries to close the window or one of its tabs. For instance, xterm will not close the master side of the pty, but will just send a SIGHUP to the process group of the tty, and only destroy the window (and exit) when the process started in it has exited, or when xterm itself is no longer able to use the master part of the pty (e.g. because all handles to its slave side were closed).

[1] On a SystemV system with STREAMS, those extra ops are modular and have to be "pushed" with ioctl(I_PUSH). On Linux and *BSD, they're hardwired. Also, the behavior of ptys is not completely standardized; trying to read from a slave with no master or vice versa will fail with EIO on Linux but return 0 (EOF) on FreeBSD.
When closing a terminal emulator window, is the last file descriptor of a slave side or master side closed?
1,610,233,766,000
I have SSH'd into a remote machine. I would like to get the current working directory (and ideally execute commands like ls) on that remote machine, but from outside this process. Here are my processes:

    $ ps
    49100 ttys001    0:00.21 -zsh
    52134 ttys002    0:00.21 -zsh
    52171 ttys002    0:00.05 ssh [email protected]

Terminal 2 (ttys002) is where I am currently SSH'd into a remote machine. Is it possible to get the current working directory of the remote host from the client computer? I.e. without just typing pwd into Terminal 2. If I run lsof, I can get the current working directory on the local machine of the process, but not the current working directory of the remote machine.

    ~ $ lsof -p 52171
    COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
    ssh     52175  falky  cwd    DIR    1,4     2816 994619 /Users/falky

If this just isn't possible, would there be something I could do before SSHing into the remote machine that would allow me to do this? For example, could I set up a pseudo terminal? Or could I install something on the remote machine that sends a ping back to my local machine? Any advice/direction here would be helpful.
> If this just isn't possible, would there be something I could do before SSHing into the remote machine that would allow me to do this?

You could start the ssh client in the "connection sharing mode":

    ssh -M -S ~/.ssh/%r@%h:%p user@localhost
    user@localhost's password: ...
    user@localhost$ echo $$
    5555
    user@localhost$ cd /some/path

In another terminal:

    ssh -S ~/.ssh/%r@%h:%p user@localhost
    <no need to enter the password again>
    user@localhost$ ls -l /proc/5555/cwd
    <listing of /some/path>

Refer to the ssh(1) manpage for the -S and -M options, and to ssh_config(1) for the Control* config options.
Get working directory inside SSH client process from outside process
1,610,233,766,000
If run in (pseudo?) text mode, as observed under a hypervisor (KVM or VMware), CentOS 8 displays three animated square dots on the boot screen. How do I disable this screen so that the normal text output is also visible during boot?
Edit your grub configuration and remove the "quiet" option from the kernel line. An easy way to do this is to remove "quiet" from "/boot/grub2/grubenv" but make sure to back up this file first just in case there is a typo which can cause the VM to not boot.
How to disable three square dots on CentOS 8 boot screen?
1,610,233,766,000
Open xterm, run tty and see the pseudo terminal slave file (let's say it is /dev/pts/0). Then open another xterm and run:

    $ stty -F /dev/pts/0
    speed 38400 baud; line = 0;
    lnext = <undef>; discard = <undef>; min = 1; time = 0;
    -brkint -icrnl -imaxbel iutf8
    -icanon -echo

Then run /bin/sleep 1000 in the first xterm, and run the same stty command in the second xterm again:

    $ stty -F /dev/pts/0
    speed 38400 baud; line = 0;
    -brkint -imaxbel iutf8

Then terminate the sleep command in the first xterm, and run the same stty command in the second xterm again:

    $ stty -F /dev/pts/0
    speed 38400 baud; line = 0;
    lnext = <undef>; discard = <undef>; min = 1; time = 0;
    -brkint -icrnl -imaxbel iutf8
    -icanon -echo

We see that bash changes tty attributes before running a command and restores them afterwards. Where is this described in the bash documentation? Are all tty attributes restored, or may some attributes not be restored if they are changed by a program?
That's the readline(3) line-editing library, which is usually statically built as part of bash, but is also used by other programs.

Every time it starts reading a command from the user, readline saves the terminal settings and puts the terminal into "raw" mode [1], so that it is able to handle moving the insertion point right and left, recalling commands from the history, etc. When readline(3) returns (e.g. when the user has pressed Enter), the original settings of the terminal are restored. Readline will also mess with signals, which may result in some puzzling behaviour.

If you strace bash, look for ioctl(TCSETS*) (which implements tcsetattr(3)) and for ioctl(TCGETS) (tcgetattr(3)). Those are the same functions used by stty(1). If you run bash with --noediting you will see that it leaves the terminal settings alone.

[1] Not exactly the "raw" mode of cfmakeraw(3); you can see the exact details here. All those terminal settings are documented in the termios(3) manpage.
How bash sets tty attributes before and after running a command?
1,610,233,766,000
OK, I've been googling for hours so I obviously have not been able to understand the answers to the various questions that have already been asked about this subject. I am hoping that, by asking the question again in a more specific way, I will be able to get an answer I can understand.

I have some application running in Linux that communicates with an external device attached to a serial port. I want to be able to capture and log the data sent in both directions between the application and the device, with timestamps at the beginning of each line in the file. As a test case, I am using minicom as the application I want to monitor, connected via a null modem cable to another computer also running minicom. I have already confirmed that, if I type characters on either of the computers, characters appear in the other computer's terminal. So far, so good.

I then found this question: How can I monitor serial port traffic? In the answer to this question, the program jpnevulator is suggested. However, when reviewing the man page, I could not figure out the right way to use jpnevulator to get what I want. Here is what I tried to do. First, I opened a terminal window and typed the following command:

    $ jpnevulator --tty=/dev/ttyS0 --pty --pass --read --ascii --timing-print --file=serial.log

I saw the output:

    jpnevulator: slave pts device is /dev/pts/18

I then opened another terminal window and typed the following command:

    minicom -D/dev/pts/18 -b115200

Minicom opened without complaint. However, when I typed characters in either terminal (local and remote), nothing appeared in either terminal. jpnevulator only logged the data written to /dev/pts/18. My expectation is that jpnevulator:

1. reads data from /dev/pts/18 and "passes" this data on to /dev/ttyS0 while also writing this data to the specified file.
2. reads data from /dev/ttyS0 and "passes" this data on to /dev/pts/18 while also writing this data to the specified file.
I am aware of the remark in the faq that says "Jpnevulator was never built to sit in between the kernel and your application. I'm sorry." However the very same faq states in the second paragraph down: "Now with a little bit of luck some good news: A little while ago Eric Shattow suggested to use pseudo-terminal devices to sit in between the kernel and your application." That is the approach I am trying to take but I am having no success. What am I missing? Thanks to all in advance. Cheers, Allan p.s. I was successfully able to capture the back and forth traffic using the socat method mentioned in the existing question I referenced but this method did not offer any way of timestamping the traffic. I was hoping that jpnevulator would have provided this for me.
I have answered my own question: I found another utility that better provides what I want: https://github.com/geoffmeyers/interceptty That package includes a perl script that post-processes the output of interceptty to provide a "pretty" output. I found it quite easy to modify the script to add a timestamp to each line. Thanks to Geoff Meyers for providing this. Allan
How do I use jpnevulator to capture and log the serial traffic between an application and hardware serial port?
1,610,233,766,000
I am writing an executable that uses a 3rd party C library (libmodbus if it matters) to communicate via a serial device (in my case, /dev/ttyUSB0 or similar to talk RS-485 via an FTDI chipset based USB-to-RS485 adapter). This executable, based on CLI args, can initiate commands (in my case, act like a modbus client) then await a response (in my case, from an external modbus server), or listen for incoming commands (in my case, act like a modbus server) then generate a response.

I would like to automate the testing of my executable, without a need for some external device. In other words, I'd like to launch two instances of my executable, where:

- The first instance is put into modbus client mode and uses /dev/xxxx for comms
- The second instance is put into modbus server mode and uses /dev/yyyy for comms
- /dev/xxxx and /dev/yyyy are both set up to act as serial devices that are essentially the two ends of the same wire.

From what I read on the pty manpage, I believe /dev/xxxx and /dev/yyyy are the two ends of a pseudo-terminal. Which brings me to my questions:

The man page refers to BSD-style pseudoterminals, which seem more appropriate to what I am trying to do. Is my understanding of BSD-style pseudoterminals correct? If so, is it possible to create BSD-style pseudoterminals on non-BSD Linux distributions? In particular, I am using Debian 10, 11, 12 (and Debian based, like Ubuntu 20.04, 22.04).

The man page also refers to UNIX 98 pseudoterminals, which are implemented using posix_openpt(). However, even after the subsequent grantpt() and unlockpt(), I only get one /dev/pts device for the client side of the pty, with the master side being only a file descriptor inside the executable. Is my understanding (which is loosely based on code like this) correct?
If so, what tricks may I use to convert the master side file descriptor to a proper /dev/xxxx path, which is the only API available to get a modbus context?

Are there other "standard" Linux tools for doing what I am attempting to do? It seems like the tools mentioned here expect to connect an executable's STDIO to the pty.
You can set up PTY "virtual serial ports" using socat:

    socat \
        pty,rawer,echo=0,link=/tmp/portA \
        pty,rawer,echo=0,link=/tmp/portB

This will create two PTY devices and two symlinks to those devices. On my system, the above command created:

    $ ls -l /tmp/port*
    lrwxrwxrwx 1 lars lars 11 Jul 24 11:49 /tmp/portA -> /dev/pts/20
    lrwxrwxrwx 1 lars lars 11 Jul 24 11:49 /tmp/portB -> /dev/pts/21

You can treat these pty devices as serial ports. For example, I can attach slcand to these devices to create CANbus interfaces:

    slcand -o -c -f -s6 $(readlink /tmp/portA)
    slcand -o -c -f -s6 $(readlink /tmp/portB)

Or I can attach picocom to each port and chat across the virtual link. In one window:

    picocom $(readlink /tmp/portA)

And in another window:

    picocom $(readlink /tmp/portB)

Etc.

Both UNIX-98 ptys and BSD-style ptys behave identically; the difference is how they are allocated (UNIX-98 ptys are allocated dynamically while BSD-style ptys are pre-allocated devices).

> However, even after the subsequent grantpt() and unlockpt(), I only get one /dev/pts device for the client side of the pty, with the master side being only a file descriptor inside the executable.

That's correct; to link two ptys to create a virtual serial line, your code needs to open two pty devices and then handle moving data between the two (that's what socat is doing in the above example).
Pseudo terminal for comms between two processes
1,610,233,766,000
On my Ubuntu 20.04.5 machine, I have a Perl script running under userA's account. The script issues this command:

    sudo su - userB -c "ssh -l userB 10.0.0.1 ls -tr /some/remote/directory"

(i.e., SSH to a remote host as userB, and then list all the files in /some/remote/directory)

The command works great... except that I'm seeing an error on the command line:

    me@ubuntu1$ sudo su - userB -c "ssh -l userB 10.0.0.1 ls -tr /some/remote/directory"
    mesg: cannot open /dev/pts/2: Permission denied
    Welcome to 10.0.0.1! You have logged in.
    file1.txt
    file2.txt
    file3.txt
    me@ubuntu1$

What is that mesg: cannot open /dev/pts/2: Permission denied message? A little Internet research reveals that:

> Entries in /dev/pts are pseudo-terminals (pty for short). Unix kernels have a generic notion of terminals. A terminal provides a way for applications to display output and to receive input through a terminal device.

And assuming that I'm only reading through my pseudo-terminal:

> If a program opens a terminal for reading, the input from the user is passed to that program. If multiple programs are reading from the same terminal, each character is routed independently to one of the programs; this is not recommended. Normally there is only a single program actively reading from the terminal at a given time; programs that try to read from their controlling terminal while they are not in the foreground are automatically suspended by a SIGTTIN signal.

That's interesting... but I'm still befuddled why I'm seeing mesg: cannot open /dev/pts/2: Permission denied. I'm still developing my script and have run it several times; I can't remember if I noticed this error message on the first run. Is it likely that my script tries to access /dev/pts/2 every time it runs, but my code didn't properly close the connection or something? Or might this be related to userA using userB to run the command? Pseudo-terminals wouldn't come into play when accessing another user account, would they?
Any insight or feedback is welcome, thank you.
If the group assigned to that device is "tty", just add the user to that tty group (in /etc/group). Regards
Pseudo Terminal Error :: "mesg: cannot open /dev/pts/2: Permission denied"
1,610,233,766,000
I'm trying to figure out how exactly Ctrl-C sends a SIGINT to a process. Let's consider a pseudo-terminal system. I'll write what I know (or think I know, lol), and please add/correct where needed. The players are:

- Xterm: a user space program that reads from the keyboard (using the X window system) and renders a picture to the screen. Every character it gets from the keyboard is passed to the pty master.
- User process: the user process that runs as a foreground job of the terminal. Usually when opening an Xterm it runs bash or some other shell program as this user process.
- PTY device: a character device that the user process is connected to as its stdin, stdout, stderr. Everything that's written by the process to stdout is processed by the TTY driver and its line discipline, and passed as input to the master side, and vice versa.

I don't mind at the moment how exactly the kernel passes the signal to the process once the line discipline/TTY driver understands that it should send such a signal to the process. What I'm interested in is how, after I press Ctrl-C on my keyboard, the Xterm (which is the process that reads these key presses) passes this information to the pty master.

EDIT: Thanks for the answers. I welcome you to respond on this thread where I actually tried to simulate this by writing 0x3 to a PTY master and see what happens in the slave. Could you guys respond to that?
xterm just writes the ^C character (ASCII 3) to the pseudo-tty master, something you can easily simulate with script (another program which, just like xterm, manages a master pseudo-tty):

    $ { sleep 1; printf '\x03'; } | script -qc 'trap "echo SIGINT ma tuer; exit 1" INT; cat' /dev/null
    ^CSIGINT ma tuer
How exactly a CTRL^C passes a signal to process
1,610,233,766,000
Why does a pseudo-terminal get its keystrokes from /dev/pts/{number}, while an X session gets keystrokes from /dev/input/by-id/{keyboard-device-name}? I understand pseudo-terminals run on top of an X session. Why is a pseudo-terminal so special that it has a separate file location to read/write the data which will subsequently be displayed in the UI terminal view? How does the kernel know the difference between a pseudo-terminal and other applications, such that it writes to two different file locations?
> Why is pseudo-terminal so special that it has a separate file location to read/write the data which will be subsequently displayed in the UI terminal view?

Because there are two fundamentally different views of the user input in play on your desktop. The display server (X11 or your Wayland compositor) handles all the input from hardware, through /dev/input/... (when using libinput at least). X11 and Wayland clients receive the corresponding events through the respective protocols. Programs which aren't X11 and Wayland clients, but need to receive input, are run using some form of emulation. A terminal emulator is one such emulation: as its name suggests, it emulates a terminal, and is helped in this by pseudoterminals, which involve /dev/pts/... devices. Thus when you run a program in a terminal emulator, your keystrokes follow this path:

    keyboard → kernel → display server → terminal emulator (as a higher-level event) → pseudoterminal → program

This provides the final program with the illusion that it's running with its input (and output) connected to a terminal.

> How did the kernel know the difference between a pseudo-terminal and other applications such that it wrote to 2 different file locations?

It doesn't know about applications and where they receive their input. It knows about the keyboard, and feeds events from that through input devices; and separately, it knows about pseudoterminals, and allows controlling programs to transmit events through them.
Why pseudo-terminals and X write to different special files
1,610,233,766,000
I have a C program which works with a normal terminal using this code:

    int dtr_rts = TIOCM_DTR | TIOCM_RTS; /* out-of-band signal */
    ...
    int comfd = open(COM_PORT, O_RDWR);
    ...
    ioctl(comfd, TIOCMBIS, &dtr_rts);

Now I need to run this program on a pseudo-terminal. How do I read DTR/RTS on the master side? Is DTR/RTS set to 1 or to 0 by default (i.e., on open()) on /dev/pts/X? Is the TIOCMGET ioctl implemented for pseudo-terminals?
No, it's not. A pseudo terminal has no way to pass through serial ioctls like TIOCMBIS or TIOCMSET. See also:

- Virtual tty client for network telnet/RFC2217 server?
- Run a serial connection over SSH
Is it possible to use TIOCMBIS with pseudo-terminal?
1,610,233,766,000
Since terminal emulators are X11 applications, do they receive input from the X11 server when we type directly into the corresponding terminal window? In that case, why do the /dev/pts/N devices exist? Do terminal emulators reject the input events from the X server and read directly from /dev/pts/N instead?
Terminal emulators receive keyboard input as events from the X11 server (or other display server) to which they are connected. /dev/pts exists so that the terminal emulator can simulate input for the programs running inside it. The emulator receives events from the display server, and translates them into events which it feeds into /dev/pts/.... Programs running inside the emulator take their input from /dev/pts/... instead of /dev/tty....
How do terminal emulators receive input from keyboard [closed]
1,369,503,686,000
What are the practical uses of both pushd and popd, and is there an advantage to using these two commands over cd and cd -? EDIT: I'm looking for some practical examples of uses for both of these commands, or reasons for keeping a stack of directories (when you have tab completion, cd -, aliases for shortening cd .., etc.).
pushd, popd, and dirs are shell builtins which allow you to manipulate the directory stack. This can be used to change directories but return to the directory from which you came.

For example, start up with the following directories:

    $ pwd
    /home/saml/somedir
    $ ls
    dir1  dir2  dir3

pushd to dir1:

    $ pushd dir1
    ~/somedir/dir1 ~/somedir
    $ dirs
    ~/somedir/dir1 ~/somedir

The dirs command confirms that we have 2 directories on the stack now: dir1 and the original dir, somedir. NOTE: Our "current" directory is ~/somedir/dir1.

pushd to ../dir3 (because we're inside dir1 now):

    $ pushd ../dir3
    ~/somedir/dir3 ~/somedir/dir1 ~/somedir
    $ dirs
    ~/somedir/dir3 ~/somedir/dir1 ~/somedir
    $ pwd
    /home/saml/somedir/dir3

dirs shows we have 3 directories in the stack now: dir3, dir1, and somedir. Notice the direction. Every new directory is getting added to the left. When we start popping directories off, they'll come from the left as well.

Manually change directories to ../dir2:

    $ cd ../dir2
    $ pwd
    /home/saml/somedir/dir2
    $ dirs
    ~/somedir/dir2 ~/somedir/dir1 ~/somedir

Now start popping directories:

    $ popd
    ~/somedir/dir1 ~/somedir
    $ pwd
    /home/saml/somedir/dir1

Notice we popped back to dir1. Pop again:

    $ popd
    ~/somedir
    $ pwd
    /home/saml/somedir

And we're back where we started, somedir. Might get a little confusing, but the head of the stack is the directory that you're currently in. Hence when we get back to somedir, even though dirs shows this:

    $ dirs
    ~/somedir

our stack is in fact empty:

    $ popd
    bash: popd: directory stack empty
How do I use pushd and popd commands?
1,369,503,686,000
The Bash command cd - prints the previously used directory and changes to it. On the other hand, the Bash command cd ~- directly changes to the previously used directory, without echoing anything. Is that the only difference? What is the use case for each of the commands?
There are two things at play here. First, the - alone is expanded to your previous directory. This is explained in the cd section of man bash (emphasis mine):

> An argument of - is converted to $OLDPWD before the directory change is attempted. If a non-empty directory name from CDPATH is used, or if - is the first argument, and the directory change is successful, the absolute pathname of the new working directory is written to the standard output. The return value is true if the directory was successfully changed; false otherwise.

So, a simple cd - will move you back to your previous directory and print the directory's name out.

The other command is documented in the "Tilde Expansion" section:

> If the tilde-prefix is a ~+, the value of the shell variable PWD replaces the tilde-prefix. If the tilde-prefix is a ~-, the value of the shell variable OLDPWD, if it is set, is substituted. If the characters following the tilde in the tilde-prefix consist of a number N, optionally prefixed by a + or a -, the tilde-prefix is replaced with the corresponding element from the directory stack, as it would be displayed by the dirs builtin invoked with the tilde-prefix as an argument. If the characters following the tilde in the tilde-prefix consist of a number without a leading + or -, + is assumed.

This might be easier to understand with an example:

    $ pwd
    /home/terdon
    $ cd ~/foo
    $ pwd
    /home/terdon/foo
    $ cd /etc
    $ pwd
    /etc
    $ echo ~    ## prints $HOME
    /home/terdon
    $ echo ~+   ## prints $PWD
    /etc
    $ echo ~-   ## prints $OLDPWD
    /home/terdon/foo

So, in general, the - means "the previous directory". That's why cd - by itself will move you back to wherever you were. The main difference is that cd - is specific to the cd builtin. If you try to echo - it will just print a -. The ~- is part of the tilde expansion functionality and behaves similarly to a variable. That's why you can echo ~- and get something meaningful.
You can also use it in cd ~- but you could just as well use it in any other command. For example cp ~-/* . which would be equivalent to cp "$OLDPWD"/* .
Difference between "cd -" and "cd ~-"
1,369,503,686,000
After pushd-ing too many times, I want to clear the whole stack of paths. How would I popd all the items in the stack? I'd like to do so without needing to know how many are in the stack. The bash manual doesn't seem to cover this. Why do I need this? I'm fastidious and want to clean out the stack.
dirs -c is what you are looking for.
removing or clearing stack of popd/pushd paths
1,369,503,686,000
cd - can move to the last visited directory. Can we visit more history other than the last one?
The commands you are looking for are pushd and popd. You can view a practical working example of pushd and popd from here.

    mkdir /tmp/dir1
    mkdir /tmp/dir2
    mkdir /tmp/dir3
    mkdir /tmp/dir4

    cd /tmp/dir1
    pushd .

    cd /tmp/dir2
    pushd .

    cd /tmp/dir3
    pushd .

    cd /tmp/dir4
    pushd .

    dirs
    /tmp/dir4 /tmp/dir4 /tmp/dir3 /tmp/dir2 /tmp/dir1
Do we have more history for cd?
1,369,503,686,000
Is there a difference between the behavior of pushd/popd in bash vs zsh? It seems that in zsh, cd and cd - behave exactly the same as pushd/popd (i.e. cd adds/pops directories automatically), while in bash cd doesn't affect the dir stack. If someone can give me a pointer that would be great.
It depends. In zsh you can configure cd to push the old directory on the directory stack automatically, but it is not the default setting. As far as I can tell zsh with default settings behaves very similar to bash: cd somedir change directory to somedir save the original directory in OLDPWD set PWD="somedir" replace top element of the directory stack (as shown by dirs) with somedir (the number of elements on the stack does not change). cd -: change directory to $OLDPWD swap values of PWD and OLDPWD modify the top element of the directory stack to reflect (the new) PWD pushd somedir: change directory to somedir save original directory in OLDPWD set PWD="somedir" push somedir onto the directory stack (extending it by one element) popd: save original directory in OLDPWD remove first element of the directory stack change directory to the new top element of the directory stack set PWD to the new top element of the directory stack Note: Whether the present working directory is considered an element of the directory stack differs between zsh and bash. I used bash as reference for the above lists. In bash the present working directory is considered to be the top element of the directory stack. The man 1 bash says: pushd [-n] [dir] […] Adds dir to the directory stack at the top, making it the new current working directory as if it had been supplied as the argument to the cd builtin. […] Printing DIRSTACK (echo ${dirstack[@]}) confirms that the first element is identical to $PWD. In zsh the present working directory is not part of the directory stack (but still shown with dirs). man 1 zshbuiltins says: pushd [ -qsLP ] [ arg ] […] Change the current directory, and push the old current directory onto the directory stack. In the first form, change the current directory to arg. […] Printing dirstack (echo ${dirstack[@]}) and comparing it to the output of dirs should show that the PWD is not part of `dirstack. 
In both shells dirs prints the present working directory as the first element. Also in both shells, the directory stack element with index 1 refers to the directory which was current before the last pushd. That is because arrays in zsh are usually numbered from 1, while they are numbered from 0 in bash. So in practice there is little difference.

As said above, this behavior can be modified in zsh. If you set the AUTO_PUSHD option (setopt autopushd), cd somedir behaves like pushd somedir: the previous directory is pushed onto the directory stack automatically. This is probably the case on your machine. You can run setopt to get a list of options that are not set the default way; see whether autopushd appears in the list.

But this does not make cd - behave like popd. Instead it just pushes $PWD onto the directory stack and changes directory to $OLDPWD. That means repeatedly calling cd - will actually grow the directory stack ($PWD $OLDPWD $PWD $OLDPWD $PWD …). If cd - really does behave exactly like popd on your system, I would suggest checking whether cd is actually the builtin (whence -v cd); it may have been replaced with an alias or function.

As the directory stack will grow rather quickly with AUTO_PUSHD enabled, you can limit its size by setting the parameter DIRSTACKSIZE to the desired maximum. You can also prevent duplicates by setting the PUSHD_IGNORE_DUPS option. For more options have a look at the manual.
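The default bash behavior described above is easy to see in a short session. A minimal sketch (the directories /tmp, /usr and /etc are arbitrary choices that exist on most systems):

```shell
# Default bash behaviour: cd rewrites the top of the stack, pushd grows it.
cd /tmp
pushd /usr >/dev/null   # stack is now: /usr /tmp
cd /etc                 # stack is now: /etc /tmp  (top replaced, depth unchanged)
dirs -l                 # prints: /etc /tmp
cd - >/dev/null         # back to /usr (the OLDPWD saved by cd /etc)
popd >/dev/null         # removes the top entry and changes to the new top
pwd                     # prints: /tmp
```

Note how cd changed where the top of the stack points but not the depth, while pushd/popd changed the depth.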
pushd, popd vs cd, cd- in bash and zsh
1,369,503,686,000
I would like to use the recently accessed directories list for logging purposes. Is the directory stack as used by pushd and popd stored somewhere, perhaps as a list of folders in a text file? If so, where?
It is not stored in a file; the shell holds it in memory. In bash it is exposed as the DIRSTACK array variable, so for logging you could write it out with:

    printf %s\\n "${DIRSTACK[@]}" >this_text_file
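A quick sketch in bash showing that DIRSTACK mirrors the stack, with entry 0 being the current directory (directories chosen only because they exist on most systems):

```shell
cd /tmp
pushd /usr >/dev/null
pushd /etc >/dev/null
printf '%s\n' "${DIRSTACK[@]}"
# /etc
# /usr
# /tmp
```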
How can I view the stack used by `pushd` and `popd`?
1,369,503,686,000
I use pushd to work with multiple directories in bash and zsh. I've aliased dirs to dirs -v so that I get an ordered list when I want to see what's on the directory stack:

    chb$ dirs
     0  /Volumes/banister/grosste_daever_gh/2013-03-27/reader
     1  /tmp/20130618202713/Library/Internet Plug-Ins
     2  ~/code/foo/view/static/css
     3  ~/Downloads

Is there a way (either in bash or zsh) that I can refer to one of the directories listed on the command line using an alias for its position on the stack? For example, instead of typing:

    chb$ cp ~/code/foo/view/static/css/baz.css ~/code/bar/view/static/css/

I'd type:

    chb$ cp <2>baz.css ~/code/bar/view/static/css/

...or something like that, maybe using a dollar sign and a variable name instead of <n>.
Bash exposes the directory stack in the DIRSTACK variable. You can also use the command dirs +2 to refer to the second entry on the stack. More conveniently, ~1 through ~9 refer to the nine topmost entries on the stack. So your example would translate to:

    chb$ cp ~2/baz.css ~/code/bar/view/static/css/

Zsh has the same ~n facility, and the stack is exposed through an array called dirstack. Bash's dirs +2 is zsh's print -r ~2 or print -r $dirstack[2].
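A sketch of the ~n expansion in bash (directories chosen only because they exist on most systems; ~n also works inside assignments and longer paths):

```shell
cd /tmp
pushd /usr >/dev/null   # stack: /usr /tmp      -> ~1 is /tmp
pushd /etc >/dev/null   # stack: /etc /usr /tmp -> ~1 is /usr, ~2 is /tmp
echo ~1                 # /usr
echo ~2                 # /tmp
target=~2               # tilde expansion also happens in assignments
echo "$target"          # /tmp
```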
Refer to an item in `dirs`
1,369,503,686,000
Ok this is a short question. I just happened to know that with pushd command, we can add more working directories into our list, which is handy. But is there a way to make this list permanent, so it can survive reboots or logoffs?
You may pre-populate the directory stack in your ~/.bashrc file if you wish:

    for dir in "$HOME/dir" /usr/src /usr/local/lib; do
        pushd -n "$dir" >/dev/null
    done

or, if you want to put the directories in an array and use them from there instead:

    dirstack=( "$HOME/dir" /usr/src /usr/local/lib )
    for dir in "${dirstack[@]}"; do
        pushd -n "$dir" >/dev/null
    done
    unset dirstack

With -n, pushd won't actually change the working directory, but instead just add the given directory to the stack.

If you wish, you can store the value of the DIRSTACK array (upper-case variable name here), which is the current directory stack, into a file from ~/.bash_logout, and then read that file in ~/.bashrc rather than using a predefined array. In ~/.bash_logout:

    declare -p DIRSTACK >"$HOME/.dirstack"

In ~/.bashrc:

    if [ -f "$HOME/.dirstack" ]; then
        source "$HOME/.dirstack"
    fi

I don't know how well this would work in a situation where you use multiple terminals. The .dirstack file would be overwritten every time a terminal exited, if it ran bash as a login shell.
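A hedged sketch of the save/restore idea (the file name /tmp/dirstack.demo and the directories are arbitrary demo choices). Since bash does not let you add stack entries by assigning to DIRSTACK directly (per the manual, pushd/popd must be used to add and remove directories), the saved list is replayed with pushd -n rather than sourced:

```shell
# Save: write the stack as full paths, one per line, current dir first.
cd /tmp
pushd /usr >/dev/null
pushd /etc >/dev/null
dirs -l -p > /tmp/dirstack.demo

# Restore in a fresh shell: cd to the saved current dir, then replay the
# remaining entries bottom-up with pushd -n (which only edits the stack).
bash -c '
  mapfile -t saved < /tmp/dirstack.demo
  cd "${saved[0]}"
  for ((i=${#saved[@]}-1; i>=1; i--)); do
    builtin pushd -n "${saved[i]}" >/dev/null
  done
  dirs -l
'
# prints: /etc /usr /tmp
```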
Making 'pushd' directory stack persistent
1,369,503,686,000
I am a happy user of the cd - command to go to the previous directory. At the same time I like pushd . and popd. However, when I want to remember the current working directory by means of pushd ., I lose the possibility to go to the previous directory with cd -. (As pushd . also performs cd ..) How can I use pushd and still be able to use cd -? By the way: GNU bash, version 4.1.7(1).
You can use something like this:

    push() {
        if [ "$1" = . ]; then
            old=$OLDPWD
            current=$PWD
            builtin pushd .
            cd "$old"
            cd "$current"
        else
            builtin pushd "$1"
        fi
    }

If you name it pushd, then it will have precedence over the built-in, as functions are evaluated before built-ins. You need the variables old and current, as overwriting OLDPWD would make it lose its special meaning.
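A self-contained run of the wrapper (the function is restated verbatim so the sketch executes on its own; /usr and /tmp are arbitrary existing directories). After push ., cd - still returns to the directory that was current before the last cd:

```shell
push() {
    if [ "$1" = . ]; then
        old=$OLDPWD
        current=$PWD
        builtin pushd .
        cd "$old"
        cd "$current"
    else
        builtin pushd "$1"
    fi
}

cd /usr
cd /tmp           # OLDPWD is now /usr
push . >/dev/null # /tmp is remembered on the stack, OLDPWD survives
cd - >/dev/null   # still goes back to /usr, not /tmp
echo "$PWD"       # /usr
```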
Conflict between `pushd .` and `cd -`
1,369,503,686,000
In a related question somebody states that the directory stack of the pushd command is emptied when your shell terminates. But how is the stack actually stored? I use fish instead of bash and the commands work the same way. I would assume pushd (and popd) works independently of the shell you're using. Or do both shells have their own implementation?
The directory stack is not stored anywhere permanent. The shell just keeps it in process memory, in an array DIRSTACK (which has restrictions on user modification). It's not even strictly a stack -- bash and ksh allow you to rotate it left and right by specified counts, too. In Bash, the dirs command clears or shows the stack in various ways, popd removes any specified dir, and pushd adds a dir or rotates the stack to change to any of the dirs already stored. The pushd stack is not "cleared" as such. Pushd is a shell built-in, not an external command (which would not be able to change the shell's own environment). Each shell retains its own pushd data, and when that shell process goes away, the contents are just discarded.
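The per-process nature can be sketched in bash: a subshell starts with a copy of the stack, and its pushd never reaches the parent (directories chosen only because they exist on most systems):

```shell
cd /tmp
pushd /usr >/dev/null    # parent stack: /usr /tmp
(                        # subshell: starts with a copy of the parent's stack
  pushd /etc >/dev/null
  dirs -l                # prints: /etc /usr /tmp
)
dirs -l                  # prints: /usr /tmp -- the subshell's pushd is gone
```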
How is the pushd directory stack stored?
1,369,503,686,000
Context: Linux, bash, pushd/popd/dirs.

Problem: The problem scenario is very similar to the one stated in this question: removing or clearing stack of popd/pushd paths ... however the goal is not to clear the stack, but rather to prune it. Specifically, the pruning operation is to remove duplicates.

Question: Is there a straightforward way to prune the output of dirs -v -p such that there are no duplicates in the stack?
This function should remove dups:

    dedup(){
        declare -a new=() copy=("${DIRSTACK[@]:1}")
        declare -A seen
        local v i
        seen[$PWD]=1
        for v in "${copy[@]}"; do
            if [ -z "${seen[$v]}" ]; then
                new+=("$v")
                seen[$v]=1
            fi
        done
        dirs -c
        for ((i=${#new[@]}-1; i>=0; i--)); do
            builtin pushd -n "${new[i]}" >/dev/null
        done
    }

It copies the list of dirs, except the first, into an array copy, and for each dir adds it to a new array if we have not already seen it (tracked in an associative array). This ensures older dup entries, which are later in the array, are not copied. The dir list is then cleared, and the array is pushed back with pushd in reverse order. The first (skipped) element of the dirs list is the current directory, which is unchanged; it is entered into the seen array at the start so that any duplicate of it in the rest of the list is also removed. If you want to do this automatically, you can override pushd, e.g.:

    pushd(){
        builtin pushd "$@"
        dedup
    }
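A self-contained run of the function (restated verbatim so the sketch executes on its own; the pushed directories are arbitrary existing ones), showing duplicates collapsing while order and the current directory are preserved:

```shell
dedup(){
    declare -a new=() copy=("${DIRSTACK[@]:1}")
    declare -A seen
    local v i
    seen[$PWD]=1
    for v in "${copy[@]}"; do
        if [ -z "${seen[$v]}" ]; then
            new+=("$v")
            seen[$v]=1
        fi
    done
    dirs -c
    for ((i=${#new[@]}-1; i>=0; i--)); do
        builtin pushd -n "${new[i]}" >/dev/null
    done
}

cd /tmp
pushd /usr >/dev/null
pushd /etc >/dev/null
pushd /usr >/dev/null
pushd /tmp >/dev/null
dirs -l     # prints: /tmp /usr /etc /usr /tmp
dedup
dirs -l     # prints: /tmp /usr /etc
```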
removing duplicates from pushd/popd paths
1,369,503,686,000
When I use popd alone it removes a directory from the stack and takes me to that directory. However, if I do cd $(popd) then no directory is removed from the stack. Since the process is simply forked and the result is put in place of the shell expansion, why isn't a directory taken off of the stack?
The command substitution $(…) runs the command in a subshell. A subshell starts out as an identical¹ copy of the main shell, but from that point on the main shell and the subshell live their own life. The shell process creates a pipe and forks. The child runs popd with its output connected to the pipe, then exits. The parent reads the data from the pipe and substitutes it into the command line. Since popd runs in the child process, its effect is limited to the child process. The directory is taken off the stack — off the child's stack. Nothing happens to the stack in the parent. ¹ Nearly identical; the differences are not relevant here.
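A sketch of the effect in bash, counting stack entries via DIRSTACK (directories chosen only because they exist on most systems):

```shell
cd /tmp
pushd /usr >/dev/null
pushd /etc >/dev/null
echo "${#DIRSTACK[@]}"     # 3
out=$(popd)                # the $(...) runs popd in a subshell
echo "${#DIRSTACK[@]}"     # still 3 -- only the subshell's copy was popped
popd >/dev/null            # run popd in the current shell instead
echo "${#DIRSTACK[@]}"     # 2
```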
Why doesn't shell expansion on popd remove a directory from stack?