date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,637,322,023,000 |
I need to use a program called Kpax. The "installation" process consists of this:
(for bash users, edit ~/.bashrc)
export KPAX_ROOT=/home/dritchie/kpax <- substitute the proper pathname here.
export PATH=${PATH}:${KPAX_ROOT}/bin
I'm using Garuda with the fish shell. If I run Kpax from bash it works great; the problem is that I need to run Kpax from a PHP file, and every time I use shell_exec() this appears in the error_log:
kpax: command not found
Is there a way to replicate this environment-variable setup in fish, as in bash?
|
In fish shell 3.2 or later, you can just run:
fish_add_path /home/dritchie/kpax/bin
substituting in your home directory.
You can run this once at the command line, or add it to ~/.config/fish/config.fish; either way it will be remembered. Here's documentation for fish_add_path.
You might still need the KPAX_ROOT environment variable, however, so you might still need to set -U that one.
| Environment variables in Fish |
1,637,322,023,000 |
I'm trying to run a simple command in my fish shell, but I am not able to execute it. It just keeps adding lines for me to add additional data to; I'm not sure how to execute it properly.
$ for acc in `cat uniprot_ids.txt` ; do curl -s "https://www.uniprot.org/uniprot/$acc.fasta" ; done > uniprot_seqs.fasta
|
Fish is not bash compatible, but uses its own scripting language.
In this case the only differences are:
it doesn't support backticks (`); instead it uses parentheses.
for-loops don't use do/done; instead they just end in "end".
for acc in (cat uniprot_ids.txt); curl -s "https://www.uniprot.org/uniprot/$acc.fasta" ; end > uniprot_seqs.fasta
Also, command substitutions only split on newlines, not on newlines/spaces/tabs, but I'm betting this file has its entries on separate lines anyway. If not, you need to use string split.
| run for loop with bash command in fish shell |
1,637,322,023,000 |
I am building a script that should work in csh, bash, and fish with no change:
This does the right thing in all the shells,
perl -e '$bash=shift;$csh=shift;for(@ARGV){unlink;rmdir;}if($bash=~s/h//){exit$bash;}exit$csh;' "$?h" "$status" $PARALLEL_TMP
except that fish complains:
fish: $? is not the exit status. In fish, please use $status.
Is there a compatible way I can tell fish: Please do not warn, I know what I am doing.
|
$ bash -c 'false; echo "[$status]" "[`echo \$?h`]"'
[] [1h]
$ csh -c 'false; echo "[$status]" "[`echo \$?h`]"'
[1] [0]
$ fish -c 'false; echo "[$status]" "[`echo \$?h`]"'
[1] [`echo $?h`]
Uses the fact that ` is not special in fish, and that Bourne-like shells do an extra level of backslash processing within `...`.
You should also be able to use eval, supported by all three shells, and have different code ready for all three in some environment variable, which would simplify things.
if ($csh || $fish) {
    $ENV{CHECK_STATUS} = q{perl -e '...' $status};
} else {
    $ENV{CHECK_STATUS} = q{perl -e '...' "$?"};
}
exec $shell, "-c", ...;
and the shell code would be eval "$CHECK_STATUS" (beware that for csh, $CHECK_STATUS must not contain newline characters).
| Stop fish complaining: fish: $? is not the exit status. In fish, please use $status |
1,637,322,023,000 |
I have been using fish shell for a while, but only recently got into playing around with the oh-my-fish framework and theming the prompt.
I cannot figure out what this [I] character means! In most themes I install it comes at the very beginning of the prompt, but depending on the theme it can be elsewhere.
In my fish_prompt.fish file I see this function:
function fish_prompt -d 'Write out the left prompt of the dangerous theme'
set -g last_status $status
echo -n -s (__dangerous_prompt_bindmode) (__dangerous_prompt_git_branch) (__dangerous_prompt_left_symbols) ' '
end
I cannot figure out what is causing the [I]. I am using the dangerous theme if that matters (however, I see the [I] in all of the themes)
I would love it if someone could shed some light on this for me! Thanks.
|
The [I] signifies "Vi Insert Mode" when the shell is in Vi command line editing mode.
This changes to [N] when you press Esc to enter "Vi Normal Mode" (also sometimes referred to as "Vi Command Mode").
The solution (to remove the [I]) is to use
function fish_mode_prompt
end
in your fish configuration file.
| Terminal prompts have a mysterious [I] in it |
1,637,322,023,000 |
Fish is not recognizing the abbr command.
fish: Unknown command “abbr”
abbr: command not found
In all other ways fish is behaving normally.
The Fish documentation doesn't give any clues as to why this might happen.
Stack: EC2 Ubuntu machine, fish version 2.0.0.
|
abbr was not in 2.0.0 - it was only added in 2.2.0, so that's why it isn't working!
You can install the latest available packages (currently 2.2.0, soon to be 2.3.0) from the fish-shell Ubuntu PPA.
| fish: Unknown command "abbr" |
1,402,510,190,000 |
While I am connecting to my server I get,
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
I tried the following commands as well, and the result is the same.
-bash-4.1$ df -h
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$
-bash-4.1$ ls -lrth
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Interrupted system call
-bash-4.1$
-bash-4.1$ ps -aef | grep `pwd`
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$
Why is this happening, and how can I resolve it?
|
This could be due to some resource limit, either on the server itself or specific to your user account. Limits in your shell can be checked via ulimit -a. In particular, check ulimit -u (max user processes): if you have reached the maximum number of processes, fork cannot create any new ones and fails with that error. It could also be due to a swap/memory resource issue.
| fork: retry: Resource temporarily unavailable |
1,402,510,190,000 |
Recently I've been digging up information about processes in GNU/Linux, and I came across the infamous fork bomb:
:(){ : | :& }; :
Theoretically, it is supposed to duplicate itself infinitely until the system runs out of resources...
However, I've tried testing it on both a CLI Debian and a GUI Mint distro, and it doesn't seem to impact the system much. Yes, tons of processes are created, and after a while I read console messages like:
bash: fork: Resource temporarily unavailable
bash: fork: retry: No child processes
But after some time, all the processes just get killed and everything goes back to normal. I've read that ulimit sets a maximum number of processes per user, but I can't seem to be able to raise it very far.
What are the system protections against a fork-bomb? Why doesn't it replicate itself until everything freezes or at least lags a lot? Is there a way to really crash a system with a fork bomb?
|
You probably have a Linux distro that uses systemd.
Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup.
Cgroups is a Linux mechanism to set limits on system resources like max number of processes, CPU cycles, RAM usage, etc. This is a different, more modern, layer of resource limiting than ulimit (which uses the getrlimit() syscall).
If you run systemctl status user-<uid>.slice (which represents the user's cgroup), you can see the current and maximum number of tasks (processes and threads) that is allowed within that cgroup.
$ systemctl status user-$UID.slice
● user-22001.slice - User Slice of UID 22001
Loaded: loaded
Drop-In: /usr/lib/systemd/system/user-.slice.d
└─10-defaults.conf
Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago
Tasks: 17 (limit: 10267)
Memory: 616.7M
By default, the maximum number of tasks that systemd will allow for each user is 33% of the "system-wide maximum" (sysctl kernel.threads-max); this usually amounts to ~10,000 tasks. If you want to change this limit:
In systemd v239 and later, the user default is set via TasksMax= in:
/usr/lib/systemd/system/user-.slice.d/10-defaults.conf
To adjust the limit for a specific user (which will be applied immediately as well as stored in /etc/systemd/system.control), run:
systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value>
The usual mechanisms of overriding a unit's settings (such as systemctl edit) can be used here as well, but they will require a reboot. For example, if you want to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf.
In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf. Changing the value generally requires a reboot.
More info about this:
man 5 systemd.resource-control
man 5 systemd.slice
man 5 logind.conf
http://0pointer.de/blog/projects/systemd.html (search this page for cgroups)
man 7 cgroups and https://www.kernel.org/doc/Documentation/cgroup-v1/pids.txt
https://en.wikipedia.org/wiki/Cgroups
| Why can't I crash my system with a fork bomb? |
1,402,510,190,000 |
I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message is different depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable. See at the end of this post for some example error messages.
Now there are plenty of people who have had this error message, and plenty of responses to them. What’s really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present.
I have collected these 5 possible causes for the error and how to verify that they are not present on my system:
There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max (source). In my case this is set to 60613.
Every thread takes some space in the stack. The stack size limit is configured using ulimit -s (source). The limit for my shell used to be 8192, but I have increased it by putting * soft stack 32768 into /etc/security/limits.conf, so ulimit -s now returns 32768. I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service (source), and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container.
Every thread takes some memory. A virtual memory limit is configured using ulimit -v. On my system it is set to unlimited, and 80% of my 3 GB of memory are free.
There is a limit on the number of processes using ulimit -u. Threads count as processes in this case (source). On my system, the limit is set to 30306, and for the docker daemon and inside docker containers, the limit is 1048576. The number of currently running threads can be found out by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l (source). On my system they are between 700 and 800.
There is a limit on the number of open files, which according to some sources is also relevant when creating threads. The limit is configured using ulimit -n. On my system and inside docker, the limit is set to 1048576. The number of open files can be found out using lsof | wc -l (source), on my system it is about 30000.
It looks like before the last reboot I was running kernel 4.2.5-1, now I’m running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this. I have opened a bug report for Arch Linux.
What has changed in the kernel that could be causing this?
Here are some example error messages:
Crash dump was written to: erl_crash.dump
Failed to create aux thread
Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable
dpkg: unrecoverable fatal error, aborting:
fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2)
test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254
Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable"
[Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
|
The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in the linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request and the default values were set by this pull request. After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
Docs: https://docs.docker.com
Main PID: 2770 (docker)
Tasks: 502 (limit: 512)
CGroup: /system.slice/docker.service
Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/share/systemd/system, but it can also be put/copied in /etc/systemd/system to avoid it being overridden by the package manager.
A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc.
DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax.
Systemd also applies a limit for programs run from a login-shell. These default to 4096 per user (will be increased to 12288) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf.
| Creating threads fails with “Resource temporarily unavailable” with 4.3 kernel |
1,402,510,190,000 |
In Program 1, Hello world gets printed just once, but when I remove the \n and run it (Program 2), the output gets printed 8 times. Can someone please explain to me the significance of \n here and how it affects fork()?
Program 1
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
printf("hello world...\n");
fork();
fork();
fork();
}
Output 1:
hello world...
Program 2
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
printf("hello world...");
fork();
fork();
fork();
}
Output 2:
hello world... hello world...hello world...hello world...hello world...hello world...hello world...hello world...
|
When outputting to standard output using the C library's printf() function, the output is usually buffered. The buffer is not flushed until you output a newline, call fflush(stdout) or exit the program (not through calling _exit() though). The standard output stream is by default line-buffered in this way when it's connected to a TTY.
When you fork the process in "Program 2", the child processes inherits every part of the parent process, including the unflushed output buffer. This effectively copies the unflushed buffer to each child process.
When the process terminates, the buffers are flushed. You start a grand total of eight processes (including the original process), and the unflushed buffer will be flushed at the termination of each individual process.
It's eight because at each fork() you get twice the number of processes you had before the fork() (since the forks are unconditional), and you have three of these (2³ = 8).
| Why does a program with fork() sometimes print its output multiple times? |
1,402,510,190,000 |
What are the practical differences from a sysadmin point of view when deploying services on a unix based system?
|
The traditional way of daemonizing is:
fork()
setsid()
close(0) /* and /dev/null as fd 0, 1 and 2 */
close(1)
close(2)
fork()
This ensures that the process is no longer in the same process group as the terminal and thus won't be killed together with it. The second fork() ensures the process is no longer a session leader, so it can never reacquire a controlling terminal. The I/O redirection is to make output not appear on the terminal.
| What's the difference between running a program as a daemon and forking it into background with '&'? |
1,402,510,190,000 |
I have been studying the Linux kernel behaviour for quite some time now, and it's always been clear to me that:
When a process dies, all its children are given back to the init process (PID 1) until they eventually die.
However, recently, someone with much more experience than me with the kernel told me that:
When a process exits, all its children also die (unless you use NOHUP in which case they get back to init).
Now, even though I don't believe this, I still wrote a simple program to make sure of it. I know I should not rely on time (sleep) for tests since it all depends on process scheduling, yet for this simple case, I think it's sufficient.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void){
    printf("Father process spawned (%d).\n", getpid());
    sleep(5);
    if(fork() == 0){
        printf("Child process spawned (%d => %d).\n", getppid(), getpid());
        sleep(15);
        printf("Child process exiting (%d => %d).\n", getppid(), getpid());
        exit(0);
    }
    sleep(5);
    fprintf(stdout, "Father process exiting (%d).\n", getpid());
    return EXIT_SUCCESS;
}
Here is the program's output, with the associated ps result every time printf talks:
$ ./test &
Father process spawned (435).
$ ps -ef | grep test
myuser 435 392 tty1 ./test
Child process spawned (435 => 436).
$ ps -ef | grep test
myuser 435 392 tty1 ./test
myuser 436 435 tty1 ./test
Father process exiting (435).
$ ps -ef | grep test
myuser 436 1 tty1 ./test
Child process exiting (436).
Now, as you can see, this behaves quite as I would have expected it to. The orphan process (436) is given back to init (1) until it dies.
However, is there any UNIX-based system on which this behaviour does not apply by default? Is there any system on which the death of a process immediately triggers the death of all its children?
|
When a process exits, all its children also die (unless you use NOHUP in which case they get back to init).
This is wrong. Dead wrong. The person saying that was either mistaken, or confused a particular situation with the general case.
There are two ways in which the death of a process can indirectly cause the death of its children. They are related to what happens when a terminal is closed. When a terminal disappears (historically because the serial line was cut due to a modem hangup, nowadays usually because the user closed the terminal emulator window), a SIGHUP signal is sent to the controlling process running in that terminal — typically, the initial shell started in that terminal. Shells normally react to this by exiting. Before exiting, shells intended for interactive use send HUP to each job that they started.
Starting a job from a shell with nohup breaks that second source of HUP signals because the job will then ignore the signal and thus not be told to die when the terminal disappears. Other ways to break the propagation of HUP signals from the shell to the jobs include using the shell's disown builtin if it has one (the job is removed from the shell's list of jobs), and double forking (the shell launches a child which launches a child of its own and exits immediately; the shell has no knowledge of its grandchild).
Again, the jobs started in the terminal die not because their parent process (the shell) dies, but because their parent process decides to kill them when it is itself told to die. And the initial shell in the terminal dies not because its parent process dies, but because its terminal disappears (which may or may not coincidentally be because the terminal is provided by a terminal emulator which is the shell's parent process).
| Is there any UNIX variant on which a child process dies with its parent? |
1,402,510,190,000 |
The UNIX system call for process creation, fork(), creates a child process by copying the parent process. My understanding is that this is almost always followed by a call to exec() to replace the child process' memory space (including text segment). Copying the parent's memory space in fork() always seemed wasteful to me (although I realize the waste can be minimized by making the memory segments copy-on-write so only pointers are copied). Anyway, does anyone know why this duplication approach is required for process creation?
|
It's to simplify the interface. The alternative to fork and exec would be something like Windows' CreateProcess function. Notice how many parameters CreateProcess has, and many of them are structs with even more parameters. This is because everything you might want to control about the new process has to be passed to CreateProcess. In fact, CreateProcess doesn't have enough parameters, so Microsoft had to add CreateProcessAsUser and CreateProcessWithLogonW.
With the fork/exec model, you don't need all those parameters. Instead, certain attributes of the process are preserved across exec. This allows you to fork, then change whatever process attributes you want (using the same functions you'd use normally), and then exec. In Linux, fork has no parameters, and execve has only 3: the program to run, the command line to give it, and its environment. (There are other exec functions, but they're just wrappers around execve provided by the C library to simplify common use cases.)
If you want to start a process with a different current directory: fork, chdir, exec.
If you want to redirect stdin/stdout: fork, close/open files, exec.
If you want to switch users: fork, setuid, exec.
All these things can be combined as needed. If somebody comes up with a new kind of process attribute, you don't have to change fork and exec.
As larsks mentioned, most modern Unixes use copy-on-write, so fork doesn't involve significant overhead.
| Why is the default process creation mechanism fork? |
1,402,510,190,000 |
I have some confusion regarding fork and clone. I have seen that:
fork is for processes and clone is for threads
fork just calls clone, clone is used for all processes and threads
Are either of these accurate? What is the distinction between these 2 syscalls with a 2.6 Linux kernel?
|
fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable.
In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied.
Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone(), but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it.
clone() is also used to implement the pthread_create() POSIX function for creating threads.
Portable programs should call fork() and pthread_create(), not clone().
| Fork vs Clone on 2.6 Kernel Linux |
1,402,510,190,000 |
A fork() system call clones a child process from the running process. The two processes are identical except for their PID.
Naturally, if the processes are just reading from their heaps rather than writing to it, copying the heap would be a huge waste of memory.
Is the entire process heap copied? Is it optimized in a way that only writing triggers a heap copy?
|
The entirety of fork() is implemented using mmap / copy on write.
This not only affects the heap, but also shared libraries, stack, BSS areas.
Which, incidentally, means that fork is an extremely lightweight operation, until the resulting 2 processes (parent and child) actually start writing to memory ranges. This feature is a major contributor to the lethality of fork-bombs - you end up with way too many processes before the kernel gets overloaded with page replication and differentiation.
You'll be hard-pressed to find in a modern OS an example of an operation where kernel performs a hard copy (device drivers being the exception) - it's just far, far easier and more efficient to employ VM functionality.
Even execve() is essentially "please mmap the binary / ld.so / whatnot, followed by execute" - and the VM handles the actual loading of the process to RAM and execution. Local uninitialized variables end up being mmaped from a 'zero-page' - special read-only copy-on-write page containing zeroes, local initialized variables end up being mmaped (copy-on-write, again) from the binary file itself, etc.
| Does fork() immediately copy the entire process heap in Linux? |
1,402,510,190,000 |
When a child is forked, it inherits the parent's file descriptors. If the child closes a file descriptor, what will happen? If the child starts writing, what will happen to the file at the parent's end? Who manages these inconsistencies, the kernel or the user?
When a process calls the close function on a particular file descriptor, the reference count in the process's file table is decremented by one. But since parent and child are both holding the same file, the reference count is 2, and after the close it reduces to 1. Since it is not zero, the process can still continue to use the file without any problem.
See Terrence Chan UNIX system programming,(Unix kernel support for Files).
|
When a child is forked, it inherits the parent's file descriptors. If the child closes the file descriptor, what will happen?
It inherits a copy of the file descriptor. So closing the descriptor in the child will close it for the child, but not the parent, and vice versa.
If the child starts writing, what will happen to the file at the parent's end? Who manages these inconsistencies, the kernel or the user?
It's exactly (as in, exactly literally) the same as two processes writing to the same file. The kernel schedules the processes independently, so you will likely get interleaved data in the file.
However, POSIX (to which *nix systems largely or completely conform) stipulates that the read() and write() functions from the C API (which map to system calls) are "atomic with respect to each other [...] when they operate on regular files or symbolic links". The GNU C manual also provisionally promises this with regard to pipes (note the default PIPE_BUF, which is part of the proviso, is 64 kiB). This means that calls in other languages/tools, such as use of echo or cat, should be included in that contract, so if two independent processes try to write "hello" and "world" simultaneously to the same pipe, what will come out the other end is either "helloworld" or "worldhello", and never something like "hweolrllod".
When a process calls the close function to close a particular open file through a file descriptor, the file table of the process decrements the reference count by one. But since parent and child are both holding the same file (the reference count is 2, and after the close it reduces to 1), since it is not zero the process still continues to use the file without any problem.
There are TWO processes, the parent and the child. There is no "reference count" common to both of them. They are independent. WRT what happens when one of them closes a file descriptor, see the answer to the first question.
| File descriptor and fork |
1,402,510,190,000 |
First this question is related but definitely not the same as this very nice question:
Difference between nohup, disown and &
I want to understand something: when I do '&', I'm forking, right?
Is it ever useful to do "nohup ... &" or is simply & sufficient?
Could someone show a case where you'd be using '&' and still would want to use 'nohup'?
|
First of all, every time you execute a command, your shell will fork a new process, regardless of whether you run it with & or not. & only means you're running it in the background.
Note this is not very accurate. Some commands, like cd, are shell builtins and will usually not fork a new process. type cmd will usually tell you whether cmd is an external command or a builtin. type type tells you that type itself is a shell builtin.
nohup is something different. It tells the new process to ignore SIGHUP. That is the signal sent when the controlling terminal is closed.
To answer your question do the following:
run emacs & (by default should run in a separate X window).
on the parent shell, run exit.
You'll notice that the emacs window is killed, despite running in the background. This is the default behavior and nohup is used precisely to modify that.
Running a job in the background (with & or bg; I bet other shells have other syntaxes as well) is a shell feature, stemming from the ability of modern systems to multitask. Instead of forking a new shell instance for every program you want to launch, modern shells (bash, zsh, ksh, ...) have the ability to manage a list of programs (or jobs). Only one of them at a time can be in the foreground, meaning it gets the shell focus. I wish someone could expand more on the differences between a process running in the foreground and one in the background (the main one being access to stdin/stdout).
In any case, this does not affect the way the child process reacts to SIGHUP. nohup does.
| When do you need 'nohup' if you're already forking using '&'? |
1,402,510,190,000 |
According to Wikipedia (which could be wrong)
When a fork() system call is issued, a copy of all the pages corresponding to the parent process is created, loaded into a separate memory location by the OS for the child process. But this is not needed in certain cases. Consider the case when a child executes an "exec" system call (which is used to execute any executable file from within a C program) or exits very soon after the fork(). When the child is needed just to execute a command for the parent process, there is no need for copying the parent process' pages, since exec replaces the address space of the process which invoked it with the command to be executed.
In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process's pages are not copied for the child process. Instead, the pages are shared between the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for that process (parent or child) which performed the modification. This process will then use the newly copied page rather than the shared one in all future references. The other process (the one which did not modify the shared page) continues to use the original copy of the page (which is now no longer shared). This technique is called copy-on-write since the page is copied when some process writes to it.
It seems that when either of the processes tries to write to the page a new copy of the page gets allocated and assigned to the process that generated the page fault. The original page gets marked writable afterwards.
My question is: what happens if the fork() gets called multiple times before any of the processes made an attempt to write to a shared page?
|
Nothing particular happens. All processes are sharing the same set of pages and each one gets its own private copy when it wants to modify a page.
| How does copy-on-write in fork() handle multiple fork? |
1,402,510,190,000 |
I'm trying to learn UNIX programming and came across a question regarding fork(). I understand that fork() creates an identical process of the currently running process, but where does it start? For example, if I have code
int main (int argc, char **argv)
{
int retval;
printf ("This is most definitely the parent process\n");
fflush (stdout);
retval = fork ();
printf ("Which process printed this?\n");
return (EXIT_SUCCESS);
}
The output is:
This is most definitely the parent process
Which process printed this?
Which process printed this?
I thought that fork() creates a same process, so I initially thought that in that program, the fork() call would be recursively called forever. I guess that new process created from fork() starts after the fork() call?
If I add the following code, to differentiate between a parent and child process,
if (child_pid = fork ())
printf ("This is the parent, child pid is %d\n", child_pid);
else
printf ("This is the child, pid is %d\n", getpid ());
after the fork() call, where does the child process begin its execution?
|
The new process will be created within the fork() call, and will start by returning from it just like the parent. The return value (which you stored in retval) from fork() will be:
0 in the child process
The PID of the child in the parent process
-1 in the parent if there was a failure (there is no child, naturally)
Your testing code works correctly; it stores the return value from fork() in child_pid and uses if to check if it's 0 or not (although it doesn't check for an error)
| After fork(), where does the child begin its execution? |
1,402,510,190,000 |
WARNING DO NOT ATTEMPT TO RUN THIS ON A PRODUCTION MACHINE
In reading the Wikipedia page on the topic I generally follow what's going on with the following code:
:(){ :|:& };:
excerpt of description
The following fork bomb was presented as art in 2002;56
its exact origin is unknown, but it existed on Usenet prior to 2002.
The bomb is executed by pasting the following 13 characters into a
UNIX shell such as bash or zsh. It operates by defining
a function called ':', which calls itself twice, once in the
foreground and once in the background.
However the last bit isn't entirely clear to me. I see the function definition:
:(){ ... }
But what else is going on? Also do other shells such as ksh, csh, and tcsh also suffer the same fate of being able to construct something similar?
|
This fork bomb always reminds me of something an AI programming teacher said in one of the first lessons I attended: "To understand recursion, first you must understand recursion".
At its core, this bomb is a recursive function. In essence, you create a function which calls itself, which calls itself, which calls itself... until system resources are consumed. In this specific instance, the recursion is amplified by piping the function to itself AND backgrounding it.
I've seen this answered over on StackOverflow, and I think the example given there illustrates it best, just because it's easier to see what it does at a glance (stolen from the link above...)
☃(){ ☃|☃& };☃
Define the bug function ☃() { ... }, the body of which calls itself (the bug function), piping the output to itself (the bug function) ☃|☃, and background the result with &. Then, after the function is defined, actually call the bug function: ; ☃.
I note that at least on my Arch VM, the need to background the process is not a requirement to have the same end result, to consume all available process space and render the host b0rked. Actually now I've said that it seems to sometimes terminate the run away process and after a screenful of -bash: fork: Resource temporarily unavailable it will stop with a Terminated (and journalctl shows bash core dumping).
To answer your question about csh/tcsh, neither of those shells support functions, you can only alias. So for those shells you'd have to write a shell script which calls itself recursively.
zsh seems to suffer the same fate (with the same code), does not core dump and causes Arch to give Out of memory: Kill process 216 (zsh) score 0 or sacrifice child., but it still continues to fork. After a while it then states Killed process 162 (systemd-logind) ... (and still continues to have a forking zsh).
Arch doesn't seem to have a pacman version of ksh, so I had to try it on debian instead. ksh objects to : as a function name, but using something - say b() instead seems to have the desired result.
| How does a fork bomb work? |
1,402,510,190,000 |
I don't have much experience, just trying to get involved into the processes how do they interpret to hardware from user level.
So when a command is fired from a shell, fork() inherits a child process of it and exec() loads the child process to the memory and executes.
If the child process contains all the attributes of the parent process (which is the original process), then what is the need of this child process? The original process also could have been loaded to the memory.
Does this fork and exec concept apply to all the executable program in UNIX? Like for shell script also or only for commands? Does it also apply for shell builtin commands?
When is the copy on write concept used if I'll execute a command/script?
Sorry for asking many questions at a time, but all these questions come to my mind at once when I think about any command execution.
|
So when a command is fired from a shell, fork() inherits a child
process of it and exec() loads the child process to the memory and
executes.
Not quite. fork() clones the current process, creating an identical child. exec() loads a new program into the current process, replacing the existing one.
My question is:
If the child process contains all the attributes of the parent process(which is the original process), then what is the need of this
child process? The original process also could have been loaded to the
memory.
The need is because the parent process does not want to terminate yet; it wants a new process to go off and do something at the same time that it continues to execute as well.
Does this fork and exec concept apply to all the executable program in
UNIX?Like for shell script also or only for commands? Does it also
apply for shell builtin commands?
For external commands, the shell does a fork() so that the command runs in a new process. Builtins are just run by the shell directly. Another notable command is exec, which tells the shell to exec() the external program without first fork()ing. This means that the shell itself is replaced with the new program, and so is no longer there for that program to return to when it exits. If you say, exec true, then /bin/true will replace your shell, and immediately exit, leaving nothing running in your terminal anymore, so it will close.
when copy on write concept is used if I'll execute a command/script?
Back in the stone age, fork() actually had to copy all of the memory in the calling process to the new process. Copy on Write is an optimization where the page tables are set up so that the two processes start off sharing all of the same memory, and only the pages that are written to by either process are copied when needed.
| How do fork and exec work? |
1,402,510,190,000 |
Passing a password on command line (to a child process started from my program) is known to be insecure (because it can be seen even by other users with ps command). Is it OK to pass it as an environment variable instead?
What else can I use to pass it (other than an environment variable)? The easiest solution seems to be a pipe, but even that "easiest" solution is not easy.
I program in Perl.
|
Process arguments are visible to all users, but the environment is only visible to the same user (at least on Linux, and I think on every modern unix variant). So passing a password through an environment variable is safe. If someone can read your environment variables, they can execute processes as you, so it's game over already.
The contents of the environment is at some risk of leaking indirectly, for example if you run ps to investigate something and accidentally copy-paste the result including confidential environment variables in a public place. Another risk is that you pass the environment variable to a program that doesn't need it (including children of the process that needs the password) and that program exposes its environment variables because it didn't expect them to be confidential. How bad these risks of secondary leakage are depends on what the process with the password does (how long does it run? does it run subprocesses?).
It's easier to ensure that the password won't leak accidentally by passing it through a channel that is not designed to be eavesdropped, such as a pipe. This is pretty easy to do on the sending side. For example, if you have the password in a shell variable, you can just do
echo "$password" | theprogram
if theprogram expects the password on its standard input. Note that this is safe because echo is a builtin; it would not be safe with an external command since the argument would be exposed in ps output. Another way to achieve the same effect is with a here document:
theprogram <<EOF
$password
EOF
Some programs that require a password can be told to read it from a specific file descriptor. You can use a file descriptor other than standard input if you need standard input for something else. For example, with gpg:
get-encrypted-data | gpg --passphrase-fd 3 --decrypt … 3<<EOP >decrypted-data
$password
EOP
If the program can't be told to read from a file descriptor but can be told to read from a file, you can point it at a file descriptor by using a file name like /dev/fd/3.
theprogram --password-from-file=/dev/fd/3 3<<EOF
$password
EOF
In ksh, bash or zsh, you can do this more concisely through process substitution.
theprogram --password-from-file=<(echo "$password")
| How to pass a password to a child process? |
1,402,510,190,000 |
I program that I wrote in C fork()'s off a child process. Neither process will terminate. If I launch the program from the command line and press control-c which process(es) will receive the interrupt signal?
|
Why don't we try it out and see? Here's a trivial program using signal(3) to trap SIGINT in both the parent and child process and print out a message identifying the process when it arrives.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
void parent_trap(int sig) {fprintf(stderr, "They got back together!\n");}
void child_trap(int sig) {fprintf(stderr, "Caught signal in CHILD.\n");}
int main(int argc, char **argv) {
if (!fork()) {
signal(SIGINT, &child_trap);
sleep(1000);
exit(0);
}
signal(SIGINT, &parent_trap);
sleep(1000);
return 0;
}
Let's call that test.c. Now we can run it:
$ gcc test.c
$ ./a.out
^CCaught signal in CHILD.
They got back together!
Interrupt signals generated in the terminal are delivered to the foreground process group, which here includes both parent and child. You can see that both child_trap and parent_trap were executed when I pressed Ctrl-C.
There is a lengthy discussion of interactions between fork and signals in POSIX. The most material part of it here is that:
A signal sent to the process group after the fork() should be delivered to both parent and child.
They also note that some systems may not behave in exactly the correct way, in particular when the signal arrives very close to the time of the fork(). Figuring out whether you're on one of those systems is probably going to require reading the code or a lot of luck, because the interactions are vanishingly unlikely in each individual attempt.
Other useful points are that:
A signal manually generated and sent to an individual process (perhaps with kill) will be delivered only to that process, regardless of whether it is the parent or child.
The order that the signal handlers run between processes is not defined, so you can't rely on either executing first.
If you don't define an interrupt handler (or explicitly ignore the signal), both processes would just exit with a SIGINT code (the default behaviour).
If one handles the signal non-fatally and the other doesn't, the one without the handler will die and the other will continue.
| fork() and how signals are delivered to processes |
1,402,510,190,000 |
I've written quite a few shell scripts over the years (but I'm certainly not a sysadmin) and there's something that always caused me troubles: how can I fork a shell command immune to hangups in the background from a Bash script?
For example if I have this:
command_which_takes_time input > output
How can I "nohup" and fork this?
The following doesn't seem to do what I want:
nohup command_which_takes_time input > output &
What is the syntax I am looking for and what am I not understanding?
|
Try creating subshell with (...) :
( command_which_takes_time input > output ) &
Example:
~$ ( (sleep 10; date) > /tmp/q ) &
[1] 19521
~$ cat /tmp/q # ENTER
~$ cat /tmp/q # ENTER
(...) #AFTER 10 seconds
~$ cat /tmp/q #ENTER
Wed Jan 11 01:35:55 CET 2012
[1]+ Done ( ( sleep 10; date ) > /tmp/q )
| How to totally fork a shell command that is using redirection |
1,402,510,190,000 |
I found the following function in the source code of catwm (a minimalistic window manager):
void spawn(const Arg arg) {
if(fork() == 0) {
if(fork() == 0) {
if(dis)
close(ConnectionNumber(dis));
setsid();
execvp((char*)arg.com[0],(char**)arg.com);
}
exit(0);
}
}
(see it on github)
I don't understand why not simply
void spawn(const Arg arg) {
if(fork() == 0) {
if(dis)
close(ConnectionNumber(dis));
setsid();
execvp((char*)arg.com[0],(char**)arg.com);
}
}
?
Are there any benefits of using the double fork() here?
|
The following paragraphs, quoted from Stevens and Rago Advanced Programming in the UNIX Environment, describe two of six coding rules for writing a daemon. Specifically, they implement them in a single daemonize function listed in Figure 13.1 in case you want to look it up.
Call fork and have the parent exit. This does several things. First, if the daemon was started as a simple shell command, having the parent terminate makes the shell think that the command is done. Second, the child inherits the process group ID of the parent but gets a new process ID, so we’re guaranteed that the child is not a process group leader. This is a prerequisite for the call to setsid that is done next.
Call setsid to create a new session. The three steps listed in Section 9.5 occur. The process (a) becomes the leader of a new session, (b) becomes the leader of a new process group, and (c) is disassociated from its controlling terminal.
Under System V–based systems, some people recommend calling fork again at this point, terminating the parent, and continuing the daemon in the child. This guarantees that the daemon is not a session leader, which prevents it from acquiring a controlling terminal under the System V rules (Section 9.6). Alternatively, to avoid acquiring a controlling terminal, be sure to specify O_NOCTTY whenever opening a terminal device.
In your changed code, the parent doesn't exit() which will continue its execution after spawn() is called; the exact behavior would depend on what follows the spawn() call.
| Double fork() - why? |
1,402,510,190,000 |
After going through the famous Fork Bomb questions on Askubuntu and many other Stack Exchange sites, I don't quite understand what everyone is saying like it's obvious.
Many answers (Best example) say this:
"{:|: &} means run the function : and send its output to the : function again "
Well, what exactly is the output of : ? What is being passed to the other :?
And also:
Essentially you are creating a function that calls itself twice every call and doesn't have any way to terminate itself.
How exactly is that executed twice? In my opinion, nothing is passed to the second : until the first : finishes its execution, which actually will never end.
In C for example,
foo()
{
foo();
foo(); // never executed
}
the second foo() is not executed at all, just because the first foo() never ends.
I am thinking that the same logic applies to :(){ :|: & };: and
:(){ : & };:
does the same job as
:(){ :|: & };:
Please help me understand the logic.
|
Piping doesn't require that the first instance finishes before the other one starts. Actually, all it is really doing is redirecting the stdout of the first instance to the stdin of the second one, so they can be running simultaneously (as they have to for the fork bomb to work).
Well, what exactly is the output of :? What is being passed to the other :?
':' is not writing anything to the other ':' instance, it's just redirecting the stdout to the stdin of the second instance. If it writes something during its execution (which it never will, since it does nothing but forking itself) it would go to the stdin of the other instance.
It helps to imagine stdin and stdout as a pile:
Whatever is written to the stdin will be piled up ready for when the program decides to read from it, while the stdout works the same way: a pile you can write to, so other programs can read from it when they want to.
That way it's easy to imagine situations like a pipe that has no communication happening (two empty piles) or non-synchronized writes and reads.
How exactly is that executed twice? In my opinion, nothing is passed to the second : until the first : finishes its execution, which actually will never end.
Since we are just redirecting the input and output of the instances, there is no requirement for the first instance to finish before the second one starts. It's actually usually desired that both run simultaneously so the second can work with the data being parsed by the first one on the fly. That's what happens here, both will be called without needing to wait for the first to finish. That applies to all pipe chains lines of commands.
I am thinking that the same logic applies to :(){ :|: & };: and
:(){ : & };:
Does the same job as
:(){ :|: & };:
The first one wouldn't work, because even though it's running itself recursively, the function is being called in the background (: &). The first : doesn't wait until the "child" : returns before ending itself, so in the end you'd probably only have one instance of : running. If you had :(){ : };: it would work though, since the first : would wait for the "child" : to return, which would wait for its own "child" : to return, and so on.
Here's how different commands would look like in terms of how many instances would be running:
:(){ : & };:
1 instance (calls : and quits) -> 1 instance (calls : and quits) -> 1 instance (calls : and quits) -> 1 instance -> ...
:(){ :|: &};:
1 instance (calls 2 :'s and quits) -> 2 instances (each one calls 2 :'s and quits) -> 4 instances (each one calls 2 :'s and quits) -> 8 instances -> ...
:(){ : };:
1 instance (calls : and waits for it to return) -> 2 instances (child calls another : and waits for it to return) -> 3 instances (child calls another : and waits for it to return) -> 4 instances -> ...
:(){ :|: };:
1 instance (calls 2 :'s and waits for them to return) -> 3 instances (children calls 2 :'s each and wait for them to return) -> 7 instances (children calls 2 :'s each and wait for them to return) -> 15 instances -> ...
As you can see, calling the function in the background (using &) actually slows the fork bomb, because the callee will quit before the called functions returns.
| How exactly does the typical shell "fork bomb" call itself twice? |
1,402,510,190,000 |
I get how a normal fork bomb works, but I don't really understand why the & at the end of the common bash fork bomb is required and why these scripts behave differently:
:(){ (:) | (:) }; :
and
:(){ : | :& }; :
The former causes a cpu usage spike before throwing me back to the login screen. The latter instead just causes my system to freeze up, forcing me to hard reboot. Why is that? Both continually create new processes, so why does the system behave differently?
Both of the scripts also behave differently from
:(){ : | : }; :
which doesn't cause any problems at all, even though I would have expected them to be alike. The bash manual page states that the commands in a pipeline are already executed in a subshell, so I'm led to believe that : | : should already suffice. I belive & should just run the pipeline in a new subshell, but why does that change so much?
Edit:
Using htop and limiting the amount of processes, I was able to see that the first variant creates an actual tree of processes, the second variant creates all the processes on the same level and the last variant doesn't seem to create any processes at all. This confuses me even more, but maybe it helps somehow?
|
WARNING DO NOT ATTEMPT TO RUN THIS ON A PRODUCTION MACHINE. JUST DON'T.
Warning: To try any "bombs" make sure ulimit -u is in use. Read below[a].
Let's define a function to get the PID and date (time):
bize:~$ d(){ printf '%7s %07d %s\n' "$1" "$BASHPID" "$(date +'%H:%M:%S')"; }
A simple, non-issue bomb function for the new user (protect yourself: read [a]):
bize:~$ bomb() { d START; echo "yes"; sleep 1; d END; } >&2
When that function is called to be executed works as this:
bize:~$ bomb
START 0002786 23:07:34
yes
END 0002786 23:07:35
bize:~$
The command date is executed, then a "yes" is printed, an sleep for 1 second, then the closing command date, and, finally, the function exits printing a new command prompt. Nothing fancy.
| pipe
When we call the function like this:
bize:~$ bomb | bomb
START 0003365 23:11:34
yes
START 0003366 23:11:34
yes
END 0003365 23:11:35
END 0003366 23:11:35
bize:~$
Two commands get started at same time, both commands will end 1 second later and then the prompt returns.
That's the reason for the pipe |, to start two processes in parallel.
& background
If we change the call adding an ending &:
bize:~$ bomb | bomb &
[1] 3380
bize:~$
START 0003379 23:14:14
yes
START 0003380 23:14:14
yes
END 0003379 23:14:15
END 0003380 23:14:15
The prompt returns immediately (all the action is sent to the background) and the two commands get executed as before.
Please note the value of "job number" [1] printed before the PID of the process 3380.
Later, the same number will be printed to indicate that the pipe has ended:
[1]+ Done bomb | bomb
That is the effect of &.
That is the reason for the &: to get processes started faster.
Simpler name
We can create a function called simply b to execute the two commands. Typed in three lines:
bize:~$ b(){
> bomb | bomb
> }
And executed as:
bize:~$ b
START 0003563 23:21:10
yes
START 0003564 23:21:10
yes
END 0003564 23:21:11
END 0003563 23:21:11
Note that we used no ; in the definition of b (the newlines were used to separate elements).
However, for a definition on one line, it is usual to use ;, like this:
bize:~$ b(){ bomb | bomb ; }
Most of the spaces are also not mandatory, we can write the equivalent (but less clear):
bize:~$ b(){ bomb|bomb;}
We can also use a & to separate the } (and send the two processes to the background).
The bomb.
If we make the function bite its tail (by calling itself), we get the "fork bomb":
bize:~$ b(){ b|b;} ### May look better as b(){ b | b ; } but does the same.
And to make it call more functions faster, send the pipe to the background.
bize:~$ b(){ b|b&} ### Usually written as b(){ b|b& }
If we append the first call to the function after a required ; and change the name to : we get:
bize:~$ :(){ :|:&};:
Usually written as :(){ :|:& }; :
Or, written in a fun way, with some other name (a snow-man):
☃(){ ☃|☃&};☃
The ulimit (which you should have set before running this) will make the prompt return quite quickly after a lot of errors (press enter when the error list stops to get the prompt).
The reason this is called a "fork bomb" is that the shell starts each new command by forking the running shell and then calling exec() in the forked process with the command to run.
A pipe will "fork" two new processes. Doing it to infinity causes a bomb.
Or a "rabbit", as it was originally called, because it reproduces so quickly.
Timing:
:(){ (:) | (:) }; time :
Terminated
real 0m45.627s
:(){ : | :; }; time :
Terminated
real 0m15.283s
:(){ : | :& }; time :
real 0m00.002 s
Still Running
Your examples:
:(){ (:) | (:) }; :
Here the second closing ) before the } acts as the separator; this is a more complex version of :(){ :|:;};:.
Each command in a pipe is run inside a sub-shell anyway, which is exactly the effect of the ().
:(){ : | :& }; :
Is the faster version, written to have no spaces: :(){(:)|:&};: (13 characters).
:(){ : | : }; : ### works in zsh but not in bash.
Has a syntax error (in bash), a metacharacter is needed before the closing },
as this:
:(){ : | :; }; :
[a]
Create a new clean user (I'll call mine bize).
Login to this new user in a console either sudo -i -u bize, or:
$ su - bize
Password:
bize:~$
Check and then change the max user processes limit:
bize:~$ ulimit -a ### List all limits (I show only `-u`)
max user processes (-u) 63931
bize:~$ ulimit -u 10 ### Low
bize:~$ ulimit -a
max user processes (-u) 10
Using only 10 works here because there is just one solitary new user: bize. It makes it easier to call killall -u bize and rid the system of most (not all) bombs. Please do not ask which ones still work, I will not tell.
Still: 10 is quite low, but on the safe side; adapt it to your system.
This will ensure that a "fork bomb" will not collapse your system.
Further reading:
About forking in the bash "fork bomb"
How does a fork bomb work?
Why is the pipe needed?
Understanding Bash fork() Bomb ~ :(){ :|:& };:
| Why do these bash fork bombs work differently and what is the significance of & in it? |
1,402,510,190,000 |
On his web page about the self-pipe trick, Dan Bernstein explains a race condition with select() and signals, offers a workaround and concludes that
Of course, the Right Thing would be to have fork() return a file descriptor, not a process ID.
What does he mean by this -- is it something about being able to select() on child processes to handle their state changes instead of having to use a signal handler to get notified of those state changes?
|
The problem is described there in your source, select() should be interrupted by signals like SIGCHLD, but in some cases it doesn't work that well. So the workaround is to have signal write to a pipe, which is then watched by select(). Watching file descriptors is what select() is for, so that works around the problem.
The workaround essentially turns the signal event into a file descriptor event. If fork() just returned an fd in the first place, the workaround would not be required, as that fd could then presumably be used directly with select().
So yes, your description in the last paragraph seems right to me.
Another reason that an fd (or some other kind of a kernel handle) would be better than a plain process id number, is that PIDs can get reused after the process dies. That can be a problem in some cases when sending signals to processes, it might not be possible to know for sure that the process is the one you think it is, and not another one reusing the same PID. (Though I think this shouldn't be a problem when sending signals to a child process, since the parent has to run wait() on the child for its PID to be released.)
| Why should fork() have been designed to return a file descriptor? |
1,402,510,190,000 |
I want to write my own systemd unit files to manage really long running commands1 (in the order of hours). While looking the ArchWiki article on systemd, it says the following regarding choosing a start up type:
Type=simple (default): systemd considers the service to be started up immediately. The process must not fork. Do not use this type if other services need to be ordered on this service, unless it is socket activated.
Why must the process not fork at all? Is it referring to forking in the style of the daemon summoning process (parent forks, then exits), or any kind of forking?
1 I don't want tmux/screen because I want a more elegant way of checking status and restarting the service without resorting to tmux send-keys.
|
The service is allowed to call the fork system call. Systemd won't prevent it, or even notice if it does. This sentence is referring specifically to the practice of forking at the beginning of a daemon to isolate the daemon from its parent process. “The process must not fork [and exit the parent while running the service in a child process]”.
The man page explains this more verbosely, and with a wording that doesn't lead to this particular confusion.
Many programs that are meant to be used as daemons have a mode (often the default mode) where when they start, they isolate themselves from their parent. The daemon starts, calls fork(), and the parent exits. The child process calls setsid() so that it runs in its own process group and session, and runs the service. The purpose is that if the daemon is invoked from a shell command line, the daemon won't receive any signal from the kernel or from the shell even if something happens to the terminal such as the terminal closing (in which case the shell sends SIGHUP to all the process groups it knows of). This also causes the servicing process to be adopted by init, which will reap it when it exits, avoiding a zombie if the daemon was started by something that wouldn't wait() for it (this wouldn't happen if the daemon was started by a shell).
When a daemon is started by a monitoring process such as systemd, forking is counterproductive. The monitoring process is supposed to restart the service if it crashes, so it needs to know if the service exits, and that's difficult if the service isn't a direct child of the monitoring process. The monitoring process is not supposed to ever die and does not have a controlling terminal, so there are no concerns around unwanted signals or reaping. Thus there's no reason for the service process not to be a child of the monitor, and there's a good reason for it to be.
| Why "the process must not fork" for simple type services in systemd? |
1,402,510,190,000 |
The standard way of making new processes in Linux is that the memory footprint of the parent process is copied and that becomes the environment of the child process until execv is called.
What memory footprint are we talking about, the virtual (what the process requested) or the resident one (what is actually being used)?
Motivation: I have a device with limited swap space and an application with a big difference between virtual and resident memory footprint. The application can't fork due to lack of memory and would like to see if trying to reduce the virtual footprint size would help.
|
In modern systems none of the memory is actually copied just because a fork system call is used. It is all marked read only in the page table such that on first attempt to write a trap into kernel code will happen. Only once the first process attempt to write will the copying happen.
This is known as copy-on-write.
However it may be necessary to keep track of committed address space as well. If no memory or swap is available at the time the kernel has to copy a page, it has to kill some process to free memory. This is not always desirable, so it is possible to keep track of how much memory the kernel has committed to.
If the kernel would commit to more than the available memory + swap, it can give an error code on attempt to call fork. If enough is available the kernel will commit to the full virtual size of the parent for both processes after the fork.
| When a process forks is its virtual or resident memory copied? |
1,402,510,190,000 |
Why does ls require a separate process for its execution?
I know the reason why commands like cd can't be executed by forking mechanism but is there any harm if ls is executed without forking?
|
The answer is more or less that ls is an external executable. You can see its location by running type -p ls.
Why isn't ls built into the shell, then? Well, why should it be? The job of a shell is not to encompass every available command, but to provide an environment capable of running them. Some modern shells have echo, printf, and their ilk as builtins, which don't technically have to be builtins, but are made so for performance reasons when they are run repeatedly (primarily in tight loops). Without making them builtins, the shell would have to fork and exec a new process for each call to them, which could be extremely slow.
At the very least, running ls, an external executable, requires running one of the exec family of system calls. You could do this without forking, but it would replace the primary shell that you are using. You can see what happens in that instance by doing the following:
exec ls; echo "this never gets printed"
Since your shell's process image is replaced, the current shell is no longer accessible after doing this. For the shell to be able to continue to run after running ls, the command would have to be built into the shell.
Forking allows the replacement of a process that is not your primary shell, which means you can continue to run your shell afterwards.
| Why does "ls" require a separate process for executing? |
1,402,510,190,000 |
I would like to understand in detail the difference between fork() and vfork(). I was not able to digest the man page completely.
I would also like to clarify one of my colleague's comments: "In current Linux, there is no vfork(); even if you call it, it will internally call fork()."
|
Man pages are usually terse reference documents. Wikipedia is a better place to turn to for conceptual explanations.
Fork duplicates a process: it creates a child process which is almost identical to the parent process (the most obvious difference is that the new process has a different process ID). In particular, fork (conceptually) must copy all the parent process's memory.
As this is rather costly, vfork was invented to handle a common special case where the copy is not necessary. Often, the first thing the child process does is to load a new program image, so this is what happens:
if (fork()) {
# parent process …
} else {
# child process (with a new copy of the process memory)
execve("/bin/sh", …); # discard the process memory
}
The execve call loads a new executable program, and this replaces the process's code and data memory by the code of the new executable and a fresh data memory. So the whole memory copy created by fork was all for nothing.
Thus the vfork call was invented. It does not make a copy of the memory. Therefore vfork is cheap, but it's hard to use since you have to make sure you don't access any of the process's stack or heap space in the child process. Note that even reading could be a problem, because the parent process keeps executing. For example, this code is broken (it may or may not work depending on whether the child or the parent gets a time slice first):
if (vfork()) {
# parent process
cmd = NULL; # modify the only copy of cmd
} else {
# child process
execve("/bin/sh", "sh", "-c", cmd, (char*)NULL); # read the only copy of cmd
}
Since the invention of vfork, better optimizations have been invented. Most modern systems, including Linux, use a form of copy-on-write, where the pages in the process memory are not copied at the time of the fork call, but later when the parent or child first writes to the page. That is, each page starts out as shared, and remains shared until either process writes to that page; the process that writes gets a new physical page (with the same virtual address). Copy-on-write makes vfork mostly useless, since fork won't make any copy in the cases where vfork would be usable.
Linux does retain vfork. The fork system call must still make a copy of the process's virtual memory table, even if it doesn't copy the actual memory; vfork doesn't even need to do this. The performance improvement is negligible in most applications.
| What's the difference between fork() and vfork()? |
1,402,510,190,000 |
From the man page of vfork():
vfork() differs from fork() in that
the parent is suspended until the
child makes a call to execve(2) or
_exit(2). The child shares all memory with its parent, including the stack,
until execve() is issued by the child.
The child must not return from the
current function or call exit(), but
may call _exit().
Why should the child use an _exit() rather than simply calling exit()? I hope this is applicable to both vfork() and fork().
|
As seen earlier, after vfork the child must not modify the memory it shares with its parent. exit is a C library function (that's why it's often written as exit(3)). It performs various cleanup tasks such as flushing and closing C streams (the files opened through functions declared in stdio.h) and executing user-specified functions registered with atexit. All these tasks involve reading and writing to the process memory.
_exit exits without cleanup. It's directly a system call (which is why it's written as _exit(2)), typically implemented by placing the system call number in a processor register and executing a particular processor instruction (branching to the system call handler). This doesn't need to access the process memory, so it's safe to do after vfork.
After fork, there is no such restriction: the parent and child process are now completely autonomous.
| Why should a child of a vfork or fork call _exit() instead of exit()? |
1,402,510,190,000 |
So I can run a process in Unix / Linux using POSIX, but is there some way I can store / redirect both the STDOUT and STDERR of the process to a file? The spawn.h header contains a declaration of posix_spawn_file_actions_adddup2 which looks relevant, but I'm not sure quite how to use it.
The process spawn:
posix_spawn(&processID, (char *)"myprocess", NULL, NULL, args, environ);
The output storage:
...?
|
Here's a minimal example of modifying file descriptors of a spawned process, saved as foo.c:
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <spawn.h>
int main(int argc, char* argv[], char *env[])
{
int ret;
pid_t child_pid;
posix_spawn_file_actions_t child_fd_actions;
if (ret = posix_spawn_file_actions_init (&child_fd_actions))
perror ("posix_spawn_file_actions_init"), exit(ret);
if (ret = posix_spawn_file_actions_addopen (&child_fd_actions, 1, "/tmp/foo-log",
O_WRONLY | O_CREAT | O_TRUNC, 0644))
perror ("posix_spawn_file_actions_addopen"), exit(ret);
if (ret = posix_spawn_file_actions_adddup2 (&child_fd_actions, 1, 2))
perror ("posix_spawn_file_actions_adddup2"), exit(ret);
if (ret = posix_spawnp (&child_pid, "date", &child_fd_actions, NULL, argv, env))
perror ("posix_spawn"), exit(ret);
}
What does it do?
The third parameter of posix_spwan is a pointer of type posix_spawn_file_actions_t (one you have given as NULL). posix_spawn will open, close or duplicate file descriptors inherited from the calling process as specified by the posix_spawn_file_actions_t object.
So we start with a posix_spawn_file_actions_t object (child_fd_actions), and initialize it with posix_spawn_file_actions_init().
Now, the posix_spawn_file_actions_{addopen,addclose,adddup2} functions can be used to open, close or duplicate file descriptors (after the open(3), close(3) and dup2(3) functions) respectively.
So we posix_spawn_file_actions_addopen a file at /tmp/foo-log to file descriptor 1 (aka stdout).
Then we posix_spawn_file_actions_adddup2 fd 1 onto fd 2 (aka stderr), so stderr goes to the same place as stdout.
Note that nothing has been opened or duped yet. The last two functions simply changed the child_fd_actions object to note that these actions are to be taken.
And finally we use posix_spawn with the child_fd_actions object.
Testing it out:
$ make foo
cc foo.c -o foo
$ ./foo
$ cat /tmp/foo-log
Sun Jan 3 03:48:17 IST 2016
$ ./foo +'%F %R'
$ cat /tmp/foo-log
2016-01-03 03:48
$ ./foo -d 'foo'
$ cat /tmp/foo-log
./foo: invalid date ‘foo’
As you can see, both stdout and stderr of the spawned process went to /tmp/foo-log.
| Get output of `posix_spawn` |
1,402,510,190,000 |
I'm learning about fork() and exec() commands. It seems like fork() and exec() are usually called together. (fork() creates a new child process, and exec() replaces the current process image with a new one.) However, in what scenarios might you call each function on its own? Are there scenarios like these?
|
Sure! A common pattern in "wrapper" programs is to do various things and then replace itself with some other program with only an exec call (no fork)
#!/bin/sh
export BLAH_API_KEY=blub
...
exec /the/thus/wrapped/program "$@"
A real-life example of this is GIT_SSH (though git(1) does also offer GIT_SSH_COMMAND if you do not want to do the above wrapper program method).
Fork-only is used when spawning a bunch of typically worker processes (e.g. Apache httpd in fork mode (though fork-only better suits processes that need to burn up the CPU and not those that twiddle their thumbs waiting for network I/O to happen)) or for privilege separation used by sshd and other programs on OpenBSD (no exec)
$ doas pkg_add pstree
...
$ pstree | grep sshd
|-+= 70995 root /usr/sbin/sshd
| \-+= 28571 root sshd: jhqdoe [priv] (sshd)
| \-+- 14625 jhqdoe sshd: jhqdoe@ttyp6 (sshd)
The root sshd has on client connect forked off a copy of itself (28571) and then another copy (14625) for the privilege separation.
| When to call fork() and exec() by themselves? |
1,402,510,190,000 |
When you fork a process, the child inherits its parent's file descriptors. I understand that when this happens, the child receives a copy of the parent's file descriptor table with the pointers in each pointing to the same open file description. Is this the same thing as a file table, as in http://en.wikipedia.org/wiki/File_descriptor, or something else?
|
I found the answer in documentation for the open system call:
The term open file description is the one used by POSIX to refer to the entries in the system-wide table of open files. In other contexts, this object is variously also called an "open file object", a "file handle", an "open file table entry", or—in kernel-developer parlance—a struct file. When a file descriptor is duplicated (using dup(2) or similar), the duplicate refers to the same open file description as the original file descriptor, and the two file descriptors consequently share the file offset and file status flags. Such sharing can also occur between processes: a child process created via fork(2) inherits duplicates of its parent's file descriptors, and those duplicates refer to the same open file descriptions. Each open(2) of a file creates a new open file description; thus, there may be multiple open file descriptions corresponding to a file inode.
| What is an open file description? |
1,402,510,190,000 |
I have two bash scripts that try to check hosts that are up:
Script 1:
#!/bin/bash
for ip in {1..254}; do
ping -c 1 192.168.1.$ip | grep "bytes from" | cut -d" " -f 4 | cut -d ":" -f 1 &
done
Script 2:
#!/bin/bash
for ip in {1..254}; do
host=192.168.1.$ip
(ping -c 1 $host > /dev/null
if [ "$?" = 0 ]
then
echo $host
fi) &
done
As I am checking a large range, I would like to process each ping command in parallel. However, my second script seems to not retry failed fork attempts due to resource limits. This results in the second script having inconsistent results while my first script gives consistent results despite both failing to fork at times. Can someone explain this to me? Also, is there any way to retry failed forks?
|
There is already an answer which gives an improved code snippet for the task the original poster's question was related to, while it might not yet have responded more directly to the question.
The question is about differences of
A) Backgrounding a "command" directly, vs
B) Putting a subshell into the background (i.e with a similar task)
Let's check those differences by running 2 tests
# A) Backgrounding a command directly
sleep 2 & ps
outputs
[1] 4228
PID TTY TIME CMD
4216 pts/8 00:00:00 sh
4228 pts/8 00:00:00 sleep
while
# B) Backgrounding a subshell (with a similar task)
( sleep 2; ) & ps
outputs something like:
[1] 3252
PID TTY TIME CMD
3216 pts/8 00:00:00 sh
3252 pts/8 00:00:00 sh
3253 pts/8 00:00:00 ps
3254 pts/8 00:00:00 sleep
**Test results:**
In this test (which ran only a sleep 2) the subshell version indeed differs, as it uses 2 child processes (i.e. two fork()/exec operations and PIDs) and hence more than the direct backgrounding of the command.
In script 1 of the question, however, the command was not a single sleep 2s but a pipe of 4 commands, which we can test as an additional case
C) Backgrounding a pipe with 4 commands
# C) Backgrounding a pipe with 4 commands
sleep 2s | sleep 2s | sleep 2s | sleep 2s & ps
yields this
[2] 3265
PID TTY TIME CMD
3216 pts/8 00:00:00 bash
3262 pts/8 00:00:00 sleep
3263 pts/8 00:00:00 sleep
3264 pts/8 00:00:00 sleep
3265 pts/8 00:00:00 sleep
3266 pts/8 00:00:00 ps
and shows that indeed the script 1 would be a much higher strain in terms of PIDs and fork()s.
As a rough estimate, script 1 would have used about 254 * 4 ~= 1000 PIDs and hence even more than script 2 with 254 * 2 ~= 500 PIDs. Any problem occurring because of PID resource depletion seems unlikely, though, since on most Linux boxes
$ cat /proc/sys/kernel/pid_max
32768
gives you 32 times the PIDs needed even in the case of script 1, and the processes/programs involved (i.e. grep, cut, ping, etc.) also seem unlikely to cause the inconsistent results.
As mentioned by user @derobert, the real issue behind the scripts failing was the missing wait command, which means that after backgrounding the commands in the loop, the end of the script and hence of the shell caused all the child processes to be terminated.
| Putting subshell in background vs putting command in background |
1,402,510,190,000 |
In this page from The Design and Implementation of the 4.4BSD Operating System, it is said that:
A major difference between pipes and sockets is that pipes require a
common parent process to set up the communications channel
However, if I recall correctly, the only way to create a new process is to fork an existing one. So I can’t really see how 2 processes could not have a common ancestor. Am I then right to think that any pair of processes can be piped to each other?
|
Am I then right to think that any pair of processes can be piped to each other?
Not really.
The pipes need to be set up by the parent process before the child or children are forked. Once the child process is forked, its file descriptors cannot be manipulated "from the outside" (ignoring things like debuggers), the parent (or any other process) can't do the "set up the comms. channel" part after the fact.
So if you take two random processes that are already running, you can't set up a pipe between them directly. You need to use some form of socket (or another IPC mechanism) to get them to communicate. (But note that some operating systems, FreeBSD among them, allow you to send file descriptors on Unix-domain sockets.)
| Can I pipe any two processes to each other? |
1,402,510,190,000 |
I'm studying 'Operating System Concepts' on my own and I'm currently on the chapter 3 part about processes.
There is an example where the 'fork()' function is called and different branches run depending on the returned pid value, like the following:
pid=fork();
if(pid<0){ //error stuff
}
else if(pid==0){
// child process stuff
}
else{
// parent process stuff
}
What confused me here is that if this code is executed, among the three scenarios of 'if's, only one would be executed which means that only one out of parent/child procedures would be executed.
But reading a bit more carefully, I found a sentence that kind of helped me solve the confusion but not entirely.
The new process consists of a copy of the address space of the
original process.
From my imagination, I guess this means that whenever the 'fork()' call is executed, somehow an exact copy of this code will be duplicated and run as the 'child' process while the original C code runs as the 'parent'.
Am I understanding this right?
Also, what does the 'address space' have to do with this? Again, using my imagination I assume that, since the execution of parent code means that the code is loaded to the RAM and executed where it will have a segment of RAM assigned to the code, this segment will be copied to a new segment somewhere else located in the RAM and be assigned for the child process.
Is my understanding correct?
|
Yes, you are correct.
In particular, this means that the child will inherit all variables from the parent process with the value they had at the moment of the fork. However, if at a later step one of the parent or the child modifies one of these variables, the modification will be local to this process: if the child modify a variable, the parent process will still see the old value and not the new one.
With forks, if you want the child and parent process to communicate you will need to use some explicit inter-process communication.
This is the difference with threads. Conceptually forks and threads look the same: the same code being executed by two processes in the case of forks and two threads in the case of threads. However, in the case of threads the address space will not be copied: the two threads will share the same memory, so if one thread modifies a variable, it will impact all other threads.
Threads therefore allows a very flexible communication between the threads, but this is also very error prone because of the high probability of race conditions if not used carefully.
Both systems address different needs. As a side note, the fork primitive is usually implemented in a clever way on the system side since the address space will not be physically copied; instead the system will use a copy-on-write scheme: data will be duplicated only if one of the processes attempts to actually modify it. While the data is not modified it will not be duplicated and will therefore not consume more memory.
More information regarding forks and thread can be found on StackOverflow.
| what does it mean 'fork()' will copy address space of original process |
1,402,510,190,000 |
I just learned about a fork bomb, an interesting type of denial of service attack. Wikipedia (and a few other places) suggest using :(){ :|:& };: on UNIX machines to fork the process an infinite number of times. However, it doesn't seem to work on Mac OS X Lion (I remember reading that the most popular operating systems are not vulnerable to such a direct attack). I am, however, very curious about how such an attack works (and looks), and would want to try it out on my Mac. Is there a way to go around the system's safeguards, or is it the case that a fork bomb is not possible on Macs?
|
How a fork bomb works: in C (or C-like) code, a function named fork() gets called. This causes linux or Unix or Unix-a-likes to create an entirely new process. This process has an address space, a process ID, a signal mask, open file descriptors, all manner of things that take up space in the OS kernel's somewhat limited memory. The newly created process also gets a spot in the kernel's data structure for processes to run. To the process that called fork(), it looks like nothing happened. A fork-bomb process will try to call fork() as fast as it can, as many times as it can.
The trick is that the newly created process also comes back from fork() in the same code. After a fork, you have two processes running the same code. Each new fork-bomb process tries to call fork() as fast as it can, as many times as it can. The code you've given as an example is a Bash-script version of a fork bomb.
Soon, all the OS kernel's process-related resources get used up. The process table is full. The waiting-to-run list of processes is full. Real memory is full, so paging starts. If this goes on long enough, the swap partition fills up.
What this looks like to a user: everything runs super slowly. You get error messages like "could not create process" when you try simple things like ls. Trying a ps causes an interminable pause (if it runs at all) and gives back a very long list of processes. Sometimes this situation requires a reboot via the power cord.
Fork bombs used to be called "rabbits" back in the old days. Because they reproduced so rapidly.
Just for fun, I wrote a fork bomb program in C:
#include <stdio.h>
#include <unistd.h>
int
main(int ac, char **av)
{
while (1)
fork();
return 0;
}
I compiled and ran that program under Arch Linux in one xterm. In another xterm I tried to get a process list:
1004 % ps -fu bediger
zsh: fork failed: resource temporarily unavailable
The Z shell in the 2nd xterm could not call fork() successfully as the fork bomb processes associated with the 1st xterm had used up all kernel resources related to process creation and running.
| Fork bomb on a Mac? |
1,402,510,190,000 |
This post is basically a follow-up to an earlier question of mine.
From the answer to that question I realized that not only I don't quite understand the whole concept of a "subshell", but more generally, I don't understand the relationship between fork-ing and children processes.
I used to think that when process X executes a fork, a new process Y is created whose parent is X, but according to the answer to that question,
[a] subshell is not a completely new process, but a fork of the existing process.
The implication here is that a "fork" is not (or does not result in) "a completely new process."
I'm now very confused, too confused, in fact, to formulate a coherent question to directly dispel my confusion.
I can however formulate a question that may lead to enlightenment indirectly.
Since, according to zshall(1), $ZDOTDIR/.zshenv gets sourced whenever a new instance of zsh starts, then any command in $ZDOTDIR/.zshenv that results in the creation of "a completely new [zsh] process" would result in an infinite regress. On the other hand, including either of the following lines in a $ZDOTDIR/.zshenv file does not result in an infinite regress:
echo $(date; printenv; echo $$) > /dev/null #1
(date; printenv; echo $$) #2
The only way I found to induce an infinite regress by the mechanism described above was to include a line like the following1 in the $ZDOTDIR/.zshenv file:
$SHELL -c 'date; printenv; echo $$' #3
My questions are:
what difference between the commands marked #1, #2 above and the one marked #3 accounts from this difference in behavior?
if the shells that get created in #1 and #2 are called "subshells", what are those like the one generated by #3 called?
is it possible to rationalize (and maybe generalize) the empirical/anecdotal findings described above in terms of the "theory" (for lack of a better word) of Unix processes?
The motivation for the last question is to be able to determine ahead of time (i.e. without resorting to experimentation) what commands would lead to an infinite regress if they were included in $ZDOTDIR/.zshenv?
1 The particular sequence of commands date; printenv; echo $$ that I used in the various examples above is not too important. They happen to be commands whose output was potentially helpful towards interpreting the results of my "experiments". (I did, however, want these sequences to consist of more than one command, for the reason explained here.)
|
Since, according to zshall(1), $ZDOTDIR/.zshenv gets sourced whenever a new instance of zsh starts
If you focus on the word "starts" here you'll have a better time of things. The effect of fork() is to create another process that begins from exactly where the current process already is. It's cloning an existing process, with the only difference being the return value of fork. The documentation is using "starts" to mean entering the program from the beginning.
Your example #3 runs $SHELL -c 'date; printenv; echo $$', starting an entirely new process from the beginning. It will go through the ordinary startup behaviour. You can illustrate that by, for example, swapping in another shell: run bash -c ' ... ' instead of zsh -c ' ... '. There's nothing special about using $SHELL here.
Examples #1 and #2 run subshells. The shell forks itself and executes your commands inside that child process, then carries on with its own execution when the child is done.
The answer to your question #1 is the above: example 3 runs an entirely new shell from the start, while the other two run subshells. The startup behaviour includes loading .zshenv.
The reason they call this behaviour out specifically, which is probably what leads to your confusion, is that this file (unlike some others) loads in both interactive and non-interactive shells.
To your question #2:
if the shells that get created in #1 and #2 are called "subshells", what are those like the one generated by #3 called?
If you want a name you could call it a "child shell", but really it's nothing. It's no different than any other process you start from the shell, be it the same shell, a different shell, or cat.
To your question #3:
is it possible to rationalize (and maybe generalize) the empirical/anecdotal findings described above in terms of the "theory" (for lack of a better word) of Unix processes?
fork makes a new process, with a new PID, that starts running in parallel from exactly where this one left off. exec replaces the currently-executing code with a new program loaded from somewhere, running from the beginning. When you spawn a new program, you first fork yourself and then exec that program in the child. That is the fundamental theory of processes that applies everywhere, inside and outside of shells.
Subshells are forks, and every non-builtin command you run leads to both a fork and an exec.
Note that $$ expands to the PID of the parent shell in any POSIX-compatible shell, so you may not be getting the output you expect regardless. Note also that zsh aggressively optimises subshell execution anyway, and commonly execs the last command, or doesn't spawn the subshell at all if all the commands are safe without it.
One useful command for testing your intuitions is:
strace -e trace=process -f $SHELL -c ' ... '
That will print to standard error all process-related events (and no others) for the command ... you run in a new shell. You can see what does and does not run in a new process, and where execs occur.
Another possibly-useful command is pstree -h, which will print out and highlight the tree of parent processes of the current process. You can see how many layers deep you are in the output.
| On `fork`, children processes, and "subshells" |
1,402,510,190,000 |
I have a runscript that starts some processes and sends them to the background
mongod & pid_mongo=$!
redis-server & pid_redis=$!
# etc.
All these processes then output concurrently to the same standard output. My question: is it possible to color the output of each forked process differently, so that, for example, one of them outputs in green and the other one in red?
|
You could do this by piping through a filter, it is just a matter of adding appropriate ANSI codes before and after each line:
http://en.wikipedia.org/wiki/ANSI_escape_sequences#Colors
I could not find a tool which actually does this after a few minutes of googling, which is a bit odd considering how easy it would be to write one.
Here's an idea using C:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
/* std=gnu99 required */
// ANSI reset sequence
#define RESET "\033[0m\n"
// length of RESET
#define RLEN 5
// size for read buffer
#define BUFSZ 16384
// max length of start sequence
#define START_MAX 12
void usage (const char *name) {
printf("Usage: %s [-1 N -2 N -b -e | -h]\n", name);
puts("-1 is the foreground color, -2 is the background.\n"
"'N' is one of the numbers below, corresponding to a color\n"
"(if your terminal is not using the standard palette, these may be different):\n"
"\t0 black\n"
"\t1 red\n"
"\t2 green\n"
"\t3 yellow\n"
"\t4 blue\n"
"\t5 magenta\n"
"\t6 cyan\n"
"\t7 white\n"
"-b sets the foreground to be brighter/bolder.\n"
"-e will print to standard error instead of standard out.\n"
"-h will print this message.\n"
);
exit (1);
}
// adds character in place and increments pointer
void appendChar (char **end, char c) {
*(*end) = c;
(*end)++;
}
int main (int argc, char *const argv[]) {
// no point in no arguments...
if (argc < 2) usage(argv[0]);
// process options
const char options[]="1:2:beh";
int opt,
set = 0,
output = STDOUT_FILENO;
char line[BUFSZ] = "\033[", // ANSI escape
*p = &line[2];
// loop thru options
while ((opt = getopt(argc, argv, options)) > 0) {
if (p - line > START_MAX) usage(argv[0]);
switch (opt) {
case '?': usage(argv[0]);
case '1': // foreground color
if (
optarg[1] != '\0'
|| optarg[0] < '0'
|| optarg[0] > '7'
) usage(argv[0]);
if (set) appendChar(&p, ';');
appendChar(&p, '3');
appendChar(&p, optarg[0]);
set = 1;
break;
case '2': // background color
if (
optarg[1] != '\0'
|| optarg[0] < '0'
|| optarg[0] > '7'
) usage(argv[0]);
if (set) appendChar(&p, ';');
appendChar(&p, '4');
appendChar(&p, optarg[0]);
set = 1;
break;
case 'b': // set bright/bold
if (set) appendChar(&p, ';');
appendChar(&p, '1');
set = 1;
break;
case 'e': // use stderr
output = STDERR_FILENO;
break;
case 'h': usage(argv[0]);
default: usage(argv[0]);
}
}
// finish 'start' sequence
appendChar(&p, 'm');
// main loop
// set non-block on input descriptor
int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
// len of start sequence
const size_t slen = p - line,
// max length of data to read
rmax = BUFSZ - (slen + RLEN);
// actual amount of data read
ssize_t r;
// index of current position in output line
size_t cur = slen;
// read buffer
char buffer[rmax];
while ((r = read(STDIN_FILENO, buffer, rmax))) {
if (!r) break; // EOF
if (r < 1) {
if (errno == EAGAIN) continue;
break; // done, error
}
// loop thru input chunk byte by byte
// this is all fine for utf-8
for (int i = 0; i < r; i++) {
if (buffer[i] == '\n' || cur == rmax) {
// append reset sequence
for (int j = 0; j < RLEN; j++) line[j+cur] = RESET[j];
// write out start sequence + buffer + reset
write(output, line, cur+RLEN);
cur = slen;
} else line[cur++] = buffer[i];
}
}
// write out any buffered data
if (cur > slen) {
for (int j = 0; j < RLEN; j++) line[j+cur] = RESET[j];
write(output, line, cur+RLEN);
}
// flush
fsync(output);
// the end
return r;
}
I think that is about as efficient as you are going to get. The write() needs to do an entire line with the ANSI sequences all in one go -- testing this with parallel forks led to interleaving if the ANSI sequences and the buffer content were done separately.
That needs to be compiled -std=gnu99 since getopt is not part of the C99 standard but it is part of GNU. I tested this somewhat with parallel forks; that source, a makefile, and the tests are in a tarball here:
http://cognitivedissonance.ca/cogware/utf8_colorize/utf8_colorize.tar.bz2
If the application you use this with logs to standard error, remember to redirect that too:
application 2>&1 | utf8-colorize -1 2 &
The .sh files in the test directory contain some usage examples.
| Coloring output of forked processes |
1,402,510,190,000 |
When we fork() a process, the child process inherits the file descriptors. The question is, why?
As I am seeing it, sharing the file descriptor is a headache when every process is trying to keep track of where the r/w pointer is.
Why was this design decision taken?
|
POSIX explains the reasoning thus:
There are two reasons why POSIX programmers call fork(). One reason is to create a new thread of control within the same program (which was originally only possible in POSIX by creating a new process); the other is to create a new process running a different program. In the latter case, the call to fork() is soon followed by a call to one of the exec functions.
When fork() is used as a “poor-man’s threading”, it makes sense to copy the file descriptors. That use-case has to continue to be supported, so this feature will remain...
| Why are file descriptors shared between forked processes? |
1,402,510,190,000 |
On a Linux system a C process is started on boot, which creates a fork of itself. It is not a kernel process or anything like that. In most cases ps -ef shows both processes as expected, but sometimes it looks like the following:
1258 root 0:00 myproc
1259 root 0:00 [myproc]
i.e. one of the processes surrounded by brackets. According to ps:
If the arguments cannot be located (usually because it has not been set,
as is the case of system processes and/or kernel threads) the command name
is printed within square brackets.
I do not understand what it means when the 'arguments cannot be located'. The process is started always exactly the same, and the fork is always created in the exact same way. How can it happen, that sometimes 'the arguments cannot be located' and sometimes they can?
In addition, the process is always started without any arguments...
Questions I have:
What do those brackets really mean? Does the process run at all when /proc/{pid}/cmdline is empty?
Why do I get those brackets sometimes and not always?
How/Where to fix this problem?
Additional information:
The process is always started without any arguments! Just the name of the command myproc.
The main process seems to run always correct (no brackets around name, executable in /proc/x/cmdline).
The child process sometimes has its name in brackets.
The content of /proc/child-pid/cmdline of a correct running child process is myproc.
The content of /proc/child-pid/cmdline of an incorrect running child process is empty!
Again: same code, different child processes!
|
ps -f normally shows the argument list passed to the last execve() system call the process or any of its ancestors did.
When you run a command xxx arg1 arg2 at a shell prompt, your shell usually forks a process, searches for a command named xxx, and executes it as:
execve("/path/to/that/xxx", ["xxx", "arg1", "arg2"], @exported_variables)
It should be noted that the first argument is xxx there.
After execution, the whole memory of the process is wiped, and those arguments (and environment) are found at the bottom of the stack of the process.
You get the first 4096 bytes of those arguments in /proc/<the-pid>/cmdline and that's where ps gets it from.
Upon a fork or clone, the child inherits the whole memory of its parent including that arg list.
You get the [xxx] when /proc/<the-pid>/cmdline is empty. In that case, instead of displaying the arg list, ps displays the process name which it finds in /proc/<the-pid>/stat (for executed commands, that's the first 16 bytes of the basename of the executable file passed to the last execve()). That can happen for three reasons (that I can think of):
The process or any of its ancestors never executed anything. That's the case of kernel threads (and can only be the case of kernel threads since all the other processes are descendants of init (which is executed)).
$ ps -fp2
UID PID PPID C STIME TTY TIME CMD
root 2 0 0 Jan13 ? 00:00:00 [kthreadd]
The process executed a command with an empty list of arguments. That usually never happens because programs are usually always passed at least one argument, the command name, but you can force it with for instance:
int main(int argc, char *argv[]) {
if (argc) execve("/proc/self/exe",0,0);
else system("ps -fp $PPID");
}
Once compiled and run:
$ test1
UID PID PPID C STIME TTY TIME CMD
stephane 31932 29296 0 15:16 pts/5 00:00:00 [exe]
The process overwrites its argv[] on its stack.
The arguments are NUL-terminated strings in memory. If you make the last character of the arg list non-null for instance with envp[0][-1]=1 (the envp[] values follow the argv[] ones on the stack), then the kernel assumes you've modified it and only returns in /proc/xxx/cmdline the first argument up to the first NUL character. So
int main(int argc, char* argv[], char *envp[]) {
envp[0][-1]=1;
argv[0][0]=0;
system("ps -fp $PPID");
}
Would also show [xxx].
Given that the arglist (and environ) are at the bottom of the stack, this kind of scenario can happen if you've got a bug in your code that makes you write on the stack past the end of what it's meant to write, for instance, if using strcpy instead of strncpy. To debug this kind of issue, valgrind is very useful.
| Why do forked processes sometimes appear with brackets [] around their name in ps? [duplicate] |
1,402,510,190,000 |
If I have a Bash script like:
function repeat {
while :; do
echo repeating; sleep 1
done
}
repeat &
echo running once
running once is printed once but repeat's fork lives forever, printing endlessly.
How should I prevent repeat from continuing to run after the script which created it has exited?
I thought maybe explicitly instantiating a new bash -c interpreter would force it to exit as its parent has disappeared, but I guess orphaned processes are adopted by init or PID 1.
Testing this using another file:
# repeat.bash
while :; do echo repeating; sleep 1; done
# fork.bash
bash -c "./repeat.bash & echo an exiting command"
Running ./fork.bash still causes repeat.bash to continue to run in the background forever.
The simple and lazy solution is to add the line to fork.bash:
pkill repeat.bash
But you had better not have another important process with that name, or it will also be obliterated.
I wonder, if there is a better or accepted way to handle background jobs in forked shells that should exit when the script (or process) that created them has exited?
If there is no better way than blindly pkilling all processes with the same name, how should a repeating job that runs alongside something like a webserver be handled to exit? I want to avoid a cron job because the script is in a git repository, and the code should be self-contained without changing system files in /etc/.
|
This kills the background process before the script exits:
trap '[ "$pid" ] && kill "$pid"' EXIT
function repeat {
while :; do
echo repeating; sleep 1
done
}
repeat &
pid=$!
echo running once
How it works
trap '[ "$pid" ] && kill "$pid"' EXIT
This creates a trap. Whenever the script is about to exit, the commands in single-quotes will be run. That command checks to see if the shell variable pid has been assigned a non-empty value. If it has, then the process associated with pid is killed.
pid=$!
This saves the process id of the preceding background command (repeat &) in the shell variable pid.
Improvement
As Patrick points out in the comments, there is a chance that the script could be killed after the background process starts but before the pid variable is set. We can handle that case with this code:
my_exit() {
[ "$racing" ] && pid=$!
[ "$pid" ] && kill "$pid"
}
trap my_exit EXIT
function repeat {
while :; do
echo repeating; sleep 1
done
}
racing=Y
repeat &
pid=$!
racing=
echo running once
| Prevent a shell fork from living longer than its initiator? |
1,402,510,190,000 |
My nginx unitfile is following,
[root@arif ~]# cat /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
# Nginx will fail to start if /run/nginx.pid already exists but has the wrong
# SELinux context. This might happen when running `nginx -t` from the cmdline.
# https://bugzilla.redhat.com/show_bug.cgi?id=1268621
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Here, in the [Service] portion, the value of Type is equal to forking which means from here,
The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete.
My questions are,
Why a service does that?
What are the advantages for doing this?
What's wrong with Type=simple or other similar options?
|
Why a service does that?
Services generally do not do that, in fact. Aside from the fact that it isn't good practice, and the idea of "dæmonization" is indeed fallacious, what services do isn't what the forking protocol requires. They get the protocol wrong, because they are in fact doing something else, which is being shoehorned into the forking protocol, usually unnecessarily.
What are the advantages for doing this?
There aren't any. Better readiness notification protocols exist, and no-one actually speaks this protocol properly. This service unit is not doing this because it is advantageous.
What's wrong with Type=simple or other similar options?
Nothing. It is in fact generally the use of the forking readiness protocol that is wrong. This is not best practice, as claimed in other answers. Quite the reverse.
The simple fact is that this is the best of a bad job, a bodge to cope with a behaviour of nginx that still cannot be turned off. Most service softwares nowadays, thanks to a quarter of a century of encouragement from the IBM SRC, daemontools, and other serious service management worlds, have gained options for, or even changed their default behaviours to, not attempting to foolishly "dæmonize" something that is already in dæmon context.
This is still not the case for nginx, though. daemon off does not work, sadly. Just as many softwares used to erroneously conflate "non-dæmonize" mode with debug mode (but often no longer do, nowadays), nginx unfortunately conflates it with other things, such as not handling its control signals. People have been pushing for this for 5 years, so far.
Further reading
Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers.
Adrien CLERC (2013-10-27). nginx: Don't use type=forking in systemd service file. Debian Bug #728015.
runit and nginx
Jonathan de Boyne Pollard (2001). "Don't fork() in order to 'put the dæmon into the background'.". Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers.
Jonathan de Boyne Pollard (2015). You really don't need to daemonize. Really.. The systemd House of Horror.
Numerous examples of readiness protocol mismatches here at StackExchange:
https://unix.stackexchange.com/a/401611/5132
https://unix.stackexchange.com/a/200365/5132
https://unix.stackexchange.com/a/194653/5132
https://unix.stackexchange.com/a/211126/5132
https://unix.stackexchange.com/a/336067/5132
https://unix.stackexchange.com/a/283739/5132
https://unix.stackexchange.com/a/242860/5132
| Why forking is used in a unit file of a service? |
1,402,510,190,000 |
Recently I got a load-too-high issue on our server. I watched top for about half an hour to find out that it was Nagios that forked a lot of short-lived processes. After bouncing Nagios, everything was back to normal.
My question here is: how can I more quickly find out which root process is forking a lot like this?
Thanks.
|
If you run an OS that supports dtrace, this script will help you identifying what processes are launching short lived processes:
#!/usr/sbin/dtrace -qs
proc:::exec
{
self->parent=stringof((unsigned char*)curpsinfo->pr_psargs);
}
proc:::exec-success
/self->parent != NULL/
{
printf("%s -> %s\n",self->parent,curpsinfo->pr_psargs);
self->parent=NULL;
}
If you are on an OS without dtrace support, have a look at alternatives, e.g. systemtap or sysdig with Linux, ProbeView with AIX.
Here is a sysdig script that will show all commands launch and exit times with their pid and ppid:
sysdig -p"*%evt.time %proc.pid %proc.ppid %evt.dir %proc.exeline" \
"( evt.dir=< and evt.type=execve ) or evt.type=procexit"
Another method would be to enable process accounting with your OS (if available, commonly the acct package under Linux) and have a look at the generated logs. There is also a top-like program that leverages process accounting: atop.
| How to find out the process(es) that forks a lot? |
1,402,510,190,000 |
I have read the other questions about its functionality -- that fork bombs operate both by consuming CPU time in the process of forking, and by saturating the operating system's process table.
A basic implementation of a fork bomb is an infinite loop that repeatedly launches the same processes.
But I really want to know: what's the story behind this command? Why this :(){ :|:& };: and not another one?
|
It is not something new. It dates back to the 1970s, when it was introduced.
Quoting from here,
One of the earliest accounts of a fork bomb was at the University of
Washington on a Burroughs 5500 in 1969. It is described as a "hack"
named RABBITS that would make two copies of itself when it was run,
and these two would generate two more copies each, and the copies
would continue making more copies until memory was full, causing a
system crash. Q The Misanthrope wrote a Rabbit-like program using
BASIC in 1972 while in grade 7. Jerry Leichter of Yale University
describes hearing of programs similar to rabbits or fork bombs at his
Alma Mater of Princeton and says given his graduation date, they must
be from 1973 or earlier. An account dating to 1974 describes a program
actually named "rabbit" running on an IBM 360 system at a large firm
and a young employee who was discharged for running it.
So the :(){ :|:& };: is just a way of implementing the fork bomb in shell. If you take some other programming language, you could implement it in that language as well. For instance, in Python you could implement the fork bomb as,
import os
while True:
os.fork()
More ways of implementing the fork bomb in different languages can be found from the wikipedia link.
If you want to understand the syntax, it is pretty simple. A normal function in shell would look like,
foo(){
# function code goes here
}
The fork() bomb is defined as follows:
:(){
:|:&
};:
:|: - Next, the function calls itself using a programming technique called recursion, and pipes the output to another call of the function :. The worst part is that the function gets called twice each time, to bomb your system.
& - Puts the function call in the background, so the child cannot die at all and starts eating system resources.
; - Terminates the function definition
: - Calls (runs) the function, i.e. sets off the fork() bomb.
Here is a more human-readable version:
bomb() {
bomb | bomb &
}; bomb
References
http://www.cyberciti.biz/faq/understanding-bash-fork-bomb/
| What's the history behind the fork bomb? |
1,402,510,190,000 |
When running the fork call to create a new process, if it succeeds it returns either 0 (in the child) or the child's PID (in the parent).
I didn't get the idea behind this. Why doesn't fork just always return the child's PID, or always the parent's?
|
When you fork(), the code that’s running finds itself running in two processes (assuming the fork is successful): one process is the parent, the other the child. fork() returns 0 in the child process, and the child pid in the parent process: it’s entirely deterministic.
This is how you can determine, after the fork(), whether you’re running in the parent or the child. (And also how the parent knows the child pid — it needs to wait on it at some point.)
In a little more detail:
the future parent process calls fork();
the kernel creates a new process, which is the child, and sets various things up appropriately — but both processes are running the same code and are “waiting” for a return from the same function;
both processes continue running (not necessarily straight away, and not necessarily simultaneously, but that’s besides the point):
fork() returns 0 to the child process, which continues and uses that information to determine that it’s the child;
fork() returns the child pid to the parent process, which continues and uses that information to determine that it’s the parent.
| Why does fork sometimes return parent and sometimes child? |
1,402,510,190,000 |
Does anyone understand the following code, running in bash?
:(){ :|:& };:
It seems to be a "fork" bomb on Linux.
|
It's not that difficult to decipher in fact.
This piece of code just defines a function named : which calls two instances of itself in a pipeline: :|:&. After the definition an instance of this function is started.
This leads to a fast increasing number of subshell processes. Unprotected systems (systems without a process number limit per user) will be severely affected by such fork bombs since legitimate processes will quickly be outnumbered and thus deprived of most CPU resources.
| Why is the following command killing a system? |
1,354,215,010,000 |
From the fork(2) man page:
RETURN VALUE
On success, the PID of the child process is returned in the parent, and
0 is returned in the child. On failure, -1 is returned in the parent,
no child process is created, and errno is set appropriately.
I am wondering about the reasons that would make a fork call fail. What are the case scenarios where fork returns -1?
I am writing a standard Unix shell in C. How should I handle the error?
|
In the C API, system calls return a negative value to indicate an error, and the error code in errno gives more information on the nature of the error. Your man page should explain the possible errors on your system. There are two standard error codes:
EAGAIN indicates that the new process cannot be created due to a lack of available resources, either insufficient memory of some kind or a limit that has been reached such as the maximum number of processes per user or overall.
ENOMEM indicates that the new process cannot be created due to a lack of memory of some kind. Under Linux, ENOMEM indicates a lack of kernel memory, while a lack of userland memory is reported as EAGAIN. Under OpenBSD, ENOMEM is used for a lack of userland memory as well.
In summary, fork can fail due to a lack of available resources (possibly in the form of an artificial limit rather than a simple lack of memory).
The behavior of shells when fork fails is not specified by POSIX. In practice, they tend to return different error statuses (1 in pdksh, 2 in ash, 254 in bash, 258 in ksh93, 1 in tcsh; zsh returns 0 which is a bug). If you're implementing a shell for production use (as opposed to a learning exercise), it might be worth discussing this on the Austin Group mailing list.
| Fork: Negative return value |
1,354,215,010,000 |
When we fork() the current process, our process, as the parent, generates a child process with the same characteristics but a different process ID. Then, when we call exec() in the child process, the program that was executing in the child stops, and the new program now has its own process.
Isn't that the same as what happens when we run our applications normally, where every application gets its own process and PID?
|
Yes, because that's how it's done in UNIX.
There is no "run application" system call; it's always done by fork/exec pairs.
Incidentally, exec does not generate a new PID. exec replaces the contents of the process -- the memory is discarded, and a whole new executable is loaded -- but the kernel state remains the same (open files, environment variables, working directory, user, etc.), and the PID remains the same.
Further reading, if you're interested:
vfork is like fork except that it must always be paired with exec, and is useful when fork can't work, as in ucLinux.
clone is the new fork (today's fork function uses clone behind the scenes) but does a lot more, including creating new processes that share the same memory (rather than duplicate it, like fork) and we call those threads.
| fork() and exec() confusion |
1,354,215,010,000 |
I need to figure out how many forks are done and how many concurrent processes are run by each user over time. It does not look like this information is tracked by my distribution.
I know how to sets limits, but I'm interested in tracking these numbers for each user.
|
Try the psacct package (GNU accounting); it should do just about everything you need. Once it is installed and enabled (accton), lastcomm will report on user processes (see also sa and dump-acct). See this for reference: User's executed commands log file
You might need to upgrade the version to log PID/PPID, see https://serverfault.com/questions/334547/how-can-i-enable-pid-and-ppid-fields-in-psacct-dump-acct , otherwise I suspect it will under-report on fork() without exec().
Update
If your lastcomm outputs F in the 2nd column it means the process was a fork (that never called exec() to replace itself with a new process). The output of dump-acct should show you the PID (and PPID) in acct v3 format.
An alternative to psacct might be the new(ish) taskstats; there's not a huge amount of support for it yet AFAICT, see Documentation/accounting/taskstats.txt in your kernel version source. These might help get you started: http://code.google.com/p/arsenalsuite/wiki/TrackingIOUsage and https://code.google.com/archive/p/anim-studio-tools/ . The specific code example is tasklogger.c; you will need to modify the printf() line in the function print_delayacct2(), firstly to replace %u with %llu for the __u64 types and secondly to add the field ac_uid (and perhaps ac_gid) that you need to track by user. Invoke it with something like tasklogger -dl -m 0-1 (where -m 0-1 indicates CPUs 0-1). You will then see realtime details as each process exits.
There is also a perl module Linux::Taskstats::Read available on CPAN, though I have not used it.
You'll need to process the data based on timestamps if you want the concurrent process count per-user, this is not a simple as it sounds.
Update 2
Ok, the things to check for the required psacct support are:
(official) kernel >= 2.6.8 for v3 accounting support (or backport)
kernel with CONFIG_BSD_PROCESS_ACCT and CONFIG_BSD_PROCESS_ACCT_V3 enabled
v3 capable accounting (psacct) package, as noted above
All of the above should be true in CentOS 6, I've checked a 5.x and it does not have CONFIG_BSD_PROCESS_ACCT_V3=y, so you would have to rebuild your kernel to enable it.
The original psacct-6.3.2 is about 15 years old, the Red Hat/CentOS version has backported v3 and PID display support (I can't test it right now, but it should work).
To check your kernel config:
zgrep BSD_PROCESS_ACCT /proc/config.gz /boot/config-`uname -r`
| How to track the number of processes and forks per user? |
1,354,215,010,000 |
Following a fork() call in Linux, two processes (one being a child of the other) will share allocated heap memory. These allocated pages are marked COW (copy-on-write) and will remain shared until either process modifies them. At this point, they are copied, but the virtual address pointers referencing them remain the same. How can the MMU (memory management unit) distinguish between the two? Consider the following:
Process A is started
Process A is allocated a memory page, pointed to by the virtual address 0x1234
Process A fork()s, spawning process B
Process A and B now share virtual address 0x1234, pointing to the same physical memory location
Process B modifies its 0x1234 memory page
This memory page is copied and then modified
Process A and B both have virtual address 0x1234, but this points to different physical memory addresses
How can this be distinguished?
|
One of the things the kernel does during a context switch between processes is to modify the MMU tables to remove entries that describe the previous process's address space and add entries that describe the next process's address space. Depending on the processor architecture, the kernel and possibly the configuration, this may be done by changing a processor register or by manipulating the page tables in memory.
Immediately after the fork operation, due to copy-on-write, the MMU tables for both processes have the same physical address for the virtual address 0x1234. Once again, these are two separate tables that happen to have identical entries for this particular virtual address.
The descriptor for this page has the read-only attribute. If a process tries to write (it doesn't matter whether it's A or B), this triggers a processor fault due to the permission violation. The kernel's page fault handler runs, analyzes the situation and decides to allocate a new physical page, copies the content of the read-only page to this new page, changes the calling process's MMU configuration so that 0x1234 now points to this freshly-allocated physical page with read-write attributes, and restarts the calling process on the instruction that caused the fault. This time the page is writable so the instruction will not trap.
Note that the page descriptor in the other process is not directly affected by this operation. In fact, it might be updated as well, because the kernel performs one more action: if the page is now only mapped in a single process, it's switched back to read-write, to avoid having to copy it later.
See also What happens after a page fault?
| How can two identical virtual addresses point to different physical addresses? |
1,354,215,010,000 |
I'm currently developing a systemd daemon. The problem I'm facing is that the daemon is killed 1m30s after being launched, because the forking is not detected.
I'm using the int daemon(int nochdir, int noclose) function to daemonize the process.
int main()
{
openlog("shutdownd", LOG_PID, LOG_DAEMON);
if(daemon(0, 0) != 0)
{
syslog(LOG_ERR, "Error daemonizing process : %s\n", strerror(errno));
exit(EXIT_FAILURE);
}
syslog(LOG_NOTICE, "Daemon started !\n");
pthread_create(&threads[0], NULL, &alimThread, NULL);
pthread_create(&threads[1], NULL, &extinctThread, NULL);
pthread_create(&threads[2], NULL, &blinkThread, NULL);
while(1)
{
}
syslog(LOG_NOTICE, "Daemon stopped !\n");
exit(EXIT_SUCCESS);
}
Here is the service file /etc/systemd/system/shutdownd.service
[Unit]
Description=Shutdown Daemon
After=syslog.target
[Service]
Type=forking
PIDFile=/var/run/shutdownd.pid
ExecStartPre=/bin/rm -f /var/run/shutdownd.pid
ExecStartPre=/usr/bin/shutdownd-exportGpio.sh
ExecStart=/usr/bin/shutdownd
Restart=on-abort
[Install]
WantedBy=multi-user.target
The daemon function is supposed to fork the process and detach it from the terminal; I also close file descriptors and change the working directory to /.
However, systemd doesn't seem to detect the forking, as it kills my running daemon after 1m30s.
Sep 8 13:52:50 raspberrypi systemd[1]: shutdownd.service: PID file /var/run/shutdownd.pid not readable (yet?) after start: No such file or directory
Sep 8 13:52:50 raspberrypi shutdownd[293]: Daemon started !
Sep 8 13:52:50 raspberrypi shutdownd[293]: [Extinct] Value changed to 0
Sep 8 13:52:50 raspberrypi shutdownd[293]: OFF
Sep 8 13:52:50 raspberrypi shutdownd[293]: [Alim] Value changed to 0
Sep 8 13:52:50 raspberrypi shutdownd[293]: OFF
Sep 8 13:53:46 raspberrypi shutdownd[293]: [Alim] Value changed to 1
Sep 8 13:53:46 raspberrypi shutdownd[293]: Toogle : ON
Sep 8 13:53:48 raspberrypi shutdownd[293]: Toogle : OFF
[...]
Sep 8 13:54:16 raspberrypi shutdownd[293]: [Extinct] Value changed to 1
Sep 8 13:54:16 raspberrypi shutdownd[293]: ON
Sep 8 13:54:20 raspberrypi systemd[1]: shutdownd.service: Start operation timed out. Terminating.
Sep 8 13:54:20 raspberrypi systemd[1]: shutdownd.service: Unit entered failed state.
Sep 8 13:54:20 raspberrypi systemd[1]: shutdownd.service: Failed with result 'timeout'.
Does anyone have a clue why systemd doesn't detect the forking?
Do I have to explicitly call fork() in my code? In that case I'd have to write the daemonize function myself, which is not so difficult but totally redundant, since a C function already exists for that purpose.
|
Do not do that.
At all. Any of it, either through a library function or rolling your own code. For any service management system. It has been a wrongheaded idea since the 1990s.
Your dæmon is already running in a service context, invoked that way by a service manager. Your program should do nothing in this respect. Stop writing your program that way at all.
And do not use the forking readiness protocol. Your program is multithreaded and will almost certainly not function correctly if you try to add the forking readiness protocol to it, as enacting the protocol correctly means forking after all initialization has been done, including starting up all of the threads. Almost nothing actually uses the forking readiness protocol in the wild. Use another protocol.
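As a hedged sketch of what "another protocol" can look like in practice (this is an illustration, not this program's actual configuration; the service name and flag are made up): with Type=notify the program stays in the foreground and, once its threads are started and it is ready to serve, calls sd_notify(0, "READY=1") from <systemd/sd-daemon.h>, linking against libsystemd.

```ini
# sketch of a notify-style unit; assumes the program calls
# sd_notify(0, "READY=1") itself when initialization is complete
[Service]
Type=notify
ExecStart=/usr/bin/shutdownd --foreground
```

With Type=simple, no readiness call is needed at all; systemd considers the service started as soon as the process is forked off.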
Further reading
https://unix.stackexchange.com/a/200365/5132
https://unix.stackexchange.com/a/194653/5132
https://unix.stackexchange.com/a/211126/5132
https://unix.stackexchange.com/a/336067/5132
https://unix.stackexchange.com/a/283739/5132
Jonathan de Boyne Pollard (2001). "Don't fork() in order to 'put the dæmon into the background'.". Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers.
Jonathan de Boyne Pollard (2015). You really don't need to daemonize. Really.. The systemd House of Horror.
Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers.
| Systemd timeout because it doesn't detect daemon forking |
1,354,215,010,000 |
I have a very specific question about the fork system call. I have this piece of code:
int main (void)
{
for (int i = 0; i < 10; i++) {
pid_t pid = fork ();
if ( !pid ) {
printf("CHILD | PID: %d, PPID: %d\n", getpid(), getppid());
_exit(i + 1);
}
}
for (int i = 0; i < 10; i++) {
int status;
waitpid(-1, &status, 0);
if (WIFEXITED(status)) {
printf("IM %d AND CHILD WITH EXIT CODE %d TERMINATED\n",
getpid(), WEXITSTATUS(status));
}
else {
printf("ERROR: CHILD NOT EXITED\n");
}
}
return 0;
}
which produces this output:
CHILD | PID: 3565, PPID: 3564
CHILD | PID: 3566, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 1 TERMINATED
IM 3564 AND CHILD WITH EXIT CODE 2 TERMINATED
CHILD | PID: 3573, PPID: 3564
CHILD | PID: 3567, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 9 TERMINATED
IM 3564 AND CHILD WITH EXIT CODE 3 TERMINATED
CHILD | PID: 3568, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 4 TERMINATED
CHILD | PID: 3569, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 5 TERMINATED
CHILD | PID: 3570, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 6 TERMINATED
CHILD | PID: 3571, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 7 TERMINATED
CHILD | PID: 3572, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 8 TERMINATED
CHILD | PID: 3574, PPID: 3564
IM 3564 AND CHILD WITH EXIT CODE 10 TERMINATED
which makes me wonder how fork really works and what code the new processes actually execute. Looking at the above output, I cannot understand:
How does the second for loop print before the first for loop has finished all its iterations? I've also noticed the second loop is always executed by the parent process, which makes me wonder whether, while still inside the first loop, when pid != 0 (which means the parent is running), the second loop is already being executed.
Why are the CHILD processes not printed sorted by PID?
So, bottom line: how does fork really work, and who executes what?
|
When you fork, the kernel creates a new process which is a copy of the forking process, and both processes continue executing after the fork (with the return code showing whether an error occurred, and whether the running code is the parent or the child). This “continue executing” part doesn’t necessarily happen straight away: the kernel just adds the new process to the run queue, and it will eventually be scheduled and run, but not necessarily immediately.
This explains both behaviours you’re asking about:
since new processes aren’t necessarily scheduled immediately, the parent might continue running before any child gets a chance to run;
creation order doesn’t have much (if any) impact in the run queue, so there’s no guarantee the child processes will run in the order they’re created.
| How does fork system call really works |
1,354,215,010,000 |
I have a socket server running and listening for incoming connections on a non-admin port (i.e. > 1024). I would also like for this process to be able to handle another type of connection on a different port for monitoring purposes. I have found questions on SE for the opposite situation, many-to-one but this would be a one-to-many situation.
My questions: Is it possible to bind one process to multiple ports?
If so can I reliably handle connections on the different ports uniquely (i.e. port 2000 execute one piece of code and port 3000 execute another).
I am open to other suggestions as to how to handle a connection to monitor the other clients that are connecting to the primary port.
|
Absolutely possible. You can use select() or poll() to be notified when any of the listening sockets becomes readable, and handle each connection according to the port it arrived on.
http://linux.die.net/man/2/select
| Bind one process to multiple ports? |
1,354,215,010,000 |
If we look at the example
#include <stdio.h>
#include <unistd.h>
void main(){
int pi_d ;
int pid ;
pi_d = fork();
if(pi_d == 0){
printf("Child Process B:\npid :%d\nppid:%d\n",getpid(),getppid());
}
if(pi_d > 0){
pid = fork();
if(pid > 0){
printf("\nParent Process:\npid:%d\nppid :%d\n",getpid(),getppid());
}
else if(pid == 0){
printf("Child Process A:\npid :%d\nppid:%d\n",getpid(),getppid());
}
}
}
For me, this looks like it would create processes indefinitely because, when we fork a process, a copy of the parent is made. So the program code is cloned.
This means that every new process runs the same code; thus, it calls pi_d = fork(), and so on.
What am I missing here?
|
Quoting from POSIX fork definition (bold emphasis mine):
RETURN VALUE
Upon successful completion, fork() shall return 0 to the child process
and shall return the process ID of the child process to the parent
process. Both processes shall continue to execute from the fork()
function. Otherwise, -1 shall be returned to the parent process, no
child process shall be created, and errno shall be set to indicate the
error.
OP wrote:
which means, for every new process, it runs the same code
Upon successful completion of fork() and return from it, the parent and the child resume right after fork(): neither will execute the first fork() again, and later neither will execute the 1st or 2nd fork() again, because there is no loop in this code that would allow it.
Assuming no error happens (they aren't checked):
parent forks
If it's the child display Child Process B.
else, if it's the parent, fork again
If it's (again) the parent, display Parent Process
If it's parent's second child, display Child Process A
As there's no guaranteed order between the child and the parent in the exact execution sequence, the 3 outputs can happen in any order or interleaved (though on a given OS one display order will tend to occur more often than others, and Child Process B, having a head start, would probably be displayed first).
| How does the fork system call work? |
1,354,215,010,000 |
Unfortunately I've had no luck figuring this out, as everything I find is just on the syntax of redirection, or shallow information about how redirection works.
What I want to know is how bash actually changes stdin/stdout/stderr when you use pipes or redirection. If for example, you execute:
ls -la > diroutput.log
How does it change stdout of ls to diroutput.log?
I assume it works like this:
Bash runs fork(2) to create a copy of itself
Forked bash process sets its stdout to diroutput.log using something like freopen(3)
Forked bash process runs execve(2) or a similar exec function to replace itself with ls which now uses the stdout setup by bash
But that's just my educated guess.
|
I was able to figure it out using strace -f and writing a small proof of concept in C.
It appears that bash just manipulates file descriptors in the child process before calling execve as I thought.
Here's how ls -la > diroutput.log works (roughly):
bash calls fork(2)
forked bash process sees the output redirection and opens the file diroutput.log using open(2).
forked bash process replaces the stdout file descriptor using the dup2(2) syscall
bash calls execve(2) to replace its executable image with ls, which then inherits the already-set-up stdout
The relevant syscalls look like this (strace output):
6924 open("diroutput.log", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
6924 dup2(3, 1) = 1
6924 close(3) = 0
6924 execve("/bin/ls", ["ls", "-la"], [/* 77 vars */]) = 0
| How does bash actually change stdin/stdout/stderr when using redirection/piping |
1,354,215,010,000 |
If a program runs fork() what sets standard streams STDOUT, STDIN and STDERR?
|
Stdin, stdout and stderr are inherited from the parent process. It's up to the child process to change them to point to new files if that is needed.
From the fork(2) man page:
* The child inherits copies of the parent's set of open file descrip‐
tors. Each file descriptor in the child refers to the same open
file description (see open(2)) as the corresponding file descriptor
in the parent.
| What sets a child's STDERR, STDOUT, and STDIN? |
1,354,215,010,000 |
In the man page for ps, it lists process flag 1 as "process forked but didn't exec". What would be a common use case/situation for a process to be in this state?
|
This sentence refers to the fork and exec system calls¹. The fork system call creates a new process by duplicating the calling process: after running fork, there are two processes which each have their own memory¹ with initially-identical content except for the return value of the fork system call, the process ID and a very few other differences. The exec system call loads a program image from a file and replaces the existing process's memory by that image.
The way to run a program in the usual sense is to call fork to create a new process for the program to run in, and then call exec in the child to replace the copy of the original program by the new program's code and data. That's the most common usage of fork (often with a few things done before exec like setting up file redirections).
Running exec without having done a fork can be seen as an optimization of doing fork+exec and having the parent do exit immediately afterwards. It isn't exactly equivalent because fork+exec/exit changes the parent of the resulting program, whereas a straight exec doesn't.
Linux's process flag 1 signals processes that didn't call exec since they were forked by their parent, i.e. children (or grandchildren, etc.) of the original process of their program. Calling fork without calling exec has several uses. (This is not an exhaustive list, just some common use cases.)
Some programs exploit multiple processors. This can be done either by running multiple threads in the same process (then all the threads share memory) or by running separate processes on each processor (then they don't share memory).
Running separate processes is a way to isolate some tasks. For example, Chrome keeps each tab or each small group of tabs in a separate process; this way, if a tab is hung or crashes, or if a web page triggers a security hole, that doesn't affect tabs displayed by other processes. Separate processes can also be useful to perform different tasks with different privileges; for example the OpenSSH server runs most of its code as an unprivileged user and only performs the final login stage as root. Shells use fork to implement subshells (parts of the script where variables, redirections, etc. don't affect the main script).
Daemons usually start by “double forking”: when the program runs, one of the first things it does is fork, and then the parent exits. This is the inverse of the exec “optimization” I mentioned above, and is done to isolate the daemon process from its original parent, and in particular to avoid blocking the original parent if it was waiting for its child to finish (as happens e.g. when you run a program in a shell without &).
¹ There are nuances that do not matter here and are beyond the scope of this answer.
| Process Flag 1: Forked but didn't exec (use case?) |
1,354,215,010,000 |
When executing the ps command on my Linux system I see some user processes twice (different PIDs...). I wonder if they are new processes or threads of the same process. I know some functions in the standard C library that can create a new process, such as fork(). I wonder what concrete functions can make a process appear twice when I execute the ps command, because I am looking in the source code for where the new process or thread is created.
|
A little bit confusing: fork is a system call which creates a new process by copying the parent process's image. After that, if the child process wants to become another program, it calls one of the exec family of system calls, such as execl. If you for example want to run ls in a shell, the shell forks a new child process which then calls execl("/bin/ls").
If you see two programs and their PIDs are different, check their PPIDs (parent IDs). For example, if p1 is the PPID of the process whose PID is p2, it means the process with PID p1 forked that process. But if the first process's PPID is not the same as the other process's PID, the same command was simply executed twice.
If the PID and PPID are the same, but the TIDs (thread IDs) are different, it means it's one process that has 2 threads.
I think that making your own shell is a good starting point.
| Which system calls could create a new process? |
1,354,215,010,000 |
This is the code example given:
# include <stdio.h>
# include <unistd.h>
void main() {
static char *mesg[] = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9"};
int display(char *), i;
for (i=0; i<10; ++i)
display(mesg[i]);
sleep(2);
}
int display(char *m) {
char err_msg[25];
switch (fork()) {
case 0:
execlp("/bin/echo", "echo", m, (char *) NULL);
sprintf (err_msg, "%s Exec failure", m);
perror(err_msg); return(1);
case -1:
perror ("Fork failure"); return(2);
default:
return(0);
}
}
Now, my assumption before running this program is that the parent would finish before their child. So my expected-output is
0
1
2
3
4
5
6
7
8
9
However, each time I run the program I get a random order of output.
My question is "why?".
Is it because of "context switching" where the processor would jump between processes?
Is it "resource allocation" where some processes get more than other?
Is the order of parent and child process not set in stone and that is why we have zombie and orphan process?
|
The child processes start running as soon as you fork(); in fact they do not even "start", they just continue in the code after the fork() invocation, just like the parent does. Only the return value of fork() is different. Parent and child can exit in either order. So yes, context switching makes the processes run in a random order.
You'll get zombie processes when a child process exits and the parent doesn't properly "reap" the child exit code. Zombie processes basically only consist of yet-to-be-retrieved exit codes, and every time you see one, blame the parent process for not taking care. (Zombies are a bug in the parent, except if the parent is short-lived and doesn't need to take care.)
If the parent exits before the child, the child process will be reparented to PID 1 which will do the exit code reaping. (It will also clean up any zombies the process had.)
| Why does a "child" process finish before its parent? |
1,354,215,010,000 |
I am running an R job as a normal user (john) and as root. Interestingly, the program stalls under the john user but runs quickly under root. Using strace, I found that when john runs R, the process stalls waiting for its child process. I guess Linux does not let the child process continue, so the parent (the main program) stays stalled indefinitely. Is there any limit on the number of fork/clone calls a normal Linux user can make? Any idea why this happens?
Anyway, I've described the starting point of this problem in this post.
Further information
Last lines of strace for john user (where program stalls):
lseek(255, -82, SEEK_CUR) = 1746
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fa12fd4f9d0) = 13302
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGINT, {0x43d060, [], SA_RESTORER, 0x311b432900}, {0x452250, [], SA_RESTORER, 0x311b432900}, 8) = 0
wait4(-1, <unfinished ...>
Last lines of strace for root (where program runs completely):
lseek(255, -82, SEEK_CUR) = 1746
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f81d8e239d0) = 13244
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGINT, {0x43d060, [], SA_RESTORER, 0x311b432900}, {0x452250, [], SA_RESTORER, 0x311b432900}, 8) = 0
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 13244
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGCHLD (Child exited) @ 0 (0) ---
wait4(-1, 0x7fff54a591dc, WNOHANG, NULL) = -1 ECHILD (No child processes)
rt_sigreturn(0xffffffffffffffff) = 0
rt_sigaction(SIGINT, {0x452250, [], SA_RESTORER, 0x311b432900}, {0x43d060, [], SA_RESTORER, 0x311b432900}, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
read(255, "\n### Local Variables: ***\n### mo"..., 1828) = 82
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
read(255, "", 1828) = 0
exit_group(1)
|
Use strace -f R to follow R and all its child processes as well. This should show the exact point where the child program hangs.
Some additional possible points to check:
as root (su - root), and as the user john, compare the outputs of:
ulimit -a #will show all the "limits" set for that user. You may reach one of them?
set ; env #maybe john & root don't have same PATH or some other thing changes (LD_LIBRARY_PATH? or another?)
grep $(whoami) /etc/passwd /etc/group #see if john maybe needs to be in some group?
| Program stall under user but runs under root |
1,354,215,010,000 |
It is well-known that if I add myself to a new group, that change will not be reflected until I log out and back in:
$ sudo adduser me newgroup
$ groups
me sudo
$ groups me
me sudo newgroup
$
This odd behavior is because groups is interpreted by the shell and new group membership is not shown. But groups me actually references /etc/group and therefore the new membership is shown.
But what I find curious is that a new shell doesn't notice the change:
$ bash
$ groups
me sudo
The ways I know of to reflect the new group membership are (1) newgrp, or sg or su, (2) log out and back in.
So bash must be passing the group list to its child somehow. It's not in the environment (I tried printenv) and it's not in the kernel's task_struct (that has only gid, egid, sgid, and fsgid).
I can't figure how.
|
Groups are inherited by a process from its parent. Bash has no choice in the matter. A process running as root can obtain new supplementary groups upon request; a process not running as root can only relinquish supplementary groups.
The command groups with no arguments returns its own list of groups (which is inherited from its parent): real group, effective group and supplementary groups. The command group SOMEUSER looks up the groups associated with SOMEUSER in the user and group databases.
When logging in, groups are assigned based on the user and group databases, as part of the login process, before the login process switches from root to the target user. The commands newgrp, su and sg are able to acquire extra groups while they're running because they are setuid root; their code is written in such a way that they'll only grant groups that the user would be granted when logging in (except that root can get whatever group it wants).
In the Linux kernel, the UIDs and GIDs of a process are recorded in a struct cred. The supplementary groups are in the group_info field which points to a struct group_info which contains an array of group IDs.
| How does bash pass user groups to a child? |
1,354,215,010,000 |
I was looking at the output of pstree, and realised that processes that I started using dmenu seem to fork from bash.
What is the reasoning behind this? And is there any way I can make dmenu behave like gmrun and other application launchers and only launch the process?
EDIT: The dmenu manpage says that the shell execution behavior is correct for dmenu_run. Figuring out how to not make the shell persist after launching the program is what I am still looking for.
|
I ended up asking about it on the ArchLinux forum after a little while.
Here is what /usr/bin/dmenu_run should look like:
#!/bin/sh
cachedir=${XDG_CACHE_HOME:-"$HOME/.cache"}
if [ -d "$cachedir" ]; then
cache=$cachedir/dmenu_run
else
cache=$HOME/.dmenu_cache # if no xdg dir, fall back to dotfile in ~
fi
exec $(
IFS=:
if stest -dqr -n "$cache" $PATH; then
stest -flx $PATH | sort -u | tee "$cache" | dmenu "$@"
else
dmenu "$@" < "$cache"
fi
)
| Dmenu Processes Forked by Bash? |
1,354,215,010,000 |
As far as I know, when vfork is called, the child process uses the same address space as that of the parent, and any changes made by the child process to the parent's variables are reflected in the parent process. My questions are:
When a child process is spawned, is the parent process suspended?
If yes, why?
They can run in parallel (like threads)? After all, both threads and process call the same clone() function.
After a bit of research and Googling, I found out that the parent process is not really suspended; rather, the calling thread is suspended. Even if this is the case, when the child process does an exit() or exec(), how does the parent process know that the child has exited? And what will happen if we return from the child process?
|
Your question is partly based on bad naming convention. A "thread of control" in kernel-speak is a process in user-speak. So when you read that vfork "the calling thread is suspended" think "process" (or "heavyweight thread" if you like) not "thread" as in "multi-threaded process".
So yes, the parent process is suspended.
vfork semantics were defined for the very common case where a process (the shell most often) would fork, mess with some file descriptors, and then exec another process in place. The kernel folks realized they could save a huge amount of page copying overhead if they skipped the copy since the exec was just going to throw those copied pages away. A vforked child does have its own file descriptor table in the kernel, so manipulating that doesn't affect the parent process, keeping the semantics of fork unchanged.
Why? Because fork/exec was common, expensive, and wasteful
Given the more accurate definition of "kernel thread of control", the answer to can they run in parallel is clearly
No, the parent will be blocked by the kernel until the child exits or execs
How does the parent know the child has exited?
It doesn't, the kernel knows and keeps the parent from getting any CPU at all until the child has gone away.
As for the last question, I would suspect that the kernel would detect the child stack operations involved in a return and signal the child with an uncatchable signal or just kill it, but I don't know the details.
| When vfork is called is parent process really suspended? |
1,354,215,010,000 |
GDB seems to hang every time I use the run command from the gdb prompt. When I run ps, there are two gdb processes that have been spawned, and pstack reveals the following -
15:47:02:/home/stufs1/pmanjunath/a2/Asgn2_code$ uname -a
SunOS compserv1 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-1500
15:44:04:/home/stufs1/pmanjunath/a2/Asgn2_code$ ps aux | grep gdb
pmanjuna 13121 0.1 0.1 1216 968 pts/23 S 15:44:11 0:00 grep gdb
pmanjuna 13077 0.0 0.1 7616 4392 pts/15 S 15:41:41 0:00 gdb client
pmanjuna 13079 0.0 0.1 7616 4392 pts/15 T 15:41:51 0:00 gdb client
15:44:50:/home/stufs1/pmanjunath/a2/Asgn2_code$ pstack 13077
13077: gdb client
fef42c30 vfork ()
00065938 procfs_create_inferior (32ea10, 32d728, 317430, 1, 0, 657a8) + 190
0008c668 sol_thread_create_inferior (32ea10, 32d728, 317430, 1, 25e030, 0) + 18
000ffda0 find_default_create_inferior (32ea10, 32d728, 317430, 1, 405c, 4060) + 20
000d8690 run_command_1 (0, 1, 32ea10, 1, ffbff0f4, 316fd0) + 208
0007e344 do_cfunc (316fd0, 0, 1, 1, 0, 0) + c
0008016c cmd_func (316fd0, 0, 1, 0, 1, 0) + 30
0004c1d4 execute_command (316fd0, 1, 0, 4f00c, 1, 2dc800) + 390
000eb6a0 command_handler (2f4ee0, 0, 2f3800, 8acf, ff000000, ff0000) + 8c
000ebbcc command_line_handler (2f3800, 7200636c, 32d71c, 7200, 2dfc00, 2dfc00) + 2a4
0019b354 rl_callback_read_char (fef6b6f8, 0, 931d8, 0, fef68284, fef68284) + 340
000eafb4 rl_callback_read_char_wrapper (0, fef709b0, 0, 11, 0, eafb0) + 4
000eb590 stdin_event_handler (0, 0, 932b4, fef6fad4, 0, 1) + 60
000ea780 handle_file_event (1, 1084, 932f4, 4f00c, ff1f2000, 1000) + bc
000ea11c process_event (0, 0, ffffffff, 0, 2df9f8, 0) + 84
000ea9d4 gdb_do_one_event (1, 1, 0, 2f3158, ff1f2000, 2) + 108
000e7cd4 catch_errors (ea8cc, 0, 2473a8, 6, ffbff6f0, 1) + 5c
000907e8 tui_command_loop (0, 64, ffffffff, 0, 0, 2f6190) + e0
000e7fcc current_interp_command_loop (800000, ff400000, ffc00000, 800000, 0, 331b40) + 54
00045b80 captured_command_loop (1, 1, 0, fef33a54, ff1f2000, 2) + 4
000e7cd4 catch_errors (45b7c, 0, 22db20, 6, 2dc400, 0) + 5c
0004625c captured_main (2d1800, 2f4ae0, 0, 0, 0, 0) + 6a0
000e7cd4 catch_errors (45bbc, ffbffc18, 22db20, 6, 0, 0) + 5c
00046bb0 gdb_main (ffbffc18, 0, 0, 0, 0, 0) + 24
00045b6c main (2, ffbffc9c, ffbffca8, 2f45b8, ff1f0100, ff1f0140) + 28
000459dc _start (0, 0, 0, 0, 0, 0) + 5c
15:45:38:/home/stufs1/pmanjunath/a2/Asgn2_code$ pstack 13079
13079: gdb client
fef4098c execve (ffbfffe6, ffbffc9c, ffbffca8)
feec4a7c execlp (ffbffdc6, ffffffff, 289bc0, ffbfed18, 0, ffbfed10) + ac
0016e3e8 fork_inferior (32ea10, 32d728, 317430, 6567c, 653dc, 0) + 310
00065938 procfs_create_inferior (32ea10, 32d728, 317430, 1, 0, 657a8) + 190
0008c668 sol_thread_create_inferior (32ea10, 32d728, 317430, 1, 25e030, 0) + 18
000ffda0 find_default_create_inferior (32ea10, 32d728, 317430, 1, 405c, 4060) + 20
000d8690 run_command_1 (0, 1, 32ea10, 1, ffbff0f4, 316fd0) + 208
0007e344 do_cfunc (316fd0, 0, 1, 1, 0, 0) + c
0008016c cmd_func (316fd0, 0, 1, 0, 1, 0) + 30
0004c1d4 execute_command (316fd0, 1, 0, 4f00c, 1, 2dc800) + 390
000eb6a0 command_handler (2f4ee0, 0, 2f3800, 8acf, ff000000, ff0000) + 8c
000ebbcc command_line_handler (2f3800, 7200636c, 32d71c, 7200, 2dfc00, 2dfc00) + 2a4
0019b354 rl_callback_read_char (fef6b6f8, 0, 931d8, 0, fef68284, fef68284) + 340
000eafb4 rl_callback_read_char_wrapper (0, fef709b0, 0, 11, 0, eafb0) + 4
000eb590 stdin_event_handler (0, 0, 932b4, fef6fad4, 0, 1) + 60
000ea780 handle_file_event (1, 1084, 932f4, 4f00c, ff1f2000, 1000) + bc
000ea11c process_event (0, 0, ffffffff, 0, 2df9f8, 0) + 84
000ea9d4 gdb_do_one_event (1, 1, 0, 2f3158, ff1f2000, 2) + 108
000e7cd4 catch_errors (ea8cc, 0, 2473a8, 6, ffbff6f0, 1) + 5c
000907e8 tui_command_loop (0, 64, ffffffff, 0, 0, 2f6190) + e0
000e7fcc current_interp_command_loop (800000, ff400000, ffc00000, 800000, 0, 331b40) + 54
00045b80 captured_command_loop (1, 1, 0, fef33a54, ff1f2000, 2) + 4
000e7cd4 catch_errors (45b7c, 0, 22db20, 6, 2dc400, 0) + 5c
0004625c captured_main (2d1800, 2f4ae0, 0, 0, 0, 0) + 6a0
000e7cd4 catch_errors (45bbc, ffbffc18, 22db20, 6, 0, 0) + 5c
00046bb0 gdb_main (ffbffc18, 0, 0, 0, 0, 0) + 24
00045b6c main (2, ffbffc9c, ffbffca8, 2f45b8, ff1f0100, ff1f0140) + 28
000459dc _start (0, 0, 0, 0, 0, 0) + 5c
Why are these processes hanging in vfork and execve? This happens on my university machine where fellow students also have accounts. None of them have reported this problem. Seems to happen only to me.
EDIT: With schily's help, I was able to narrow down the problem. When I log in, I am in csh by default. GDB works fine there. Then I run bash from csh to enter a bash shell. Now GDB hangs. When I check the output of echo $SHELL, I see something strange
$ echo $SHELL
/bin/bash=
There is an equals sign at the end of the output. I guess GDB is trying to spawn a bash shell using the SHELL variable and fails to find the binary because of that equals sign. Now, the problem is to find out how that equals sign is getting into the shell path.
|
The process that calls vfork() hangs because it is the vfork() parent, and the child borrowed its process image at that time, so the parent cannot run until the child completes a call to _exit() or exec*().
So you need to find out why the exec*() hangs.
A typical reason for a hang in exec*() is a NFS hang or a traversal through a non-existent automount point.
Call truss -p 13079 to get the path for the hanging exec*().
| GDB hangs forever on Solaris |
1,354,215,010,000 |
strace runs a specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process.
When running an external command in a bash shell, the shell first calls fork() to create a child process, and then calls execve() for the command in the child process. So I guessed that strace would report fork() or something similar such as clone().
But the following example shows it doesn't. Why doesn't strace report that the parent shell fork() the child process before execve() the command? Thanks.
$ strace -f time
execve("/usr/bin/time", ["time"], [/* 66 vars */]) = 0
brk(0) = 0x84c000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b2a5000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=141491, ...}) = 0
mmap(NULL, 141491, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7efe9b282000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\37\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1840928, ...}) = 0
mmap(NULL, 3949248, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7efe9acc0000
mprotect(0x7efe9ae7b000, 2093056, PROT_NONE) = 0
mmap(0x7efe9b07a000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1ba000) = 0x7efe9b07a000
mmap(0x7efe9b080000, 17088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7efe9b080000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b281000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b27f000
arch_prctl(ARCH_SET_FS, 0x7efe9b27f740) = 0
mprotect(0x7efe9b07a000, 16384, PROT_READ) = 0
mprotect(0x602000, 4096, PROT_READ) = 0
mprotect(0x7efe9b2a7000, 4096, PROT_READ) = 0
munmap(0x7efe9b282000, 141491) = 0
write(2, "Usage: time [-apvV] [-f format] "..., 177Usage: time [-apvV] [-f format] [-o file] [--append] [--verbose]
[--portability] [--format=format] [--output=file] [--version]
[--quiet] [--help] command [arg...]
) = 177
exit_group(1) = ?
+++ exited with 1 +++
|
$ strace -f time
execve("/usr/bin/time", ["time"], [/* 66 vars */]) = 0
brk(0) = 0x84c000
...
Strace directly invokes the program to be traced. It doesn't use the shell to run child commands, unless the child command is a shell invocation. The approximate sequence of events here is as follows:
The shell executes strace with arguments "strace", "-f", "time".
Strace starts up, parses its command line, and eventually forks.
The original (parent) strace process begins tracing the child strace process.
The child strace process executes /usr/bin/time with the argument "time".
The time program starts up.
After step 1, the original shell process is idle, waiting for strace to exit. It's not actively doing anything. And even if it were doing something, it's not being traced by strace, so its activity wouldn't appear in the strace output.
| Why doesn't strace report that the parent shell fork() a child process before execve() a command? |
1,354,215,010,000 |
There are a couple of questions related to the bash fork bomb :(){ :|: & };: , but when I checked the answers I still could not figure out what exactly that part of the bomb is doing when one function pipes into the next, basically this part: :|: .
I understand so far, that the pipe symbol connects two commands by connecting the stdandard output of the first to the standard input to the second, e.g. echo "Turkeys will dominate the world" | sed 's/s//'.
But I do not get what the first function is pushing through its standard out, which gets pushed into the second one; after all, there are no return values defined inside the function, so what is travelling through the human centipede if the man at the beginning has an empty stomach?
|
Short answer: nothing.
If a process takes in nothing on STDIN, you can still pipe to it. Similarly, you can still pipe from a process that produces nothing on STDOUT. Effectively, you're piping a single EOF indicator into the second process, which is simply ignored. The construction using the pipe is just a variation on the theme of "every process starts two more". This fork bomb could also be (and sometimes is) written as:
:(){ :&:; }; :
Where the first recursive call is backgrounded immediately, then the second call is made.
In general, yes, the pipe symbol (|) is used to do exactly what you mentioned - connect STDOUT of the first process to STDIN of the second process. That's also what it's doing here, even though the only thing that ever goes through that pipe is the single EOF indicator.
| What exactly is the function piping into the other function in this fork bomb :(){ :|: & };:? |
1,354,215,010,000 |
We have a spark cluster that launches via supervisor. Excerpts:
/etc/supervisor/conf.d/spark_master.conf:
command=./sbin/start-master.sh
directory=/opt/spark-1.4.1
/etc/supervisor/conf.d/spark_worker.conf:
command=./sbin/start-slave.sh spark://spark-master:7077
directory=/opt/spark-1.4.1
The challenge for supervisor is that these scripts launch a daemon process and detach, whereas supervisor expects things to run in the foreground without forking. So far, my efforts to convince supervisor that forking is okay, or to convince Spark not to fork, have come to naught. Has anyone found a better way? Thanks!
|
Solution I inferred from a previous version of the documentation:
/etc/supervisor/conf.d/spark_master.conf:
command=/opt/spark-1.4.1/bin/spark-class org.apache.spark.deploy.master.Master
directory=/opt/spark-1.4.1
/etc/supervisor/conf.d/spark_worker.conf:
command=/opt/spark-1.4.1/bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
directory=/opt/spark-1.4.1
Launching via the bin/spark-class command stays in the foreground, and has the added satisfaction of not perpetuating the "slave" terminology.
| Launch Spark in Foreground via Supervisor |
1,354,215,010,000 |
My code is forking a process and printing each process' PID and PPID. I was expecting the child's PPID to be same as the parent's PID, but it is not coming up as such.
I'm using Ubuntu 14.04.
#include <stdio.h>
#include <sys/wait.h>
int main(){
int pid;
pid = fork();
if(pid==0){
printf("\nI am the child and my parent id is %d and my id %d\n", getppid(), getpid());
}
else
printf("\nI am the parent and my pid is %d and my parent id is %d\n", getpid(), getppid());
return 0;
}
Here is the output I am getting:
I am the parent and my pid is 29229 and my parent id is 27087
I am the child and my parent id is 1135 and my id is 29230
|
My guess is: the parent returned before the child, which became an orphan. PID 1135 must be your user init process, which became the process' new parent. (there are 2 subreapers in a Ubuntu user session).
$ ps -ef | grep init
you 1135 ... init --user
If you want your parent to wait for its child, use wait. You actually have the include already:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>
int main(){
int pid;
pid = fork();
if(pid == 0)
printf("\nI am the child and my parent id is - %d and mine id %d\n",getppid(),getpid());
else{
printf("\nI am the parent and my pid is %d and my parent id is %d\n",getpid(),getppid());
wait(NULL);
}
return 0;
}
This will ensure that the parent doesn't exit before the child's printf. You can see this behaviour more clearly by inserting a few sleep() calls here and there to see in which order things occur.
For more information on subreapers, have a look here.
| Unexpected parent process id in output |
1,354,215,010,000 |
I am doing some experiments to know how environment variables are inherited from parent process to child process by executing shell scripts in zsh and then use pstree <username> to see the inheritance tree.
I suppose that zsh does a fork to run a script, but the process name shown in the pstree output is the script file name, not zsh.
#parent.sh
#! /bin/zsh
export AMA=1
./childEcho.sh #call child
#childEcho.sh
#! /bin/zsh
echo ${AMA}
./subchild.sh #call sub_child
#subchild.sh
#! /bin/zsh
echo ${AMA}
sleep 5d #sleep so that pstree can see the process tree
Then pstree shows that
sshd───zsh───parent.sh───childEcho.sh───subchild.sh───sleep
Then I delete the hashbang header in the scripts, run again and then by pstree, I will get
sshd───zsh───sh───sh───sh───sleep
So the process is now sh instead of the script file name.
Why we have this different behavior?
How pstree determines the process name and draws the tree?
Something changed the process name at runtime?
|
A program can change its own command line (as shown in ps's CMD column, or pstree). That's what zsh is doing: it changes its command name to the name of the shell script, presumably to make it easier to tell what each zsh process is doing when looking at ps.
For example (though I'm using bash, not zsh, but the same works in zsh—I tested):
$ perl -E '$0 = "I AM PERL"; sleep(60);' &
[1] 504
$ ps 504
PID TTY STAT TIME COMMAND
504 pts/22 S 0:00 I AM PERL
You can get the actual executable by readlink /proc/PID/exe, at least on Linux.
| shell script process fork |
1,354,215,010,000 |
Basic concurrent client/server architecture: There's a main loop listening for requests on a port (for example 3000), after accepting the connection the server spawns a child process that ends up having access to file descriptors where data can be read.
If we have multiple clients connected to the server, the server will have a child process per request. So S1 (child server process) reads data from C1 (a client), S2 reads from C2 and so on. My question is how is it possible that all clients (C1, C2...) are sending information to the same port (3000) and yet the server processes (S1, S2...) are reading only the information sent from the client assigned to them? Where is the multiplexing being done and how?
|
A TCP/IP connection has both a source and a destination endpoint, so if the same host connects to a server on port 3000 multiple times, the Linux kernel can sort out the connections because each one has a unique combination of source IP + source port + destination IP + destination port.
This can be seen with the output of netstat when there are active TCP connections, which shows both the local source port and foreign destination port.
As an aside, doing a fork() for each connection is a really bad idea for a server getting any significant load; fork() is a slow, resource-intensive system call. There's a reason nginx is becoming popular; it uses a difficult-to-program fork()-free model for delivering static content.
| How does the kernel know which file descriptor to write data to after fork() in a concurrent server? |
1,354,215,010,000 |
In Linux (CentOS 7.5, kernel 3.10, gcc 7.3), is it possible to change the working directory of a child process created by posix_spawn before it runs a given process image (an executable)? If yes, how? If no, what is the best practice to do it?
|
There is no way to do this as part of the posix_spawn() set of functions.
There is an ongoing discussion initiated by redhat whether such a feature should be added. If this gets accepted, it could become part of POSIX in the next version - this may be in 2-3 years.
BTW: posix_spawn() is implemented on top of vfork()/exec(), and unless you are trying to implement a POSIX shell with vfork() support, vfork()/exec() is really easy to use.
| How to change working directory of a child process by posix_spawn? [closed] |
1,354,215,010,000 |
Consider a parent process which completes a socket/bind/accept, and will fork children with that socket open for them to communicate with, while the parent continues accepting connections. That parent process is then killed.
Another process now attempts to bind to the same address the parent process was bound to, on the same port, but receives an EADDRINUSE error.
However, when you go through this process with sshd, it seems sshd is able to rebind to the port that was closed, while during the restart window (when the sshd parent process is not running), a different program (running as a different user) just gets EADDRINUSE.
What are the semantics behind this? Why can sshd rebind, but another users process cannot?
Additionally, I can confirm that the netstat -a | grep PORT output from during the time only the child process is running (when the other process can't bind), the only connection is the ESTABLISHED one, none in LISTEN state.
|
While I don't understand all of the semantics (I'm either looking in the wrong place, or the documentation is lacking), I believe that for a certain amount of time after closing a connection (the TIME_WAIT state, which lasts on the order of minutes), no process can bind a new socket with the same address details unless it has SO_REUSEADDR set.
This is to prevent someone reconnecting a second after a connection is closed and the process having to deal with packets that were meant for the previous process, as I understand it.
man 7 socket doesn't document this as part of SO_REUSEADDR, which made this answer hard to figure out.
| What are the semantics of getting a EADDRINUSE when no listening socket is bound, but connections are open |
1,354,215,010,000 |
I'm trying to run a simple one file C program in a Virtual Machine. In fact it is the fork bomb c program:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
while(1)
{
fork();
}
return 0;
}
I want to do this in order to check how much of an effect this VM would have on another VMs running on the system.
I was wondering what is the easiest way to execute this in a VM, and if possible to avoid downloading, compiling and building a whole Ubuntu/other Linux VM. I often use Unikernels for such things, however most of the ones I know do not have a support for the fork() system call.
|
With qemu kvm, booting the host kernel in a headless VM with console I/O on serial (redirected to the terminal you run it from):
Compile that fork-bomb.c into a static init executable:
gcc -static -o init fork-bomb.c
Make an initramfs with just that init at the root:
bsdtar --format newc -cf initrd init
Boot the VM on it:
kvm -nographic \
-kernel "/boot/vmlinuz-$(uname -r)" \
-initrd initrd \
-append 'console=ttyS0 debug=9'
Press Ctrl+a then x to terminate the VM.
You can add a -m 4G to get 4GiB of RAM instead of the default of 128MiB, -smp 4 to get 4 CPUs instead of just 1. See man qemu-system-x86_64 for other ways to customize the virtual hardware. Ctrl+a, c to get the qemu monitor console where you can hotplug more components or inspect the state / config of the VM, suspend, save state, etc.
Here we enable the maximum Linux kernel debug level with debug=9; you can change it at run time by sending sysrq followed by a digit. With console on serial, sysrq is by sending a "break", which here you'd do with Ctrl+a, b.
| What is the easiest way to run a VM that executes a simple C program |
1,642,817,338,000 |
Let's say we are creating a shared memory region using mmap(), and the total memory size is 4096 bytes. If we use the fork() system call to create children, will the children use the same memory, or will they need their own memory to work with?
|
On fork() the memory space of the parent process is cloned into the child process. As an optimization, modern operating systems use COW (copy on write), so all private memory is shared with the child process until one of the processes performs a change. Then the affected memory pages get duplicated.
The child process and the parent process run in separate memory
spaces. At the time of fork() both memory spaces have the same
content. Memory writes, file mappings (mmap(2)), and unmappings
(munmap(2)) performed by one of the processes do not affect the
other.
"Both memory spaces have the same content" includes memory allocated with mmap(). The memory mappings get cloned and mmap() or munmap() after the fork don't affect the other process anymore.
Only memory mapped with MAP_SHARED (or the Linux-specific MAP_SHARED_VALIDATE) before the fork will have changes to the contents propagated between the processes.
MAP_SHARED
Share this mapping. Updates to the mapping are visible to other processes
mapping the same region, and (in the case of file-backed mappings) are
carried through to the underlying file. (To precisely control when updates
are carried through to the underlying file requires the use of msync(2).)
There are some Linux specific mapping flags to modify the behaviour in other ways:
Memory mappings that have been marked with the madvise(2) MADV_DONTFORK flag are not inherited across a fork().
Memory in address ranges that have been marked with the madvise(2)
MADV_WIPEONFORK flag is zeroed in the child after a
fork(). (The MADV_WIPEONFORK setting remains in place for
those address ranges in the child.)
On exec() the memory image is replaced with the new process, so all memory mappings that got inherited on fork() are removed.
All process attributes are preserved during an execve(), except the following:
[…]
Memory mappings are not preserved (mmap(2)).
Attached System V shared memory segments are detached (shmat(2)).
POSIX shared memory regions are unmapped (shm_open(3)).
| How does a process and its children use memory in case of mmap()? |
1,642,817,338,000 |
I have the following situation: (The following functions are ones taken from python)
I have a process A which is running and has a cgroup memory limit set on it.
I fork a child process from A using os.fork(). Let us call it B. Then I execute os.execvp to load a shell script inside B. As per http://www.csl.mtu.edu/cs4411.ck/www/NOTES/process/fork/exec.html this process runs in the same address space as the caller (i.e. B).
The shell script running in B creates a Java program which runs indefinitely with a given heap size and automatic kill on OOM. This is achieved by passing -XX:OnOutOfMemoryError='kill -9 %p' to the java command. This creates another process C as a child of B.
Looking from top, I see B as child of A and C as child of B as expected.
Below are the doubts:
Does the cgroup limit apply on only A or (A+B) or (A+B+C)?
If memory limit is reached, which all process are killed: A or (A+B) or (A+B+C)? And why? Is it identified using process pids or addresses space of the process on which limit is applied?
If in the above point, not all the process are killed is there a way to fine-tune cgroup setting to kill all the child processes as well since we would be left with orphaned processes in such cases?
I am working on Centos7 as the underlying operating system.
|
I'll address each of your questions:
The cgroup applies to A and any descendants of A. You can experiment to verify this behavior. Consider:
Create a new memory cgroup
# mkdir /sys/fs/cgroup/memory/example
Note the shell's PID:
# echo $$
679
Put the running shell into the new cgroup
# echo $$ > /sys/fs/cgroup/memory/example/cgroup.procs
Examine what processes are in the cgroup. Here 679 was the
shell's PID; 723 is the pid of cat
# cat /sys/fs/cgroup/memory/example/cgroup.procs
679
723
Start a new shell and note its PID
# bash
# echo $$
726
Examine what processes are in the cgroup. 679 was the original
shell. 726 is the second shell. 731 is cat:
# cat /sys/fs/cgroup/memory/example/cgroup.procs
679
726
731
According to the memory cgroup documentation:
When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup.
Based on that, it doesn't kill all the processes in the cgroup; it picks the one consuming the most memory.
I don't see a way to tune which process gets killed to match what you're asking here. There is a way to have some control over the OOM killer; see for example this Linux Weekly News Article. That said, if one of the parent processes gets killed, any child processes it had get reparented to the process with PID = 1 (by default).
| Cgroup memory limits and process killing |
1,642,817,338,000 |
I have 2 files: /MyDir/a and /MyDir/MySubDir/b and am running a bash script, to which I want to add code to make file /a point to file /b, but only in the current process and its descendants.
In hopes of making /MyDir/a point to /MyDir/MySubDir/b in the context of only the current process (not including its descendants, yet) I tried to first make the current process run in its own mount namespace by running a small C program in my script that performs
unshare(CLONE_NEWNS)
and then
mount --bind /MyDir/MySubDir/b /MyDir/a.
Unfortunately, this didn't work as I expected since the mount was still visible by other processes, despite the system call reporting success.
In another attempt, I tried to make the mount from the C code by calling
mount("/MyDir/a", "/MyDir/MySubDir/b", "ext3", MS_BIND, null)
But this didn't work as the mount didn't take effect at all (despite the call reporting success).
Is there a way of making /MyDir/a point to /MyDir/MySubDir/b in the context of only the current process and its descendants using a bash script?
I also read a little about chroot, but this applies only to the / directory...
Is there anything similar to chroot that applies only to a particular subdirectory?
Thanks for your time!
|
A shell-only solution would be:
For interactive shell:
# unshare --mount
# mount --bind /MyDir/MySubDir/b /MyDir/a
#
non-interactively, before a script that doesn't have to know about these settings:
# unshare --mount sh -c 'mount --bind /MyDir/MySubDir/b /MyDir/a; exec somethingelse'
The unshare manpage also warns about shared subtree mounts. If you have to disable them, consider adding for example --make-private to mount.
As Hauke said, you have to be sure not to leave the namespace just after having created it, because it will disappear.
If needed there's a method to maintain a namespace without process. Since it involves mount, it's just a bit more tricky for a mount namespace. Here's an interactive example for this:
shell1# unshare --mount
shell1# echo $$
12345
shell1#
shell2# : > /root/mntreference
shell2# mount --bind /proc/12345/ns/mnt /root/mntreference
Now as long as this reference is kept mounted, the namespace won't disappear even if there's no process using it anymore. Using nsenter --mount=/root/mntreference will enter it, so you can easily run additional scripts in it.
Using the equivalent in C shouldn't be a problem.
| Making a bind-mount take effect only in the context of the current process and its descendants |
1,642,817,338,000 |
"myapplication" needs some setup or clean up done, so I use the following wrapper script:
#!/bin/bash
echo "Do important set up stuff"
myapplication
echo "Clean up"
and put it in my path, named "myapplication" so it takes precedence over the original one automatically. This worked while testing but stopped once I actually put it into my path, giving the following error instead:
/home/user/bin/myapplication: fork: retry: No child processes
[more of the same line]
/home/user/bin/myapplication: fork: retry: No child processes
/home/user/bin/myapplication: fork: Resource temporarily unavailable
and also causing other programs to malfunction with the same error in the time after the script was launched before it aborts with the last error.
|
Once the script is in the path, the line in the script which is supposed to call the original program instead calls the script itself, creating unbounded recursion of new processes until some system limit is reached.
The correct approach is to run which myapplication before putting the script in the PATH, note the absolute path of the original myapplication executable, and then use that path to call myapplication from the script.
The lesson to be learned in general is: this error may indicate a non terminating recursion.
| wrapper script: fork: retry: No child processes |
1,642,817,338,000 |
I would like to catch all syscalls coming from a forked process, modify them, send them to the kernel, and then pass them back to the forked process. Is this possible, and if so, how might I go about this?
I've done some research and found ptrace, but it seems a bit heavyweight because it does so many things (modifying registers, etc.). Correct me if I'm wrong, however.
|
If you can wait for version 5.11 of the kernel, it will have a new system call interception mechanism designed for fast (or less slow) emulation of system calls. The initial use case is for Wine but it is usable for other purposes, as long as a signal handler can work (it relies on SIGSYS).
| Is there a better method than ptrace for intercepting ("catching") Linux syscalls coming from a forked process? |
1,642,817,338,000 |
While I was playing around with fork() I noticed a rather strange behavior but I couldn't figure out myself why this happens.
In the example below, each time fork() is invoked the output from the printf() invocation prior to that is printed out to stdout. The value of test in the output shows that printf() does not really get executed again, otherwise this would increase test each time.
Even stranger - or maybe key to the solution - is the fact that this behavior doesn't occur when I add \n to the end of the format string of printf().
Does anybody know why this happens?
Maybe it is related to the stdout buffer? I'm not really familiar with this stuff.
Or am I doing something horribly wrong??
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void) {
pid_t pid;
int wstatus;
int test = 0;
printf("\n[%d] parent start | test = %d ", getpid(), test++);
// This works fine
//printf("\n[%d] parent start | test = %d \n", getpid(), test++);
for(int i = 0; i < 5; i++) {
if((pid = fork()) == 0) {
//printf("\n[%d] Child spawned ", getpid());
exit(0);
}
//printf("\n[%d] Printing with fork() commented out works fine", getpid());
}
while(wait(&wstatus) > 0);
printf("\n[%d] parent end\n\n", getpid());
return 0;
}
Output:
[342470] parent start | test = 0 [342470] parent start | test = 0 [342470] parent start | test = 0 [342470] parent start | test = 0 [342470] parent start | test = 0 [342470] parent start | test = 0
[342470] parent end
In case it's of any use
$ uname -a
Linux Aspire 5.4.0-26-generic #30-Ubuntu SMP Mon Apr 20 16:58:30 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
|
The printf has written text into the stdout buffer without flushing it; fork() duplicates that buffer into every child, and each copy is finally written out via the exit call of the corresponding process. Ending the format string with \n avoids this because stdout is line-buffered when connected to a terminal, so the newline flushes the buffer before any fork() happens.
| Recurrent output of printf() to stdout each time fork() is invoked although printf() is invoked prior to fork(). And why does '\n' fix this? [duplicate] |
1,642,817,338,000 |
I am trying to understand the behaviour of programs that launch subprocesses, when run in a pipeline.
This bash program, fork.sh, prints and returns immediately:
(sleep 1) &
echo 'here'
But when connected to a pipe, the read end seems to wait for the sleep to complete.
$ time bash fork.sh | wc
1 1 5
real 0m1.014s
I've also tried this in Ruby, with some extra calls to try to prevent the sleep from blocking:
Process.detach(fork { sleep 1 })
puts 'here'
fork {
sleep 1
Process.daemon
}
puts 'here'
But they behave the same.
I would like to know what causes this (in terms of Unix, file descriptors, etc), and if there's a way to re-write any of these so that the pipeline returns in under a second.
edit: the answer below helped me notice the problem with the Ruby example: the daemon() call must come first. I had thought it was somehow applied to the entire process.
|
child processes inherit all the file descriptors from their parents.
When executing a command (like your sleep here assuming your shell doesn't have it builtin), only the file descriptors marked with the close-on-exec flag are closed, but shells never set that flag on stdout (fd 1).
a pipe reader will only get an EOF when all the file descriptors pointing to its writing end have been closed.
time bash fork.sh | wc
You should have your sleep process (as started from fork.sh) give up on its stdout, which points to the write end of the pipe wc is reading from; in fork.sh:
(sleep 1 >/dev/null) &
echo 'here'
In this case sleep .. >&- (which closes the stdout, without redirecting it elsewhere) could work too, but I don't recommend this in general, because if the process opens some file afterwards, the returned file descriptor will be 1 = stdout, which may break assumptions and trigger bugs.
| Do subprocesses keep pipes open? |
1,642,817,338,000 |
I would like to properly understand the swapping process, and yet I couldn't find a thorough explanation of how a PTE's flags are restored once a page is swapped back into memory, since that information is "lost" when swapping out: the address of the corresponding swap area is inserted into the PTE of a swapped-out page. I do understand that the flags of a virtual address are stored in vm_area_struct, but I couldn't trace the stage at which that is used during the swap-in procedure.
Another potential problem: what happens if a parent process forks and both the parent's and the child's pages are swapped out? As far as I can tell, the read-only flag is on in both page tables, while the vm_area_struct allows writing since both have the VM_MAYWRITE permission for some memory areas; but once swapped out, the read-only flag in the corresponding PTE is "erased". Is the COW technique still applicable once such a page is swapped back in and the child process wants to write to it?
|
Like you said, the vm_area_struct tells in what memory area the fault happened, and the protection flags are contained in this struct. The function __do_page_fault calls find_vma to get a pointer to the vm_area_struct. This struct is then passed via handle_pte_fault all the way to do_swap_page (in the vm_fault *vmf parameter), which calls mk_pte with the protection bits as parameter.
Your other problem: if a COW page is swapped out and a process wants to write to it. In this case you get a page fault because the page is swapped out. The handler takes care of the situation, and the process goes to sleep until the page has been read in from disk. When the process is scheduled to run again, it re-executes the faulting write instruction and BANG! — we get a new fault, this time since the page is read-only because of the copy-on-write.
| how does pte's flags are restored when a page is swapped in from swap-area? |
1,642,817,338,000 |
If the child tries to write, it gets a new copy of the page (which is no longer write protected), does the grandchild point to that new page or the old one (which the parent holds)?
|
The process that writes to the page gets a new copy. If there are multiple processes that shared the old copy, they keep sharing the same page. It doesn't matter if the processes happen to be related.
| When parent, child and grandchild processes share a page how does copy-on-write work? |
1,642,817,338,000 |
I can't figure out what this is trying to do. The part between backticks looks like a plain old forkbomb, but the base64 doesn't seem to decode to anything sensible. Can you help?
Don't run it, obviously :)
eval $(echo "a2Vrf4xvcml\ZW%3t`r()(r{,54}|r{,});r`26a2VrZQo=" | base64 -d)
|
Pretty sure the base64 string is just a cover up and is never run. The embedded fork bomb runs first and never returns.
| What is the exact function of this malicious bash one-liner? |
1,642,817,338,000 |
I am using Ubuntu 22.04.1 on WSL 2 (though only the fact that it is a Unix-like system is relevant to this question)
How come when we run tmux from a zsh session, the process tree (which I have abridged somewhat) changes from
init(Ubuntu)─┬─SessionLeader───Relay(9)─┬─ssh-agent
└─zsh───pstree
to
init(Ubuntu)─┬─SessionLeader───Relay(9)─┬─ssh-agent
├─tmux: server───zsh───pstree
└─zsh───tmux: client
Here, pstree is just the command that tells me the process tree, hence its presence above.
When we run tmux in zsh, zsh runs fork() to create a child process (that is, tmux: client above). I am not sure how tmux: server, a process that is a sibling of the process that spawned it, comes to be.
|
For the server, tmux forks itself twice so as to daemonize itself and detach itself from the session it was started from.
The child dies, the grand-child runs the server. That means the server has no parent.
Processes without parents are normally adopted by init, the process of id 1. On Linux, some process can be nominated as a child subreaper using the PR_SET_CHILD_SUBREAPER prctl() to take on that role for its descendants.
That or the equivalent for WSL is likely what you're observing here. That Relay(9) process is probably a child subreaper and has adopted that tmux server daemon.
You can tell who's the child subreaper in your ancestry with something like:
((zmodload zsh/system; sleep 0.2; echo $sysparams[ppid])&) | cat
I'd expect that to return the pid of the Relay(9) process.
You can follow what tmux does here by running it under:
strace -fo log -e clone,exit_group,prctl tmux
And inspect the contents of the log file afterwards. You'll see something like:
3908 execve("/usr/bin/tmux", ["tmux"], 0x7fffd0c3e990 /* 50 vars */) = 0
3908 prctl(PR_SET_NAME, "tmux: client") = 0
The parent is the client.
3908 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f54afe20a10) = 3909
3909 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f54afe20a10) = 3910
Forked twice for the server.
3909 exit_group(0) = ?
Child terminates.
3910 prctl(PR_SET_NAME, "tmux: server") = 0
Grand child (which by that time has already lost its parent above and been adopted by the child subreaper) runs the server.
[...]
3910 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f54afe20a10) = 3911
3911 execve("/bin/zsh", ["-zsh"], 0x55bee5bdd170 /* 54 vars */) = 0
The server forks a process to run your $SHELL in the first pane.
| Child and sibling processes from running tmux in zsh |
1,642,817,338,000 |
I am researching how processes and shell work in Linux system. I would like to consult you to see if my conclusions are correct.
When we start the system, the kernel starts the init process; everything else runs as a sub-process created by forking from it. For example, when I run any program, the parent process forks, and the forked process becomes the child process (or sub-process) that runs the program via exec. If that is the case, then when I run the bash shell, for example, a fork happens and exec turns the forked process into the child process in which the bash program runs. At this point, what stumps me is how the commands we enter into the bash shell are executed. What happens for built-in and external commands? For example, do built-in commands fork or create subprocesses?
|
What you are asking will be true for all Unixes, not just Gnu/Linux.
The thing to note is that after a fork, one does not need to exec. So when a shell built-in has to run in a separate process (in a pipeline, for example), the shell will fork and then perform the built-in command in the child, without any exec.
The shell will also fork for a sub-shell.
The shell does not fork when it does not have to: e.g. for simple commands that are built in, where "simple" includes not being part of a pipeline.
There are also (not mentioned in your question) the pipes. These are created before the fork, but wired up after the fork and before the (optional, see built-ins) exec.
| How exactly do programs or bash shell commands work on Linux systems? |
1,642,817,338,000 |
I am experiencing a frustrating problem where my bash and sudo programs seem to be replicating thousands of processes on Mac. I have searched for all kinds of ways to stop them. I don't know what to do. I have restarted the computer. They self replicate after: pkill -f bash
I don't want to have a looped kill script battling this ongoing. I just want it to stop. Thank you so much.
The only thing I can think that I did was try to run openvpn accidentally on the wrong file type. It then gave me, fork: resource temporarily unavailable.
|
You have diagnosed that the offending command is in fact your openvpn invocation.
You should be able to kill openvpn by using
sudo pkill -f openvpn
Failing that, temporarily uninstalling openvpn or just changing the executable's name should cause it to stop respawning, at least after a reboot (I'm a bit unclear on why a reboot did not stop this in the first instance).
If it does not respond to the termination signal, you could, as a last resort, use the kill signal,
sudo pkill -KILL -f openvpn
| Bash and Sudo forking continuously, hidden fork bomb? |
1,642,817,338,000 |
We understand the COW behavior after a fork (as for example described here) as follows: fork creates a copy of the parent's page table for the child and marks the physical pages read only, so if any of the two processes tries to write it will trigger a page fault and copy the page.
What happens after the child process execs? We would assume the parent process can again write to its pages without triggering a page fault, but it has proven difficult to find exact information on how this is implemented.
Any pointers (including to code) are welcome!
|
When the child process execs, all its current pages are replaced with a brand new set of pages corresponding to the new executable image (plus heap, stack, etc.).
Modern OSes implement CoW by maintaining a reference count for the physical pages shared between parent and child processes. If a page is shared between parent and child, the reference count will be 2. Once the child process goes through exec, the reference count for the shared pages is decremented (e.g., it's back to 1), so any write operation by the parent process will succeed without CoW.
For your amusement, create a simple program that does a fork followed by the child process sleeping for a few seconds and then doing an exec. Now observe the contents of /proc/PID/smaps of both processes before the fork (only the parent of course), after fork but before exec, and after exec. Pay attention to the Shared_XXX pages and the corresponding address ranges.
In terms of code, there are a few simple XV6 extensions to support copy-on-write. A simple google search might be enough. Another place to look at might be https://github.com/torvalds/linux/blob/master/kernel/fork.c. Start tracing it from the fork entry and have fun.
Fork is rather simple, once you get the hang of it, but the memory
management can be a bitch. See 'mm/memory.c': 'copy_page_range()'
| fork() and COW behavior after exec() |
1,642,817,338,000 |
I have a process P which is spawned by a process owned by root.
After P is created setguid() and setuid() are called and it runs as user U.
The process P attempts to create a file f on a folder F (in the root file system) which is owned by root and has the following privileges:
drwxrwx--- 2 root root
The function call look likes this:
open(path , O_CREAT | O_RDWR , 0660);
If I run the command ps -e -o cmd,uid,euid,ruid,suid,gid,egid,rgid,sgid
the result is the following:
/my/process 500 500 500 500 500 500 500 500
This confirms that the process P is not running as root. However, strangely enough, even though the process runs as user U, the file f is created under the folder F, which should only be writable by root and its group members:
-rw-rw---- 1 U U
So the file is owned by U.
If I try doing the same from the bash I get a "Permission Denied" as expected:
$ touch /F/f
touch: cannot touch `/F/f': Permission denied
If I set the folder F permissions to:
drwx------ 2 root root
then the open() call fails with "Permission Denied" as expected.
Why can P create the file in that folder when writing permission has been granted to the root group?
The ps command shows that all uid and gid are set to the related user ids, so how is this possible?
These are the group memberships of root and U:
$groups root
root : root
$groups U
U : U G
So U has G as secondary group
$lid -g root
root(uid=0)
sync(uid=5)
shutdown(uid=6)
halt(uid=7)
operator(uid=11)
$lid -g U
U(uid=500)
$lid -g G
U(uid=500)
This show that only U is a member of G
|
Like @jdwolf mentions in the comments, the issue might be supplementary groups. setgid() doesn't remove them.
A simple test, ./drop here is a program that calls setregid() and setreuid() to change the GID and UID to nobody, and then runs id:
# id
uid=0(root) gid=0(root) groups=0(root)
# ./drop
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup),0(root)
There's still the zero group. Adding setgroups(0, NULL) (before the setuid()) removes that group:
# ./drop2
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
Of course, that doesn't add any of the other target user's groups.
| Linux open() syscall and folder permissions |
1,642,817,338,000 |
If I understand context switching correctly, the process involves two major steps:
The MMU is switched to one that maps the new processes virtual memory space to physical memory space.
The processor state is saved for the current process, then switched to the saved processor state for the new process. Presumably, this includes setting the program counter to begin execution from where the switched-to process last left off.
In the kernel, the function that handles all of this is called context_switch() (source code here). This function handles both of the required steps, but after setting the processor state, it then returns.
That's confusing, because it seems to me that once the program counter is manually moved to a new place, context_switch() wouldn't have an opportunity to return at all. The only explanation I can come up with is that context_switch() is both the code that switches to a new process and the code to which switched processes return. In other words, every process ends up switching from its own context_switch() to another process's context_switch(). But then it seems unclear to me how this could work in a newly forked process. So maybe context_switch() actually runs to completion and returns, and then something else jumps to the correct part of the target process?
Is this thinking correct? At what point exactly does context_switch() move from one process to another? When does context_switch() return? When it switches to a new process, where in the new process' execution state does it end up? How does this fit in with newly forked processes?
I've been spending the last few days reading through the relevant parts of the kernel source code to try and figure this out, but I'm afraid I'm not getting any closer to understanding. Hopefully someone here can help.
|
Note the comment in line 3366 (as of 5.7.7):
/* Here we just switch the register state and the stack. */
context_switch() doesn't load the new instruction pointer (program counter) directly, it switches the stack — and the stack contains the appropriate return address. When the function returns, it returns to the new task.
When forking, the virtual return address is the same in both processes (parent and child); the difference is the return value.
| When exactly does context_switch() switch control to a new process? |
1,642,817,338,000 |
I'm using this instruction to forward a port to another, both on a local machine:
socat -d -d TCP4-LISTEN:80,reuseaddr,fork TCP4:127.0.0.1:8000
I need to keep the port open unless the destination port gets closed (connection refused).
Is it possible to ask socat to terminate on connection refused (with fork enabled)?
|
AFAIK this is not possible with current versions
| tell socat to stop on connection refuse with fork enabled |
1,642,817,338,000 |
I am currently taking a Computer Systems class and am having trouble with a homework problem. I have to create this specific process tree:
I also need it to stay in this state for a while (using sleep()) so a user can look it up in the terminal using pstree and see that it exists. Then it must terminate backwards (first D, then B, then C). So far, I can make the tree, but process C terminates before the rest of the tree is made, so I only end up with A->B->D. I know this is happening because of my exit(1) line, but I don't know where else to put this or if there is another way.
Code I have so far:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main() {
    int status = 0;
    printf("I am: %d\n\n", (int)getpid());
    pid_t pid = fork(); // fork a child
    if(pid == 0)
    {
        printf("Hi I'm process %d and my parent is %d\n",getpid(),getppid());
        exit(1);
    }
    else
    {
        pid_t childPid = wait(&status);
        int childReturnValue = WEXITSTATUS(status);
        printf("parent knows child %d finished with return value %d\n\n", (int) childPid, childReturnValue);
        pid_t pid = fork(); // fork a child
        if (pid == 0)
        {
            printf("Hi I'm process %d and my parent is %d.\n", getpid(), getppid());
            pid = fork(); // fork a child
            if (pid == 0)
            {
                printf("Hi I'm process %d and my parent is %d.\n",getpid(),getppid());
                exit(3);
            }
            else
            {
                pid_t childPid = wait(&status);
                int childReturnValue = WEXITSTATUS(status);
                printf("parent knows child %d finished with return value %d\n\n", (int) childPid, childReturnValue);
            }
            exit(2);
        }
        else
        {
            pid_t childPid = wait(&status);
            int childReturnValue = WEXITSTATUS(status);
            printf("parent knows child %d finished with return value %d\n\n", (int) childPid, childReturnValue);
        }
    }
    return 0;
}
Here is the output I am currently getting:
I am: 2827
Hi I'm process 2828 and my parent is 2827
parent knows child 2828 finished with return value 1
Hi I'm process 2829 and my parent is 2827.
Hi I'm process 2830 and my parent is 2829.
parent knows child 2830 finished with return value 3
parent knows child 2829 finished with return value 2
Ideally, the line "parent knows child 2828 finished with a return value 1" should be all the way at the end. Thanks in advance!
|
You have to use sleep to prevent C from quitting immediately. But in your structure, you have A waiting for C to quit before it spawns B and D.
So:
- put the wait block for C in the same place as the wait block for B
- add a sleep before the exit of C (and also before the exits of B and D)
- as you don't want to wait for B for double the time, ensure the sleep of B is before the wait for D
- to get the correct return value for each sub-process, you should use waitpid instead of wait
Here is the full code:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

#define SLEEP_TIME 5

int main() {
    int status;
    printf("I am: %d\n\n", (int)getpid());
    pid_t c_pid = fork(); // fork a child
    if(c_pid == 0)
    {
        printf("Hi I'm process C (%d) and my parent is %d\n",getpid(),getppid());
        sleep(SLEEP_TIME);
        exit(1);
    }
    else
    {
        pid_t b_pid = fork(); // fork a child
        if (b_pid == 0)
        {
            printf("Hi I'm process B (%d) and my parent is %d.\n", getpid(), getppid());
            pid_t d_pid = fork(); // fork a child
            if (d_pid == 0)
            {
                printf("Hi I'm process D (%d) and my parent is %d.\n",getpid(),getppid());
                sleep(SLEEP_TIME);
                exit(3);
            }
            else
            {
                // sleep before wait - actually no effect as the wait for D also waits for SLEEP_TIME
                sleep(SLEEP_TIME);
                // Wait for D to quit
                waitpid(d_pid, &status, 0);
                int DReturnValue = WEXITSTATUS(status);
                printf("parent knows child D (%d) finished with return value %d\n\n", (int) d_pid, DReturnValue);
            }
            exit(2);
        }
        else
        {
            sleep(SLEEP_TIME);
            // Wait for B to quit
            waitpid(b_pid, &status, 0);
            int BReturnValue = WEXITSTATUS(status);
            printf("parent knows child B (%d) finished with return value %d\n\n", (int) b_pid, BReturnValue);
            // Wait for C to quit
            waitpid(c_pid, &status, 0);
            int CReturnValue = WEXITSTATUS(status);
            printf("parent knows child C (%d) finished with return value %d\n\n", (int) c_pid, CReturnValue);
        }
    }
    return 0;
}
Here is the corresponding output :
I am: 24450
Hi I'm process C (24451) and my parent is 24450
Hi I'm process B (24452) and my parent is 24450.
Hi I'm process D (24453) and my parent is 24452.
parent knows child D (24453) finished with return value 3
parent knows child B (24452) finished with return value 2
parent knows child C (24451) finished with return value 1
| Creating a specific process tree and terminating it |
1,642,817,338,000 |
Operating System Concepts say
fork() we can use a technique known as copy-on-write,
which works by allowing the parent and child processes initially to share the
same pages. ...
When it is determined that a page is going to be duplicated using
copy- on-write, it is important to note the location from which the
free page will be allocated. Many operating systems provide a pool
of free pages for such requests. These free pages are typically
allocated when the stack or heap for a process must expand or when
there are copy-on-write pages to be managed. Operating systems
typically allocate these pages using a technique known as
zero-fill-on-demand. Zero-fill-on-demand pages have been zeroed-out
before being allocated, thus erasing the previous contents.
Is copy-on-write not implemented based on page fault? (I guess no)
Do copy-on-write and page fault share the same pool of free pages? If not, why? (I guess no)
Is malloc() implemented based on page fault? (I guess yes, but not sure why it shares the same free page pool as copy-on-write, if that pool is not used by page fault)
Thanks.
|
(Since this is tagged linux, I’m answering in that context. None of this is exclusive to Linux.)
Is copy-on-write not implemented based on page fault?
It is based on page faults. “Copied” pages are marked read-only. When a process tries to write to them, the CPU faults and the kernel duplicates the page before restarting the write.
Do copy-on-write and page fault share the same pool of free pages? If not, why?
Yes, they do.
Is malloc() implemented based on page fault?
malloc() itself doesn’t manipulate the address space or allocated pages; it’s handled entirely by the C library. The function used to allocate memory to the heap is brk(), and yes, it relies on page faults: allocated pages are marked not-present. This relies on a “present” bit in the corresponding page table entry, which the kernel and MMU use to track whether the page is accessible in memory. Any access to a non-present page causes a fault, and the kernel allocates a page and restarts the faulting instruction.
| Is copy-on-write not implemented based on page fault? |
1,642,817,338,000 |
From my understanding, whenever you type a command such as 'ls' in your shell, the parent process (my shell) duplicates itself using the fork() system call and then uses the exec() system call to replace the copy with the new process, in this case 'ls'. Once it exits, control is handed back to my shell.
However, when I ran strace on 'ls' I only see an execve() call and no fork, and control is still handed back to my shell. Kind of confused here...
$ strace ls
execve("/usr/bin/ls", ["ls"], 0x7ffd938934e0 /* 25 vars */) = 0
brk(NULL) = 0x1134000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6ea9e38000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=23255, ...}) = 0
|
Your understanding is correct. When you run strace ls there are even two forks. The shell forks itself and uses exec() to run strace and strace does the same to run ls.
You don't see the fork in the strace output because strace prints all system calls that originate from strace's child process and at that point in time the fork already happened:
bash forks and runs strace
strace forks
The parent strace attaches to the child process to intercept all system calls.
You only see system calls from this point on.
The child strace runs ls using execve()
One way to see the forks happen, is to attach strace "from the outside":
Use echo $$ to get the process id of the shell
Run strace -f --attach=PID with "PID" replaced by the process id from above in another console.
Run ls in the first shell
In the other console window you'll see all system calls happening in the shell and the forked children (including the fork()/clone() calls).
Use CTRL+C in the second console to stop strace.
One other thing to mention is that fork() on current Linux kernels is implemented using the clone() system call, so you'll probably see clone(…) instead of fork() in the strace output.
| Why is 'ls' being created by execve() call and not fork() |
1,642,817,338,000 |
Does the Ubuntu Linux 16.04 daemon function execute a double fork? If so, why is a double fork necessary?
[EDIT May 30 2016 8:11 AM] This is the official Linux Foundation source code for the daemon function I am referring to in this question.
int daemon(int nochdir, int noclose)
{
    int status = 0;

    openlog("daemonize", LOG_PID, LOG_DAEMON);

    /* Fork once to go into the background. */
    if((status = do_fork()) < 0 )
        ;

    /* Create new session */
    else if(setsid() < 0)               /* shouldn't fail */
        status = -1;

    /* Fork again to ensure that daemon never reacquires a control terminal. */
    else if((status = do_fork()) < 0 )
        ;

    else
    {
        /* clear any inherited umask(2) value */
        umask(0);

        /* We're there. */
        if(! nochdir)
        {
            /* Go to a neutral corner. */
            chdir("/");
        }

        if(! noclose)
            redirect_fds();
    }

    return status;
}
Depending on the path of execution it will fork either once or twice.
|
We appear to be referencing the daemon(3) library call, source code for which may be at #1 https://github.com/lattera/glibc/blob/master/misc/daemon.c or at #2 https://github.com/bmc/daemonize/blob/master/daemon.c. Both versions are documented in this single man page.
The source code for #1 shows a single fork(2). The source code for #2 shows a double fork(2). Superficially both functions appear to deliver the same result but by different means.
Seeing as a double fork(2) is not always necessary I suppose this counters the thrust of the second part of your question and renders it no longer necessary. However, the underlying reason for this approach was to guarantee that the forked process could not under any circumstances reacquire a controlling terminal. The newer code solves this problem by setting the child to be a new session leader.
There are other related questions on this and other StackOverflow sites that ask similar questions. Here is one.
| Does the Ubuntu Linux 16.04 daemon function execute a double fork? [closed] |
1,642,817,338,000 |
When a process fork()s children without closing and reopening standard IO, all children share the same IO file descriptors.
By default, running such forking process in a systemd unit will result in any standard output being written to the journal, as expected.
On systemd 241 (Debian buster, Linux 4.19) these journal entries have a _PID field matching the PID of the parent process (the one that systemd started), no matter what process actually wrote to stdout (or stderr).
However... on systemd 247 (Debian bullseye, Linux 5.9) the journal _PID entry correctly matches the PID of the process who actually wrote to the shared stdout file descriptor. I am guessing it does this by reading some magic flags on the socket receive logic, which is awesome.
I have read through the systemd changelog and I can't understand at what point this changed and how, or if something is just configured differently.
Is there a way to have matching _PID tags on the journal for the systemd and Linux kernel shipped with buster?
|
at what point this changed and how
at v243-534-g09d0b46ab6: "journal: refresh cached credentials of stdout streams"
Is there a way to have matching _PID tags on the journal for the systemd and Linux kernel shipped with buster?
you can try applying just that change, and rebuilding.
| How can I have the PIDs in the systemd journal for processes that share the standard output file descriptor? |
1,642,817,338,000 |
The GNU Screen manual says:
`-d -m'
Start `screen' in _detached_ mode. This creates a new session
but doesn't attach to it. This is useful for system startup
scripts.
`-D -m'
This also starts `screen' in _detached_ mode, but doesn't fork
a new process. The command exits if the session terminates.
-dm is pretty clear to me:
screen forks a new process to run the provided command (or a shell if nothing was specified).
By "fork" it means that weird Schrödinger's system call in which the source code doesn't know if it's the parent or the child until the return value is observed.
And this new process is recognized by screen as something that can be attached.
I noticed that -dm returns control to the shell, but -Dm blocks.
So my question is:
Why does -Dm block? And how is that related to its lack of forking?
What does it do instead of forking? I think it still creates a new process, because "detached mode" suggests a process identifiable by a PID which can be attached.
What's the use case of -Dm instead of -dm?
Thanks!
|
In the quoted context of the screen documentation, you can read "to fork a new process" as "to start a new child process". At the risk of heading sideways too far, here's how you can consider the process to work. To create a child, a process must use fork(2). That child process runs freely from the parent, and the child exists to exec(2) the command that's to run from the parent. The parent can choose to -
call wait(2) for the child process to complete, which means it will block until the child exits and will then receive the exit status back from the wait(2) call
continue on its way until it receives the SIGCHLD signal notifying it that the child has exited, and at this point call wait(2) to receive the exit status
exit without caring about the child, in which case the process's parent will become the new parent of the child process
Now, with that in mind, here's how it applies to screen.
Usually you would want screen -dm to create a new (child) process and detach from it, allowing your own command execution to continue. For example, this could make sense in ~/.profile or the older system-wide /etc/rc.local where you would not want a command to block. In this case the child process described above is actually another part of screen, which in turn kicks off the real command process as yet another child (a grandchild of the original parent). The two parts of screen communicate, and the child instance of screen manages its command child that does the work you really want.
Occasionally you might want to use screen under control of a supervisor such as systemd. In this instance you would use screen -Dm so that the supervisor could identify if/when the process managed by screen had exited, possibly with the intent of restarting it. If screen had detached the child process - as it would with screen -dm - the supervisor would not be able to tell easily if it was still running or not; the -Dm flags allow screen to provide all its features to the child process while still communicating its existence to the parent. In this case the middle process (the child screen) is not created and the screen parent directly controls the command child that does the work you really want.
Consider
screen -dm sleep 30 - your interactive shell that started the screen command cannot tell whether or not the sleep 30 is still running. There is no feedback.
screen -Dm sleep 30 - your interactive shell blocks until the sleep 30 exits. At this point you know it is no longer running. Clearly not so useful for an interactive session but it's excellent for a supervisor environment such as systemd.
| What's the difference between "-dm" and "-Dm" in GNU Screen? |
1,642,817,338,000 |
I've read the man pages on fork(), and they say something along the lines of "all file descriptors open in the calling process are copied".
It is not 100% clear to me if the file descriptor for the executable binary that the calling process is executing at that point in time is included in that statement.
I know the man pages say "all file descriptors", but I'm asking this because it would seem easier to me to open() the same executable binary for the forked process, rather than synchronizing two processes working with them.
So if they are indeed also copied, why?
|
There's no file descriptor to the binary file being executed, only memory mappings.
(See, e.g. ls -l /proc/self/fd and cat /proc/self/maps on Linux.)
The memory mappings will point to the same file, of course, but that's what happens with shared libraries, too. In the case of the main program file, on Linux, writes to it while it's being used by a running process are not allowed. (Though the last time I checked, that didn't apply to shared libraries.)
| Does fork() also copy the file descriptor for the executable binary that the calling process is currently executing? |