So, I have some jobs like this:
sleep 30 | sleep 30 &
The natural way to think would be:
kill `jobs -p`
But that kills only the first sleep but not the second.
Doing this kills both processes:
kill %1
But that only kills at most one job if there are a lot of such jobs running.
It shouldn't kill processes with the same name that were not started from this shell.
**Answer:**
Use this:
pids=( $(jobs -p) )
[ -n "$pids" ] && kill -- "${pids[@]/#/-}"
jobs -p prints the PIDs of the process group leaders. By passing a negative PID to kill, we signal every process belonging to that process group (see man 2 kill). "${pids[@]/#/-}" simply prefixes each PID stored in the array pids with a minus sign.
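A runnable sketch of the whole recipe (durations are arbitrary; set -m matters in a script because non-interactive shells otherwise run background jobs in the shell's own process group):

```shell
#!/bin/bash
set -m                        # enable job control: each job gets its own process group
sleep 300 | sleep 300 &
sleep 300 | sleep 300 &
pids=( $(jobs -p) )           # one PID per job: its process-group leader
[ -n "${pids[0]}" ] && kill -- "${pids[@]/#/-}"   # negative PIDs signal whole groups
```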
**Question title:** How to kill all jobs in bash?

---
I'd like to run and configure a process similarly to a daemon from a script.
My shell is zsh emulated under Cygwin and the daemon is SFK, a basic FTP server.
For what matters here, the script startserv.sh can be drafted as follows:
#!/bin/sh
read -s -p "Enter Password: " pw
user=testuser
share=/fshare
cmd="sfk ftpserv -user=$user -pw=$pw -usedir $share=$share"
$cmd &
After running the script startserv.sh, it stops (ends?) without showing any prompt, then:
CTRL+C ends both the script and the background job process;
Hitting Enter, the script ends and the process remains in the background.
Anyway I can see it only via ps and not jobs, so, when I want to close the process, I have to send a brutal kill -9 signal, which is something I'd like to avoid in favour of CTRL+C.
An alternative would be running the whole script in the background. 'Would be', but the read command is unable to get the user input if the script is run as startserv.sh &.
Note that I need an ephemeral server, not a true daemon: that is, I want the server process to run in the background, after the script ends, to perform simple interactive shell tasks (with a virtual machine guest), but I don't need the process to survive the shell; therefore nohup seems not appropriate.
**Answer:**
Hitting Enter, the script ends and the process remains in the background.
Almost! Actually, the script has already exited by the time you press Enter. However, that's how you can get your prompt back (because your shell prints its $PS1 all over again).
Hitting Ctrl + C terminates both of them because they are linked: when you execute your script, your shell starts a child shell to run it, and when you terminate that child shell, your background process dies, probably from a SIGHUP signal.
Separate the script, the background process and the subshell
Using nohup, you might be able to get rid of this little inconvenience.
#!/bin/sh
read -s -p "Enter Password: " pw
user=testuser
share=/fshare
cmd="sfk ftpserv -user=$user -pw=$pw -usedir $share=$share"
nohup $cmd &
The disown alternative
If you can switch from /bin/sh to /bin/bash, you can give a try to disown as well. To know more, just type help disown in a bash instance.
$cmd &
disown -h
Killing the background process "nicely"
Now, when it comes to killing your process, you can perfectly do a "Ctrl + C" using kill. Just don't send a brutal SIGKILL. Instead, you could use:
$ kill -2 [PID]
$ kill -15 [PID]
Which will send a nice SIGINT (2) or SIGTERM (15) to your process. You may also want to print the PID value after starting the process:
...
nohup $cmd &
echo $!
... or even better, make the script wait for a SIGINT, and send it back to the background process (this will keep your script in the foreground though):
#!/bin/sh
read -s -p "Enter Password: " pw
user=testuser
share=/fshare
cmd="sfk ftpserv -user=$user -pw=$pw -usedir $share=$share"
nohup $cmd &
# Storing the background process' PID.
bg_pid=$!
# Trapping SIGINTs so we can send them back to $bg_pid.
trap "kill -2 $bg_pid" 2
# In the meantime, wait for $bg_pid to end.
wait $bg_pid
If a SIGINT isn't enough, just use a SIGTERM instead (15) :
trap "kill -15 $bg_pid" 2 15
This way, when your script receives a SIGINT (Ctrl + C, kill -2) or a SIGTERM while waiting for the background process, it'll just relay the signals to it. If these signals do kill the sfk instance, then the wait call will return, therefore terminating your script as well :)
**Question title:** Start a background process from a script and manage it when the script ends

---
Often times I find myself in need to have the output in a buffer with all the features (scrolling, searching, shortcuts, ...) and I have grown accustomed to less.
However, most of the commands I use generate output continuously. Using less with continuous output doesn't really work the way I expected.
For instance:
while sleep 0.5
do
echo "$(cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c 100)"
done | less -R
This causes less to capture the output until it reaches maximum terminal height and at this point everything stops (hopefully still accepting data), allowing me to use movement keys to scroll up and down. This is the desired effect.
Strangely, when I catch up with the generated content (usually with PgDn), less locks into following new data, not allowing me to use movement keys until I terminate with ^C and stop the original command. This is not the desired effect.
Am I using less incorrectly? Is there any other program that does what I wish? Is it possible to "unlock" from this mode?
Thank you!
**Answer:**
Works OK for me when looking at a file that's being appended to but not when input comes from a pipe (using the F command - control-C works fine then).
See discussion at Follow a pipe using less? - this is a known bug/shortcoming in less.
**Question title:** Is there any way to exit “less” follow mode without stopping other processes in pipe?

---
this question is a follow-up to: How to suspend and resume processes
I have started firefox from a bash session in gnome-terminal.
The process tree looks like this:
$ ps -e -o pid,ppid,cmd -H
1828 1 gnome-terminal
26677 1828 bash
27980 26677 /bin/sh /usr/lib/firefox-3.6.15/firefox
27985 27980 /bin/sh /usr/lib/firefox-3.6.15/run-mozilla.sh /usr/lib/firefox-3.6.15/firefox-bin
27989 27985 /usr/lib/firefox-3.6.15/firefox-bin
28012 27989 /usr/lib/firefox-3.6.15/plugin-container /usr/lib/adobe-flashplugin/libflashplayer.so 27989 plugin true
When I hit CTRL+Z in bash, it will suspend firefox. When I issue the command bg (or fg) it will resume firefox. This is as expected.
When I issue the command kill -s SIGTSTP 27980 in another terminal, it will print the line [1]+ Stopped firefox in the first terminal (just like when I hit CTRL+Z), but it does not suspend firefox. I assume it only suspends the shell script.
When I issue the command kill -s SIGTSTP 27989 (note the PID) in another terminal, it will suspend firefox. The first terminal does not take note of this.
How does bash suspend the entire process tree? does it just traverse the tree and SIGTSTP all of the children?
**Answer:**
Shell jobs live in "process groups"; look at the PGRP column in extended ps output. These are used both for job control and to determine who "owns" a terminal (real or pty).
POSIX (taken from System V) uses a negative process ID to indicate a process group, since the process group is identified by the first process in the group ("process group leader"). So you would use ps to determine the process group, then kill -s TSTP "-$pgrp". (Try ps -u"$USER" -opid,ppid,pgrp,cmd.)
In your process tree, the process group starts with the firefox script launched by bash, so the process group would be 27980 and the command would be kill -s TSTP -- -27980.
Naturally, to resume the process group, issue kill -s CONT -- -27980.
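Putting the answer together as a sketch (27980 is the PID from the question; substitute your own):

```shell
pgrp=$(ps -o pgid= -p 27980 | tr -d ' ')   # look up the process group of the firefox script
kill -s TSTP -- "-$pgrp"                   # suspend the whole process tree
# ... later ...
kill -s CONT -- "-$pgrp"                   # resume it
```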
**Question title:** How to suspend and resume processes like bash does

---
This is what I'm running:
alexandma@ALEXANDMA-1-MBP ./command.sh &
[2] 30374
alexandma@ALEXANDMA-1-MBP
[2] + suspended (tty output) ./command.sh
I don't want it to start suspended, I want it to keep running in the background. I'm going to be running a bunch of these in a loop, so I need something that will work that way.
How can I keep it running?
**Answer:**
It stops because of the reason given: it tries to output to the tty. You can try to redirect the output if ./command.sh supports that, or run the command in a tmux or screen window of its own. E.g.
tmux new-window -n "window name" ./command.sh
and then view the list of windows created with tmux list-windows and attach to tmux with tmux attach.
That way the program will still wait for input/output to happen, but you can easily provide input once you go to the appropriate window and the output will just be captured without any activity.
**Question title:** When I run `./command.sh &`, the background task is suspended. How can I keep it running?

---
Sometimes, some time after I've backgrounded a process with bg in bash, when I press Enter in the same shell to redisplay the prompt (just to check that I'm still in bash when some output from the background process has been displayed), the background process seems to stop spontaneously.
If I do bg again the same problem recurs.
The only way to fix it seems to be fg.
Why does this happen?
**Answer:**
This usually happens if the process tries to read from its stdin stream. When the process is in the background, it receives a TTIN signal and is thus frozen (same behavior as a STOP signal). There is also the dual signal TTOU when a background process tries to write to its terminal.
Bringing it to the foreground resumes the process and allows it to read from your terminal.
Demo:
$ cat t.sh
#! /bin/sh
sleep 1
read dummy
$ ./t.sh &
[1] 3364
$
[1]+ Stopped ./t.sh
$ ps aux|grep t.sh
me 3364 0.0 0.0 11268 1200 pts/0 T 17:04 0:00 /bin/sh ./t.sh
One of the ways of avoiding this is to use nohup, but this can have strange effects if the program doesn't deal with having its input stream redirected to /dev/null.
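A lighter alternative, when you control how the job is started, is to redirect its stdin yourself, so the read immediately sees end-of-file instead of triggering SIGTTIN. A sketch, reusing the t.sh script from the demo above:

```shell
./t.sh < /dev/null &   # read fails on EOF, so the job exits instead of stopping
wait $!
```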
**Question title:** Why do backgrounded processes sometimes stop spontaneously?

---
suspend is a builtin command in Bash. When would you naturally use this command and find it useful?
**Answer:**
Let's say you lack both GNU screen and tmux (and X11, and virtual consoles) but want to switch between a login shell and another interactive shell.
You would first login on the console, and then start a new shell, temporarily blocking the login shell. To get the login shell back to do some work there, you'd do suspend. Then you would fg to get the interactive shell back to continue with whatever it was you did there.
In fact, with job control, the login shell could spawn a number of interactive shells as background jobs that you could switch to with fg %1, fg %2 etc., but to get back to the login shell, you would need to use suspend unless you wanted to manually kill -s STOP $$.
Also note that Ctrl+Z at the prompt in an interactive shell won't suspend it.
EDIT: I initially had a long hypothetical section about the use of suspend in a script, but since the command requires job control and since non-interactive shells usually don't have job control, I deleted that section.
Deleted section with suspend replaced by kill -s STOP $$ (this really doesn't belong to the answer any more, but it may be interesting to others anyway):
Let's say you have a background process (a script) in a script, and that this background process at some stage needs to stop and wait for the parent process to tell it to go on. This could be so that the parent has time to extract and move files into place or something like that.
The child script would suspend (kill -s STOP $$), and the parent script would send a CONT signal to it when it was okay to continue.
It gives you the opportunity to implement a sort of synchronisation between a parent process and a child process (although very basic as the parent shell process more or less needs to guess that the child process is suspended, although this can be fixed by having the child trap CONT and not suspend if that signal is received too early).
**Question title:** What is a practical example of using the suspend command in Bash?

---
Say I start a process in the terminal and it sends output to standard error while it runs. I want to move the process into the background and also silence it at the same time.
Is there a way to do this without stopping the process and starting it again using & and > /dev/null 2>&1 ? I'm wondering if there is some command that performs bg and can change the output descriptors of the target process too.
**Answer:**
Too late. Once a process has been started, the shell has no control over its file descriptors, so you cannot silence it with a shell command.
You can only try sending a SIGHUP to the process. If it handles the signal correctly, it should detach from its controlling tty. Unfortunately, many programs do not handle it correctly and simply die.
**Question title:** How can I move a process into the background and also silence its output?

---
What's the difference between a process group and a job? If I type pr * | lpr then is it both a process group as well a job?
What exactly is the difference between a process group ID and a job ID?
Edit: I know it appears similar to What is the difference between a job and a process?, but it is slightly different. Also, I didn't understand this concept from this thread.
**Answer:**
A process group is a unix kernel concept. It doesn't come up very often. You can send a signal to all the processes in a group, by calling the kill system call or utility with a negative argument.
When a process is created (with fork), it remains in the same process group as its parent. A process can move into another group by calling setpgid or setpgrp. This is normally performed by the shell when it starts an external process, before it executes execve to load the external program.
The main use for process groups is that when you press Ctrl+C, Ctrl+Z or Ctrl+\ to kill or suspend programs in a terminal, the terminal sends a signal to a whole process group, the foreground process group. The details are fairly complex and mostly of interest to shell or kernel implementers; the General Terminal Interface chapter of the POSIX standard is a good presentation (you do need some unix programming background).
Jobs are an internal concept to the shell. In the simple cases, each job in a shell corresponds to a process group in the kernel.
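You can observe that correspondence directly; with job control on, every process of a pipeline job shares one process group (a sketch assuming procps-style pgrep):

```shell
set -m
sleep 100 | sleep 100 &
jobpg=$(jobs -p)        # the job's PGID: the PID of its process-group leader
pgrep -g "$jobpg"       # lists both sleep processes, i.e. the whole job
kill -- "-$jobpg"       # one signal reaches every process in the job
```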
**Question title:** Difference between process group and job?

---
I have a basic understanding of how to switch a job between foreground and background, but I am trying to come up with a way to run multiple jobs in the background. I tried putting multiple jobs in the background, but only one of them was in the running state. I want a scenario where I can run multiple jobs in the background.
I came across a website where I saw multiple jobs running in the background. Can someone please break down how I can run multiple jobs in the background?
**Answer:**
You can use the & to start multiple background jobs.
Example to run sequentially:
(command1 ; command2) &
Or run multiple jobs in parallel
command1 & command2 &
This will start multiple jobs running in the background.
If you want to keep a job running in the background once you exit the terminal, you can use nohup. This ensures that SIGHUP is not sent to the process when you exit the terminal.
example:
nohup command &
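If the script should block until all of its background jobs have finished, the wait builtin does that (command1/command2 are placeholders, as above):

```shell
command1 & command2 &
wait     # returns once every background job has exited
```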
**Question title:** How to run multiple background jobs in linux?

---
In Bash, how do you programmatically get the job id of a job started with &?
It is possible to start a job in the background with &, and then interact with it using its job id with Bash builtins like fg, bg, kill, etc.
For instance, if I start a job like
yes > /dev/null &
I can then kill it with the following command (assuming this job gets job id 1):
kill %1
When creating a new job with &, how do you programmatically get the job id of the newly created job?
I realize you can get the process id (not the job id) with $!, but I am specifically wondering about how you can get the job id.
**Answer:**
The command jobs prints the currently running background jobs along with their ID:
$ for i in {1..3}; do yes > /dev/null & done
[1] 3472564
[2] 3472565
[3] 3472566
$ jobs
[1] Running yes > /dev/null &
[2]- Running yes > /dev/null &
[3]+ Running yes > /dev/null &
So, to get the id of the last job launched that is still running, since it will be marked with a +, you could do (with GNU grep):
$ jobs | grep -oP '\d(?=]\+)'
3
Or, more portably:
$ jobs | sed -n 's/^\[\([0-9]*\)\]+.*/\1/p'
However, note that if you suspend one of the jobs, then that will take the +. So you might want to just take the last line:
$ jobs | tail -n1 | cut -d' ' -f1 | tr -d '][+'
3
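If you only need to act on the most recent job rather than display its number, bash also accepts the %% (or %+) jobspec directly, which avoids parsing the jobs output at all:

```shell
yes > /dev/null &
kill %%      # signals the current (most recently started) job
```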
**Question title:** How to programmatically get the job id of a newly backgrounded process in Bash

---
I know that the jobs command only shows jobs running in current shell session.
Is there a bash code that will show jobs across shell sessions (for example, jobs from another terminal tab) for current user?
**Answer:**
It's unclear (to me) what you'd want this global jobs call to do beyond just listing all processes owned by the current user (ps x), but you could filter that listing by terminal and/or by state.
List your processes that are:
Connected to any terminal: ps x |awk '$2 ~ /pts/'
Stopped by job control (Ctrl+z without bg): ps x |awk '$3 ~ /T/'
Running in the foreground: ps x |awk '$3 ~ /\+/'
AWK is really powerful. Here we're just calling out fields by number and looking at their values ($) with regular expression matchers (~). You can invert a match with !~. These are conditions and we're simply using AWK's default action (print matches, which could also be written like { print $0 } to print the whole line).
You can add the header row by prepending the condition NR == 1 || … (number of records == 1, i.e. line 1) to the AWK command. These can be combined, e.g. ps x |awk 'NR == 1 || $3 ~ /T/ || $2 ~ /pts/'. Using regular expressions, you can combine the first two bullets as ps x |awk '$3 ~ /[T+]/'. These conditional statements are logical, so you can combine || with && and parentheses, e.g. ps x |awk 'NR == 1 || ($3 ~ /\+/ && $2 ~ /pts/)'
I suspect what you want is: ps x |awk 'NR == 1 || $2 ~ /pts/'
I ran this on a Linux system. If you're running something else, your terminal emulators may be named differently. Run ps $$ to take a look at how your active shell appears on the list and build a regular expression from the second column.
**Question title:** List all jobs in all shell sessions (not just the current shell), by current user

---
I use the wget command in the background like this:
wget -bq
and it prints
Continuing in background, pid 31754.
But when I type the command jobs, I don't see my job (although the download is not finished).
**Answer:**
When using wget with -b or --background it puts itself into the background by disassociating from the current shell (by forking off a child process and terminating the parent). Since it's not the shell that puts it in the background as an asynchronous job, it will not show up as a job when you use jobs.
To run wget as an asynchronous (background) job in the shell, use
wget ... URL &
If you do this, you may additionally want to redirect output to some file (which wget does automatically with -b), or discard it by redirecting to /dev/null, or use -q or --quiet.
**Question title:** Why can't I see the "wget" job when I execute it in the background?

---
If I begin a process and background it in a terminal window (say ping google.com &), I can kill it using kill %1 (assuming it is job 1).
However if I open another terminal window (or tab) the backgrounded process is not listed under jobs and cannot be killed directly using kill.
Is it possible to kill this process from another terminal window or tab?
Note: I am using the Xfce Terminal Emulator 0.4.3 and bash (although if a solution exists in another common shell but not bash I am open to that as well)
**Answer:**
Yes, all you need to know is the process id (PID) of the process. You can find this with the ps command, or the pidof command.
kill $(pidof ping)
Should work from any other shell. If it doesn't, you can use ps and grep for ping.
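Where procps-style pgrep/pkill are available, the find-and-kill step can be combined in one go (ping matches the question's example):

```shell
pgrep -x ping     # print the PID(s) of processes named exactly "ping"
pkill -x ping     # send SIGTERM to all of them
```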
**Question title:** How can I kill a job that was initiated in another shell (terminal window or tab)?

---
How are these process concepts related: background, zombie, daemon, and running without a controlling terminal?
I feel that they are somehow close, especially through the concept of controlling terminal, but there is still not much info for me to tell a story, like if you need to explain something to a child reading an article about Linux without lying too much.
UPDATE #1: For example (I don't know if that's true)
background -- zombie - foreground process can not become zombie, because zombie is a background process that was left without a parent
daemon -- without ctty - all daemons run without a ctty, but not all processes without a ctty are daemons
background -- daemon - a background process can be retrieved to run interactively again, daemon is not
zombie -- without ctty - zombie is indifferent if there is ctty attached to it or not
background -- without ctty - processes sent to background while they have ctty, and become daemons or die if the ctty is taken from them
**Answer:**
In brief, plus links.
zombie
a process that has exited/terminated, but whose parent has not yet acknowledged the termination (using one of the wait() system calls). Dead processes are kept in the process table so that their parent can be informed of the child process exiting, and of its exit status. Usually a program that forks children also reads their exit status as they exit, so you'll see zombies only if the parent is stopped or buggy.
See:
Can a zombie have orphans? Will the orphan children be disturbed by reaping the zombie?
How does Linux handle zombie processes?
Linux man page waitpid(2)
controlling terminal, session, foreground, background
These are related to job control in the context of a shell running on a terminal. A user logs in, a session is started, tied to a terminal (the controlling terminal) and a shell is started. The shell then runs processes and sends them on the foreground and background as the user wishes (using & when starting the process, stopping it with ^Z, using fg and bg).
Processes in the background are stopped if reading or writing from the terminal; processes in the foreground receive the interrupt signal if ^C is hit on the terminal. (It's the kernel's terminal driver that handles those signals; the shell controls which process group is sent to the foreground or background.)
See:
Difference between nohup, disown and &
Bash reference manual: Job Control Basics
daemon
A process running as a daemon is usually something that shouldn't be tied to any particular terminal (or a login session, or a shell). It shouldn't have a controlling terminal, so that it won't receive signals if the terminal closes, and one usually doesn't want it to do I/O on a terminal either. Starting a daemon from the command line requires breaking all ties to the terminal, i.e. starting a new session (in the job control sense, above) to get rid of the controlling terminal, and closing the file handles to the terminal. Of course something started from init, systemd or similar outside a login session wouldn't have these ties to begin with.
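From a shell, the setsid(1) utility performs exactly that "start a new session" step; a minimal sketch (my-daemon is a placeholder command):

```shell
setsid my-daemon < /dev/null > /dev/null 2>&1 &
```

The redirections sever the remaining file-descriptor ties to the terminal, so the process keeps neither a controlling terminal nor open handles to it.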
Since a daemon doesn't have a controlling terminal, it's not subject to job control, and being in the "foreground" or "background" in the job control sense doesn't apply. Also, daemons usually re-parent to init, which reaps them as they exit, so you don't usually see them as zombies.
See:
What's the difference between running a program as a daemon and forking it into background with '&'?
Linux man page daemon(7).
**Question title:** Background, zombie, daemon and without ctty - are these concepts connected?

---
In bash, if I run kill %1, it kills a backgrounded command in the current shell (the most recent one, I believe).
Is there an equivalent of this in fish? I haven't been able to find it online in a bit of web searching.
I'm not sure if I did it wrong, but
$ ps
PID TTY TIME CMD
73911 pts/5 00:00:00 fish
73976 pts/5 00:00:00 ps
$ sleep 100
^Z⏎
$ kill %1
$ ps
PID TTY TIME CMD
73911 pts/5 00:00:00 fish
74029 pts/5 00:00:00 sleep
74121 pts/5 00:00:00 ps
**Answer:**
Your command has worked, but because the job is stopped it has not responded to the signal.
From your example where it didn't seem to work, try continuing the process with fg or bg, or forcibly terminate the process with kill -SIGKILL %1, and it will exit.
kill %1 works immediately in bash and zsh because it is a builtin command in these shells and sends SIGCONT in addition to SIGTERM (or the specified signal).
**Question title:** kill %1 equivalent in fish

---
I am trying to create a function that can run an arbitrary command, interact with the child process (specifics omitted), and then wait for it to exit. If successful, typing run <command> will appear to behave just like a bare <command>.
If I weren't interacting with the child process I would simply write:
run() {
"$@"
}
But because I need to interact with it while it runs, I have this more complicated setup with coproc and wait.
run() {
exec {in}<&0 {out}>&1 {err}>&2
{ coproc "$@" 0<&$in 1>&$out 2>&$err; } 2>/dev/null
exec {in}<&- {out}>&- {err}>&-
# while child running:
# status/signal/exchange data with child process
wait
}
(This is a simplification. While the coproc and all the redirections aren't really doing anything useful here that "$@" & couldn't do, I need them all in my real program.)
The "$@" command could be anything. The function I have works with run ls and run make and the like, but it fails when I do run vim. It fails, I presume, because Vim detects that it is a background process and doesn't have terminal access, so instead of popping up an edit window it suspends itself. I want to fix it so Vim behaves normally.
How can I make coproc "$@" run in the "foreground" and the parent shell become the "background"? The "interact with child" part neither reads from nor writes to the terminal, so I don't need it to run in the foreground. I'm happy to hand over control of the tty to the coprocess.
It is important for what I'm doing that run() be in the parent process and "$@" be in its child. I can't swap those roles. But I can swap the foreground and background. (I just don't know how to.)
Note that I am not looking for a Vim-specific solution. And I would prefer to avoid pseudo-ttys. My ideal solution would work equally well when stdin and stdout are connected to a tty, to pipes, or are redirected from files:
run echo foo # should print "foo"
echo foo | run sed 's/foo/bar/' | cat # should print "bar"
run vim # should open vim normally
Why using coprocesses?
I could have written the question without coproc, with just
run() { "$@" & wait; }
I get the same behavior with just &. But in my use case I am using the FIFO coproc sets up and I thought it best not to oversimplify the question in case there's a difference between cmd & and coproc cmd.
Why avoiding ptys?
run() could be used in an automated context. If it's used in a pipeline or with redirections then there wouldn't be any terminal to emulate; setting up a pty would be a mistake.
Why not using expect?
I'm not trying to automate vim, send it any input or anything like that.
**Answer:**
In your example code, Vim gets suspended by the kernel via a SIGTTIN signal as soon as it tries to read from the tty, or possibly set some attributes to it.
This is because the interactive shell spawns it in a different process-group without (yet) handing over the tty to that group, that is putting it “in background”. This is normal job control behavior, and the normal way to hand over the tty is to use fg. Then of course it’s the shell that goes to the background and thus gets suspended.
All this is on purpose when a shell is interactive, otherwise it would be as if you were allowed to keep typing commands at the prompt while eg editing a file with Vim.
You could easily work around that by just making your whole run function a script instead. That way, it would be executed synchronously by the interactive shell, with no competition for the tty. If you do so, your own example code already does all you are asking, including concurrent interaction between your run (then a script) and the coproc.
If having it in a script is not an option, then you might see whether shells other than Bash would allow for a finer control over passing the interactive tty to child processes. I personally am no expert on more advanced shells.
If you really must use Bash and really must have this functionality through a function to be run by the interactive shell, then I’m afraid the only way out is to make a helper program in a language that allows you to access tcsetpgrp(3) and sigprocmask(2).
The purpose would be to do in the child (your coproc) what has not been done in the parent (the interactive shell) in order to forcefully grab the tty.
Keep in mind though that this is explicitly considered bad practice.
However, if you diligently don’t use the tty from the parent shell while the child still has it, then there might be no harm done. By “don’t use” I mean don’t echo don’t printf don’t read to/from the tty, and certainly don’t run other programs that might access the tty while the child is still running.
A helper program in Python might be something like this:
#!/usr/bin/python3
import os
import sys
import signal
def main():
in_fd = sys.stdin.fileno()
if os.isatty(in_fd):
oldset = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTTIN, signal.SIGTTOU})
os.tcsetpgrp(in_fd, os.getpid())
signal.pthread_sigmask(signal.SIG_SETMASK, oldset)
if len(sys.argv) > 1:
# Note: here I used execvp for ease of testing. In production
# you might prefer to use execv passing it the command to run
# with full path produced by the shell's completion
# facility
os.execvp(sys.argv[1], sys.argv[1:])
if __name__ == '__main__':
main()
Its equivalent in C would be only a bit longer.
This helper program would need to be run by your coproc with an exec, like this:
run() {
exec {in}<&0 {out}>&1 {err}>&2
{ coproc exec grab-tty.py "$@" {side_channel_in}<&0 {side_channel_out}>&1 0<&${in}- 1>&${out}- 2>&${err}- ; } 2>/dev/null
exec {in}<&- {out}>&- {err}>&-
# while child running:
# status/signal/exchange data with child process
wait
}
This setup worked for me on Ubuntu 14.04 with Bash 4.3 and Python 3.4 for all your example cases, sourcing the function by my main interactive shell and running run from command prompt.
If you need to run a script from the coproc, it might be necessary to run it with bash -i, otherwise Bash might start with pipes or /dev/null on stdin/stdout/stderr rather than inheriting the tty grabbed by the Python script. Also, whatever you run within the coproc (or below it) had better not invoke additional run()s (not sure actually; I haven't tested that scenario, but I suppose it would need at least careful encapsulation).
In order to answer your specific (sub-)questions I need to introduce a bit of theory.
Every tty has one, and only one, so-called “session”. (Not every session has a tty, though, such as the case for the typical daemon process, but I suppose this is not relevant here).
Basically, every session is a collection of processes, and is identified through an id corresponding to the “session leader”’s pid. The “session leader” is thus one of those processes belonging to that session, and precisely the one that first started that specific session.
All processes (leader and not) of a particular session can access the tty associated to the session they belong to. But here comes the first distinction: only one process at any one given moment can be the so-called “foreground process”, while all the other ones during that time are “background processes”. The “foreground” process has free access to the tty. On the contrary, “background” processes will be interrupted by the kernel should they dare to access their tty. It’s not like that background processes are not allowed at all, it’s rather that they get signaled by the kernel of the fact that “it’s not their turn to speak”.
So, going to your specific questions:
What exactly do "foreground" and "background" mean?
“foreground” means “being legitimately using the tty at that moment”
“background” means simply “not being using the tty at that moment”
Or, in other words, again by quoting your questions:
I want to know what differentiates foreground and background processes
Legitimate access to the tty.
Is it possible to bring a background process to the foreground while the parent continues to run?
In general terms: background processes (parent or not) do continue to run, it’s just that they get (by default) stopped if they try to access their tty. (Note: they can ignore or otherwise handle those specific signals (SIGTTIN and SIGTTOU) but that is usually not the case, therefore the default disposition is to suspend the process)
However: in case of an interactive shell, it’s the shell that so chooses to suspend itself (in a wait(2) or select(2) or whatever blocking system-call it thinks it’s the most appropriate one for that moment) after it hands the tty over to one of its children that was in background.
From this, the precise answer to that specific question of yours is: when using a shell application it depends on whether the shell you’re using gives you a method (builtin command or what) to not stop itself after having issued the fg command. AFAIK Bash doesn’t allow you such choice. I don’t know about other shell applications.
what makes cmd & different from cmd?
On a cmd, Bash spawns a new process belonging to its own session, hands the tty over to it, and puts itself on waiting.
On a cmd &, Bash spawns a new process belonging to its own session.
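That "same session" claim is easy to check with ps (Linux procps assumed; `sid` prints the session ID). A minimal sketch, run as a script:

```shell
sleep 2 &                       # "cmd &": asynchronous, but same session
bgpid=$!
sid_self=$(ps -o sid= -p $$)
sid_bg=$(ps -o sid= -p "$bgpid")
[ "$sid_self" = "$sid_bg" ] && echo "background job shares the session"
kill "$bgpid"
```

In a non-interactive script (no job control) the background job also shares the process group; an interactive shell with job control would give it a process group of its own.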
how to hand over foreground control to a child process
In general terms: you need to use tcsetpgrp(3). Actually, this can be done by either a parent or a child, but recommended practice is to have it done by a parent.
In the specific case of Bash: you issue the fg command, and by doing so, Bash uses tcsetpgrp(3) in favor of that child then puts itself on waiting.
From here, one further insight you might find of interest is that, actually, on fairly recent UNIX systems, there is one additional level of hierarchy among the processes of a session: the so-called “process group”.
This is related because what I’ve said so far with regard to the “foreground” concept is actually not limited to “one single process”, it’s rather to be expanded to “one single process-group”.
That is: it so happens that the usual common case for “foreground” is of only one single process having legitimate access to the tty, but the kernel actually allows for a more advanced case where an entire group of processes (still belonging to the same session) have legitimate access to the tty.
It’s not by mistake, in fact, that the function to call in order to hand over tty “foregroundness” is named tcsetpgrp, and not something like (e.g.) tcsetpid.
However, in practical terms, clearly Bash does not take advantage of this more advanced possibility, and on purpose.
You might want to take advantage of it, though. It all depends on your specific application.
Just as a practical example of process grouping, I could have chosen to use a “regain foreground process group” approach in my solution above, in place of the “hand over foreground group” approach.
That is, I could have made that Python script use the os.setpgid() function (which wraps the setpgid(2) system call) in order to reassign the process to the current foreground process-group (likely the shell process itself, but not necessarily so), thus regaining the foreground state that Bash had not handed over.
However that would be quite an indirect way to the final goal and might also have undesirable side-effects, due to the fact that there are several other uses of process-groups not related to tty control that might end up involving your coproc. For instance, UNIX signals in general can be delivered to an entire process group, rather than to a single process.
Finally, why is so different to invoke your own example run() function from a Bash’s command prompt rather than from a script (or as a script) ?
Because run() invoked from a command prompt is executed by Bash’s own process(*), while when invoked from a script it’s executed by a different process(-group) to which the interactive Bash has already happily handed the tty over.
Therefore, from a script, the last final “defense” that Bash puts in place to avoid competing the tty is easily circumvented by the simple well known trick of saving&restoring the stdin/stdout/stderr’s file-descriptors.
(*) or it might possibly spawn a new process belonging to its own same process-group. I actually never investigated what exact approach an interactive Bash uses to run functions but it doesn’t make any difference tty-wise.
HTH
Run command in background with foreground terminal access
As an example:
I have a working shell script which starts up weblogic (which will continue to run) and then does a deployment.
At the end I bring the background process back to the foreground, so that the shell script does not exit (Exited with code 0).
#!/bin/bash
set -m
startWebLogic.sh &
# create resources in weblogic
# deploy war
# ...
jobs -l
fg %1
I had to use set -m to allow job control, but I also found it is not the cleanest way to use it in non-interactive shells.
Is there a better way to handle it?
As far as I understand your question you are looking for wait:
#!/bin/bash
startWebLogic.sh &
# create resources in weblogic
# deploy war
# ...
wait
It does not "bring the process to foreground". But it does wait until weblogic returns. So the shell script does not exit until weblogic exits. Which is the effect you achieved with fg %1.
from help wait:
Waits for each process identified by an ID, which may be a process ID or a
job specification, and reports its termination status. If ID is not
given, waits for all currently active child processes, and the return
status is zero.
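A closely related use of wait is to collect the exit status of one specific background job, rather than just block until it ends. A minimal sketch, with a deliberately failing stand-in for the real background service:

```shell
{ sleep 0.2; exit 3; } &     # stand-in for startWebLogic.sh
pid=$!
# ... resource creation and deployment steps would go here ...
wait "$pid"
echo "background job exited with status $?"   # prints: ... status 3
```

Passing the PID (or job spec) to wait makes the script's final exit status reflect that specific job, just as fg %1 would.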
Clean way to bring back background process to foreground in shell script
I have a C program executable or shell script which I want to run very often. If I want to stop/pause it or notify it of something, I will send a signal to that process. So every time I have to look up the PID of that process and use kill to send the signal.
Looking up the PID every time and remembering it until system shutdown is really inconvenient. I want the process to always run with a particular PID, like init always runs as PID 1.
Is there a C API for that? A bash script solution would also help.
I don't think you can reserve or assign PIDs. However, you could start your process in a script like this:
myprocess &
echo "$!" > /tmp/myprocess.pid
This creates a "pid file", as some other people have referred to it. You can then fetch that in bash with, e.g., $(</tmp/myprocess.pid) or $(cat /tmp/myprocess.pid).
Just beware when you do this that if the process died and the pid was recycled, you'll be signalling the wrong thing. You can check with:
pid=$(cat /tmp/myprocess.pid)
if [ "$(ps -o comm= -p "$pid")" = "myprocess" ]; then
...send your signal...
else echo "Myprocess is dead!"
fi
See comments if "$(ps -o comm= -p "$pid")" looks strange to you. You may want to do a more vigorous validation if there is a chance of someone doing something devious with the content of /tmp/myprocess.pid (which should not be writeable by other users!).
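Putting the pieces together as a runnable sketch, with sleep standing in for myprocess and a temporary file instead of a fixed /tmp path:

```shell
pidfile=$(mktemp)
sleep 30 &
echo "$!" > "$pidfile"

pid=$(cat "$pidfile")
if [ "$(ps -o comm= -p "$pid")" = "sleep" ]; then
    kill -TERM "$pid"        # ...send your signal...
    echo "signal sent"
else
    echo "myprocess is dead"
fi
rm -f "$pidfile"
```

`ps -o comm= -p` prints only the command name of that PID (the trailing `=` suppresses the header), so the comparison fails safely if the PID was recycled by an unrelated process.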
Run a process to particular/dedicated pid only
tmux and screen let you run different processes (e.g. vim, a bash script, mysql, psql, etc) in different virtual windows. But traditional Unix job control (using CTRL-z, fg, bg, and jobs) seem to give you some of the same functionality.
Are there any advantages of multitasking using traditional job control over the newer ways via tmux and screen?
Suppose you've just started a program outside screen. Suddenly you realize you wanted to do something else in that terminal. Ctrl+Z.
Screen and tmux introduce a layer of isolation between the application and the terminal. This isn't always a good thing. For example, I find their scrollback a lot less convenient than xterm's, so I rarely use screen unless I intend to (be able to) connect to that session remotely.
If you've set up environment variables, a current directory and other parameters in a shell (which may be in a screen window), carrying those settings over to a new screen window can be a lot of work. It's convenient to be able to run several programs in that terminal.
Sometimes you want to run a program in the background and have nothing to do with it any more: nohup program & disown %-.
If you have a GUI application that occasionally misbehaves, it can be convenient to start it from a terminal and fg; Ctrl+C or kill %1 it if needed.
Screen and tmux may not be installed.
What are the virtues of multitasking with traditional job control vs Tmux/Screen?
I'm using a bash script script.sh containing a command cmd, launched in background:
#!/bin/bash
…
cmd &
…
If I open a terminal emulator and run script.sh, cmd is properly executed in background, as expected. That is, while script.sh has ended, cmd continues to run in background, with PPID 1.
But, if I open another terminal emulator (let say xfce4-terminal) from the previous one (or at the beginning of desktop session, which is my real use case), and execute script.sh by
xfce4-terminal -H -x script.sh
cmd is not properly executed anymore: It is killed by the termination of script.sh. Using nohup to prevent this is not sufficient. I am obliged to put a sleep command after it, otherwise cmd is killed by the termination of script.sh, before being dissociated from it.
The only way I found to make cmd properly execute in background is to put set -m in script.sh. Why is it necessary in this case, and not in the first one? Why this difference in behaviour between the two ways of executing script.sh (and hence cmd)?
I assume that, in the first case, monitor mode is not activated, as one can see by putting set -o in script.sh.
The process your cmd is supposed to be run in will be killed by the SIGHUP signal between the fork() and the exec(), and any nohup wrapper or other stuff will have no chance to run and have any effect. (You can check that with strace)
Instead of nohup, you should set SIGHUP to SIG_IGN (ignore) in the parent shell before executing your background command; if a signal handler is set to "ignore" or "default", that disposition will be inherited through fork() and exec(). Example:
#! /bin/sh
trap '' HUP # ignore SIGHUP
xclock &
trap - HUP # back to default
Or:
#! /bin/sh
(trap '' HUP; xclock &)
If you run this script with xfce4-terminal -H -x script.sh, the background command (xclock &) will not be killed by the SIGHUP sent when script.sh terminates.
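That the ignore disposition really survives into the background command can be checked without any terminal at all (sleep standing in for xclock; stdout is redirected so the command substitution doesn't hang on the open pipe):

```shell
pid=$( (trap '' HUP; sleep 30 >/dev/null 2>&1 & echo $!) )
kill -HUP "$pid"            # the hangup the terminal would deliver
sleep 0.2
if kill -0 "$pid" 2>/dev/null; then
    echo "still running"    # SIGHUP was inherited as ignored
else
    echo "died"
fi
kill "$pid" 2>/dev/null
```

The SIG_IGN disposition set by trap '' HUP persists across both fork() and exec(), which is exactly why this works where a post-exec nohup wrapper cannot.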
When a session leader (a process that "owns" the controlling terminal, script.sh in your case) terminates, the kernel will send a SIGHUP to all processes from its foreground process group; but set -m will enable job control and commands started with & will be put in a background process group, so they won't be signaled by SIGHUP.
If job control is not enabled (the default for a non-interactive script), commands started with & will be run in the same foreground process group, and the "background" mode will be faked by redirecting their input from /dev/null and letting them ignore SIGINT and SIGQUIT.
Processes started this way from a script which once run as a foreground job but which has already exited won't be signaled with SIGHUP either, since their process group (inherited from their dead parent) is no longer the foreground one on the terminal.
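The process-group difference is visible with ps (Linux ps assumed). Run as a script, a `&` command stays in the script's own group; an interactive shell with job control would put it in a group of its own and print the opposite:

```shell
#!/bin/sh
sleep 2 &                        # backgrounded without job control
bgpid=$!
pg_self=$(ps -o pgid= -p $$)
pg_bg=$(ps -o pgid= -p "$bgpid")
if [ "$pg_self" = "$pg_bg" ]; then
    echo "same process group (job control off)"
else
    echo "separate process group (job control on)"
fi
kill "$bgpid"
```

Adding set -m at the top of the script should flip the result, since each job then gets its own process group.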
Extra notes:
The "hold mode" seems to be different between xterm and xfce4-terminal (and probably other vte-based terminals). While the former will keep the master side of the pty open, the latter will tear it off after the program run with -e or -x has exited, causing any write to the slave side to fail with EIO. xterm will also ignore WM_DELETE_WINDOW messages (ie it won't close) while there are still processes from the foreground process group running.
Process killed before being launched in background
There is no manual entry for ctrl-z or fg.
What does "fg" stand for in the context of job control? In other words, typing ctrl-z will suspend the current job drop me back into the shell, and the command "fg" re-activates the suspended job. What does "fg" stand for?
ForeGround.
As with other bash built-ins, there is help for it:
$ help fg
fg: fg [job_spec]
Move job to the foreground.
Place the job identified by JOB_SPEC in the foreground, making it the
current job. If JOB_SPEC is not present, the shell's notion of the
current job is used.
Exit Status:
Status of command placed in foreground, or failure if an error occurs.
$
Also, bg is BackGround, and ^z is in the bash man page, under Job Control:
If the operating system on which bash is running supports job control,
bash contains facilities to use it. Typing the suspend character (typ‐
ically ^Z, Control-Z) while a process is running causes that process to
be stopped and returns control to bash. Typing the delayed suspend
character (typically ^Y, Control-Y) causes the process to be stopped
when it attempts to read input from the terminal, and control to be
returned to bash. The user may then manipulate the state of this job,
using the bg command to continue it in the background, the fg command
to continue it in the foreground, or the kill command to kill it. A ^Z
takes effect immediately, and has the additional side effect of causing
pending output and typeahead to be discarded.
What does "fg" stand for?
From gnome-terminal I know the ability to suspend a job with C-z, and then send it to the background. When I close the terminal the process does not end. Where is the job being managed from, or is it lost?
Your background job continues executing until someone tells it to stop by sending it a signal. There are several ways it might die:
When the terminal goes away for any reason, it sends a HUP signal (“hangup”, as in modem hangup) to the shell running inside it (more precisely, to the controlling process) and to the process in the foreground process group. A program running in the background is thus not affected, but…
When the shell receives that HUP signal, it propagates it to the background jobs. So if the background process is not ignoring the signal, it dies at this point.
If the program tries to read or write from the terminal after it's gone away,
the read or write will fail with an input/output error (EIO). The program may then decide to exit.
You (or your system administrator), of course, may decide to kill the program at any time.
If your concern is to keep the program running, then:
If the program may interact with the terminal, use Screen or Tmux to run the program in a virtual terminal that you can disconnect from and reconnect to at will.
If the program just needs to keep running and is not interactive, start it with the nohup command (nohup myprogram --option somearg), which ensures that the shell won't send it a SIGHUP, redirects standard input to /dev/null and redirects standard output and standard error to a file called nohup.out.
If you've already started the program and don't want it to die when you close your terminal, run the disown built-in, if your shell has one. If it doesn't, you can avoid the shell's propagation of SIGHUP by killing the shell with extreme prejudice (kill -KILL $$ from that shell, which bypasses any exit trigger that the indicated process has).
If you've already started the program and would like to reattach it to another terminal, there are ways, but they're not 100% reliable. See How can I disown a running process and associate it to a new screen shell? and linked questions.
Where do background jobs go?
I understand that when I run exit it terminates my current shell, because the exit command runs in the same shell. I also understand that when I run exit & the original shell will not terminate, because & runs the command in a sub-shell, so exit terminates that sub-shell and returns to the original shell. But what I do not understand is why commands with and without & look exactly the same under pstree, in this case sleep 10 and sleep 10 &. 4669 is the PID of the bash under which first sleep 10 and then sleep 10 & were issued, and the following output was obtained from another shell instance during this time:
# version without &
$ pstree 4669
bash(4669)───sleep(6345)
# version with &
$ pstree 4669
bash(4669)───sleep(6364)
Shouldn't the version with & contain one more spawned sub-shell (e.g. in this case with PID 5555), like this one?
bash(4669)───bash(5555)───sleep(6364)
PS: Following code was omitted from output of pstree beginning for better readability:
systemd(1)───slim(1009)───ck-launch-sessi(1370)───openbox(1551)───/usr/bin/termin(4510)───bash(4518)───screen(4667)───screen(4668)───
Until I started answering this question, I hadn’t realised that using the & control operator to run a job in the background starts a subshell. Subshells are created when commands are wrapped in parentheses or form part of a pipeline (each command in a pipeline is executed in its own subshell).
The Lists of Commands section of the Bash manual (thanks jimmij) states:
If a command is terminated by the control operator ‘&’, the shell executes
the command asynchronously in a subshell. This is known as executing the
command in the background. The shell does not wait for the command to
finish, and the return status is 0 (true).
As I understand it, when you run sleep 10 & the shell forks to create a new child process (a copy of itself) and then immediately execs to replace this child process with code from the external command (sleep). This is similar to what happens when a command is run as normal (in the foreground). See the Fork–exec Wikipedia article for a short overview of this mechanism.
I couldn’t understand why Bash would run backgrounded commands in a subshell, but it makes sense if you also want to be able to run shell builtins such as exit or echo in the background (not just external commands).
When it’s a shell builtin that’s being run in the background, the fork happens (resulting in a subshell) without an exec call to replace itself with an external command. Running the following commands shows that when the echo command is wrapped in curly braces and run in the background (with the &), a subshell is indeed created:
$ { echo $BASH_SUBSHELL $BASHPID; }
0 21516
$ { echo $BASH_SUBSHELL $BASHPID; } &
[1] 22064
$ 1 22064
In the above example, I wrapped the echo command in curly braces to avoid BASH_SUBSHELL being expanded by the current shell; curly braces are used to group commands together without using a subshell. The second version of the command (ending with the & control operator) clearly demonstrates that terminating the command with the ampersand has resulted in a subshell (with a new PID) being created to execute the echo builtin. (I’m probably simplifying the shell’s behaviour here. See mikeserv’s comment.)
I would never have thought of running exit & and had I not read your question, I would have expected the current shell to quit. Knowing now that such commands are run in a subshell, your explanation that it’s the subshell which exits makes sense.
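The same check works as a standalone snippet; since BASH_SUBSHELL and BASHPID are Bash-specific, it invokes bash -c explicitly:

```shell
bash -c '
  { echo "foreground: subshell=$BASH_SUBSHELL pid=$BASHPID"; }
  { echo "background: subshell=$BASH_SUBSHELL pid=$BASHPID"; } &
  wait
'
```

The foreground group reports subshell=0 with the shell's own PID, while the backgrounded group reports subshell=1 with a new PID, confirming the fork.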
“Why is subshell created by background control operator (&) not displayed under pstree”
As mentioned above, when you run sleep 10 &, Bash forks itself to create the subshell but since sleep is an external command, it calls the exec() system call which immediately replaces the Bash code and data in the child process with a running copy of the sleep program. By the time you run pstree, the exec call will already have completed and the child process will now have the name “sleep”.
While away from my computer, I tried to think of a way of keeping the subshell running long enough for the subshell to be displayed by pstree. I figured we could run the command through the time builtin:
$ time sleep 11 &
[2] 4502
$ pstree -p 26793
bash(26793)─┬─bash(4502)───sleep(4503)
└─pstree(4504)
Here, the Bash shell (26793) forks to create a subshell (4502) in order to execute the command in the background. This subshell runs its own time builtin command which, in turn, forks (to create a new process with PID 4503) and execs to run the external sleep command.
Using named pipes, jimmij came up with a clever way to keep the subshell created to run exit alive long enough for it to be displayed by pstree:
$ mkfifo file
$ exit <file &
[2] 6413
$ pstree -p 26793
bash(26793)─┬─bash(6413)
└─pstree(6414)
$ echo > file
$ jobs
[2]- Done exit < file
Redirecting stdin from a named pipe is clever as it causes the subshell to block until it receives input from the named pipe. Later, redirecting the output of echo (without any arguments) writes a newline character to the named pipe which unblocks the subshell process which, in turn, runs the exit builtin command.
Similarly, for the sleep command:
$ mkfifo named_pipe
$ sleep 11 < named_pipe &
[1] 6600
$ pstree -p 26793
bash(26793)─┬─bash(6600)
└─pstree(6603)
Here we see that the subshell created to run the command in the background has a PID of 6600. Next, we unblock the process by writing a newline character to the pipe:
$ echo > named_pipe
The subshell then execs to run the sleep command.
$ pstree -p 26793
bash(26793)─┬─pstree(6607)
└─sleep(6600)
After the exec() call, we can see that the child process (6600) is now running the sleep program.
Why is subshell created by background control operator (&) not displayed under pstree
I am trying to create a fun terminal screensaver which consists of the cmatrix package (one that turns the terminal into one similar to the movie The Matrix) and xprintidle to determine the idle time of the terminal.
I took some help from this Thread answer at SuperUser and using a shell-script similar to it as follows:
screensaver.sh
#!/bin/sh
set -x # Used for debugging
IDLE_TIME=$((60*1000)) #a minute in milliseconds
screen_saver(){
# My screensaver function
cmatrix -abs
}
sleep_time=$IDLE_TIME
triggered=false
while sleep $(((sleep_time+999)/1000)); do
idle=$(xprintidle)
if [ $idle -ge $IDLE_TIME ]; then
if ! $triggered; then
screen_saver
triggered=true
sleep_time=$IDLE_TIME
fi
else
triggered=false
sleep_time=$((IDLE_TIME -idle+100))
fi
done
The script runs flawlessly when I run it in foreground using:
./screensaver.sh
and I can see the matrix terminal triggered.
However If I run it in background with &; the function screen_saver() is triggered in the background and I cannot view it. The only possible way to see the matrix terminal is using fg which brings it foreground.
Question
Is it possible to use the fg command in the function screen_saver() like:
screen_saver(){
cmatrix -abs && fg
}
or similar option to bring it to the foreground within the shell-script?
I wish to add this script into my .bashrc so that it actually becomes a customizable feature. Is this possible?
The fg and bg job-control commands are tied to interactive sessions only, and have no meaning in batch mode (scripts).
Technically, you could enable them with set -m, but it would still make little (if any) sense: fg only relates to interactive use, where you can send jobs to the background with ^Z, and since you don't have that interaction in a shell script, it is pointless.
The answer is: in practice, No, you can't.
Is it possible to use commands like `fg` in a shell-script?
I have a script which starts a number of background processes, and it works fine when called from the command line.
However the same script is also called during my xsession startup and additionally on some udev events. In both cases the background processes disappear.
I had put a sleep 10 into the script and I could see that the background processes are indeed started, but once the script exits it takes the background processes with it. I tried to solve this by invoking the background processes with start-stop-daemon --background, but this does not make a difference. However, I can invoke the script from a console and exit the session, and the background processes are still running.
Other than fixing my immediate problem (though any help would be much appreciated), I am keen to understand the logic behind it all. I suspect something related to the absence of a terminal.
Protect your processes with nohup:
nohup command-name &
Alternatively, if you don't want stdout and stderr redirected to nohup.out, you can use this technique:
command-name & disown
Understanding when background process gets terminated
I want to run a sequence of command pipelines with pv on each one. Here's an example:
for p in 1 2 3 4
do
cat /dev/zero | pv -N $p | dd of=/dev/null &
done
The actual commands in the pipe don't matter (cat/dd are just an example)...
The goal being 4 concurrently running pipelines, each with their own pv output. However when I try to background the commands like this, pv stops and all I get are 4 stopped jobs. I've tried with {...|pv|...}&, bash -c "...|pv|..." & all with the same result.
How can I run multiple pv command pipelines concurrently?
Found that I can do this with xargs and the -P option:
josh@subdivisions:/# seq 1 10 | xargs -P 4 -I {} bash -c "dd if=/dev/zero bs=1024 count=10000000 | pv -c -N {} | dd of=/dev/null"
3: 7.35GiB 0:00:29 [ 280MiB/s] [ <=> ]
1: 7.88GiB 0:00:29 [ 312MiB/s] [ <=> ]
4: 7.83GiB 0:00:29 [ 258MiB/s] [ <=> ]
2: 6.55GiB 0:00:29 [ 238MiB/s] [ <=> ]
Send output of the array to iterate over into stdin of xargs; To run all commands simultaneously, use -P 0
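As a toy version of the same fan-out (echo instead of the pv/dd pipelines): each {} is textually substituted into the command string, which is fine here since the input comes from seq but would be unsafe with untrusted input:

```shell
seq 1 4 | xargs -P 4 -I {} sh -c 'echo "pipeline {} ran in pid $$"'
```

This prints four lines, in nondeterministic order, since all four sh processes run concurrently.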
How can I run multiple pv commands in parallel?
In Bash 4.X, is it possible to do something like:
command that expects input &
echo some output | %1
Where %1 represents the first backgrounded command?
Once you start:
rm -i -- * &
rm has been started with whatever stdin was in your shell at the time you invoked that command.
If it was the terminal, then rm will typically be suspended (with the SIGTTIN signal) as soon as it tried to read from it (since it's not in the foreground process group of the terminal).
If you want it to read from something else, you have to tell it to reopen its file descriptor 0 on something else.
You could do that with a debugger (here assuming you're on Linux):
rm_pid=$!
coproc yes
gdb --pid="$rm_pid" --batch \
-ex "call close(0)" \
-ex "call open(\"/proc/$$/fd/$COPROC\", 0)" /bin/rm
kill -s CONT "$rm_pid"
Above, we're starting yes in background with its stdin and stdout redirected to a pipe. The other end of that pipe is in the shell (process $$) on file descriptor ${COPROC[0]} aka $COPROC.
Then, with gdb, we're telling rm to close its fd 0, and reopen it on that same pipe.
Pipe command output to input of running/backgrounded command
I have the following script:
suspense_cleanup () {
echo "Suspense clean up..."
}
int_cleanup () {
echo "Int clean up..."
exit 0
}
trap 'suspense_cleanup' SIGTSTP
trap 'int_cleanup' SIGINT
sleep 600
If I run it and press Ctrl-C, Int clean up... show and it exits.
But if I press Ctrl-Z, the ^Z characters are displayed on the screen and then it hangs.
How can I:
Run some cleanup code on Ctrl-Z, maybe even echoing something, and
proceed with the suspension afterwards?
Randomly reading through the glibc documentation, I found this:
Applications that disable the normal interpretation of the SUSP
character should provide some other mechanism for the user to stop the
job. When the user invokes this mechanism, the program should send a
SIGTSTP signal to the process group of the process, not just to the
process itself.
But I'm not sure if that's applicable here, and in any case it doesn't seem to work.
Context: I'm trying to make an interactive shell script which supports all the suspense-related features that Vim/Neovim supports. Namely:
Ability to suspend programatically (with :suspend, instead of just letting the user press Ctrl-z)
Ability to perform an action before suspending (autosave in Vim)
Ability to perform an action before resuming (VimResume in NeoVim)
Edit: Changing sleep 600 to for x in {1..100}; do sleep 6; done also doesn't work.
Edit 2: It works when replacing sleep 600 with sleep 600 & wait. I'm not at all sure why or how that works, or what are the limitations of something like this.
Signals handling on Linux and other UNIX-like systems is a very
complex subject with many actors at play: kernel terminal driver,
parent -> child process relation, process groups, controlling
terminal, shell handling of signals with job control enabled/disabled, signal
handlers in individual processes and possibly more.
First, the Control-C and Control-Z keybindings are not handled by the shell but by the kernel. You can see the default definitions with stty -a:
$ stty -a
speed 38400 baud; rows 64; columns 212; line = 0;
intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extproc
Here we see intr = ^C and susp = ^Z. stty in turn gets this
information from the kernel using TCGETS ioctl
syscall:
$ strace stty -a |& grep TCGETS
ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0
The default keybindings are defined in the Linux kernel code:
#define INIT_C_CC { \
[VINTR] = 'C'-0x40, \
[VQUIT] = '\\'-0x40, \
[VERASE] = '\177', \
[VKILL] = 'U'-0x40, \
[VEOF] = 'D'-0x40, \
[VSTART] = 'Q'-0x40, \
[VSTOP] = 'S'-0x40, \
[VSUSP] = 'Z'-0x40, \
[VREPRINT] = 'R'-0x40, \
[VDISCARD] = 'O'-0x40, \
[VWERASE] = 'W'-0x40, \
[VLNEXT] = 'V'-0x40, \
INIT_C_CC_VDSUSP_EXTRA \
[VMIN] = 1 }
The default actions are also defined:
static void n_tty_receive_char_special(struct tty_struct *tty, unsigned char c,
bool lookahead_done)
{
struct n_tty_data *ldata = tty->disc_data;
if (I_IXON(tty) && n_tty_receive_char_flow_ctrl(tty, c, lookahead_done))
return;
if (L_ISIG(tty)) {
if (c == INTR_CHAR(tty)) {
n_tty_receive_signal_char(tty, SIGINT, c);
return;
} else if (c == QUIT_CHAR(tty)) {
n_tty_receive_signal_char(tty, SIGQUIT, c);
return;
} else if (c == SUSP_CHAR(tty)) {
n_tty_receive_signal_char(tty, SIGTSTP, c);
return;
}
The signal finally goes to __kill_pgrp_info() that says:
/*
* __kill_pgrp_info() sends a signal to a process group: this is what the tty
* control characters do (^C, ^Z etc)
* - the caller must hold at least a readlock on tasklist_lock
*/
That's important for our story: the signal generated with Control-C or Control-Z is sent to the foreground process group created by the parent interactive shell, whose leader is the newly run script. The script and its children belong to one group.
Therefore, as correctly noted in the comments by user Kamil Maciorowski, when you send Control-Z after starting your script, the SIGTSTP signal is received both by the script and by sleep, because a signal sent to a group is received by all processes in the group. This is easy to see if you remove the traps from your code so that it looks like this (by the way, always add a shebang — https://en.wikipedia.org/wiki/shebang_(unix) — since it's not defined what should happen if there is none):
#!/usr/bin/env bash
# suspense_cleanup () {
# echo "Suspense clean up..."
# }
# int_cleanup () {
# echo "Int clean up..."
# exit 0
# }
# trap 'suspense_cleanup' SIGTSTP
# trap 'int_cleanup' SIGINT
sleep 600
Run it (I named it sigtstp.sh) and stop it:
$ ./sigtstp.sh
^Z
[1]+ Stopped ./sigtstp.sh
$ ps aux | grep -e '[s]leep 600' -e '[s]igtstp.sh'
ja 27062 0.0 0.0 6908 3144 pts/25 T 23:50 0:00 sh ./sigtstp.sh
ja 27063 0.0 0.0 2960 1664 pts/25 T 23:50 0:00 sleep 600
ja is my username, yours will be different, PIDs will also be
different but what matters is that both process are in stopped state
as indicated by letter 'T'. From man ps:
PROCESS STATE CODES
(...)
T stopped by job control signal
That means that both processes got the SIGTSTP signal. Now, if both
processes, including sigtstp.sh, get the signal, why isn't the
suspense_cleanup() signal handler run? Bash does not execute it
until sleep 600 terminates. It's a requirement imposed by
POSIX:
When a signal for which a trap has been set is received while the
shell is waiting for the completion of a utility executing a
foreground command, the trap associated with that signal shall not be
executed until after the foreground command has completed.
(notice though that in the open-source world and IT in general a standard is
just a collection of hints, and there is no legal requirement forcing
anyone to follow them). It wouldn't help if you slept less, say 3
seconds, because the sleep process would be stopped anyway, so it would
never complete. In order for suspense_cleanup() to be called
immediately we have to run sleep in the background and call wait, as also
explained in the above POSIX link:
#!/usr/bin/env bash
suspense_cleanup () {
echo "Suspense clean up..."
}
int_cleanup () {
echo "Int clean up..."
exit 0
}
trap 'suspense_cleanup' SIGTSTP
trap 'int_cleanup' SIGINT
sleep 600 &
wait
Run it and stop it:
$ ./sigtstp.sh
^ZSuspense clean up...
Notice that both sleep 600 and sigtstp.sh are now gone:
$ ps aux | grep -e '[s]leep 600' -e '[s]igtstp.sh'
$
It's clear why sigtstp.sh is gone - wait was interrupted by the signal,
and since it's the last line in the script, the script exits. It's even more
surprising when you realize that if you sent SIGINT, sleep would still
run even after the death of sigtstp.sh:
$ ./sigtstp.sh
^CInt clean up...
$ ps aux | grep -e '[s]leep 600' -e '[s]igtstp.sh'
ja 32354 0.0 0.0 2960 1632 pts/25 S 00:12 0:00 sleep 600
But, due to its parent death it would be adopted by init:
$ grep PPid /proc/32354/status
PPid: 1
The reason for that is that when the shell runs a child in the background, it
disables the default SIGINT handler, whose action is to terminate the process
(signal(7)). POSIX requires
it (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html):
If job control is disabled (see the description of set -m) when the
shell executes an asynchronous list, the commands in the list shall
inherit from the shell a signal action of ignored (SIG_IGN) for the
SIGINT and SIGQUIT signals. In all other cases, commands executed by
the shell shall inherit the same signal actions as those inherited by
the shell from its parent unless a signal action is modified by the
trap special built-in (see trap)
Some SO references:
https://stackoverflow.com/questions/46061694/bash-why-cant-i-set-a-trap-for-sigint-in-a-background-shell/46061734#46061734,
https://stackoverflow.com/questions/45106725/why-do-shells-ignore-sigint-and-sigquit-in-backgrounded-processes. If
you want to kill all children after receiving SIGINT you have to do it
manually in the trap handler. Notice, however, that SIGINT is still
delivered to all children but just ignored - if you didn't use sleep
but a command that installs its own SIGINT handler, the handler would run (try
tcpdump for example)! The Glibc
manual
says:
Note that if a given signal was previously set to be ignored, this
code avoids altering that setting. This is because non-job-control
shells often ignore certain signals when starting children, and it is
important for the children to respect this.
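This is easy to verify. In the sketch below (run as a script, i.e. with job control disabled), an explicit SIGINT sent to a background sleep is simply ignored, while SIGTERM is not:

```shell
# Async children of a non-job-control shell start with SIGINT ignored,
# so this explicit SIGINT does not kill the background sleep.
sleep 100 &
bg=$!
kill -INT "$bg"
sleep 1
if kill -0 "$bg" 2>/dev/null; then
    survived=yes               # still running: SIGINT was ignored
else
    survived=no
fi
kill -TERM "$bg" 2>/dev/null   # SIGTERM is not ignored; clean up
```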
But why is sleep dead after sending SIGTSTP to it, if we don't kill it
ourselves and SIGTSTP should only stop it, not kill it? All stopped
processes belonging to an orphaned process group get SIGHUP from the
kernel:
If the exit of the process causes a process group to become orphaned,
and if any member of the newly-orphaned process group is stopped, then
a SIGHUP signal followed by a SIGCONT signal shall be sent to each
process in the newly-orphaned process group.
SIGHUP terminates the process if no custom handler for it was
installed (signal(7)):
Signal Standard Action Comment
SIGHUP P1990 Term Hangup detected on controlling terminal
or death of controlling process
(notice that if you ran sleep under strace things would get even more
complex...).
OK, so how about coming back to your original question:
How can I:
Run some cleanup code on Ctrl-Z, maybe even echoing something, and
proceed with the suspension afterwards?
The way I would do it is:
#!/usr/bin/env bash
suspense_cleanup () {
echo "Suspense clean up..."
trap - SIGTSTP
kill -TSTP $$
trap 'suspense_cleanup' SIGTSTP
}
int_cleanup () {
echo "Int clean up..."
exit 0
}
trap 'suspense_cleanup' SIGTSTP
trap 'int_cleanup' SIGINT
sleep 600 &
while true
do
if wait
then
echo child died, exiting
exit 0
fi
done
Now suspense_cleanup() will be called before stopping the process:
$ ./sigtstp.sh
^ZSuspense clean up...
[1]+ Stopped ./sigtstp.sh
$ ps aux | grep -e '[s]leep 600' -e '[s]igtstp.sh'
ja 4129 0.0 0.0 6920 3196 pts/25 T 00:29 0:00 bash ./sigtstp.sh
ja 4130 0.0 0.0 2960 1660 pts/25 T 00:29 0:00 sleep 600
$ fg
./sigtstp.sh
^ZSuspense clean up...
[1]+ Stopped ./sigtstp.sh
$ fg
./sigtstp.sh
^CInt clean up...
$ ps aux | grep -e '[s]leep 600' -e '[s]igtstp.sh'
ja 4130 0.0 0.0 2960 1660 pts/25 S 00:29 0:00 sleep 600
$ grep PPid /proc/4130/status
PPid: 1
And you can sleep less, say 10 seconds, and see that the script exits
when sleep finishes:
#!/usr/bin/env bash
suspense_cleanup () {
echo "Suspense clean up..."
trap - SIGTSTP
kill -TSTP $$
trap 'suspense_cleanup' SIGTSTP
}
int_cleanup () {
echo "Int clean up..."
exit 0
}
trap 'suspense_cleanup' SIGTSTP
trap 'int_cleanup' SIGINT
# sleep 600 &
sleep 10 &
while true
do
if wait
then
echo child died, exiting
exit 0
fi
done
Run it:
$ time ./sigtstp.sh
child died, exiting
real 0m10.007s
user 0m0.003s
sys 0m0.004s
| How to cleanup on suspense (ctrl-z) in a Bash script? |
1,541,850,455,000 |
I had some stress-testing scripts that were running in parallel and they would hang after finishing and would wait for a RETURN keystroke to exit. After investigating I discovered that it is not peculiar to my scripts but to all sorts of scripts ran on bash and that it depends on the size of the output produced (at least in my system: Ubuntu precise)
For instance the following:
find . &
hangs and waits for a RETURN keystroke if sufficient output is produced (try .. or ../.. to get more output), otherwise (i.e. if "little" output is produced) exits without hanging.
Since I find this kind of feature annoying in my particular case is there a way around it?
|
The command does not hang. You think that the command is hanging because you don't see the prompt. The prompt is there. You don't see the prompt because it was pushed up by the output of the background process. Pressing enter after the long output of a background process causes the shell to "execute" the empty line and print a new prompt.
Try the following to convince yourself:
execute find . &
wait until output done
see blinking cursor or something but no prompt
type echo foo
press enter
see foo printed and a new prompt
More experiments:
seq 10 &
this will print the numbers 1 to 10 and then print a prompt.
seq 10000 &
this will print the numbers 1 to 10000 and then you have blinking cursor and no prompt. But the prompt is there. try echo foo and press enter and your will see foo printed and a new prompt.
(sleep 2; seq 10) &
this command emulates the waiting time of a command with long output but does not have a long output. On my system this has the following effect: first sleep 2 is executed in the background. moments later the shell prints the prompt. then, after 2 seconds, seq 10 is executed in the background. this will print ten lines and push the prompt up. then the background job is done.
So you see that the background job always finishes and you always have a prompt, you just don't always see the prompt. When the background job is done quickly then the shell prints the prompt at the end and you see the prompt. When the background job takes a while printing its output then the shell has already printed a prompt but that prompt gets pushed up so you don't see it anymore.
Even more experiments:
Try seq 10000 & or any other large number where you don't see a prompt at the end of the ouput. Now try half that number. In this example seq 5000 &. Do you see a prompt? If you do then try a larger number. For example seq 7500 &. If you don't see a prompt then try a smaller number. For example seq 2500 &. Keep doing this until you have number where you see the prompt pushed just a few lines up. The number will vary from run to run because what we have here is practically a race condition between the background process and the shell process.
| why do background jobs hang depending on the size of the output? |
1,541,850,455,000 |
If I run jobs -l on command prompt it shows me the status of running jobs but if I run below file ./tmp.sh
#! /bin/bash
jobs -l
It shows empty output.
Why is that and how can I obtain information about a status of a particular job inside of a shell script?
|
jobs
shows the jobs managed by the current shell. Your script runs inside its own shell, so jobs there will only show jobs managed by the script's shell...
To see this in action, run
#!/bin/bash
sleep 60 &
jobs -l
To see information about "jobs" started before a shell script, inside the script, you need to treat them like regular processes, and use ps etc. You can limit yourself to processes started by the parent shell (the shell from which the script was started, if any) by using the $PPID variable inside the script, and looking for processes sharing the same parent PID.
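For example, a minimal sketch (an assumption: a GNU procps ps that supports the --ppid option):

```shell
# List processes whose parent is this script's parent shell - i.e. the
# shell's other children, seen as plain processes rather than as jobs.
ps -o pid,stat,cmd --ppid "$PPID"
```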
| Why does the jobs command not work in shell script? |
1,541,850,455,000 |
I have multiple scripts that detach a process from bash using nohup and &>/dev/null &. My question is, how do I kill the process after completely detaching it from bash. using killall or pidof ScriptName doesn't work.
|
nohup should only affect the hangup signal. So kill should still work normally. Maybe you are using the wrong pid or process name; compare with pstree -p or ps -ef.
If you still suspect nohup, maybe you could try disown instead.
$ sleep 1000 &
$ jobs -p
13561
$ disown
$ jobs -p
$ pidof sleep
13561
$ kill 13561
$ pidof sleep
$
| How do I kill a process after detaching it from bash? |
1,541,850,455,000 |
I understand from Informit article that sessions and process groups are used to terminate descendant processes on exit and send signals to related processes with job control.
I believe this information can be extracted at any point using the PPID of every process. Do these concepts exist in place just to have a data structure that enables getting descendants of a process quickly?
Do session and process groups get employed in things other than job control and termination of descendants? do they store any context information?
Any good references will be helpful.
|
Process groups exist primarily to determine which processes started from a terminal can access that terminal. Only processes in the foreground process group may read or write to their controlling terminal; background processes are stopped by a SIGTTIN or SIGTTOU signal.
You can send a signal atomically to all the processes in a process group, by passing a negative PID argument to kill. This also happens when a signal is generated by the terminal driver in response to a special character (e.g. SIGINT for Ctrl+C).
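A sketch of the negative-PID form (an assumption: the util-linux setsid utility is available to put the workers into their own group first):

```shell
# Start a small process tree in its own session/process group, then
# terminate the whole group atomically with a negative PID.
setsid sh -c 'sleep 100 & sleep 100 & wait' &
pgid=$!          # setsid made this child a group leader, so PGID == PID
sleep 1
kill -TERM -- "-$pgid"
```

The `--` is needed so that kill does not mistake the negative PID for an option.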
Sessions track which process groups are attached to a terminal. Only processes running in the same session as the controlling process are foreground or background processes.
It is not possible to determine process groups or sessions from the PPID. You would have no way to know whether the parent of a process is in the same process group or a different one, and likewise for sessions.
| What is the purpose of abstractions, session, session leader and process groups? |
1,541,850,455,000 |
I have this small script I call prompt-to-run.
prompt_acc=''
read -p 'run `'"$1"'`
' -i "$1" -e prompt_acc
$prompt_acc
It lets me create a script that fills in a command for me, but gives me the chance to edit or skip running it without stopping the whole script.
I have a different script, which we can call long-running-script, that I want to run in its own terminal, since after taking some input it sits there and outputs more text continuously. I want to be able to start running it from a script containing several prompt-to-run invocations, and then get back my original terminal so I can run the next prompt-to-run invocation.
I've made long-running-script internally open a new terminal, so manually typing out
long-running-script &
starts up the program I want to run and gives me back my prompt in the original terminal. But prompt-to-run 'long-running-script &' doesn't return the terminal prompt. I suppose this is because the command is being run from inside an environment variable, $prompt_acc, so it's not being interpreted the way I want.
Is there a way to change one or both of prompt-to-run or long-running-script to get what I want?
|
The only processing done on the expansion of a variable is word splitting and wildcard expansion. Other shell metacharacters are ignored.
If you want the contents of the variable to be executed as if you'd typed the command, use the eval command:
eval "$prompt_acc"
This will perform all normal shell processing of the command, including quote processing, executing multiple commands separated by ;, backgrounding with &, I/O redirection, etc.
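A minimal illustration of the difference:

```shell
# Without eval, the & inside the variable is just another word passed to
# echo; with eval it is parsed as the background operator.
cmd='echo hello &'
plain=$($cmd)                  # runs: echo hello '&'
evaled=$(eval "$cmd"; wait)    # runs: echo hello   (in the background)
printf 'plain : %s\n' "$plain"
printf 'evaled: %s\n' "$evaled"
```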
BTW, prompt_acc is just an ordinary shell variable, it's not an environment variable. The export command is the way to put variables into the environment.
| how can I run a command from an environment variable and have the internal trailing ampersand work? |
1,541,850,455,000 |
I have a master process (run-jobs below) that starts other jobs as its sub-processes. When the master process fails (e.g. database failure), it exits with a non-0 status code, which is good, and can be verified by looking into $? variable (echo $?).
However, I'd also like to inspect the exit codes of the sub-processes in case the master job fails. Is there a convenient way to check the exit code of process_1 and process_2 below, once the master process is gone?
This is simplified output of ps auxf:
vagrant 5167 | \_ php app/console run-jobs
vagrant 5461 | \_ php process_1
vagrant 5517 | \_ php process_2
|
Processes report their exit status to their parent and if their parent is dead to the process of id 1 (init), though with recent versions of Linux (3.4 or above), you can designate another ancestor as the child subreaper for that role (using prctl(PR_SET_CHILD_SUBREAPER)).
Actually, after they die, processes become zombies until their parent (or init) retrieves their exit status (with waitpid() or other).
In your case, you're saying the children are dying after (as a result of?) run-jobs dying. That means they'll report their exit status to init or to the process designated as child sub-reaper.
If init doesn't log that (and it generally doesn't) and if you don't use auditing or process accounting, that exit status will be lost.
If on a recent version of Linux, you can create your own sub-reaper to get the pid and exit status of those orphan processes. Like with perl:
$ perl -MPOSIX -le '
require "syscall.ph";
syscall(&SYS_prctl,36,1) >= 0 or die "cannot set subreaper: $!";
# example running 1 child and 2 grand children:
if (!fork) {
# There, you would run:
# exec("php", "run-jobs");
if (!fork) {exec "sleep 1; exit 12"};
if (!fork) {exec "sleep 2; exit 123"};
exit(88)
}
# now reporting on all children and grand-children:
while (($pid = wait) > 0) {
print "$pid: " . WEXITSTATUS($?)
}'
22425: 88
22426: 12
22427: 123
If you wanted to retrieve information on the dying processes (like command line, user, ppid...), you'd need to do that while they're still in the zombie state, that is before you've done a wait() on them.
To do that you'd need to use the waitid() API with the WNOWAIT option (and then get the information from /proc or the ps command). I don't think perl has an interface to that though, so you'd need to write it in another language like C.
| Get the exit code of processes forked from the master process |
1,541,850,455,000 |
I've run into a couple of similar situations where I can break a single-core bound task up into multiple parts and run each part as a separate job in bash to parallelize it, but I struggle with collating the returned data back to a single data stream. My naive method so far has been to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned. Is there a better way to handle these kinds of multiple-in-one-out situations using bash/shell tools?
|
My naive method so far has been to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned.
This is almost exactly what GNU Parallel does.
parallel do_stuff ::: job1 job2 job3 ... jobn > output
There are some added benefits:
The temporary files are automatically removed, so there is no cleanup - even if you kill GNU Parallel.
You only need temporary space for the currently running jobs: The temporary space for completed jobs is freed when the job is done.
If you want output in the same order as the input use --keep-order.
If you want output mixed line-by-line from the different jobs, use --line-buffer.
GNU Parallel has quite a few features for splitting up a task into smaller jobs. Maybe you can even use one of those to generate the smaller jobs?
| How to merge data from multiple background jobs back to a single data stream in bash |
1,541,850,455,000 |
I often start a long running command that is bound to either CPU, Disk, RAM or Internet connection. While that is running, I find that I want to run a similar command. Let's say downloading something big.
Then I start a shell with wget ...1 and let it run. I could open another shell and run the other wget ...2, but now they would fight for the bandwidth. When I just type the second command into the running shell, it will execute that later on, since wget is not interactive.
Is there a reasonable way to do this with either Bash or Fish Shell?
|
If you have already running wget http://first in foreground you can pause it with CTRL+z and then return it back to work with another command right after it:
fg ; wget http://second
It should work in most cases.
If that way is not acceptable, you should go with lockfile. Or even just to monitor the process via ps (lockfile is better).
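A rough sketch of the polling approach (the helper name and the PID lookup are illustrative, not part of any standard tool):

```shell
# Wait for an existing process to exit, then run the next command.
queue_after() {
    pid=$1; shift
    while kill -0 "$pid" 2>/dev/null; do
        sleep 1
    done
    "$@"
}
# e.g.: queue_after "$(pgrep -f 'wget http://first')" wget http://second
```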
| Queue a task in a running shell |
1,541,850,455,000 |
I'm interested in wrapping a command such that it only runs at most once every X duration; essentially, the same functionality as the lodash throttle function. I'd basically like to be able to run this:
throttle 60 -- check-something
another-command
throttle 60 -- check-something
another-command
throttle 60 -- check-something
For each of those throttle commands, if it's been less than 60 seconds since check-something was run (successfully), the command is skipped. Does anything like this already exist? Is it easy to do with a shell script?
|
I'm not aware of anything off-the-shelf, but a wrapper function could do the job. I've implemented one in bash using an associative array:
declare -A _throttled=()
throttle() {
if [ "$#" -lt 2 ]
then
printf '%s\n' "Usage: throttle timeout command [arg ... ]" >&2
return 1
fi
local t=$1
shift
if [ -n "${_throttled["$1"]}" ]
then
if [ "$(date +%s)" -ge "${_throttled["$1"]}" ]
then
"$@" && _throttled["$1"]=$((t + $(date +%s)))
else
: printf '%s\n' "Timeout for: $1 has not yet been reached" >&2
fi
else
"$@" && _throttled["$1"]=$((t + $(date +%s)))
fi
}
The basic logic is: if the command has an entry in the _throttle array, check the current time against the array value; if the timeout has expired, run the command and -- if the command was successful -- set a new timeout value. If the timeout has not yet expired, (don't) print an informative message. If, on the other hand, the command does not (yet) have an entry in the array, run the command and -- if the command was successful -- set a new timeout value.
The wrapper function doesn't distinguish commands based on any arguments, so throttle 30 ls is the same to it as throttle 30 ls /tmp. This is easily changed by replacing the array references and assignments of "$1" to "$@".
Also note that I dropped the -- from your example syntax.
Also note that this is limited to seconds-level resolution.
If you have bash version 4.2 or later, you may save the call to the external date command by using a feature of the printf built-in instead:
...
_throttled["$1"]=$((t + $(printf '%(%s)T\n' -1)))
...
... where we're asking for the time formatted in seconds (%s) explicitly of the current time (-1).
Or in bash 5.0 or later:
_throttled["$1"]=$((t + EPOCHSECONDS))
| How to wrap a command such that its execution is throttled (that is, it executes at most once every X minutes) |
1,541,850,455,000 |
I don't understand the problem about which the standard shell in Debian (dash) complains:
test@debian:~$ sh
$ man ls
ctrl+Z
[1] + Stopped man ls
$ jobs
[1] + Stopped man ls
$ fg %man
sh: 3: fg: %man: ambiguous
Shouldn't fg %string simply bring the job whose command begins with string to the foreground? Why is %man ambiguous?
|
This looks like a bug; the loop which handles strings in this context doesn't have a valid exit condition:
while (1) {
if (!jp)
goto err;
if (match(jp->ps[0].cmd, p)) {
if (found)
goto err;
found = jp;
err_msg = "%s: ambiguous";
}
jp = jp->prev_job;
}
If a job matches the string, found is set, and err_msg is pre-loaded; then it goes round the loop again, after setting jp to the previous job. When it reaches the end, the first condition matches, so control goes to err, which prints the error:
err:
sh_error(err_msg, name);
I guess there should be a goto gotit somewhere...
The following patch fixes this (I've sent it to the upstream maintainer):
diff --git a/src/jobs.c b/src/jobs.c
index c2c2332..37f3b41 100644
--- a/src/jobs.c
+++ b/src/jobs.c
@@ -715,8 +715,14 @@ check:
found = 0;
while (1) {
- if (!jp)
- goto err;
+ if (!jp) {
+ if (found) {
+ jp = found;
+ goto gotit;
+ } else {
+ goto err;
+ }
+ }
if (match(jp->ps[0].cmd, p)) {
if (found)
goto err;
| Job control in dash |
1,541,850,455,000 |
I have a script I'm trying to launch with
php ./Script.php &
The task goes into the background but it is stopped. When I try to run it with bg it simply stays stopped.
$ jobs
[1]+ Stopped php ./Script.php
$ bg
[1]+ php ./Script.php &
[1]+ Stopped php ./Script.php
$
ps shows the job state as T
$ ps ax | grep Script
951 pts/5 T 0:00 php ./Script.php
cat /proc/PID/wchan shows "pipe_wait"
Update:
This happens with any PHP script, even <?php sleep(1000);
Update:
Thanks for all the great answers and comments. I understand why this is happening on a technical level now, but I'm not sure what about some versions of PHP are causing this. But, that is not a *nix question.
|
Almost certainly the command tried to read from the terminal when it was not in the foreground process group. This will cause the kernel to send SIGTTIN to the process, which by default will stop the process. A simple test would be to redirect stdin to /dev/null. If the process runs in the background without being stopped then the problem was SIGTTIN as described.
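A quick sketch of that test:

```shell
# A background command that reads the terminal is a SIGTTIN candidate;
# with stdin redirected from /dev/null it just sees EOF and runs to
# completion instead of being stopped.
head -n 1 < /dev/null &
wait "$!"
status=$?
printf 'background head exited with status %s\n' "$status"
```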
| What would stop a task from being run in the background? |
1,541,850,455,000 |
When I gedit files from the command line, it's always locking the terminal, and I'm tired of explicitly commanding a detached process for it.
I tried to alias gedit as something like gedit $* & disown, but either that's not the right syntax or you're not allowed to overload executable binary commands with aliases (tried using that in a .bash_aliases function,
alias gedit=editorz
function editorz()
{
gedit $* & disown
}
), but it doesn't take.
So how do I make the command gedit test.txt not lock the originating terminal window?
|
That should work: are you sure your .bash_aliases is read? (It's not a standard file, but it might be sourced by your ~/.bashrc. If you're confused about .bashrc and .bash_profile, see Difference between .bashrc and .bash_profile.)
There's a bug in your function: it should be
editorz () {
gedit "$@" & disown
}
Your version doesn't work on file names containing spaces or shell wildcards. The function keyword is optional.
You can call the function gedit (and dispense with the alias altogether), but then you need to tell the shell that the call inside the function is to the command and not to the function:
gedit () {
command gedit "$@" & disown
}
Note that if you've accidentally started gedit in the foreground (i.e. locking your terminal), you can put it in the background by pressing Ctrl+Z in the terminal, and entering the command bg.
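The same idea generalizes to any GUI program; a hypothetical wrapper (the name is illustrative):

```shell
# Run any command detached: silence its output, background it, and drop
# it from the shell's job table so no SIGHUP is sent on shell exit.
detach() {
    "$@" > /dev/null 2>&1 & disown
}
# usage: detach gedit test.txt
```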
| How can I turn the behavior of `gedit something & disown` into the default behavior when calling gedit from the command line? |
1,541,850,455,000 |
According to the documentation:
To prevent the shell from sending the SIGHUP signal to a particular
job, it should be removed from the jobs table with the disown builtin
or marked to not receive SIGHUP using disown -h.
https://www.gnu.org/software/bash/manual/html_node/Signals.html
Note the OR in the quote.
I can confirm that simply using disown without -h and re-logging that the process did not exit:
#!/bin/bash
( sleep 10s; echo 1 > b ) &
disown
It seems that the -h option is not necessary? If it works without then what is its purpose?
|
Without -h the job is removed from the table of active jobs, with -h it is not.
Everything is in the manual:
disown [-ar] [-h] [jobspec ...]
(...)
If the -h option is given, each jobspec is not removed
from the table, but is marked so that SIGHUP is not sent to the
job if the shell receives a SIGHUP.
To see the difference run jobs after disowning the job with and without -h.
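A short bash sketch makes the difference visible by capturing the jobs output after each form:

```shell
# disown -h: job stays in the table (visible to jobs, usable with %1)
# disown   : job is removed from the table entirely
sleep 100 &
p1=$!
disown -h
kept=$(jobs)      # still lists sleep 100
sleep 200 &
p2=$!
disown            # removes the current job (sleep 200)
after=$(jobs)     # still lists sleep 100, but not sleep 200
kill "$p1" "$p2" 2>/dev/null
```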
| Clarification for Bash documentation on disown builtin option -h |
1,541,850,455,000 |
Last night, before abandoning my computer for the evening, I started a bunch of compiler jobs so they'd be ready in the morning, using make -f alpha.mak &>alpha.out &. When I came back and hit return, I saw the following output:
[1] Done make -f alpha.mak &>alpha.out
[2]- Done make -f beta.mak &>beta.out
[3]+ Done make -f gamma.mak &>gamma.out
My question: What do the + and - symbols mean in that output?
I'm using bash on RedHat 6.
|
According to the Bash Reference Manual: Job Control:
In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a '+', and the previous job with a '-'.
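For example (a sketch; job numbers and reported states may vary):

```shell
sleep 100 &
p1=$!
sleep 200 &
p2=$!
marks=$(jobs)     # '+' marks the current job (sleep 200), '-' the previous
printf '%s\n' "$marks"
kill "$p1" "$p2" 2>/dev/null
```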
| Background task finished notification syntax |
1,541,850,455,000 |
I need to retrieve the PID of a process piped into another process that together are spawned as a background job in bash. Previously I simply relied on pgrep, but as it turns out there can be a delay of >2s before pgrep is able to find the process:
#!/bin/bash
cmd1 | cmd2 &
pid=$(pgrep cmd1) # emtpy in about 1/10
I found that some common recommendations for this problem are using process substitution rather than a simple pipe (cmd1 >(cmd2) & pid=$!) or using the jobs builtin.
Process substitution runs an entire subshell (for the entire runtime), so I would rather use jobs for now, but I want to avoid making the same mistake twice...
Can I 100% depend on jobs to be aware of both processes if I perform the lookup immediately after spawning them?
#!/bin/bash
cmd1 | cmd2 &
pid=$(jobs -p %cmd1) # 10/10?
This is probably on account of running jobs in background, or maybe a quirk of set -x, but the following example usually lists the executed commands in any which order. The jobs output appears to be correct so far, but I want to entirely rule out the possibility that jobs could be executed before the jobs have been started (or at least that jobs will fail to list the two processes)!?
#!/bin/bash
set -x
tail -f /dev/null | cat &
jobs -l
kill %tail
Example:
+ jobs -l
[1]+ 2802325 Running tail -f /dev/null
2802326 | cat &
+ tail -f /dev/null
+ kill %tail
Likewise, in the process substitution case, can I rely on pid=$! to always work? It is designed to "expand to the process ID of the most recently executed background (asynchronous) command" after all?
|
When a background job is a pipeline of the form cmd1 | cmd2, it's still a single background job. There's no way to know when cmd1 starts.
Each & creates one background job. As soon as cmd & returns, the shell is aware of that background job: cmd & jobs lists cmd. cmd & pid=$! sets pid to the process ID that runs cmd.
The pipeline cmd1 | cmd2 creates two more subprocesses: one to run cmd1 and one to run cmd2. Both processes are children of the subprocess that runs the background job. Here's how the process tree looks like for bash -c '{ sleep 123 | sleep 456; } & jobs -p; sleep 789':
PID PPID CMD
268 265 | \_ bash -c { sleep 123 | sleep 456; } & sleep 789
269 268 | \_ bash -c { sleep 123 | sleep 456; } & sleep 789
270 269 | | \_ sleep 123
271 269 | | \_ sleep 456
272 268 | \_ sleep 789
268 is the original bash process. 269 is the background job that jobs -p prints. 270 and 271 are the left- and right-hand sides of the pipe, both children of the main process of the background job (269).
The version of bash I tested with (5.0.17 on Linux) optimizes cmd1 | cmd2 & without braces. In that case, the left-hand side of the pipe runs in the same process as the background job:
PID PPID CMD
392 389 | \_ bash -c sleep 123 | sleep 456 & jobs -p; sleep 789
393 392 | \_ sleep 123
394 392 | \_ sleep 456
395 392 | \_ sleep 789
You can't rely on this behavior to be stable across versions of bash, or possibly even across platforms, distributions, libc versions, etc.
jobs -p %cmd1 looks for a job whose code starts with cmd1. What it finds is cmd1 | cmd2. jobs -p %?cmd2 finds the same job¹. There's no way to access the process IDs running cmd1 and cmd2 through built-in features of bash.
If you need to know for sure that cmd1 has started, use a process substitution.
cmd1 >(cmd2)
You don't get to know when cmd2 starts and ends.
If you need to know when cmd1 and cmd2 start and end, you need to make them both jobs, and have them communicate through a named pipe.
tmp=$(mktemp -d) # Remove this in cleanup code
mkfifo "$tmp/pipe"
cmd1 >"$tmp/pipe" & pid1=$!
cmd2 <"$tmp/pipe" & pid2=$!
…
The jobs command is not very useful in scripts. Use $! to remember PIDs of background jobs.
¹ Or at least it should. My version complains about an ambiguous job spec, which has to be a bug since it's saying that despite there being only a single job.
| Reliable way to get PID of piped background process |
1,541,850,455,000 |
I like my background processes to freely write to the tty. stty -tostop is already the default in my zsh (I don't know why, perhaps because of OhMyzsh?):
❯ stty -a |rg tostop
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
But I still occasionally get my background processes suspended (this is not a consistent behavior, and I don't know how to reproduce it):
[1] + 3479064 suspended (tty output)
|
A process can be sent the SIGTTOU signal (which causes that message) when it makes a TCSETSW or TCSETS ioctl(), for instance (like when using the tcsetattr() libc function), to set the tty line discipline settings while not in the foreground process group of a terminal (like when invoked in background from an interactive shell), regardless of whether tostop is enabled or not (which only affects writes to the terminal).
$ stty echo &
[1] 290008
[1] + suspended (tty output) stty echo
See info libc SIGTTOU on a GNU system for details:
Macro: int SIGTTOU
This is similar to SIGTTIN, but is generated when a process in a
background job attempts to write to the terminal or set its modes.
Again, the default action is to stop the process. SIGTTOU is
only generated for an attempt to write to the terminal if the
TOSTOP output mode is set
(emphasis mine)
I believe it's not the only ioctl() that may cause that. From a cursory look at the Linux kernel source code, looks like TCXONC (tcflow()), TCFLSH (tcflush()) should too.
| zsh: Why do I get suspended background processes even when I have `stty -tostop`? |
1,541,850,455,000 |
I started a for loop in an interactive bash session. For this question we can assume the loop was something like
for i in dir/*; do
program "$i"
done > log
The command takes a lot longer than expected and still runs. How can I see the current progress of the running for loop.
Non-Solutions:
Look at log.
Doesn't work because: program is expected to run silently. You can think of program as a validator. If the loop completes without any output then I'm happy.
Stop the loop. Add some kind of progress indication (for instance echo "$i"). Start the loop again.
Doesn't work because: The loop already ran for hours. I don't want to throw away all the time and energy invested in the current run. I assume everything works fine. I'm just curious and want to know the current progress.
Ctrl+Z then set -x; fg
Doesn't work because: bash doesn't continue the loop when using fg. After fg only the current/next command inside the loop will run, then the loop exits. You can try it yourself using for i in {1..200}; do printf '%s ' $i; /usr/bin/sleep 0.5; done.
|
Wildcards as dir/* always expand in the same order. This feature together with the current value of $i can be used to show a progress information.
The current value of $i can be retrieved by looking at the processes running on your system. The following command prints the currently running instance of program and its arguments.
If there are multiple program processes you may want to use the --parent ... option for pgrep to only match processes started by the shell running the for loop you want to inspect.
ps -o args= -fp $(pgrep program)
Extract the value of $i manually, then get your progress using
a=(dir/*)
n=$(printf %s\\0 "${a[@]}" | grep -zFxnm1 'the exact value of $i' | cut -f1 -d:)
echo "Progress: $n / ${#a[@]}"
This works under two assumptions
The content of dir/ does not change while running the loop. If program creates, renames, or deletes files then the numbers are likely wrong.
Calling program starts a new process. If program is a bash function then there won't be a sub-process for program. If the function itself calls another program that starts a new process then you can look for that sub-program. However, if program is a bash built-in (commands that are listed by help) or a pure bash function (one that uses only bash built-ins) then you are out of luck.
If you have problem 2 or your for loop has a different structure than the one in the question (for example program < "$1" or a very long loop body with many different programs) then you might be able to get some progress information from lsof by looking at the files opened by your current shell session or its child processes.
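As a worked example of step 2 above, with a made-up file list standing in for dir/* and a made-up current value of $i (GNU grep is assumed, for the -z option):

```sh
a=(dir/a.avi dir/b.avi dir/c.avi dir/d.avi)   # stand-in for a=(dir/*)
i='dir/c.avi'             # pretend this was extracted from the ps output
n=$(printf '%s\0' "${a[@]}" | grep -zFxnm1 -- "$i" | cut -f1 -d:)
echo "Progress: $n / ${#a[@]}"                # prints: Progress: 3 / 4
```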
| Show progress of a for loop after it was started |
1,541,850,455,000 |
I have set up an alias for my mutt: alias mutt='$HOME/.mutt/run-notmuch-offlineimap & ; mutt'.
Note: Changing my Alias to alias mutt='$HOME/.mutt/run-notmuch-offlineimap 2> /dev/null & ; mutt' or to alias mutt='$HOME/.mutt/run-notmuch-offlineimap 2>&1 >/dev/null & ; mutt' produces the exact same result.
The script run-notmuch-offlineimap looks like that:
#!/usr/bin/env zsh
notmuch="$(which notmuch)"
$notmuch new --quiet
$notmuch tag +inbox -- "folder:dev/dev|INBOX or folder:pers/pers|INBOX"
$notmuch tag +sent -- "folder:dev/dev|Sent or folder:pers/pers|Sent"
$notmuch tag +drafts -- "folder:dev/dev|Drafts or folder:pers/pers|Sent"
$notmuch tag +github -- "folder:dev/dev|github and not tag:github"
# test if the offlineimap instance of account dev is already running
if [[ $(pgrep -f 'offlineimap.*dev.*') == "" ]]
then
offlineimap -c "$HOME/.fetch-send-mail/dev/dev.imap" -u quiet
fi
# test if the offlineimap instance of account dev is already running
if [[ $(pgrep -f 'offlineimap.*pers.*') == "" ]]
then
offlineimap -c "$HOME/.fetch-send-mail/pers/pers.imap" -u quiet
fi
(the result would be the exact same if I'd used bash in this script)
When i start mutt, this is what happens:
~
$ mutt
[1] 31303
Mailbox is unchanged.
# some seconds afterwards:
~
$
[1] + done $HOME/.mutt/run-notmuch-offlineimap
~
The message "Mailbox is unchanged" is from mutt itself, so that's expected. However, can I prevent the [1] messages from being shown? E.g. when I execute mutt, it should only print this (and nothing else):
~
$ mutt
Mailbox is unchanged.
~
$
How can I achieve that?
|
If you are in zsh then you can change the alias to launch the background process with &! instead of just &. This will immediately disown the process.
alias mutt='$HOME/.mutt/run-notmuch-offlineimap &! mutt'
If you are in bash then you can use disown after the command to have a similar effect, but you will still get the first job control message listing the pid.
alias mutt='$HOME/.mutt/run-notmuch-offlineimap & disown; mutt'
You can avoid both by using a sub-shell:
alias mutt='($HOME/.mutt/run-notmuch-offlineimap &); mutt'
| prevent "[1] + done $scriptname" and "[1] 31303" to be shown |
1,541,850,455,000 |
From man zsh, you can see:
If you [...] try to exit again, the shell will not warn you a second time; the suspended jobs will be terminated, and the running jobs will be sent a SIGHUP signal, if the HUP option is set.
Is there a way to prevent zsh from quitting if a job is in the background?
|
Looking at the source code, there doesn't seem to be a direct way of doing this. The check for background jobs is performed in zexit, which is called when the main zsh process decides to try exiting, whether from exit, from logout, from an end-of-file or various other circumstances. To cancel the order to exit, stopmsg has to be 0. After getting the “you have (running|suspended|stopped) jobs” warning, stopmsg is 2. The only way to bring stopmsg down is by preparing to execute a command (from a non-empty command line) twice. So if you run exit, the next command you enter is not protected from exiting.
I can't think of a way to hook the exit builtin (or replace it by a function, etc.) that would allow faking a no-op command afterwards to reset stopmsg.
You can disable the exit builtin, or replace it by a function that checks whether there are jobs, e.g.
function exit {
emulate -L zsh
setopt extended_glob
if [[ -n ${jobstates[(r)s*]} ]]; then
echo "you have suspended jobs"
return 1
fi
if [[ -n ${jobstates[(r)^done:*]} ]]; then
echo "you have running jobs"
return 1
fi
builtin exit "$@"
}
But this only takes care of exit, not of other methods.
For end of file, you can setopt ignore_eof.
| Prevent Zsh from quitting if job in background |
1,541,850,455,000 |
I have written following function in my bashrc. It checks if a job is running every 5 seconds and sends email when job is finished.
function jobcheck () {
time=`date`
jobstatus=`jobs $1`
while [ 1 ]; do
if (( ! `echo $jobstatus | grep "Running" | wc -l` )); then
"sendEmail command to send email"
break;
fi
sleep 5;
done
}
I want to call this function as below (for job number 2, for example)
jobcheck 2
and proceed to run other commands on the commandline. But the while loop keeps on running and I do not get command prompt.
If I run it as
jobcheck 2 &
Then it gives error that bash: jobs 2 not found
How do I run this function in background?
|
You will have to rewrite your function to be able to do that.
When you start a background job with &, the shell does indeed keep track of that, and you can indeed figure out more information by using the jobs builtin. However, that information is specific to that instance of the shell; if you run your function with & itself, then a separate shell is spawned which is not the shell with the background jobs, and therefore you can't access the information about the original shell's jobs from that separate shell.
However, there's a simple way to fix that:
rewrite your function so it runs in terms of process IDs (PIDs) rather than job numbers. That is, have it check whether a process still exists (e.g., by parsing ps output, or by checking whether /proc/pid exists)
run your new function with %2 rather than 2 as argument. That is, give it the percent sign followed by the job id you want to monitor; the percent sign is used by Bourne shells to replace a job id by the pid of the given job.
With that, it should just work.
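A hedged sketch of such a rewrite: kill -0 only tests whether the PID still exists, and the email command from the question is left as a placeholder comment:

```sh
jobcheck () {
    pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        sleep 5
    done
    echo "process $pid has finished"   # replace with the sendEmail command
}

sleep 3 &          # stand-in for the real long-running job
jobcheck "$!" &    # monitor it in the background
# for a job already known to the shell as job 2: jobcheck "$(jobs -p %2)" &
wait               # only here so this demo script doesn't exit early
```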
| bash function that sends email after job is finished |
1,541,850,455,000 |
While studying the internals of bash's job control mechanism under Linux I have came across a little problem of understanding. Let's assume the following scenario:
script is executed in background
user@linux:~# ./script.sh &
same script is executed in foreground at the same time
user@linux:~# ./script.sh
Now the first execution of the script is performed as a background job during the second execution of the script in the foreground. Bash now performs a blocking wait call with the PID of the foreground process until it terminates and then it gets the corresponding information. After the termination of the foreground process bash checks the status of all background jobs and reports every change before returning to the prompt. This is usually the default behavior, called "+b".
But there is another mode, called "-b". In this mode bash reports immediately on every background job status change. In my understanding this is done by the background process sending the SIGCHLD signal. But how can bash react to this signal from a terminated background process and print a message to the terminal although it is executing a blocking wait call? After all, in my understanding signals are only handled before returning to user mode.
Does bash call wait with the option WNOHANG within a loop until the current foreground terminates?
Furthermore, when operating in mode "-b", bash can write to the terminal although it doesn't belong to the terminal's foreground process group. Even when I set the "tostop" option with stty, bash can write to the terminal without being part of the foreground process group. Does bash get special permissions because it is part of the controlling process group of the terminal?
I hope, I could make clear where my problems of understanding are.
|
Yes, bash uses waitpid() with WNOHANG in a loop. You can see this in waitchld() in jobs.c:
static int
waitchld (wpid, block)
pid_t wpid;
int block;
{
...
do
{
/* We don't want to be notified about jobs stopping if job control
is not active. XXX - was interactive_shell instead of job_control */
waitpid_flags = (job_control && subshell_environment == 0)
? (WUNTRACED|wcontinued)
: 0;
if (sigchld || block == 0)
waitpid_flags |= WNOHANG;
...
if (block == 1 && queue_sigchld == 0 && (waitpid_flags & WNOHANG) == 0)
{
internal_warning (_("waitchld: turning on WNOHANG to avoid indefinite block"));
waitpid_flags |= WNOHANG;
}
pid = WAITPID (-1, &status, waitpid_flags);
...
/* We have successfully recorded the useful information about this process
that has just changed state. If we notify asynchronously, and the job
that this process belongs to is no longer running, then notify the user
of that fact now. */
if (asynchronous_notification && interactive)
notify_of_job_status ();
return (children_exited);
}
The notify_of_job_status() function simply writes to the bash process' standard error stream.
Unfortunately I can't say much about whether setting tostop with stty should influence the shell that you do this in.
| How does bash's job control handle stopped or terminated background jobs? |
1,541,850,455,000 |
Possible Duplicate:
How can I disown a running process and associate it to a new screen shell?
The problem is that the process is not a job inside of my active shell (as I've logged in from remote because my X-system has crashed), meaning: I guess disown, nohup, CTRL+Z & bg, screen et al. will not work.
Motivation:
I started a long running job inside of a gnome-terminal. I want to detach it from the its parent, as I the X-system has crashed. I fear when I kill the X-system, also the Gnome Desktop + Gnome-Terminal + my precious script will be lost. So I still can login from remote and want to save the script process and restart X. How do I do that?
|
Depending on the scripting language you use to run the job you could use
setpgrp()
Perl: setpgrp PID, PGRP
to detach the running process from the controlling terminal, so once it starts the controlling terminal can exit without harming the running process.
Now from what you are describing you will have the controlling terminal and shell by running Gnome Terminal and starting your job from there so nohup should work just fine for you.
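If the job is not a script you can modify, a similar detachment can be had from the command line with the setsid(1) utility. This is an assumption on my part: setsid here is the util-linux tool, present on most Linux systems, and is not part of the answer above:

```sh
log=$(mktemp)
# Run the command in a new session, with no controlling terminal;
# the redirections keep its stdio away from the (doomed) tty.
setsid sh -c 'echo "detached, running as pid $$"' </dev/null >"$log" 2>&1
sleep 0.2      # setsid may fork; give the child a moment to write
cat "$log"
rm -f "$log"
```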
| How do I detach a process from its parent? [duplicate] |
1,484,929,944,000 |
Job control is probably my favorite thing about Linux. I find myself, often, starting a computationally demanding process that basically renders a computer unusable for up to days at a time, and being thankful that there is always CTRL-Z and fg, in case I need to use that particular computer during that particular time period.
Sometimes, however, I want to insert a job onto the job stack. I've never figured out how to do that.
I'm using bash.
It would probably look something like:
$ ./reallybigjob
^Z
[1]+ Stopped reallybigjob
$ (stuff I don't know) && ./otherbigjob
$ fg
|
There is no job stack, each job is handled independently.
You can do fg; otherbigjob. That will put reallybigjob back to the foreground, then when the fg command stops run otherbigjob. This isn't the same thing as queuing otherbigjob for execution after the first job: if you press Ctrl+Z then otherbigjob starts immediately. If you press Ctrl+C then reallybigjob is killed. You can't leave otherbigjob in the background and queue another job after it.
If the jobs are CPU-intensive, then the batch utility can let you schedule the next job when the CPU isn't busy.
| Basic job control: stop a job, add a job onto the stack, and `fg` |
1,484,929,944,000 |
I have a process that is blocking in nature. It is executed first. In order to execute the second process, I moved the first process to the background and executed the second process. Using the wait statement, I am waiting in the terminal. However, it seems that upon exiting the shell (pressing CTRL+C), the first process was not exited smoothly. Below is the shell script:
execute.sh
#!/bin/sh
# start process one in background
python3 -m http.server 8086 &
# start second process
firefox "http://localhost:8086/index.html"
wait
I found a similar question here but it seems not working properly. Basically, when I call ./execute.sh a second time, the server says "Address already in use". It means the server could not exit peacefully. On the other hand, if I execute the sever manually in a terminal, it exits smoothly.
|
You could also trap for an interrupt to ensure the process is killed if ctrl+c is pressed
#!/bin/sh
trap ctrl_c INT
ctrl_c () {
kill "$bpid"
exit
}
# start process one in background
python3 -m http.server 8086 &
bpid=$!
# start second process
firefox "http://localhost:8086/index.html"
wait
| Call other process after executing a blocking process in shell |
1,484,929,944,000 |
I have a Bash script of a thousand lines each containing an ffmpeg command. I start this with source script and it runs just fine.
However, when I try to take control of this script in various ways, things go completely awry:
If I do Ctrl + Z to pause the whole thing, only the current ffmpeg command is paused, and the next one is started! Not what I wanted!
If I do Ctrl + C to stop everything, the script jumps to the next ffmpeg command, and I have to press once for every line in the script to
finally stop everything. Sheer hell.
I tried using ps -ef from another shell to locate the source command to pause/kill it from there, but it does not exist in the list.
So how can I pause/stop the parent script the way I wish? Or possibly, how can I execute the script in a different way to begin with that gives me the proper control over it?
|
Try running the script as a script instead of sourcing it:
$ bash <scriptname>
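If you additionally want a single Ctrl+C to reliably abort the remaining lines, a hedged option is to trap SIGINT inside the script itself; the loop body below is a stand-in for the ffmpeg lines:

```sh
#!/bin/bash
trap 'echo "interrupted, stopping the batch"; exit 130' INT

for f in clip1 clip2 clip3; do
    sleep 1    # stand-in for: ffmpeg -i "$f" ...
done
```

With the trap in place, the first SIGINT ends the whole batch instead of merely the current command.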
| Job control over a Bash script |
1,484,929,944,000 |
Here https://unix.stackexchange.com/a/104825/109539 it is said that to stop a background process, kill + PID must be used. However, I can't stop a background process using kill + PID, only kill + job ID.
[KPE@home Temp]$ jobs
[KPE@home Temp]$ ps
PID TTY TIME CMD
13270 pts/0 00:00:00 bash
23257 pts/0 00:00:00 ps
[KPE@home Temp]$ mc &
[1] 23258
[KPE@home Temp]$ ps
PID TTY TIME CMD
13270 pts/0 00:00:00 bash
23258 pts/0 00:00:00 bash
23262 pts/0 00:00:00 mc
23264 pts/0 00:00:00 ps
[1]+ Stopped . /usr/libexec/mc/mc-wrapper.sh
[KPE@home Temp]$ kill -s 15 23262
[KPE@home Temp]$ ps
PID TTY TIME CMD
13270 pts/0 00:00:00 bash
23258 pts/0 00:00:00 bash
23262 pts/0 00:00:00 mc
23266 pts/0 00:00:00 ps
[KPE@home Temp]$ kill %1
[1]+ Terminated . /usr/libexec/mc/mc-wrapper.sh
[KPE@home Temp]$ ps
PID TTY TIME CMD
13270 pts/0 00:00:00 bash
23267 pts/0 00:00:00 ps
[KPE@home Temp]$
So the question - how to stop bg process via kill by pid
|
If you Ctrl+Z the mc program (sending it SIGTSTP) or otherwise suspend it with SIGSTOP (it will show as "Stopped" in the shell), it won't be immediately receptive to any further signals (other than SIGKILL, but using that one isn't very nice) until it is resumed. Once you resume it with SIGCONT, it will accept your SIGTERM signal (and the other signals that queued up while it was suspended).
kill -CONT $!; kill -TERM $! # $! refers to the pid of the last-spawned job
kill %1 works because the shell's built-in kill probably does these two steps under the hood.
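The whole sequence can be reproduced with a throwaway process. The SIGTERM sent while the process is stopped merely stays pending and only takes effect after SIGCONT:

```sh
sleep 30 & pid=$!
kill -STOP "$pid"      # suspend it, as Ctrl+Z would via SIGTSTP
kill -TERM "$pid"      # queued: a stopped process doesn't act on it yet
kill -0 "$pid" && echo "still present while stopped"
kill -CONT "$pid"      # resume; the pending SIGTERM now terminates it
wait "$pid"
echo "wait status: $?" # 128 + 15 = 143, i.e. killed by SIGTERM
```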
| Linux stop background process via kill + PID |
1,484,929,944,000 |
I need to remotely install a program on a Linux computer. I do:
./configure
make
make install
However I seem to get issues when I run ./configure (it's a separate problem) where the configuration screen essentially freezes; it doesn't move past a certain check. I need to stop the configuration so I do Ctrl+z, and that lets me use the terminal again.
However, it seems to me the process does not stop. I see the config.log file continue to grow (to 40+ MB already). This is a problem, since the process keeps running and creating a log file whose final size I can't predict.
I need to reboot the computer in order to stop the configure script now working in the background. I can't find its PID when I use the top command to view the processes.
How can I stop ./configure script through the terminal successfully?
|
Control+Z suspends the foreground process (by sending it SIGTSTP), which returns you to your shell. From the shell, the bg command resumes the suspended process in the background, while the fg command brings it back to the foreground. Try Control+C, which sends SIGINT, killing the process. Some software reacts to SIGINT in other ways, such as cleaning up before exiting.
| How to stop ./configure script? |
1,484,929,944,000 |
I went through the top answer to this question: Difference between nohup, disown and &, and I read specfically the following:
With disown the job is still connected to the terminal, so if the
terminal is destroyed (which can happen if it was a pty, like those
created by xterm or SSH, and the controlling program is
terminated, by closing the xterm or terminating the SSH
connection), the program will fail as soon as it tries to read from
standard input or write to the standard output.
It looks like disown cannot prevent a job from being killed if the terminal closes. nohup can, but it can only be used when the job starts.
Assuming that I understood this correctly, what can I do to make sure that a job that I started without nohup does not get killed when I close my terminal, terminate my SSH connection?
Would issuing setopt NO_HUP from the command line do it? And if so, wouldn't that affect all running jobs that I started from the same terminal?
|
For a Standard Shell (bash) (POSIX.1)
Start it with &, make it read from something other than the default stdin (== /dev/tty == /dev/stdin == /dev/fd/0), make it write to something other than the default stdout (== /dev/tty == /dev/stdout == /dev/fd/1) (same for stderr), and make sure the job isn't and doesn't get suspended (= stopped). If it must get stopped or must read from the terminal and is to continue after the job's terminal hangs up, make sure the processes in the job have a handler (trap) for the SIGHUP signal. If it must write to the terminal and is to survive a terminal hangup, make sure the processes that write to the terminal have a handler for SIGPIPE.
Explanation:
Background processes get sent SIGTTIN the moment they try to read from the terminal. The default disposition for SIGTTIN is to stop (=suspend) the process. A terminal hangup will cause the first generation of processes of your background jobs to get reparented to init, causing the process group of the first generation of your job's processes to become an orphaned process group. (Orphaned process groups are those where none of the members have a parent in a different process group but in the same session.) The system will send all stopped (suspended) orphaned process groups SIGHUP followed by SIGCONT (to make sure they get the SIGHUP) on a terminal hangup because by definition, none of the processes in that process group can be awoken by a parent from the same session. Consequently those processes must be awoken by the system, but at the same time in a way that signals that process that it got awoken due to the terminal hanging up rather than due to normal operation. SIGHUP is the mechanism that does that and the default disposition for SIGHUP is to abort.
Hanging up a terminal will also cause subsequent writes to that terminal to raise SIGPIPE, which is also deadly if unhandled.
TL;DR
If your orphaned process groups aren't suspended (via SIGTTIN or ^Z or otherwise), they don't have to be afraid of the SIGHUP signal; and if they read from and write to files rather than the terminal on stdin, stdout and stderr, then they don't have to be afraid of SIGPIPE.
If you're running on top of a terminal multiplexer rather than a raw terminal, you don't have to be afraid of either SIGHUP or SIGPIPE.
For zsh
I played with zsh and it turns out that while bash does behave like I described above (the described behavior should be POSIX.1-conforming), zsh sends SIGHUP even to running background jobs.
setopt NO_HUP makes it (or the system; not sure here) only send SIGHUP in situations described above.
For testing this, you can try running something like:
( rm -f hup; trap 'echo HUP > hup' HUP; sleep 100)
in various ways (background, foreground, stopped) and disconnecting with it. Then you can check whether the hup file was created, which would mean the job did get a SIGHUP.
| Preventing terminal disconnection from killing a running job in zsh [duplicate] |
1,484,929,944,000 |
I'm currently working on an audit-script for a huge platform. In the main script we use traps, and in one of the traps I ask the user to clean up the files. The script produces no output on standard output, so running it in the background is the obvious choice.
When sending a SIGQUIT to the bg-job it stops, and I have to put it manually in the foreground with fg to get the prompt.
What I tried:
I played around with set -m to active job control and put an fg into my trap-function.
With set -m my shell closes after the script finishes; without it I get debug output saying that my script doesn't have job control. Even with set -m, the job doesn't come to the foreground.
For this, my questions are:
Is it possible to force a job to come into foreground at some point?
I know this is not the common way to use job-control inside a script. What is "best practise" for this?
Is job-control inside a script only for sub-shells/child-processes or can I use it to control the job I started?
Edit:
as lcd047 suggested it is much more elegant to use screen or tmux, to keep the scripts clean and simple.
|
Is job-control inside a script only for sub-shells/child-processes or can I use it to control the job I started?
Yes. It's the parent shell that does the job control, you can't put the child process to foreground from within itself.
Edit: You can however still do it like this:
Child script:
#! /bin/sh
...
trap "kill -s USR1 $PPID" TTOU
...
echo -n Cleanup?
read yn </dev/tty
...
Parent script:
#! /bin/sh
...
trap "fg %1" USR1
...
child &
...
wait
...
This installs a signal handler for SIGTTOU in the child, and another
signal handler for SIGUSR1 in the parent. When the child tries to output something to the terminal it receives a SIGTTOU. It then sends a SIGUSR1 to the parent, which in turn runs fg %1 and puts the child to foreground.
The above assumes %1 to be the child process. In practice you probably have a single process in background anyway.
| Force a job to come in foreground when asked for user input |
1,484,929,944,000 |
If I have the following jobs running in a shell
-> % jobs -l
[1] 83664 suspended nvim
[2] 84330 suspended python
[3] 84344 suspended python
[4] 84376 suspended nvim
[5] - 84701 suspended python
[6] + 84715 suspended python
How can I return to the nth job? Suppose I want to return to job 4, or job 1; how can I do that without having to kill all those which are before it?
|
To return to job 4, run:
fg %4
The command fg tells the shell to move job %4 to the foreground. For more information, run help fg at the command prompt.
When you want to suspend the job you are working on, press Ctrl+Z; to resume a suspended job in the background, run bg. For more information, run help bg at the command prompt.
For more detail than you'd likely want to know, see the section in man bash entitled JOB CONTROL.
| Return to a particular job in the jobs list |
1,484,929,944,000 |
Does the following way
$ (sleep 123 &)
$ jobs
$
remove the process group of sleep 123 from bash's job control? What is the difference between the above way and disown then?
Note that the sleep 123 process is still in the same process group led by the now disappearing subshell, and in the same process session as the interactive shell, so share the same controlling terminal.
Does not being in the shell's job control explain that the sleep 123 process will not receive any signal (including SIGHUP) sent from the bash process?
Thanks.
|
Yes, it removes it, so to speak.
When you run a (...) subshell from an interactive bash shell, a new process group (job) is created, which becomes the foreground process group on the terminal. In the case where the subshell contains a command terminated by & (e.g. sleep 3600 &), that command is started in that very same foreground process group, with SIGINT and SIGQUIT ignored and its input redirected from /dev/null. See here for some links to the standard.
When the (...) subshell exits, that foreground job is removed from the shell's jobs table, and the sleep 3600 command will continue to run outside of the control of the shell.
This is quite different from the case where (sleep 3600 &) is run from a non-interactive script with no job control, where everything (the main shell, its (...) subshell, and any "background" commands (foo &), inside or outside the (...) subshell) is run in the same process group.
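The jobs-table part of this can be checked even from a script (a non-interactive shell, so the process-group details described above differ, but the bookkeeping is the same):

```sh
#!/bin/bash
(sleep 1 &)      # started inside a subshell that exits at once
[ -z "$(jobs)" ] && echo "the subshell's sleep is not in the jobs table"

sleep 1 &        # a plain background job, for contrast
jobs             # this one is listed
wait
```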
| Does ` (sleep 123 &)` remove the process group from bash's job control? |
1,484,929,944,000 |
So I've got an SSH reverse tunnel open, and I'm using tail to pipe the output of the sshd log into awk to detect certain login events and trigger an action. It looks like this:
ssh -NR 2222:127.0.0.1:22 server &
tail -fn 0 /var/log/auth.log | \
awk '/Invalid user [a-z]+ from 127.0.0.1/
{system("rsync -a source dest")}'
(To be clear, I'm initiating these failed logins from the server myself, on purpose, as a way to trigger rsync on the client machine, as suggested to me in this thread.)
What I'd like to do now is be able to suspend the whole detection process, so that I can make it ignore a given login attempt. My thinking is to do one of three things:
Prevent ssh from producing the "Invalid user" message,
Prevent tail from outputting it, or
Prevent awk from seeing it.
I've tried suspending and then resuming all three processes, and here's what happens:
ssh: While the tunnel is suspended, the server waits indefinitely when trying to connect back to the client. If I specify a ConnectionTimeout option when connecting from the server, I can make the connection fail and get it to produce a different error message – success! – but I feel like this approach is asking for trouble with race conditions.
tail & awk: Input is accumulated while these programs are suspended, rather than ignored. Output is merely suppressed until the process is resumed.
Is there any way to accomplish what I'm after?
|
You could suspend only awk and flush the pipe it's reading from under its feet before resuming it. On Linux-based systems and with GNU dd:
Where $pid is the PID of awk:
kill -s STOP "$pid"
to stop it and
dd iflag=nonblock if="/proc/$pid/fd/0" > /dev/null; kill -s CONT "$pid"
to resume.
That's assuming the pipe did not get full (64KiB by default on Linux), as otherwise, tail could have been blocked as well. To work around that you could flush the pipe with:
socat -T0.2 -u "/proc/$pid/fd/0" /dev/null
That would flush what's currently in the pipe and keep reading until there's no more coming for 0.2 seconds to allow tail to catch up with what it hasn't read from the file yet if any.
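The non-blocking drain can be tried out in isolation on an ordinary pipe; here a fifo stands in for awk's stdin, and GNU dd is assumed for iflag=nonblock:

```sh
dir=$(mktemp -d)
mkfifo "$dir/pipe"
exec 3<>"$dir/pipe"            # hold both ends of the pipe open
printf 'buffered data\n' >&3   # fill it, as tail -f would

# Copy out whatever is in the pipe right now, without blocking for more.
# dd stops with EAGAIN once the pipe is empty; its non-zero exit status
# is expected here.
dd iflag=nonblock <&3 2>/dev/null

exec 3<&-
rm -r "$dir"
```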
| Force a process to ignore/discard accumulated input while suspended? |
1,484,929,944,000 |
I went through the Jobs & Signals documentation in Zsh, but some things aren't still clear to me. It says:
If the MONITOR option is set, an interactive shell associates a job with each pipeline.
What exactly is a pipeline and what is the relationship between a pipeline, a job and a process? Is MONITOR enabled by default?
What type of operations can one do on jobs with Zsh?
|
If you type something like ls -l | grep foo, your shell will start two processes (ls and grep). Because of the pipe (|), it will also connect them into one pipeline. An interactive shell will also provide job control. This means you can do things like pausing a job or putting it in the background.
Typing sleep 10& will run a process, the shell will also assign it a job and put that job in background. You can type jobs to see that job running. Also have a look at fg and bg as a way to put a job to foreground or background.
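For instance (shown as a script for reproducibility; an interactive zsh behaves the same and additionally lets you fg/bg the job):

```sh
sleep 2 | sleep 2 &   # two processes connected by a pipe: one job
jobs                  # a single entry of the form: ... sleep 2 | sleep 2
wait
```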
The Monitor option is exactly the one enabling job control. The documentation says:
MONITOR (-m, ksh: -m)
Allow job control. Set by default in interactive shells.
| Pipelines, Jobs and Processes in Zsh |
1,484,929,944,000 |
I'm running an ant (Java build tool) script on CentOS 5.5 that execs another java process. When I run the ant script in the background:
ant -f myfile.xml &> foo.out &
The forked process's state changes to stopped and it waits for input. As soon as I bring the process to the foreground, it starts again (no input required on my part).
This does not occur on other machines running the same OS, CentOS 5.5.
|
I found the answer. A little googling brought up this page:
http://ant.apache.org/manual/running.html#background
Looks like ant immediately tries to read from standard input, which causes the background process to suspend
| ant script stops, waiting for input when run in background |
1,484,929,944,000 |
I'm writing a shell wrapper script that is supposed to act as a pager (receiving input on stdin and performing output on a tty).
This wrapper prepares environment and command lines, then launches two processes in a pipeline: first one is a non-interactive filter and the last one is the actual pager that needs to control a tty (or, rather, it spawns the actual pager itself):
#!/bin/bash
...
col -bx | bat "${bat_args[@]}" --paging=always
The resulting process tree is thus:
<parent>
\- wrapper.sh
|- col -bx (dies)
\- bat ...
\- less ...
The col -bx process exits after filtering the input and is reaped by the shell.
Is it possible to get rid of the shell process, such that it won't hang around for as long as the pager is running?
I thought of a workaround by using process substitution:
exec bat "${bat_args[@]}" --paging=always < <(col -bx)
However, the col -bx process is not reaped by the shell and remains in the Z state. Is there a "right" way to write this wrapper?
|
You can also do:
#!/bin/zsh -
col -bx | exec bat "${bat_args[@]}" --paging=always
Or:
#!/bin/ksh -
col -bx | exec bat "${bat_args[@]}" --paging=always
(not with the pdksh-derived implementations of ksh)
#!/bin/bash -
shopt -s lastpipe
col -bx | exec bat "${bat_args[@]}" --paging=always
In the end functionally equivalent to what you do with a redirection from a process substitution but skip that extra fd shuffling and named pipe creation (or /dev/fd/x opening on systems that have them (most)).
In any case, to reap a process, you need to wait for it. With ksh/zsh/bash -O lastpipe and those approaches above, col will be a child of the process that eventually executes bat, so bat will get a SIGCHLD when col dies, what it does with it is up to it. If it doesn't handle that SIGCHLD (few applications do for processes they've not spawned themselves), col will show as a zombie until bat terminates, after which the col process will be re-parented to the child sub-reaper or init and taken care of there.
In any case, note that in A | B, ksh doesn't wait for A either unless you set the pipefail option.
Now, per POSIX under XSI option, and that applies also on Linux,
If the parent process of the calling process has set its SA_NOCLDWAIT flag or has set the action for the SIGCHLD signal to SIG_IGN:
The process' status information (see Status Information), if any, shall be discarded.
The lifetime of the calling process shall end immediately. If SA_NOCLDWAIT is set, it is implementation-defined whether a SIGCHLD signal is sent to the parent process.
If a thread in the parent process of the calling process is blocked in wait(), waitpid(), or waitid(), and the parent process has no remaining child processes in the set of waited-for children, the wait(), waitid(), or waitpid() function shall fail and set errno to [ECHILD].
Otherwise:
Status information (see Status Information) shall be generated.
The calling process shall be transformed into a zombie process. Its status information shall be made available to the parent process until the process' lifetime ends.
[...]
(emphasis mine).
Now, not all shells will let you ignore SIGCHLD, it being so essential to its functioning (the whole point of shells is to run commands and report their exit status), but it looks like current versions of bash (5.2.21 in my test) at least do, so in:
#!/bin/bash -
shopt -s lastpipe
trap '' CHLD
col -bx | exec bat "${bat_args[@]}" --paging=always
The process running col would never become a zombie as long as bat doesn't un-ignore SIGCHLD before col exits.
In any case, zombies are not necessarily a bad thing. The zombie stage is part of the life a process, most processes go through there between the time they die and their parent (or init or child sub-reaper) acknowledge it.
| Is there a way to `exec` a pipeline or exit shell after launching a pipeline? |
1,484,929,944,000 |
Until recently I was pretty satisfied with how the fg and jobs command worked in my zsh, i.e.:
just fg -> foreground the most recently backgrounded job again
jobs -> display command name (incl. args) and perhaps even PID (don't recall)
After the latest Fedora 33 updates these zsh behaviors changed in a (for me) pretty annoying way:
fg now foregrounds the job with the lowest job id (i.e. not the most recently backgrounded job)
'jobs' output is much less verbose, e.g.:
[3] + suspended (signal) mutt
[4] - suspended
where job 4 is a vim session ...
So I presume that some zsh defaults changed. Thus my question: How do I configure the more useful behaviors for fg and jobs back?
(That means how do I get jobs to always display command names with arguments and pid and fg to foreground the most recently backgrounded job?)
(I'm currently at zsh-5.8-3.fc33.x86_64.)
Edit 1: A sample session:
$ zsh
~ $ man man
zsh: suspended man man
~ $ vim blah
zsh: suspended
~ $ jobs
[1] - suspended man man
[2] + suspended
~ $ fg
[2] - continued
zsh: suspended
zsh: suspended
~ $ jobs
[1] + suspended man man
[2] - suspended
Note that I suspended the foreground jobs via Ctrl-Z each time. Look for the + marker in the jobs output. What also surprises me is that I get 2 zsh: suspended lines after suspending vim for the second time. Looks like the suspend signal is delivered to the already suspended man process, again?
Edit 2: The job-control issue only appears if one job is vim. Thus, some details on how vim is invoked:
$ which vim
vim=__vi_internal_vim_alias
$ alias vim
vim=__vi_internal_vim_alias
$ typeset -f __vi_internal_vim_alias
__vi_internal_vim_alias () {
(
test -f /usr/bin/vim && exec /usr/bin/vim "$@"
test -f /usr/bin/vi && exec /usr/bin/vi "$@"
)
}
Ok, these definitions don't come from my profile. It appears they come from a system change. If I invoke vim as \vim the job control issues don't appear anymore. Looks like the sub-shell messes with zsh's command line string creation and other things. See also other related reports.
So where does this come from:
$ cd /etc
$ grep -r __vi_internal_vim_alias . -r 2>/dev/null
./profile.d/vim.sh:__vi_internal_vim_alias()
./profile.d/vim.sh: alias vi=__vi_internal_vim_alias
./profile.d/vim.sh: alias vim=__vi_internal_vim_alias
./profile.d/vi.sh:__vi_internal_vim_alias()
./profile.d/vi.sh: alias vi=__vi_internal_vim_alias
./profile.d/vi.sh: alias vim=__vi_internal_vim_alias
$ rpm -qf ./profile.d/vim.sh
vim-enhanced-8.2.2146-2.fc33.x86_64
$ rpm -qf ./profile.d/vi.sh
vim-minimal-8.2.2146-2.fc33.x86_64
|
It's caused by Fedora having started defining system-wide vim aliases that start vim in a sub-shell.
Since this seems to break stuff left and right those aliases are now being rolled back: https://bugzilla.redhat.com/show_bug.cgi?id=1918575
| Configure zsh jobs output and fg behavior |
1,484,929,944,000 |
I have this Bash script named as s in current directory:
#!/bin/bash
pipe_test() {
( set -m; (
$1
); set +m ) |
(
$2
)
}
pipe_test "$1" "$2"
If I call e.g.
./s yes less
the script gets stopped. (Similar thing happens if I use any other pager I tried instead of less, i.e. more and most.) I can continue it by fg builtin, though.
I want to have job control (enabled by set -m) for the subshell to have a distinct process group ID for the processes of the subshell.
Information about my system:
$ bashbug
...
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -g -O2 -fdebug-prefix-map=/build/bash-cP61jF/bash-5.0=. -fstack-protector-strong -Wformat -Werror=format->
uname output: Linux jarnos-OptiPlex-745 5.4.0-29-generic #33-Ubuntu SMP Wed Apr 29 14:32:27 UTC 2020 x86_64 x86_64 x86_64 GNU>
Machine Type: x86_64-pc-linux-gnu
Bash Version: 5.0
Patch Level: 16
Release Status: release
$ less --version
less version: 551
|
The reason why that happens is because enabling job control (set -m) brings along not just process grouping, but also the machinery for handling "foreground" and "background" jobs. This "machinery" implies that each command run in turn while job control is enabled becomes the foreground process group.
Therefore, in short, when that sub-shell (the left part of your pipeline) enables job control it literally steals the terminal from the entire pipeline, which had it until then and which, in your example, includes the less process, thus making it become background and, as such, not allowed to use the terminal any more. It therefore gets stopped because less does keep accessing the terminal.
By issuing fg you give the terminal back to the entire pipeline, hence to less, and all ends well. Unless you run additional commands within the job-controlling sub-shell, because in such case each additional command would steal the terminal again.
One way around it is to simply run your job-controlled sub-sub-shell in background:
( set -m; (
$1
) & set +m ) |
(
$2
)
You will have the command expressed by $1 run in its distinct process group as you wish, while the backgrounded mode prevents stealing the terminal, thus leaving it to the pipeline and hence to $2.
Naturally this requires that the command in $1 does not want to read the terminal itself, otherwise it will be the one to get stopped as soon as it attempts to do it.
Also, as I said above, any additional job-controlled sub-sub-shell you might like to add would require the same "backgrounding" treatment, all along until you set +m; otherwise each additional job-controlled sub-sub-shell would steal the terminal again.
That said, if all you need process grouping for is to kill processes, you might consider using pkill to target them. For instance pkill -P will send a signal to the processes whose parent is the indicated PID. This way you can target all children (but not grand-children) of your sub-process by just knowing the sub-process's PID.
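As a sketch of that pkill -P approach (the sleeps stand in for whatever your sub-process spawns):

```sh
( sleep 100 & sleep 200 & wait ) &   # a sub-process with two children
sub=$!
sleep 0.5                 # give it a moment to fork the children
pkill -TERM -P "$sub"     # signals the sleeps: direct children of $sub only
```

Note that grand-children are not matched; you would need to walk the tree (or use process groups after all) to reach them.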
| less stops my script; why is that and how to avoid? |
1,484,929,944,000 |
I have the following code in my ~/.zshrc:
nv() (
if vim --serverlist | grep -q VIM; then
if [[ $# -eq 0 ]]; then
vim
elif [[ $1 == -b ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo setl binary ft=xxd<cr>"
vim --remote-send ":argdo %!xxd<cr><cr>"
elif [[ $1 == -d ]]; then
shift 1
IFS=' '
vim --remote-send ":tabnew<cr>"
vim --remote "$@"
vim --remote-send ":argdo vsplit<cr>:q<cr>"
vim --remote-send ":windo diffthis<cr>"
elif [[ $1 == -o ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo split<cr>:q<cr><cr>"
elif [[ $1 == -O ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo vsplit<cr>:q<cr><cr>"
elif [[ $1 == -p ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo tabedit<cr>:q<cr>"
elif [[ $1 == -q ]]; then
shift 1
IFS=' '
vim --remote-send ":cexpr system('$*')<cr>"
else
vim --remote "$@"
fi
else
vim -w /tmp/.vimkeys --servername VIM "$@"
fi
)
Its purpose is to install a nv function to start a Vim instance as well as a Vim server.
And if a Vim server is already running, the function should send the file arguments it received to the server.
So far, it worked well.
I have the following mapping in my ~/.vimrc:
nno <silent><unique> <space>R :<c-u>sil call <sid>vim_quit_reload()<cr>
fu! s:vim_quit_reload() abort
sil! update
call system('kill -USR1 $(ps -p $(ps -p $$ -o ppid=) -o ppid=)')
qa!
endfu
Its purpose is to restart Vim, by sending the signal USR1 to the parent shell.
I also have the following trap in my ~/.zshrc which restarts Vim when it catches the signal USR1.
catch_signal_usr1() {
trap catch_signal_usr1 USR1
clear
vim
}
trap catch_signal_usr1 USR1
So far, it worked well too.
But I have noticed that if I suspend Vim by pressing C-z, from the shell, even though the Vim process is still running, I can't resume it (with $ fg) because the shell doesn't have any job.
Here's a minimal zshrc with which I'm able to reproduce the issue:
catch_signal_usr1() {
trap catch_signal_usr1 USR1
vim
}
trap catch_signal_usr1 USR1
func() {
vim
}
And here's a minimal vimrc:
nnoremap <space>R :call Func()<cr>
function! Func()
call system('kill -USR1 $(ps -p $(ps -p $$ -o ppid=) -o ppid=)')
qa!
endfunction
If I start Vim with the function:
$ func
Then, restart Vim by pressing Space R, then suspend it by pressing C-z, once I'm back in the shell, I can see the Vim process running:
$ ps -efH | grep vim
user 24803 24684 10 03:56 pts/9 00:00:01 vim
user 24990 24684 0 03:56 pts/9 00:00:00 grep vim
But I can't resume it:
$ fg
fg: no current job
If I start Vim with the $ vim command instead of the $ func function, I can restart the Vim process, suspend it and resume it. The issue seems to come from the function $ func.
Here's my environment:
vim --version: VIM - Vi IMproved 8.1 Compiled by user
Operating system: Ubuntu 16.04.4 LTS
Terminal emulator: rxvt-unicode v9.22
Terminal multiplexer: tmux 2.7
$TERM: tmux-256color
Shell: zsh 5.5.1
How to start Vim from a function and still be able to resume it after suspending it?
Edit:
More information:
(1) What shows up on your terminal when you type Ctrl+Z?
Nothing is displayed when I type C-z.
(A) If I start Vim with the $ vim command here's what is displayed after pressing C-z:
ubuntu% vim
zsh: suspended vim
I can resume with $ fg.
(B) If I start Vim with the $ func function:
ubuntu% func
zsh: suspended func
I can also resume with $ fg.
(C) If I start Vim with the $ vim command, then restart Vim by pressing Space R:
ubuntu% vim
zsh: suspended catch_signal_usr1
Again, I can resume with $ fg.
(D) But, if I start Vim with the $ func function and restart it by pressing Space R:
ubuntu% func
ubuntu%
Nothing is displayed when I'm back at the prompt, and I can't resume Vim with $ fg.
(2) What does your shell say if you type jobs?
$ jobs has no output. Here's its output in the four previous cases:
(A)
ubuntu% jobs
[1] + suspended vim
(B)
ubuntu% jobs
[1] + suspended (signal) func
(C)
ubuntu% jobs
[1] + suspended (signal) catch_signal_usr1
(D)
ubuntu% jobs
ubuntu%
It seems the issue is specific to zsh at least up to 5.5.1, as I can't reproduce with bash 4.4.
|
The problem is starting a background job from a trap. The job seems to get “lost” sometimes. Changing vim to vim & makes the job be retained sometimes, so there may be a race condition.
You could avoid this by not starting the job from a trap. Set a flag in the trap, and fire up vim outside the trap, in the precmd hook. Here's an adaptation of your minimum example.
restart_vim=
catch_signal_usr1() {
trap catch_signal_usr1 USR1
restart_vim=1
}
precmd () {
if [[ -n $restart_vim ]]; then
restart_vim=
vim
fi
}
trap catch_signal_usr1 USR1
func() {
vim
}
You lose the ability of popping Vim up to the foreground while editing a command prompt, but that doesn't really work anyway since vim and zsh would be competing for the terminal.
In your real code, you may run into trouble because you're starting vim from a subshell. Don't run the nv function in a subshell: use braces { … } around the body, not parentheses. Use local IFS to make the IFS variable local.
| How to start Vim from a trap and still be able to resume it after suspending it? |
1,484,929,944,000 |
This is the output:
[USER@SERVER ~] ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.030 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.026 ms
64 bytes from localhost (127.0.0.1): icmp_seq=5 ttl=64 time=0.026 ms
^Z
[1]+ Stopped ping localhost
[USER@SERVER ~] jobs
[1]+ Stopped ping localhost
[USER@SERVER ~] bg %1
[1]+ ping localhost &
64 bytes from localhost (127.0.0.1): icmp_seq=6 ttl=64 time=0.034 ms
[USER@SERVER ~] 64 bytes from localhost (127.0.0.1): icmp_seq=7 ttl=64 time=0.030 ms
64 bytes from localhost (127.0.0.1): icmp_seq=8 ttl=64 time=0.032 ms
[USER@SERVER ~] ^C
[USER@SERVER ~] ^C
[USER@SERVER ~] 64 bytes from localhost (127.0.0.1): icmp_seq=9 ttl=64 time=0.031 ms
^C
[USER@SERVER ~] 64 bytes from localhost (127.0.0.1): icmp_seq=10 ttl=64 time=0.031 ms
64 bytes from localhost (127.0.0.1): icmp_seq=11 ttl=64 time=0.028 ms
ki64 bytes from localhost (127.0.0.1): icmp_seq=12 ttl=64 time=0.030 ms
ll %64 bytes from localhost (127.0.0.1): icmp_seq=13 ttl=64 time=0.031 ms
1
[1]+ Terminated ping localhost
[USER@SERVER ~]
That was the output of the following steps:
1) I start to ping localhost
2) CTRL+Z
3) bg %1
4) CTRL+C doesn't work.
5) I have to type "kill %1" to kill it..
What is the real-life use of the "bg" command? Where is it used in the real world?
|
You normally use bg to run programs that need no console interaction in the background, like most programs with a graphical user interface.
Example: You wanted to run xterm & but forgot the & to run the terminal emulator in the background. So you stop the (blocking) foreground xterm process with Ctrl-Z and continue it in the background with bg.
If you want to send Ctrl-C to a background process, put it first with fg in the foreground again (or use kill -2 %1).
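Under the hood, bg amounts to sending SIGCONT to a stopped job. A rough sketch with explicit signals (set -m enables job control so this also works in a script; sleep stands in for ping):

```sh
set -m            # enable job control (on by default in interactive shells)
sleep 60 &
kill -STOP %1     # the same stopped state as pressing Ctrl-Z in a terminal
kill -CONT %1     # what `bg %1` does under the hood: continue in background
kill -INT %1      # the fg-then-Ctrl-C effect, same as `kill -2 %1`
```

So bg is the remedy for "oops, I forgot the &": stop the foreground job with Ctrl-Z, then continue it in the background.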
| What is the real-world use of the bg command? |
1,484,929,944,000 |
There is a list of jobs running:
pdf is opened
image is opened
text file opened
It's all in fg/bg .
Is there any option to close the particular job forcefully? Also, I want to know if the jobs can be closed with the help of a command.
|
As long as the jobs were all started from your current shell: use 'jobs' to get a list of backgrounded jobs. Each will have a numeric identifier, starting from '1'. Then you can bring the job to the foreground with fg %1, send it to the background if it's paused with bg %1, or kill it with kill %1 (use the correct number for the job you're trying to kill, of course).
$ jobs
$ sleep 20 &
[1] 1770
$ sleep 30 &
[2] 1771
$ sleep 40 &
[3] 1773
$ jobs
[1] Running sleep 20 &
[2]- Running sleep 30 &
[3]+ Running sleep 40 &
$ kill %2
$ jobs
[1] Running sleep 20 &
[2]- Terminated sleep 30
[3]+ Running sleep 40 &
$ jobs
[1]- Running sleep 20 &
[3]+ Running sleep 40 &
$
| how to close the jobs one by one? is there any options? |
1,484,929,944,000 |
I have two shell-scripts, say client.sh and server.sh, which has to work simultaneously and give some useful output in watch-way.
And I am able to use only one terminal. So I should switch between them to see what's happening but without stopping them (Ctrl+Z).
I can't figure out how to do this.
When I run
./server.sh &
then type Enter in terminal (before launching the client.sh), but it shows
[1]+ Stopped ./server.sh
It there a handy way to switch between jobs in terminal to see their output(internal state) without stopping them?
|
Use screen:
$ screen -S my-job
This will start a new screen session named "my-job" and connect to it.
$ ./server.sh
This will start your server.sh script on the first (default) terminal attached to the screen Session. Now press Ctrl-A followed by Ctrl-C. This will Create a new terminal and switch to it. Now you can run:
$ ./client.sh
and observe its output. To switch back and forth, press Ctrl-A followed by Ctrl-A again.
To disconnect from screen while still leaving your programs running, press Ctrl-A followed by d to Detach. To reconnect and view your output again, use:
$ screen -x my-job
You can also view both scripts' output at once, while attached by doing the following:
Press Ctrl-A followed by s to Split your view.
Press Ctrl-A followed by Tab to move down to the lower split
Ctrl-A followed by " to open a list of active terminals, and select the 1st terminal (the 0th is connected to the top split by default).
| How to use one terminal with multiple interactive jobs without stopping them? |
1,484,929,944,000 |
Here is the scenario:
Let's say that I log into my server via ssh and start an emacs or vi (or whatever other program) session. Then my ssh connection disconnects.
Is there a way for me to reconnect to those programs when I log back into my server through a new ssh session? In other words, how can I "pick up" where I left off?
I am assuming that programs are not automatically stopped when the first ssh session drops out... are they?
I read somewhere that I can use screen or tmux. I am wondering if there is a simpler way; if not, please let me know.
Thanks
|
You can use screen
Suppose you have logged in using SSH, then simply run following command to create screen session called 'mysession'
screen -S mysession
in case your connection disconnected, then you can simply attach your session using:
screen -x mysession
Check this link for more information about screen
| How to resume a program when logged in with different session |
1,484,929,944,000 |
My question has nothing to do with WCE (wait and cooperative exit).
Assuming i have a script launched in an interactive shell (bash) as a foreground job:
#! /bin/bash
# script name: foreback.sh
sleep 100m & # child process in bg
sleep 200m & # ditto
sleep 300m & # ditto
sleep 7000m # child process in fg
exit 0
When this script runs, pressing Ctrl-C kills all foreground processes (the process for my script, i.e. the parent, and the 4th sleep child process), as expected.
My question is: how are those background children recognized as such when the foreground process group receives SIGINT?
Look at the following ps - output before sending the signal:
TT TPGID PPID PID PGID SESS STAT COMMAND
pts/0 9373 9259 9282 9282 9282 Ss | \_ /bin/bash
pts/0 9373 9282 9373 9373 9282 S+ | | \_ /bin/bash ./foreback.sh
pts/0 9373 9373 9374 9373 9282 S+ | | \_ sleep 100m
pts/0 9373 9373 9375 9373 9282 S+ | | \_ sleep 200m
pts/0 9373 9373 9376 9373 9282 S+ | | \_ sleep 300m
pts/0 9373 9373 9377 9373 9282 S+ | | \_ sleep 7000m
Parent and child processes seem to be some kind of "foreground complex", because they all belong to the same terminal process group (TPGID) that is the PID of the parent foreground process and the STAT column shows a plus sign.
When sending SIGINT to the foreground process group via Ctrl-C or sending SIGINT to the process group (PGID) via kill -INT -- -pgid, how does the shell know which processes to terminate and which to leave alive?
After Ctrl-C or the above mentioned processgroup kill, my ps - output looks like this:
TT TPGID PPID PID PGID SESS STAT COMMAND
pts/0 9282 9259 9282 9282 9282 Ss+ | \_ /bin/bash
pts/0 9282 1742 9374 9373 9282 S \_ sleep 100m
pts/0 9282 1742 9375 9373 9282 S \_ sleep 200m
pts/0 9282 1742 9376 9373 9282 S \_ sleep 300m
The three childs launched in background from parent remain alive, the STAT column indicates they are in background by the missing plus sign, and the terminal process group is now the group of the interactive shell. This is how it should be.
But I cannot see any "flag" that shows "Don't kill me, I'm a background process" at the time, when SIGINT is sent to the foreground processgroup.
I guess that the following is happening:
SIGINT is sent to the foreground process group
The first process that receives the signal and reacts upon it, is the process which PID equals the TPGID (the foreground process group leader).
When this process dies, the shell "remembers" which child processes were launched as background processes and changes their TPGID from the original one to the one of the interactive shell, so that they don't belong to the old foreground process group any longer. Remaining children that were not launched with & do not undergo this change of their TPGID, receive SIGINT and react upon it (terminate if not handled otherwise)
I have combed through a myriad of websites for weeks, but I cannot find a proper answer.
Any ideas?
Thanks
|
Non-interactive shells don't do job control by default, so all processes run by a script are in the same process group as the process that executed the shell. That process group will have been made the foreground process group of the terminal or not by the interactive shell starting the script depending on whether the script was started/put in foreground or not.
But when job control is disabled, POSIX shells have this requirement:
2.11. Signals and Error Handling
If job control is disabled (see the description of set -m) when the shell executes an asynchronous list, the commands in the list shall inherit from the shell a signal action of ignored (SIG_IGN) for the SIGINT and SIGQUIT signals. In all other cases, commands executed by the shell shall inherit the same signal actions as those inherited by the shell from its parent unless a signal action is modified by the trap special built-in (see trap)
So in effect it's a crude form of "job" control where asynchronous lists are immune to Ctrl + c and Ctrl + \.
On Linux:
$ sh -c 'sleep 10 & grep SigIgn "/proc/$!/status"'
SigIgn: 0000000000000006
$ kill -l INT QUIT
2
3
(SigIgn above is a bitmask, with 2nd and 3rd bit set in that case corresponding to SIGINT and SIGQUIT).
Note that when the terminal sends those ^C or ^\ characters upon pressing those key combinations, the shell is not involved in the delivery of those signals. It's the kernel (the tty driver) that sends the signal to the processes in the foreground process group of the terminal.
The TPGID is an attribute of the terminal device (in your case /dev/pts/0), not really the process. ps will always show the same value for all processes running in a session attached to a terminal. When putting a job (process group) in foreground, the interactive shell will do a tcsetpgrp(terminal_fd, its_pid) to tell the terminal driver: this is the process group that is in foreground for that terminal, anything else is in background.
The fact that the surviving sleep processes are in the background after ^C is just because the script terminated, so the interactive shell waiting for it will have changed the terminal's foreground process group back to its own.
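That SIG_IGN inheritance can also be observed without reading /proc (a sketch; bash -c gives a shell with job control disabled, just like a script):

```sh
# Job control is off in bash -c, so the & child gets SIG_IGN for SIGINT.
bash -c '
  sleep 60 &
  kill -INT "$!"   # delivered, but the child ignores it
  sleep 0.3
  kill -0 "$!"     # succeeds: the child is still alive
  kill -TERM "$!"  # TERM is not ignored, so this one terminates it
'
```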
| How does bash recognize background child processes launched by a foreground process (script) on receiving Ctrl-C |
1,484,929,944,000 |
I'm on OpenSUSE 12.1, so no tmux, and we're not allowed to install anything - wget is too old to download a binary as well. Often I and other users have to run long scripts that take several hours, and our SSH client will crash in the middle. I'm aware that this is a bad practice but my opinion isn't valued.
What's a good way to "schedule" or somehow run these long scripts without the danger of them ending if the client crashes? Cron jobs maybe?
|
One option would be screen, if it is available. (You mentioned tmux, but not screen)
Another option would be to run the script with "nohup" which will disassociate it from your shell. You would then need to use its pid to monitor it. Redirecting the output to files would also be recommended.
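A minimal sketch of the nohup approach (long-script.sh and the log/pid file names are placeholders, not anything standard):

```sh
# long-script.sh is a placeholder for the actual long-running script.
nohup ./long-script.sh > long-script.log 2>&1 &
echo "$!" > long-script.pid   # keep the PID around to monitor with ps/kill
```

If the SSH client later crashes, the process keeps running because SIGHUP is ignored; you can check on it with ps -p "$(cat long-script.pid)".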
| What's the best way to run a long script without the SSH client crashing? |
1,500,454,082,000 |
I was trying to demonstrate job control to a friend, when I ran into an unexpected behaviour. For certain commands, kill works with the job number, but not with the process ID.
An example of the behaviour I expected:
user@host:~$ sleep 1h &
[1] 4518
user@host:~$ sleep 2h &
[2] 4519
user@host:~$ kill %2
[2]+ Terminated sleep 2h
user@host:~$ kill 4518
[1]+ Terminated sleep 1h
In both cases, sleep is terminated. One by job number, and one by PID. Now I tried this originally with the command cat, and it didn't work out as expected:
user@host:~$ cat &
[1] 4521
user@host:~$ cat &
[2] 4522
[1]+ Stopped cat
user@host:~$ kill %2
[2]+ Terminated cat
user@host:~$ kill 4521
user@host:~$ jobs
[1]+ Stopped cat
user@host:~$ kill 4521
user@host:~$ jobs
[1]+ Stopped cat
user@host:~$ kill %1
[1]+ Terminated cat
So kill did not work on the PID of my process, but it worked with the job number. I don't think this should happen. Why is this the case?
I'm using Debian 9 with bash 4.4.12(1)-release.
EDIT: In the process of trying to solve this, I have become aware that the state of cat being "stopped" can make it unresponsive to the default signal SIGTERM. But if that is true, then the kill command should fail with both the process ID and the job number. Shouldn't it?
|
What the kill builtin really does in these circumstances is not documented in the Bourne Again shell's manual, but it is documented in the Z shell and Korn shell manuals:
Korn shell: If the signal being sent is TERM (terminate) or HUP (hangup), then the job or process will be sent a CONT (continue) signal if it is stopped.
Z shell: If the signal being sent is not KILL or CONT, then the job will be sent a CONT signal if it is stopped.
The Bourne Again shell manual should similarly read something like: If the signal being sent is TERM or HUP and a targeted process is part of a job, then the job will be sent a CONT signal if it is stopped or job control is not available in the current terminal.
Because that is what it actually does.
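The pending-signal behaviour behind this can be reproduced with explicit signals (a sketch using the external kill via env, so the shell builtin's extra SIGCONT help is bypassed; sleep stands in for the stopped cat):

```sh
sleep 60 &
pid=$!
env kill -STOP "$pid"   # external kill: no shell-builtin assistance
env kill -TERM "$pid"   # delivered, but stays pending while stopped
# ...the process is still alive here, in state T (stopped)...
env kill -CONT "$pid"   # on continuing, the pending TERM terminates it;
                        # this SIGCONT is what kill %1 adds for you
```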
| why does kill work differently on job numbers than PIDs? |
1,500,454,082,000 |
Which signal will be sent to the running process after sending the Ctrlc after 500ms of Ctrlz?
I have tried to give the Ctrlc after Ctrlz but I didn't get the exact answers for this.
|
Ctrl+C will send SIGINT to the foreground process group.
As you have suspended the process beforehand with Ctrl+Z, it is no longer in the terminal's foreground process group, so Ctrl+C won't give you the expected result.
| Result of combination of Ctrl+c and then Ctrl+z in shell |
1,500,454,082,000 |
As far as I've seen, pressing Ctrl-Z on any terminal multiplexer, or trying to start them in the background, does nothing or crashes.
I know that, in a sense, terminal multiplexers are a "replacement" for job control, and usually, they have their own mechanisms for suspending and resuming. Still, I was wondering if I could integrate them in some way into a workflow based on the shell's job control.
Answer:
Screen suspends with "C-a z"
Tmux suspends with "C-b C-z"
Zellij suspends with "C-o d", but unlike the previous ones, it doesn't place the process on the shell's job control.
|
Do you want to suspend a job inside a screen window?
Just use Ctrl-z inside the screen window (as usual). This doesn't suspend screen, though.
Do you want to suspend screen itself?
Use Ctrl-a z inside any screen window. But notice that although this suspends the user-facing part of the screen application, it doesn't suspend the applications being managed through screen. This is because screen is designed so that its user-facing part can be detached with Ctrl-a d, and the managed processes continue to run.
| Does any terminal multiplexer (screen, tmux, zellij) support job suspension (Ctrl-Z) in Bash? |
1,500,454,082,000 |
In an X session, I can follow these steps:
Open a terminal emulator (Xterm).
Bash reads .bashrc and becomes interactive. The command prompt
is waiting for commands.
Enter vim 'my|file*' '!another file&'.
Vim starts, with my|file* and !another file& to be edited.
Press CTRL-Z.
Vim becomes a suspended job, the Bash prompt is presented again.
I cannot figure out a script to carry out steps 1
and 2 without relinquishing step 3 (job-control). It would receive the files as
arguments:
script 'my|file*' '!another file&'
Can you please help me?
The script would be executed by a file-manager, the selected text files being supplied as arguments.
Do not worry, I am sane and usually do not name my files like that.
On the other hand, the script should not break if such
special chars (*!&|$<newline>...) happen to show up in the file names.
I used Vim just for a concrete example. Other programs which run interactively in the terminal and receive arguments would benefit from a solution.
My attempts/research
xterm -e vim "$@"
Obviously fails. That Xterm has no shell.
Run an interactive bash subshell with initial commands
without returning to the
super shell immediately
looked promising. The answers there explain how a different
file (instead of .bashrc) can be specified for Bash to source.
So I created this ~/.vimbashrc:
. ~/.bashrc
set -m
vim
and now, calling
xterm -e bash --init-file ~/.vimbashrc
results in a new terminal with Bash and a suspendable Vim.
But this way I cannot see how the files Vim should open could be specified.
|
I could think of a couple of methods, and I think the first one is less hackish on Bash mainly because it seems (to me) to have quirks that can be more easily handled. However, as that may also be a matter of taste, I'm covering both.
Method one
The "pre-quoting" way
It consists in making your script expand its $@ array, thus doing it on behalf of the inner interactive bash, and you might use the /dev/fd/X file-descriptor-on-filesystem facility (if available on your system) as parameter to the --init-file argument. Such file-descriptor might refer to a Here String where you would handle the filenames, like this:
xterm -e bash --init-file /dev/fd/3 3<<<". ~/.bashrc; set -m; vim $@"
One bonus point of this file-descriptor-on-filesystem trick is that you have a self-contained solution, as it does not depend on external helper files such as your .vimbashrc. This is especially handy here because the contents of the --init-file is dynamic due to the $@ expansion.
On the other hand it has a possible caveat wrt the file-descriptor's actual persistence from the script all the way through to the inner shell. This trick works fine as long as no intermediate process closes the file-descriptors it received from its parent. This is common behavior among many standard tools but should e.g. a sudo be in the middle with its default behavior of closing all received file-descriptors then this trick would not work and we would need to resort to a temporary file or to your original .vimbashrc file.
Anyway, using simply $@ as above would not work in case of filenames containing spaces or newlines because at the time the inner bash consumes the sequence of commands those filenames are not quoted and hence the spaces in filenames are interpreted as word separators as per standard behavior.
To address that we can inject one level of quoting, and on Bash's versions 4.4 and above it is only a matter of using the @Q Parameter Transformation syntax onto the $@ array, like this:
xterm -e bash --init-file /dev/fd/3 3<<<". ~/.bashrc; set -m; vim ${@@Q}"
On Bash's versions below 4.4 we can obtain the same by using its printf %q, like this (as Here Document for better readability, but would do the same as a Here String like above):
printf -v files ' %q' "$@"
xterm -e bash --init-file /dev/fd/3 3<<EOF
. ~/.bashrc
set -m
vim $files
EOF
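As a quick sanity check (not part of the method itself), both quoting helpers survive a round trip through re-splitting, which is exactly what the inner bash does with the pre-quoted string:

```shell
# Round-trip check of the pre-quoting on names like the question's.
set -- 'my|file*' '!another file&'

echo "${@@Q}"                  # bash >= 4.4: each parameter shell-quoted

printf -v files ' %q' "$@"     # pre-4.4 fallback: one pre-quoted string
eval "set -- $files"           # re-split, as the inner bash would
echo "$2"                      # back to the original: !another file&
```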
On a side-note, depending on your setup you might consider sourcing /etc/bash.bashrc too, prior to the user's .bashrc, as that is Bash's standard behavior for interactive shells.
Note also that I left the set -m command for familiarity to your original script and because it's harmless, but it is actually superfluous here because --init-file implies an interactive bash which implies -m. It would instead be needed if, by taking your question's title literally, you'd wish a job-controlling shell yet not a fully interactive one. There are differences in behavior.
Method two
The -s option
The Bash's (and POSIX) -s option allows you to specify arguments to an interactive shell just like you do to a non-interactive one1. Thus, by using this -s option, and still as a self-contained solution, it would be like:
xterm -e bash --init-file /dev/fd/3 -s "$@" 3<<'EOF'
. ~/.bashrc
set -m # superfluous as bash is run with `--init-file`; you would instead need it for a job-controlling yet "non-interactive" bash (ie no `--init-file` nor `-i` option)
exec <<<'exec < /dev/tty; vim "$@"'
EOF
Quirky things to note:
the Here Document's delimiter specification must be within single quotes or the $@ piece inside the Here Document would be expanded by your script (without correct quoting etc.) instead of by the inner bash where it belongs. This is as opposed to the "pre-quoting" method where the Here Document's delimiter is deliberately non-quoted
the Here String (the exec <<<... stdin redirection piece) must be the single-quoted type as well2, or the "$@" piece inside it would be expanded by the inner bash at a time when its $@ array is not yet populated
specifically, we need such stdin redirection (the one made via the exec <<< Here String) as a helper just to make the inner bash "defer" the execution of the commands needing a fully populated $@ array
inside such helper stdin redirection (the Here String piece) we need to make the inner bash redirect its own stdin again, this time back to its controlling terminal (hence the exec < /dev/tty line) to make it recover its interactive functionality
we need all commands meant to be executed after the exec < /dev/tty (ie the vim "$@" here) to be specified on the same line3 as the exec < /dev/tty because after such redirection the Here String will no longer be read4. In fact this specific piece looks better as a Here String, if it can be short enough like in this example
This method may be better used with an external helper file like your .vimbashrc (though dropping the self-contained convenience) because the contents of such file, with regard to the filename-arguments problem, can be entirely static. This way, your script invoked by the file-manager would become as simple as:
xterm -e bash --init-file .vimbashrc -s "$@"
and its companion .vimbashrc would be like:
. ~/.bashrc
# (let's do away with the superfluous `set -m`)
exec <<<'exec < /dev/tty && vim "$@"' # let's also run vim only if the redirection to the controlling tty succeeded
The companion file still has the quirks but, besides any "cleanliness" consideration, one possible bonus point of this latter version (non self-contained) is that its whole xterm -e ... command, away of the "$@" part, could be used directly by your file-manager in place of your "script", if it were so kind to allow you to specify a command which it dutifully splits on spaces to produce the canonical "argv" array along with the filenames arguments.
Note also that this whole -s method, in all its versions, by default updates the user's .bash_history with the commands specified inside the helper stdin redirection, which is why I've been keeping that specific piece as essential as possible. Naturally you can prevent such updating by using your preferred way, e.g. by adding unset HISTFILE in the --init-file.
--
Just as a comparison, using this method with dash would be far more convenient because dash does populate the $@ array for the script specified by the ENV environment variable, and thus a self-contained solution would be as simple as:
xterm -e env ENV=/dev/fd/3 dash -s "$@" 3<<'EOF' # `dash` run through `env` just to be positive that `ENV` is present when dash starts
vim "$@"
EOF
HTH
1with the exception that $0 can't be specified, as opposed to when invoking a shell with -c option
2if you used an additional Here Document it would need to have its delimiter quoted as well
3actually on the same "complete_command" as defined by POSIX, meaning that you can still span multiple lines as long as they are part of the same "complete_command", e.g. when such lines have the line-continuation backslash or are within a compound block
4this should be standard behavior, presumably deriving from the first, brief, paragraph of this sub-section
| Programmaticaly open new terminal with Bash and run commands, keeping job-control |
1,500,454,082,000 |
I want to start a web server via Python. When this succeeds, I want to open the page in the default browser (on macOS, you can do this with the open command), and after that, I want to resume the previous script again.
This does not work:
#!/usr/bin/env bash
cd wwwroot
python3 -m http.server &
open http://0.0.0.0:8000
fg 1
I could not use jobs, and open the URL, and after that just run the Python script. However, I don't want to reload the page of the URL. Python will continue to run, until stopped by Ctrl+C.
Perhaps the open command needs to be preceded by a sleep when Python is not ready yet...
|
After some experimentation, it seems that $! is the variable you want. Example:
(sleep 1 && open http://0.0.0.0:8000) &
disown $!
python3 -m http.server
The sleep makes sure (well, very likely at least) the server is up-and-running before the open is executed. However, there's still the potential for failure if python3 -m http.server takes longer than a second – make certain to document this possibility.
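The mechanics can be sketched in isolation (a plain sleep stands in for the server-opening helper; nothing here is macOS-specific):

```shell
# $! holds the PID of the most recently backgrounded job; disown then
# removes that job from the shell's job table, so it is neither listed
# by `jobs` nor sent SIGHUP when the shell exits.
sleep 30 &
helper=$!
disown "$helper"
jobs                   # prints nothing: the job is no longer tracked
kill "$helper"         # but it can still be signalled by raw PID
```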
| How to start a job, do something different, and resume it again |
1,500,454,082,000 |
When I do
( sleep 1; read x ; echo x=$x; echo done ) &
then with the default terminal settings, the job gets stopped by SIGTTIN.
If I do
( ( sleep 1; read x ; echo x=$x; echo done ) & )
the read syscall inside read gets EOF (returns with 0) and no stopping by SIGTTIN happens.
What is the explanation for this behavior?
|
That's because in the second case the backgrounded command will be run in a subshell, and as there's no job control in subshells, the background mode will be faked by redirecting the input from /dev/null and ignoring the SIGINT and SIGQUIT signals.
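This faked background mode is easy to observe in any shell without job control (a plain script qualifies); a small sketch:

```shell
# With job control off, `&` redirects the job's stdin from /dev/null,
# so the read hits EOF (non-zero return) instead of stopping on SIGTTIN.
( read x; echo "read returned $?" ) &
wait                   # prints: read returned 1
```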
See also these answers:
Background process of subshell strange behaviour
Process started by script does not receive SIGINT
Does ` (sleep 123 &)` remove the process group from bash's job control?
Process killed before being launched in background
| Behavior of directly vs indirectly backgrounded children on read |
1,500,454,082,000 |
I am developing a web application in Phoenix and I also just started discovering Unix process management.
I put my app in background, like this:
vagrant@dev:/srv/my_app$ iex -S mix phoenix.server &
[1] 8726
Then I'd like to cd to another directory and do some other work in the main prompt. However, as soon as I do that, the background process stops.
vagrant@dev:/srv/my_app$ cd ..
[1]+ Stopped iex -S mix phoenix.server (wd: /srv/my_app)
I noticed this happens only in this particular case, because it's a prompt. It doesn't happen with other non-interactive processes (I'm free to change directory and all of that).
I tried the same with irb, another prompt, and I get exactly the same behavior.
Why is this happening, and is there any workaround for having a prompt in the background and change directory without it being stopped?
|
Your shell didn't stop, the process you sent to the background did (the iex process). If you hit "Enter" you'll get a shell prompt back.
| How do I keep a prompt running in background? [duplicate] |
1,500,454,082,000 |
Background
I work at a research institute, and have for a long time used the batch command to submit job queues to our machines. However, a recent update changed the batch command to be POSIX-compliant, the result of which is it doesn't take input files anymore. Thus, I would have to manually enter each command at the at> prompt, rather than reusing an old queue input file and making minor changes where needed. This is rather time-consuming, and considering how terribly batch handles jobs anyway, is almost a waste of time as opposed to just starting the jobs myself.
The Problem
The only replacement queues I have been able to find are either prohibitively expensive, or don't do the job. I am looking for a queue that simply runs the jobs it's given in temporal order; no priority queues or anything like that. What queues are available now that batch doesn't function the same way?
|
expect (and pexpect) were designed to automate interaction with programs that want "interactive" input. expect allows one to automate starting a program, waiting for its prompt, sending a response, waiting for another prompt, etc.
Here is a simple example of an expect script that starts batch and creates a job for it to schedule. It starts batch, waits for a prompt, creates a job, waits for a prompt, sends a control-D (\004), waits for batch to respond with confirming job creation, such as job 271 at Sat Oct 25 14:31:00 2014, and then waits a second for batch to complete and exits:
#!/usr/bin/expect --
spawn batch
expect "at>"
send "echo Hello from batch\r"
expect "at>"
send "\004"
expect "job"
sleep 1
expect has many advanced features. You can create procedures and manipulate variables. For more information, see the expect homepage and the expect FAQ. The expect wikipedia page is also very informative.
I haven't tried it but it might be possible to create an expect script that reads from stdin and queues jobs. Alternatively, one might create a shell/sed/awk/python script to read from stdin on input and write an expect script on output.
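As a rough illustration of that last idea (entirely hypothetical, function name invented), a small shell function could turn a queue file of one command per line into an expect script:

```shell
# Hypothetical sketch: read commands (one per line) from stdin and
# emit an expect script that feeds each of them to batch's at> prompt.
gen_batch_expect() {
    printf '%s\n' '#!/usr/bin/expect --' 'spawn batch'
    while IFS= read -r cmd; do
        printf 'expect "at>"\n'
        printf 'send "%s\\r"\n' "$cmd"
    done
    printf 'expect "at>"\nsend "\\004"\nexpect "job"\nsleep 1\n'
}
```

Running `gen_batch_expect < queuefile > job.exp` and then `expect job.exp` would replay the old input-file workflow.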
| Replacement queue now that batch doesn't accept input files? |
1,500,454,082,000 |
I have a shell script that initiates a long, resource-intensive command on several different machines.
In order to execute the script on each machine in parallel, I have an ampersand symbol after each remote command in the for-loop.
initialize-cluster.sh:
while read host;
do
ssh -f "$host" \"/home/user/allocate.sh\" &
done < ~/cluster
However, as a consequence, I do not get the shell prompt back at the end of the for-loop.
This prevents me from executing other scripts after initialize-cluster.sh has finished running allocate.sh on all of the machines.
For example, I want to be able to run a command such as:
./initialize-cluster.sh && ./execute-something-else.sh
However, currently initialize-cluster never "finishes."
I'd like subsequent commands to execute only after all of the allocate.sh scripts finish on the remote machines.
|
Nothing in this script should prevent execution of the next command or return to the shell prompt. What could be giving the impression that the prompt is gone is the output of the remote scripts, which would arrive after return to the shell. To avoid that you could redirect stdout/stderr to some log file.
Since the remote commands are run in the background the original script will finish before the allocate scripts have finished executing: the next command can't assume that initialization is complete. In order to start all initializations in parallel and wait for them all to finish you could do the following:
#!/bin/sh
while read host
do
ssh "$host" "/home/user/allocate.sh" &
done < ~/cluster
wait
And then it should be possible to:
./initialize-cluster.sh && ./execute-something-else.sh
The key here is the wait command, which waits for child processes to finish.
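If you also want each host's exit status, wait accepts individual PIDs; a minimal sketch (short sleeps stand in for the ssh calls):

```shell
# Collect per-job exit statuses by waiting on each recorded PID.
pids=
for d in 0 1; do
    ( sleep "$d"; exit 0 ) &      # stand-in for: ssh "$host" ... &
    pids="$pids $!"
done
status=0
for p in $pids; do
    wait "$p" || status=$?        # remember the last failure, if any
done
echo "overall status: $status"
```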
| Running commands after ampersand symbol & in a remote ssh session |
1,500,454,082,000 |
Is it possible in bash - or another sh-derivative shell - to run in the foreground from the command line a list of commands that have their own variable scope (so any values assigned to variables in that scope will not be known outside that scope), but also - if spawning a background command - have that background command still be a job under the parent shell, i.e. still under the command-line shell's job control? If there is more than one way to do this, which way is the most practically short?
I know using parentheses will create a new subshell that has its own scope, but any spawned background command will not be under the shell's job control anymore.
|
In zsh, you can use an anonymous function, but you still need to declare variables as local.
For instance:
$ () { local t; for t in 100 200 300; do sleep $t & done; }
[2] 4186
[3] 4187
[4] 4188
$ jobs
[2] running sleep $t
[3] - running sleep $t
[4] + running sleep $t
$ typeset -p t
typeset: no such variable: t
With any shell with support for local scope in functions, you can use a normal function like:
run() { eval "$@"; }
Then:
run 'local t; for t in 100 200 300; do sleep "$t" & done'
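To confirm the scoping, a quick check (bash) shows the variable never leaks out of the call:

```shell
run() { eval "$@"; }
run 'local t=inner; echo "inside: $t"'    # local works: eval runs in run's scope
echo "outside: ${t-<unset>}"              # t is gone again out here
```

This prints `inside: inner` followed by `outside: <unset>`.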
| Separate variable scope spawning commands under job control |
1,500,454,082,000 |
I'm working with some tail -f path/to/my/log/file | grep pattern& and I need to kill the process as quick as possible.
With classic kill {tail PID}, tail still displays its buffer and it takes around 12 seconds (on my setup) to get tail completely silent.
However, it's much faster when I kill it with kill %{job id} (slightly more than a second).
How is it different to call kill {tail PID} and kill %{job id}?
Some samples :
01/09/2021 15:45:29:670:kill {tail PID}
...
01/09/2021 15:45:39:232: {some log}
01/09/2021 15:45:39:232: {some log}
01/09/2021 15:45:39:232: {last log line}
takes around 10 seconds to fully shutdown
with kill %{job id} :
01/09/2021 10:56:57:793 -> (COM12<):kill %{tail job ID}
...
01/09/2021 10:56:58:966 -> (COM12>):[root@my_board ~]#
takes 1 sec to fully shutdown
|
When you kill the job with kill %6, you kill the tail and the grep too.
tail -f /var/log/mintupdate.log|grep ez&
[6] 3368377
If you kill 3368377, you kill just the grep process.
3368376 pts/6 S 0:00 tail -f /var/log/mintupdate.log
3368377 pts/6 S 0:00 grep --color=auto ez
Of course that causes the tail -f to be killed too (it dies of SIGPIPE the next time it writes, since its reader is gone)....
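The difference is easy to reproduce with two sleeps (a sketch; set -m enables job control so the pipeline gets its own process group even inside a script):

```shell
set -m                     # job control: each pipeline becomes one process group
sleep 100 | sleep 100 &
last=$!                    # PID of the *last* pipeline member only
kill %1                    # job spec: the whole group is signalled, both sleeps die
```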
| "kill %{job id}" vs "kill {job pid}" |
1,500,454,082,000 |
I'm running a long-running pipeline from bash, in the background:
find / -size +500M -name '*.txt' -mtime +90 |
xargs -n1 gzip -v9 &
The 2nd stage of the pipeline takes a long time to complete (hours) since there are several big+old files.
In contrast, the 1st part of the pipeline completes immediately, and since the pipe isn't full, and it has completed, find exits successfully.
The parent bash process seems to wait properly for child processes.
I can tell this because there's no find (pid 20851) running according to either:
ps alx | grep 20851
pgrep -l find
There's no zombie process, nor there's any process with process-id 20851 to be found anywhere on the system.
The bash builtin jobs correctly shows the job as a single line, without any process ids:
[1]+ Running find / -size +500M -name '*.txt' -mtime +90 | xargs -n1 gzip -v9 &
OTOH: I stumbled by accident on a separate job control command (/bin/jobs) which shows:
[1]+ 20851 Running find / -size +500M -name '*.txt' -mtime +90
20852 Running | xargs -n1 gzip -v9 &
and which is (wrongly) showing the already exited 20851 find process as "Running".
This is on CentOS (edit: More accurately: Amazon Linux 2 AMI) Linux.
Turns out that /bin/jobs is a two line /bin/sh script:
#!/bin/sh
builtin jobs "$@"
This is surprising to me. How can a separate process, started from another program (sh), know the details of a process which is managed by another (bash) after that process has already completed and exited and is NOT a zombie?
Further:
how can it know details (including pid) about the already exited process, when other methods on the system (ps, pgrep) can't?
Edits:
(1) As Uncle Billy noted in the comments, on this system /bin/sh and/bin/bash are the same (/bin/sh is a symlink to /bin/bash) but /bin/jobs is a script with a shebang line so it runs in a separate process.
(2) Also, thanks to Uncle Billy: an easier way to reproduce. /bin/jobs was a red herring. I mistakenly assumed it is the one producing the output. The surprising output apparently came from the bash builtin jobs when called with -l:
$ sleep 1 | sleep 3600 &
[1] 13616
$ jobs -l
[1]+ 13615 Running sleep 1
13616 Running | sleep 3600 &
$ ls /proc/13615
ls: cannot access /proc/13615: No such file or directory
So process 13615 doesn't exist, but is shown as "Running" by bash builtin job control, which appears to be a bug in jobs -l.
The presence of /bin/jobs, which misled me into thinking it must be the culprit (it wasn't), seems confusing and questionable. I believe it should be removed from the system as it is useless (a sh script running in a separate process, which can't show jobs of the caller anyway).
|
FWIW, I can reproduce your case with:
rhel8$ /bin/jobs(){ jobs -l; }
rhel8$ sleep 1 | sleep 3600 &
[1] 2611
rhel8$ sleep 2
rhel8$ jobs
[1]+ Running sleep 1 | sleep 3600 &
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ pgrep 2610
<nothing!>
rhel8$ ls /proc/2610
ls: cannot access '/proc/2610': No such file or directory
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ cat /bin/jobs
#!/bin/sh
builtin jobs "$@"
Or with (even lamer than the previous):
rhel8$ unset -f /bin/jobs
rhel8$ export JOBS=$(jobs -l)
rhel8$ builtin(){ echo "$JOBS"; }
rhel8$ export -f builtin
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ type /bin/jobs
/bin/jobs is /bin/jobs
Note: As already demonstrated, jobs -l in bash is displaying stale information, with pipeline processes which have already exited still shown as Running. IMHO this is a bug -- other shells like zsh, ksh or yash correctly show them as Done.
| 'jobs' shows a no longer existing process as running |
1,500,454,082,000 |
I'm trying to use some functions in a bash script to simplify calling some child processes. I want to decide at the call site whether or not to run the process in the background as a job, or in the foreground, and later kill the child process when necessary. Unfortunately, as far as I can tell, the pid I get with $! doesn't belong to the long-running background process, but to the function that called it. Killing the pid I get with $! doesn't kill the long-running child process, and it seems to get orphaned.
function inner() {
tail -f /dev/null
}
function outer() {
inner
}
outer &
echo 'Before, $!: ' $!
echo 'Before, jobs -p: ' $(jobs -p)
echo 'Before, ps aux: ' $(ps aux | grep /dev/null | grep -v grep)
kill $!
echo 'After, $!: ' $!
echo 'After, jobs -p: ' $(jobs -p)
echo 'After, ps aux: ' $(ps aux | grep /dev/null | grep -v grep)
The output I get is:
Before, $!: 71644
Before, jobs -p: 71644
Before, ps aux: jstaab 71646 0.0 0.0 4267744 688 s005 S+ 2:53PM 0:00.00 tail -f /dev/null
After, $!: 71644
After, jobs -p: 71644
./test.sh: line 17: 71644 Terminated: 15 outer
After, ps aux: jstaab 71646 0.4 0.0 4267744 688 s005 S+ 2:53PM 0:00.00 tail -f /dev/null
You'll notice that ps aux gives me a different pid than either $! or jobs -p. This makes sense, but how can I kill tail -f /dev/null with a kill without grepping for the command?
|
outer & creates a subshell. $! gives the PID of this subshell.
tail -f /dev/null is a child of this subshell so it has a different PID. But you can do
exec tail -f /dev/null
instead. Then the kill hits the tail.
Another possibility is to use /bin/kill instead of the shell builtin. With a negative number as argument you can kill the whole process group:
/bin/kill -TERM -$!
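Put together with the question's setup, a sketch (set -m makes the backgrounded function call a process-group leader even inside a script; a sleep stands in for tail -f /dev/null):

```shell
set -m                      # background jobs get their own process groups
inner() { sleep 100; }      # stand-in for: tail -f /dev/null
outer() { inner; }
outer &
pg=$!                       # PID of the subshell == its process-group ID
/bin/kill -TERM -"$pg"      # negative PID: every member of the group is hit
```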
| Killing jobs started within functions |
1,500,454,082,000 |
What is the difference between running following commands on terminal?
command1
for i in {1..3}; do ./script.sh >& log.$i & done
and
command2
for i in {1..3}; do ./script.sh >& log.$i & done &
Running the first command shows three job IDs on the screen and I can type the next command on the terminal.
The second command is a bit weird, it does not show any job IDs on screen nor can I see them after running jobs command. Where did the jobs go?
Inside of script.sh I have following loop
for k in 1; do
./tmp -arguments
done
echo "hello"
If I use command 1, I can see via htop that the ./tmp executable is running and echo "hello" has not yet been executed (not in the log file).
If I use command 2, I can see via htop that the ./tmp executable is running AND echo "hello" has ALREADY been executed (as seen in the log file).
Why would an & on the terminal change the behaviour of the for loop inside the shell script?
[GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)]
|
The first one
for i in {1..3}; do
./script.sh >& log.$i &
done
runs in the current shell. Each iteration of the loop runs the script.sh script as a job of the current shell, and so you can see them so.
The second one
for i in {1..3}; do
./script.sh >& log.$i &
done &
first starts a subshell that controls the loop. Then the 3 iterations create 3 subprocesses in that subshell, while in your current shell you can only see 1 job, which is the whole command, not yet broken down into particular jobs. (You should see this one job, either as Running or as Done.)
The ./tmp executable should run the same way in both cases. If you see that echo "hello" has been performed, this means the ./tmp had finished before. If it behaves abnormally, you should debug (and add the details to your question). Especially, make sure the starting conditions are the same at the time of its call in both cases. E.g. if there are checks for existing files, make sure in both cases they do/don't exist, etc.
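The job-table difference can be checked directly (a sketch):

```shell
# Form 1: the loop runs in the current shell; each iteration is our own job.
for i in 1 2; do sleep 2 & done
direct=$(jobs -p | wc -l)       # counts the two sleeps

# Form 2: the trailing & backgrounds one subshell that runs the loop;
# its inner sleeps are jobs of that subshell, not of us.
for i in 1 2; do sleep 2 & done &
total=$(jobs -p | wc -l)        # at most one extra entry: the subshell itself
echo "$direct $total"
```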
| Ampersand after for loop on shell scripts |
1,500,454,082,000 |
I wanted to do a test for the Job Control Commands.
So, I ran a cat command and then made it a background job using the bg command after stopping it with Ctrl+Z.
Now I wanted to first terminate that background process, so I used the command %kill-2%2 as the job number was [2], but it gave me an error saying "No such job". I tried it with %kill-9%2 but got the same error.
I checked it with fg command and that job was still running and it came on foreground
Similarly, I wanted to suspend a background job, so I used the command %kill-19%2 but it gave me the same error, "No such job".
I would like to know what my mistake is.
|
The command should be kill -2 %2 with proper spacing.
The % sign at the beginning of your line is probably just the prompt they are using (PS1).
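With the spacing fixed, the whole sequence from the question works; a sketch using signal names, which are more portable than the raw numbers 2 and 19:

```shell
sleep 100 &        # a background job, %1
kill -STOP %1      # suspend it  (roughly what kill -19 %1 does on x86 Linux)
kill -CONT %1      # resume it
kill -TERM %1      # terminate it (kill -2 %1 would send SIGINT instead)
```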
| Can't terminate / suspend a background job |
1,500,454,082,000 |
One can launch GUI programs, for example, gv or xpdf from vifm in background in vifm's command line:
:!gv %f &
However, if gv is launched by pressing Enter on a file like aPSfile.ps in vifm, it blocks vifm. Is it possible to run it in the background as well when it is launched this way? The following setup in vifmrc does not work:
FILETYPE=PS=ps,eps,epsi=gv &
My current solution is to run vifm in GNU screen. gv launched by pressing Enter will run in a new screen instead of blocking vifm. However, I'd like to save that screen as well...
|
The vifm documentation explicitly covers this requirement:
:filet[ype] pat1,pat2,... [{descr}]def_prog[ &],[{descr}]prog2[ &],...
Space followed by an ampersand as two last characters of a command means running of the command in the background.
I have
filetype *.pdf apvlv &
in my .vifm/vifmrc and it backgrounds any .pdf files I open, allowing me to close out of vifm and have apvlv still open.
| Is it possible to disconnect a GUI program launched within vifm from vifm? |
1,500,454,082,000 |
I want to understand a little bit better what a background process is. The question came to life as a result of reading this line of code:
/usr/sbin/rsyslogd -niNONE &
Source
The documentations says:
-i pid file
Specify an alternative pid file instead of the default
one. This option must be used if multiple instances of
rsyslogd should run on a single machine. To disable
writing a pid file, use the reserved name "NONE" (all
upper case!), so "-iNONE".
-n Avoid auto-backgrounding. This is needed especially if
the rsyslogd is started and controlled by init(8).
Source
The ampersand & seems to mean to request that the command is run in the background, see, for example here.
If my understanding is correct, pid files are used with daemons, that is when a program is run in the background.
So at face value it seems that the command in question first tells the program not to run in the background with -n, then specifies NONE for the pid file, to indicate it is not a daemon1, and then right after that specifies & to send it into the background.
I cannot make a lot of sense of that. Is the background that the process would normally enter different from the background it is sent to by using &? From all I read, it seems that the only meaning of the background is that the shell is not blocked. In this respect, asking the process not to auto-background and then backgrounding it does not make a lot of sense.
Is there something here I'm missing? What is exactly the background? (And who is responsible for deleting the pid file, while we are at it?)
1 - in a docker container context, where the question arose from, the existence of the pid file can cause problems when the container is stopped and then restarted. It is not clear for me what is responsible for deleting the pid files, some sources suggest that it's the init system, such as systemd, while others imply that it's the program responsibility. However if a process killed with SIGKILL the program might not have an opportunity to delete it, so subsequent container re-start will fail because the pid file will already be there, but is expected not to.
|
The Unix concept of a background process is part of job control. Job control is the organization of processes around a terminal session.
Daemons normally don't run in a terminal session, therefore they are not background processes in the Unix job control sense, only in the general computing sense of belonging to the machine's "invisible" activity. This is most likely what the rsyslogd documentation is referring to: it's referring to becoming a daemon, and not a job control background process.
In an interactive terminal session, the user manages jobs. Each job is a group of processes (perhaps one), arranged into that group by some piece of software (usually the job control shell). Each job in the session has the terminal as its controlling terminal.
The background/foreground concept has to do with the sharing of the terminal device among these jobs. At any time, one of the process groups is assigned as the foreground process group. It may read characters from the terminal, and send output to the terminal. Other process groups are all background process groups.
The job control shell shuffles jobs between foreground and background with a combination of signaling from the TTY and certain functions like tcsetpgrp.
The foreground process group is not just allowed to read from and write to the terminal, but it also receives TTY-generated signals, like SIGINT from CtrlC, and SIGTSTP from CtrlZ.
Typically, you use CtrlZ to suspend a job, and then a job control command like bg to get it to execute in the background.
When you issue CtrlZ, every process in the foreground process group gets a SIGTSTP signal and is suspended. The job control shell detects this change, and makes the library calls to move that job into the background, and make itself the foreground process group. So, the shell now being in the foreground again, it can receive commands from the TTY.
Now you can type bg, and the shell will cause the suspended background job to execute.
The fg command will cause the shell to remove itself from the foreground and place a background job into the foreground again.
Background jobs don't receive the character-driven signals from the TTY like SIGINT from CtrlC. However, when they try to read or write to the terminal, they receive a signal like SIGTTOU and become blocked from that action.
Job control is like a traffic cop, guarding access to the terminal.
It's similar to what window management is in a GUI. In typical GUI, one window has the "keyboard focus". Keystrokes go to that window. So it is with job control: the foreground process group has the "terminal focus", so to speak.
The rsyslogd documentation is almost certainly using the term "background" to actually mean "turn into a daemon" or "daemonize". This is different from "background job" under POSIX job control.
When a server application daemonizes itself automatically it means that if it is run in a terminal session, it takes steps to remove itself from that session. It forks a child and then a grandchild. The child terminates, and so the grandchild becomes orphaned, and a child of the init daemon. That's part of it. The other part of it is that the grandchild will close the standard input, output and error file descriptors. Thus it loses the connection to the TTY. Some other actions may be taken, like changing to some specific working directory.
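A rough shell-level approximation of that double fork (a sketch using the Linux setsid(1) utility; a real daemon does the equivalent with fork() and setsid() in C):

```shell
daemonize() {
    # first fork: the ( ... ) subshell; second fork: the trailing `&`.
    # setsid gives the grandchild a new session (no controlling TTY),
    # and the redirections detach the standard file descriptors.
    ( setsid -- "$@" </dev/null >/dev/null 2>&1 & )
}

daemonize sleep 1      # "sleep 1" stands in for a real server
```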
It makes sense that that rsyslogd doesn't daemonize itself if run by init, since there is no terminal session to dissociate from. (However, rsyslogd could detect that it's not part of a terminal session, and not require the -n flag in that situation.)
Thus, actually, the main use of a do-not-daemonize command line option in a server would be for debugging, with specific reasons like:
Perhaps the daemon has some debug trace mode whereby messages get sent to standard output. If you want to watch those messages on your console, you don't want the daemon to close its standard output file descriptor.
If you want to debug the daemon under debugger, it is inconvenient if it simply exits, such that the actual daemon activity is happening in a forked grandchild.
When you're testing, you may want to exercise job control over the daemon, like terminating it with CtrlC, or suspending with CtrlZ.
Sometimes people like to run servers under terminal multiplexing software like GNU Screen or Termux, where the servers run in a terminal session. Auto-daemonization would defeat the purpose of that.
Regarding who is responsible for deleting a PID file: mainly, the service application itself, if it is shut down cleanly. If some master process is managing services, it can know about the paths to their PID files, and clean up after them if they terminate without deleting the file. PID files are typically placed into a directory that is wiped on a reboot, like /var/run, so if the system catastrophically fails and has to be restarted, the restart takes care of the PID files.
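The clean-shutdown half of PID-file handling can be sketched in a few lines of shell (the path and name are illustrative):

```shell
#!/bin/sh
# Minimal PID-file discipline for a service script (sketch).
pidfile=/var/run/myservice.pid          # illustrative path

echo $$ > "$pidfile"                    # record our own PID
trap 'rm -f "$pidfile"' EXIT            # clean up on normal termination
trap 'exit 143' TERM INT                # route signals through the EXIT trap

# ... main service loop would run here ...
```

If the process dies uncleanly, the file is left behind, which is exactly the case a supervising process (or the next reboot's wipe of /var/run) has to cover.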
| What exactly does it mean to run a process in the "background"? |
1,500,454,082,000 |
I have many working jobs running on different consoles.
They occupy almost all of the CPU, which makes the system hard to control (very slow response times).
Is there any way to pause these consoles, or is there some other approach?
#update
I am actually building Yocto in many different consoles. It seems hard to target a specific process: a Yocto build spawns many different processes, which end and start new ones repeatedly.
|
There are several ways to pause a process:
Send a SIGSTOP to the process to freeze it (SIGCONT to unfreeze). If the process is in the foreground, Ctrl+Z suspends it (sending SIGTSTP), and Ctrl+S / Ctrl+Q pause and resume its terminal output (that is flow control, not a signal). If it is in the background, you would have to use kill or its variations.
Use nice (or renice on a running process) to set priorities. By default, user processes run at niceness 0 (the range is -20 to 19) and get equal CPU. If the process is not important and can be slowed down, raise the niceness. If a process needs to have priority, reduce the niceness (negative values require root).
Just stop unneeded processes :)
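Both approaches together, as a runnable sketch (`sleep 100` stands in for a heavy build process):

```shell
# Freeze, resume, and deprioritize a background job (sketch).
sleep 100 &                  # stand-in for a heavy process
pid=$!

kill -STOP "$pid"            # freeze: state shows as "T" in ps
ps -o pid,stat,comm -p "$pid"

kill -CONT "$pid"            # resume

renice -n 19 -p "$pid"       # lowest priority: runs only when the CPU is otherwise idle
kill "$pid"                  # end the demo
```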
| Possible to pause console operation? |
1,500,454,082,000 |
In Linux, you can do the following:
kill 1 (or kill %1)
Which means "close the processes in job number 1".
And you can do the following:
kill 1234
Which means "send the SIGTERM signal to the process with PID 1234".
Are these two kill commands the same command, or are they two different commands?
|
I’m not sure you can do kill 1 (or rather, you can try, but you won’t be allowed to, unless you're root, and then you're in for a surprise). 1 here always refers to the process with id 1, which is usually init (or some variant thereof).
To actually answer your question, if you’re in a shell which supports job control, kill will be a shell built-in, handling both cases (managing jobs and processes). See for example Bash’s kill command.
If you’re in a shell which doesn’t support job control (are there any?), kill will be a binary in the system, typically /bin/kill; see for example util-linux’s kill command. Even in a shell with a built-in kill command, you can access this one for example by specifying its full path. This kill command is also accessible without a shell (for use from another program).
See also POSIX’s definition of kill, which covers both cases (but doesn’t specify what is implemented where).
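You can see both variants on a typical system (paths vary by distribution; /bin/kill may live at /usr/bin/kill):

```shell
type kill            # in bash prints: kill is a shell builtin
command -v -- kill   # the name the shell will resolve
ls -l /bin/kill      # the external binary, if present at this path
/bin/kill -l         # invoke the external one explicitly, bypassing the builtin
```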
| Is the "kill" command for job control the same as the "kill" command to send a signal to a process? |
1,500,454,082,000 |
Is it possible to know if a script executed inside a screen session finished or not? The script can be seen by typing jobs and I would like to terminate the screen session when the script is finished. How can I do so or can I have the screen terminate itself after the script successfully executes? It is not convenient to do it manually since I run 72 screen sessions at once. It would be really nice to somehow check the status of the jobs inside the screen sessions automatically and print them to stdout and have the screen sessions terminate when the jobs running within are finished.
|
You just chain the commands 1. your_script and 2. kill this screen session
Let foo.sh be your script. The command to kill a screen session is kill. You issue commands to a screen session with screen -X, making screen -X kill the shell/bash command to kill the screen session you are currently in. You would
screen -S the_session_for_my_job
to create the screen session and then you would
/path/to/foo.sh; screen -X kill
This will kill the screen session, independent from the number of windows and will do so after foo.sh has finished.
A more complicated way is to watch process ids and issue the kill command when a certain PID disappears.
Let 12345 the PID of the process you want to watch. Then you would
while ps -p 12345 > /dev/null; do sleep 2; done; screen -X kill
The while loop checks whether a process with PID 12345 still exists (ps -p exits non-zero once it is gone). If such a process exists, it will sleep 2 seconds before checking again. Once no process with this PID exists, the while loop terminates and the kill command is issued.
| How to know when a job in screen finishes? |
1,500,454,082,000 |
I need an at command timestamp that runs some command on a given day monthly, for example every day 15, as follows:
$ at every 15 day
So that on the 15th of every month, it would run some command.
How would I set it?
|
As pointed out in the comments, cron is the right tool to do so. at is used to run a command at a specified time and date but only once.
Just add this line to /etc/crontab:
0 7 15 * * youruser /path/to/somecommand
This runs the specified command at 7:00 AM every 15th of the month.
For more information, see the manpages:
man cron
man crontab
| Need an "at" Command Timestamp That Runs Command Monthly at Given Day |
1,500,454,082,000 |
According to this site I set up cron to execute a script for me, first just trying to get it to work with cat before doing the actual work I need to do (actual work will need root priviliges so I did everything as root to make my life easier later):
me> sudo su
root> crontab -e
Edited the file as follows, leaving a blank line at the end:
SHELL=/bin/bash
#which cat outputs /bin/cat
PATH="/bin"
# execute this every minute, if it works, change cat to my script
1 * * * * cat /home/me/source.txt 1> /home/me/destination.txt
According to this SO question, restarted the cron service to be sure it loads changes after saving the file and exiting the editor:
root> service cron restart
And then waited for a few minutes. Nothing happened. Then restarted the computer. Again, nothing. Where did I do it wrong?
|
Your crontab entry runs at minute 1 of every hour, i.e. once per hour. To run it every minute, use an asterisk in the minute field:
* * * * * cat /home/me/source.txt 1> /home/me/destination.txt
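For reference, the five time fields of a standard crontab line are laid out like this:

```
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday=0)
# │ │ │ │ │
  * * * * *  cat /home/me/source.txt 1> /home/me/destination.txt
```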
| cron doesn't execute scripts after setting it up |
1,500,454,082,000 |
I was reading a book about Linux. It states that to bring a process to foreground, use the fg command and a percent sign (%) followed by the job number. I did some testing and found that it works as expected. But I also found that I can use a simple number as the jobspec, like fg 3 (instead of fg %3), which can bring the third process to foreground, too. Is a simple number can be considered a valid jobspec?
|
Bash seems to accept fg 3 etc., but I'm not sure the documentation is too explicit about that.
The description for fg just says it takes a "jobspec", and the manual's description of job specifications says "The character ‘%’ introduces a job specification (jobspec)." The % seems included in all the examples.
The other shells I tried (Dash, ksh and zsh) didn't accept a plain number there, so it looks like a Bash-only thing.
Note that kill can take a jobspec or a process ID, so both kill %3 and kill 3 are valid, they just mean different things. Which also implies that in general, a plain number can't work as a jobspec, so perhaps better to stick with %3.
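In practice (a sketch runnable in an interactive bash, or in a script after set -m):

```shell
set -m                # job control is on by default only in interactive shells
sleep 100 &
sleep 200 &
jobs                  # [1]-  Running  sleep 100 &    [2]+  Running  sleep 200 &

kill %1               # job number 1: the portable, documented form
kill %%               # the current job (job 2 here); %+ is equivalent
```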
| Is a pure number like "3" a valid jobspec? |
1,500,454,082,000 |
I have a script (not written by me, I cannot modify it) that has to run for days, that sometimes fails (exits with an error).
In this case all I have to do is just reboot the server (there is no better solution for now), and restart the script. Currently I do this:
log in via SSH
screen -S job
./myscript.sh to start the job (let's say this script just runs dothis, and this process might exit with an error)
CTRL A, D to detach from screen
...wait a few hours...
log in, resume the screen with screen -r job.
If still running, detach and come back later.
If the script has failed, sudo reboot, and start at step 1, to make the long job continue.
How to do this without manual intervention?
How to automate this and have the server reboot automatically if the script exits with an error, and then restart the script?
|
First, I would try to put that script in a container. This would remove some dependencies from the host itself, and allow automatic restarts.
Solution using docker and docker-compose
This approach requires docker and docker-compose. If you have Ubuntu, you can install them via sudo apt install docker.io docker-compose.
Create a Dockerfile to build your container, like:
FROM ubuntu
COPY /path/to/script/on/host /myscript.sh
# maybe deal with some dependencies here
CMD /bin/bash /myscript.sh
Save the above as Dockerfile in any folder. You can see some docs at https://docs.docker.com/engine/reference/builder/
Create a docker-compose.yml
version: "3.9"
services:
scriptrunner:
build: .
restart: always
Place this as docker-compose.yml in the same directory as your Dockerfile. See some docs here: https://docs.docker.com/compose/compose-file/compose-file-v3/
I assume you want to get some output of the script, in which case you may have to set up docker volumes to "share" folders between your host and the container.
Go to your folder in a terminal and type docker-compose up -d.
Using this method, you put your script in a container, will restart the container after every script fail, and will run as a daemon.
Solution using systemd
If you don't want to deal with containers, you can wrap your script in another one, e. g. my-runner.sh.
#!/bin/bash
/path/to/my-script.sh || systemctl reboot
This will reboot your computer after the script fail. Note that rebooting may require a different command or root privileges.
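If an immediate reboot on the first failure is too aggressive, the wrapper can retry a few times first. A sketch (the last-resort action, e.g. systemctl reboot, is only reached when all attempts fail):

```shell
#!/bin/sh
# Retry a command up to N times; run a last-resort action if it never succeeds.
retry_then() {
  cmd=$1; max=$2; shift 2
  n=0
  until "$cmd"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      "$@"               # e.g. systemctl reboot
      return 1
    fi
    sleep 10             # brief pause between attempts
  done
}

# usage: retry_then /path/to/my-script.sh 3 systemctl reboot
```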
And now, let's make a systemd service of our runner script. This is a good tutorial but it comes down to the following:
Create a systemd unit file like /etc/systemd/system/my-script.service and put the following into it.
[Unit]
Description=my script runner service
After=network.target
[Service]
Type=simple
User=my-user
ExecStart=/path/to/the/previous/my-runner.sh
[Install]
WantedBy=multi-user.target
Now you only have to issue systemctl start my-script && systemctl enable my-script to start it and make it start after reboot.
| Reboot and relaunch a script if error |
1,500,454,082,000 |
I run the command (Xorg & sleep 3; xeyes) & to test Xorg, and group it into a single subshell background job to make management easy. This works properly, and opens xeyes in the new Xorg session after 3 seconds.
Upon running the command, I'll get an output such as the following:
[1] 635
After running ps -ef to check new processes, I'll get an output like the following:
root 635 361 0 4:52 tty1 00:00:00 -bash
root 636 365 0 4:52 tty2 00:00:00 /usr/lib/Xorg
root 639 365 0 4:52 tty1 00:00:00 xeyes
This seems to be a pretty standard output, and as expected.
After verifying my X server works as expected, I attempt to kill this group with kill %1. Upon running this, my processes now look as follows:
root 636 1 0 4:52 tty2 00:00:00 /usr/lib/Xorg
Why has Xorg failed to exit? Why has the subshell exited successfully, closing xeyes correctly, but not brought Xorg along with it? Why has the parent process of Xorg now changed to 1 instead of the subshell? Shouldn't the subshell send a kill signal to all of its child processes upon exit?
Additionally, if I kill the group instead with kill 635, which many resources say should be equivalent to kill %1, my process state is even more bizarre:
root 636 1 0 4:52 tty2 00:00:00 /usr/lib/Xorg
root 639 1 0 4:52 tty1 00:00:00 xeyes
What??? Why have both processes failed to exit now, and are now children of PID 1? What's going on here, and what am I doing wrong?
An in-depth explanation of what exactly is going on here would be appreciated, in addition to just telling me what to do instead.
|
If you want to observe job control, you'd want to use the -j option to ps which will list the process group id and session id.
Here, I see:
[...]
UID PID PPID PGID SID C STIME TTY TIME CMD
chazelas 6805 4172 6805 6805 0 06:47 pts/7 00:00:00 /bin/zsh
chazelas 6825 6805 6825 6805 0 06:48 pts/7 00:00:00 xeyes
root 6826 6825 6826 6826 0 06:48 tty2 00:00:00 /usr/lib/xorg/Xorg :4
[...]
You see Xorg is a child of xeyes as my shell is a bit more optimised than bash and runs xeyes in the subshell's process as it's the last command of the subshell. And no, a subshell is not going to kill its children when it terminates, that would render the shell unusable (and here it's obvious it couldn't as the subshell has been replaced by xeyes).
The same in bash:
$ ps -Afj
[...]
UID PID PPID PGID SID C STIME TTY TIME CMD
chazelas 7230 6805 7230 6805 0 06:54 pts/7 00:00:00 bash
chazelas 7246 7230 7246 6805 0 06:54 pts/7 00:00:00 bash
root 7247 7246 7247 7247 2 06:54 tty2 00:00:00 /usr/lib/xorg/Xorg :4
chazelas 7274 7246 7246 6805 0 06:54 pts/7 00:00:00 xeyes
[...]
There's an extra useless bash process that just waits for xeyes to terminate and won't do anything afterwards, but otherwise it's the same as in zsh, you see that a new process group has been created by the shell (6825 for zsh, 7246 for bash), but Xorg is not in that process group.
That's not because of the & after Xorg (commands started in a subshell are not started as new jobs); it's because Xorg itself starts a whole new session (let alone a process group) to attach that tty2 terminal.
So as Xorg has removed itself from the process group, doing that kill %1 won't kill it.
Note that kill %1 sends a SIGTERM signal to the job's process group, not an individual pid. To send a signal to a process group you need:
kill -- -7246
Which in my case above would have killed pids 7246 (bash subshell) and 7274 (xeyes), but not Xorg (7247) as it's not in the 7246 process group.
| Subshell Job Clarification |
1,391,717,026,000 |
I've got two files _jeter3.txt and _jeter1.txt
I've checked they are both sorted on the 20th column using sort -c
sort -t ' ' -c -k20,20 _jeter3.txt
sort -t ' ' -c -k20,20 _jeter1.txt
#no errors
but there is an error when I want to join both files it says that the second file is not sorted:
join -t ' ' -1 20 -2 20 _jeter1.txt _jeter3.txt > /dev/null
join: File 2 is not in sorted order
I don't understand why.
cat /etc/*-release #FYI
openSUSE 11.0 (i586)
VERSION = 11.0
UPDATE: using 'sort -f' and join -i (both case insensitive) fixes the problem. But it doesn't explain my initial problem.
UPDATE: versions of sort & join:
> join --version
join (GNU coreutils) 6.11
Copyright (C) 2008 Free Software Foundation, Inc.
(...)
> sort --version
sort (GNU coreutils) 6.11
Copyright (C) 2008 Free Software Foundation, Inc.
(...)
|
I got the same error with Ubuntu 11.04, with sort and join both in version (GNU coreutils) 8.5.
They are clearly incompatible. In fact the sort command seems buggy: there is no difference with or without the -f (--ignore-case) option. When sorting, aaB always comes before aBa. Non-alphanumeric characters also seem to always be ignored (abc comes before ab-x).
Join seems to expect the opposite... But I have a solution
In fact, this is linked to the collation sequence: forcing the same collation for both commands, e.g. LC_ALL=C sort -k 1,1 <myfile> ... then LC_ALL=C join ..., eliminates the message.
Internationalisation is the root of evil... (nobody documents it clearly).
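A minimal reproduction of the fix with illustrative data (LC_ALL=C forces plain byte-order collation for both tools, so they agree on what "sorted" means):

```shell
printf 'aB 1\nab 2\n' > f1
printf 'aB x\nab y\n' > f2

LC_ALL=C sort -k1,1 f1 > f1.sorted    # byte order: "aB" sorts before "ab"
LC_ALL=C sort -k1,1 f2 > f2.sorted

LC_ALL=C join f1.sorted f2.sorted     # no "not in sorted order" complaint
```

Under a locale with case-insensitive collation, sort may order these lines differently from what join expects, which is exactly the original symptom.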
| join : "File 2 not in sorted order" |
1,391,717,026,000 |
File1.txt
id No
gi|371443199|gb|JH556661.1| 7907290
gi|371443198|gb|JH556662.1| 7573913
gi|371443197|gb|JH556663.1| 7384412
gi|371440577|gb|JH559283.1| 6931777
File2.txt
id P R S
gi|367088741|gb|AGAJ01056324.1| 5 5 0
gi|371443198|gb|JH556662.1| 2 2 0
gi|367090281|gb|AGAJ01054784.1| 4 4 0
gi|371440577|gb|JH559283.1| 21 19 2
output.txt
id P R S NO
gi|371443198|gb|JH556662.1| 2 2 0 7573913
gi|371440577|gb|JH559283.1| 21 19 2 6931777
File1.txt has two columns & File2.txt has four columns. I want to join both files which has unique id (array[1] should match in both files (file1.txt & file2.txt)
and give ouput only matched id (see output.txt).
I have tried join -v <(sort file1.txt) <(sort file2.txt). Any help with awk or join commands requested.
|
join works great:
$ join <(sort File1.txt) <(sort File2.txt) | column -t | tac
id No P R S
gi|371443198|gb|JH556662.1| 7573913 2 2 0
gi|371440577|gb|JH559283.1| 6931777 21 19 2
P.S. Does the output column order matter?
If yes, use:
$ join <(sort File1.txt) <(sort File2.txt) | tac | awk '{print $1,$3,$4,$5,$2}' | column -t
id P R S No
gi|371443198|gb|JH556662.1| 2 2 0 7573913
gi|371440577|gb|JH559283.1| 21 19 2 6931777
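An awk alternative (a sketch) that needs no sorting: it loads File1.txt into an array keyed on the id, then appends the No column to each matching File2.txt line.

```shell
awk 'NR == FNR { no[$1] = $2; next }   # first file: remember No per id
     $1 in no  { print $0, no[$1] }    # second file: print matching lines only
' File1.txt File2.txt
```

Since "id" appears in both headers, the header line comes out as "id P R S No", matching output.txt.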
| Join two files with matching columns |