| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,358,935,391,000 |
I am running Linux Mint 18.2 with KDE Plasma. Recently I noticed that most of the time when I copy large files to removable drives, the process hangs just before finishing.
I opened KSysGuard and saw that the process of file.so is in disk sleep.
When this happens the process does not seem to respond to any kill or end signal.
I decided to reboot, opened a terminal in Konsole and ran the reboot command.
But the reboot also got stuck in the middle! I had to press the power button and force-shutdown my laptop.
Now I want to know: is it possible to kill or end a process that is in the I/O ("disk") sleep state?
|
If a process is in an uninterruptible sleep state, then no, you cannot kill or otherwise end it until it exits that state. While in that state, the process has invoked a system call, and the kernel code executing on behalf of the process has blocked it while waiting for some event to happen.
Note that this question may be a duplicate of https://stackoverflow.com/q/223644/5161900
| Is there any way to kill or end a process in "disk sleep" |
1,358,935,391,000 |
I have the pid and I just stopped a program using
kill -stop PID
Now I want to continue it by doing
kill -cont PID
But only if it's already stopped. How would I check to see if it's stopped or running?
|
You can check whether the process is in the stopped state; ps reports that state as T.
You can do:
[ "$(ps -o state= -p PID)" = T ] && kill -CONT PID
[ "$(ps -o state= -p PID)" = T ] tests whether the output of ps -o state= -p PID is T; if so, SIGCONT is sent to the process. Replace PID with the actual process ID of the process.
| How can I check to see if a process is stopped from the command-line? |
1,358,935,391,000 |
Running newer versions of Gnome (on Wayland), you can't restart the shell with Alt+F2, entering r & then Enter - which used to restart the shell without logging the user out of the session.
More recently, on Fedora systems you used to be able to restart the shell by sending SIGHUP to the gnome-shell process, using top or whatever. However, now on Fedora 28 at least, this kills the session and sends the user back to the login screen.
Restarting the shell leaving the session intact is very useful in the event of installing/modifying an extension, or (hopefully not anymore!) having to restart gnome due to it bugging out and using 100% CPU. Is there a current alternative please?
EDIT: I have also tried SIGQUIT, and gnome-shell --replace (with export DISPLAY=:0 if on a TTY), and the result is to still be kicked back to the login screen
|
In an Xorg session one can restart GNOME Shell without losing application state, because applications are running against a separate display server (X). In a Wayland session, however, GNOME Shell is not separate from the display server: GNOME Shell itself acts as the Wayland compositor and display server.
So there isn't any way to restart GNOME Shell under Wayland without losing application state, since the display server goes down with it. It's comparable to restarting the X server in an Xorg session.
That is why the shell-restart option is disabled under Wayland (recall that the key sequence to kill the X server is usually also disabled by default in an Xorg session), and there will probably never be a non-destructive way to restart GNOME Shell on Wayland.
| Restarting Gnome Shell 3.28.1 on Fedora 28 |
1,358,935,391,000 |
How does systemd handle the death of the children of managed processes?
Suppose that systemd launches the daemon foo, which then launches three other daemons: bar1, bar2, and bar3. Will systemd do anything to foo if bar2 terminates unexpectedly? From my understanding, under Service Management Facility (SMF) on Solaris foo would be killed or restarted if you didn't tell startd otherwise by changing the property ignore_error. Does systemd behave differently?
Edit #1:
I've written a test daemon to test systemd's behavior. The daemon is called mother_daemon because it spawns children.
#include <iostream>
#include <unistd.h>
#include <string>
#include <cstring>

using namespace std;

int main(int argc, char* argv[])
{
    cout << "Hi! I'm going to fork and make 5 child processes!" << endl;
    for (int i = 0; i < 5; i++)
    {
        pid_t pid = fork();
        if (pid > 0)
        {
            cout << "I'm the parent process, and i = " << i << endl;
        }
        if (pid == 0)
        {
            // The following four lines rename the process to make it easier
            // to keep track of with ps. argv[0]'s buffer cannot safely be
            // grown, so the new name is truncated to the original length.
            int argv0size = strlen(argv[0]);
            string childThreadName = "mother_daemon child thread PID: ";
            childThreadName.append(to_string(::getpid()));
            strncpy(argv[0], childThreadName.c_str(), argv0size);
            cout << "I'm a child process, and i = " << i << endl;
            pause();
            // I don't want each child process spawning its own children
            break;
        }
    }
    pause();
    return 0;
}
This is controlled with a systemd unit called mother_daemon.service:
[Unit]
Description=Testing how systemd handles the death of the children of a managed process
StopWhenUnneeded=true
[Service]
ExecStart=/home/my_user/test_program/mother_daemon
Restart=always
The mother_daemon.service unit is controlled by the mother_daemon.target:
[Unit]
Description=A target that wants mother_daemon.service
Wants=mother_daemon.service
When I run sudo systemctl start mother_daemon.target (after sudo systemctl daemon-reload) I can see the parent daemon and the five children daemons.
Killing one of the children has no effect on the parent, but killing the parent (and thus triggering a restart) does restart the children.
Stopping mother_daemon.target with sudo systemctl stop mother_daemon.target ends the children as well.
I think that this answers my question.
|
It doesn't.
The main process handles the death of its children, in the normal way.
This is the POSIX world. If process A has forked B, and process B has forked C, D, and E; then process B is what sees the SIGCHLD and wait() status from the termination of C, D, and E. Process A is unaware of what happens to C, D, and E, and this is irrespective of systemd.
For A to be aware of C, D, and E terminating, two things have to happen.
A has to register itself as a "subreaper". systemd does this, as do various other service managers including upstart and the nosh service-manager.
B has to exit(). Services that foolishly, erroneously, and vainly try to "dæmonize" themselves do this.
(One can get clever with kevent() on the BSDs. But this is a Linux question.)
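The reparenting involved here is easy to observe from a shell; in this sketch B is a short-lived sh and C is a sleep:

```shell
# B (an sh) forks C (a sleep) and exits immediately; C is then
# reparented to the nearest subreaper (often PID 1).
child=$(sh -c 'sleep 30 >/dev/null 2>&1 & echo $!')
sleep 1
ps -o pid=,ppid=,comm= -p "$child"   # PPID is no longer the exited sh
kill "$child"
```

Whatever now appears as the PPID is the process that will see C's SIGCHLD and wait() status.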
| How does systemd handle the death of a child of a managed process? |
1,358,935,391,000 |
I've started a long running and machine hog process. I've hit CTRL-Z to stop it. I've then put it in the background with bg. Oops, I should have restarted with fg so that I could easily stop and start it again. What is the easiest way to stop a process that was just put into the background?
|
As noted, fg = foreground.
You can also run jobs to see them. Then %N can be used with fg or kill, e.g. fg %4.
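For instance (job numbers are illustrative):

```shell
sleep 100 &    # starts job [1]
sleep 200 &    # starts job [2]
jobs           # list the jobs with their %N numbers
fg %2          # bring job 2 to the foreground; Ctrl+Z stops it again
kill %1        # or signal a job by its job spec instead of a PID
```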
| Undo bg; Undo putting a process into the background? |
1,358,935,391,000 |
I suspect this is not doable just because of the security implications, but here's what I'd like to do.
Basically we run a bash shell-script on our CentOS server that calls Program-A (in our case JMeter, but that is arbitrary) which runs and dumps data into a log file. After that process finishes the shell-script starts up Program-B to analyze the log.
What I would like to do is stop the instance Program-A1 and replace it with another instance Program-A2, or somehow swap them in place, so I can safely end Program-A1 without starting up Program-B prematurely.
Why would I want to do this? The main reason is that Program-A loads some configuration files at its startup and if we make changes to those config files we have to restart the program for them to take effect, which I'd like to avoid.
I understand this is most likely not possible, but if it is I would greatly appreciate the information.
EDIT: I suppose I wasn't clear enough. Basically my shellscript looks like this:
./Program-A
./Program-B
When Program-A finishes dumping to our log, Program-B picks up the log and parses it. The problem I'm having is sometimes Program-A's settings are not set correctly or something is wrong with the environment when it starts which means that we'll need to run the whole thing again. We'd like to avoid that by just replacing our first instance of Program-A (calling it Program-A1) with a new instance of Program-A (we'll call this Program-A2). Does this make more sense to everyone?
The main reason we would like to do this is because Program-A and Program-B are actually a single part of a GIANT shell script that takes hours to run. Rather than restart the whole process for one individual part, we'd like to restart the single troubled program.
|
It looks to me as if you are trying to replace a RUNNING process from OUTSIDE the process. That's some radical stuff.
When I first looked at the question, it seemed that you were looking exactly for exec. But exec is called by the program itself, so unless you have written the process so that you can force it to exec another process, you can't do that while it is running.
You could potentially install a signal trap in your Program-A that execs a program with a predefined name (which you can then point at whatever you want), and then use kill on the process to force the exec. From the outside it will look like the process kept running: it does, it just becomes someone else. However, if you haven't built that in and you want to do this to an already-running process, I don't think you can do anything.
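A minimal sketch of that trap idea, assuming Program-A is itself a shell script (the work step and signal choice are illustrative):

```shell
#!/bin/bash
# On SIGUSR1, replace this process image with a fresh copy of itself,
# re-reading its config files; the PID stays the same.
trap 'exec "$0" "$@"' USR1
while :; do
    do_work_step   # placeholder for Program-A's real work
    sleep 1
done
```

Then kill -USR1 <pid> forces the re-exec from outside.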
However, if the outer shell is running, but it hasn't started the critical process yet, you can just replace the file of the problematic process.
| Replace instance of process in place? |
1,358,935,391,000 |
When I start a graphical application from a terminal running bash, that application is somehow connected to that bash session. For example, when the application dumps some text, it will appear in the bash session it was started from. Also, some applications get closed when I close the terminal using the close button, but not when I close the terminal by exiting the bash session using the exit command or Ctrl+D.
How is a graphical application started from a bash session connected to that bash session?
bonus question: How can I inspect this connection? probably also manipulate?
|
The application is connected in two ways: to bash, and to the terminal.
The connection to the terminal is that the standard streams (stdin, stdout and stderr) of the application are connected to the terminal. Typical GUI applications don't use stdin or stdout, but they might emit error messages to stderr.
The connection to the shell is that if you started the application with foo &, it remains known to the shell as a job, as explained in Difference between nohup, disown and &. When you close the terminal, the shell receives a SIGHUP, which it propagates to its jobs. When you type exit in the shell, it disowns the jobs beforehand (this is configurable to some extent).
You can sever the shell connection with the disown built-in. You can't sever the terminal connection, at least not without underhand methods (using a debugger) that could crash the program.
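For example, a sketch of severing the shell connection:

```shell
yes > /dev/null &   # job [1], still known to the shell
jobs                # lists the job
disown %1
jobs                # job list is now empty; the process keeps running
                    # and will no longer receive the shell's SIGHUP
```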
| How is a graphical application started from a bash session connected to that bash session? |
1,358,935,391,000 |
When I launch a background process and then close the terminal using the window's close button, the background process gets killed. However, if I close the terminal using Ctrl+D, the background process keeps running:
sam@Sam-Pc:~$ yes > /dev/null &
[1] 10219
// I then close the terminal a reopen a new one
sam@Sam-Pc:~$ ps aux | grep yes
sam 10295 0.0 0.0 15948 2152 pts/8 S+ 00:54 0:00 grep --color=auto yes
And now using Ctrl+D to close the terminal:
sam@Sam-Pc:~$ yes > /dev/null &
[1] 10299
sam@Sam-Pc:~$Ctrl-D
// I then reopen a new terminal
sam@Sam-Pc:~$ ps aux | grep yes
sam 10219 99.4 0.0 11404 812 ? R 00:52 2:01 yes
sam 10295 0.0 0.0 15948 2152 pts/8 S+ 00:54 0:00 grep --color=auto yes
Could anybody explain this behaviour?
Thanks!
|
If you close the window using the close button, the shell receives a SIGHUP as the terminal closes, and in turn sends SIGHUP to its background processes. The normal response of a process is to exit, so the background jobs are terminated.
On the other hand, if you press Ctrl+D, no signal is sent; instead an EOF (end of file) is indicated on stdin, and the shell (and terminal) exits. EOF basically means there is nothing more to read on stdin. Since an EOF does not trigger any action related to background jobs, they keep running.
| Difference between closing the terminal using the closing button, and Ctrl-D |
1,358,935,391,000 |
This kills every process with a handle to file /foo/bar (in bash):
lsof /foo/bar 2>&1 | grep "/foo/bar" | sed "s/ */\\t/g" | cut -f 2 | while read PID; do kill $PID; done
This does not seem like such an uncommon task that there wouldn't be an easier solution, so I'm wondering if there's something like killall or a switch to kill that I've missed which does the same.
|
That's what -t is for. The man page even suggests you'd use that for kill.
lsof -t /some/file | xargs kill
Traditionally (before the lsof days), you'd use:
fuser /some/file 2> /dev/null | xargs kill
for that.
Some fuser implementations, like the ones found on most Linux-based operating systems, Solaris, or recent FreeBSDs, can even do the killing themselves:
fuser -k /some/file
Note, however, that they send SIGKILL, not SIGTERM. You can choose a different signal with -TERM in some implementations and -s TERM in others.
| Better way to kill all processes with a handle to some file |
1,358,935,391,000 |
I cannot kill irq/${nnn}-nvidia with kill -9 or pkill -9 -f.
Does anyone know how to kill or stop those processes?
(I am using Ubuntu 16.04, if that is relevant.)
|
As @hobbs explained, it is a kernel thread. A broader perspective is the following:
IRQ handling is problematic in any OS because interrupts can arrive at any time. Interrupts can arrive even while the kernel is in the middle of working on a complex task and resources are inconsistent (pointers are pointing to invalid addresses and so on). This problem can be solved with locks, i.e. don't allow the interrupt handlers to run until the kernel is in an interruptible, consistent state. The disadvantage of using locks is that too many locks make the system slow and inefficient.
Thus, the optimal solution for the problem is this:
The kernel interrupt handlers are as short as possible.
Their only job is to move all relevant interrupt data into a temporary buffer
Some "background" thread works continuously on this buffer and does the real work on behalf of the interrupt handlers.
These "background" threads are the interrupt handler kernel threads.
You see them in top as normal processes.
However, they are displayed as if they use zero memory.
And yes, this is true, because no real user space memory belongs to them.
They are essentially kernel threads running in the background.
You can't kill kernel threads: they are managed entirely by the kernel. If you could kill it, the irq/142 handler for your nvidia driver wouldn't exist any more: if your video card sent an interrupt, nothing would handle it. The result would likely be a freeze; your video certainly wouldn't work any more.
The problem in your system is that this interrupt handler gets a lot of CPU resource. There are many potential reasons:
For some reason, the hardware (your video card) sends so many interrupts that your CPU can't handle all of them.
The hardware is buggy.
The driver is buggy.
Knowing the quality of the Nvidia drivers, unfortunately a buggy driver is the most likely.
The solution is to somehow reset this driver. Some ideas, ordered ascending by brutality:
Is it running some 3D accelerated process in the background? Google Earth, for example? If yes, stop or kill it.
From X, switch back to character console (alt/ctrl/f1) and then back (alt/ctrl/f7). Then most of the video will re-initialize.
Restart X (exit ordinarily, or type alt/ctrl/backspace to kill the X server).
Kill X (killall -9 Xorg). It is better if you do this from the character console.
If you kill X and you still see this kernel thread, you may try to remove the Nvidia kernel module (you can see it in the list given by lsmod, then you can remove it with rmmod). Restarting X will insmod it automatically, resetting the hardware.
If none of these work, you need to reboot. If an ordinary reboot doesn't work you can do this with additional brutality: use alt/printscreen/s followed by alt/printscreen/b.
Extension: as a temporary workaround you could try to give a very low priority to that thread (renice +20 -p 1135). Then it will still run, but it will have less impact on your system performance.
| How do I kill an IRQ process in Linux? |
1,358,935,391,000 |
I have a gnome-run application in my home folder. I have now added the application to run when I press Meta+R (I added it in in CCSM). I run the application by executing ./gnome-run in my home folder.
I can't find any trace of the application process in the output of ps -A.
The problem is that if I have the gnome-run program open and I press the key combination, I want the application to close. Is there a way to create a bash script that checks whether the application is running? If it is, close it; otherwise launch it.
|
This shell script should handle the starting and stopping of any program:
#!/bin/bash
BASECMD=${1%%\ *}
PID=$(pgrep "$BASECMD")
if [ "$?" -eq "0" ]; then
    echo "at least one instance of $BASECMD found, killing all instances"
    kill $PID
else
    echo "no running instances of $BASECMD found, starting one"
    $1
fi
let's say you saved it under ~/mystarter, you can run any command with it using ~/mystarter <name>, eg in your case, bind Meta+R to:
~/mystarter gnome-run
and make sure the script is executable: chmod u+x ~/mystarter. Also it's probably best to put it somewhere in your PATH, so you don't have to type its full location every time.
As for the fact that gnome-run doesn't show up in ps -A, make sure that gnome-run itself isn't a script that launches the actual process. Check if there is a difference between ps -A | wc -l before and after launching it (this counts all running processes).
Edit:
Since you've accepted the answer, I thought I'd add support for running commands that have commandline arguments, so that this might become a place of reference. Run a command like so:
./mystarter 'cmd args'
eg:
./mystarter 'ncmpcpp -c ~/.ncmpcpp'
The command just looks up ncmpcpp to see if it's running already, but executes the full command (with arguments) when ncmpcpp wasn't running.
| How do I check with a Bash script if an application is running? |
1,358,935,391,000 |
When I'm checking for some process, I usually write
ps aux | grep myprocess
And sometimes I get the output of
eimantas 11998 0.0 0.0 8816 740 pts/0 S+ 07:45 0:00 grep myprocess
if the process is not running.
Now I really wonder why grep is in the list of processes if it filters out the output of the ps command after ps has run?
|
This behavior is completely normal; it is due to how bash sets up a pipeline.
A pipe is implemented by bash using the pipe system call. After that call, bash forks and, in the child for the right-hand command (grep), replaces standard input (file descriptor 0) with the read end of the pipe. The main bash process then forks again, replaces standard output (file descriptor 1) with the write end of the pipe, and launches the left-hand command.
The ps utility is launched after the grep command, so you can see grep in its output.
If you are not convinced by it, you can use set -x to enable command tracing. For instance:
+ ps aux
+ grep --color=auto grep
+ grep --color=auto systemd
alexises 1094 0.0 0.8 6212 2196 pts/0 S+ 09:30 0:00 grep --color=auto systemd
For more explanation you can check this example of dup2() in C: http://www.cs.loyola.edu/~jglenn/702/S2005/Examples/dup2.html
| grep invading my ps [duplicate] |
1,358,935,391,000 |
I run 10+ different commands from 10+ different directories, and I need a better process to track everything.
I do a lot of debugging, and often times I need to work on multiple issues in parallel. I have lots of scripts that take 30-240 min to run, like:
create work area
compile code
run code to reproduce issue with debug info
run qualifications
I find myself asking the following questions:
What is running?
Why is it running? What was I trying to accomplish when I started the script?
What completed running and when? What was the exit code (pass/fail)?
Currently I make notes in a text file regarding what I was doing in each directory, and I manually check terminals or log files to get the running status of everything. Seems like there should be a better way.
|
If the issue is that you're finding it difficult to keep track of what you're working on as you bounce from task to task, you might want to take the time to look at a tool such as tmux and/or screen. These are virtual terminal servers that let you set up named terminals within them. This allows you to put some context on a terminal and keep multiple terminals going without cluttering up your desktop.
I typically use screen, so I'm more familiar with its workflow, but in general I set up a screen session like so:
screen -S appX
I then connect to it like so:
screen -r appX
Then within the space of appX you can set up different tabs/windows for work related to appX. I might set up a window called compiling while another might be log, where I tail the log file for the app. I can then either use the key combination Ctrl+A Ctrl+A to move from one tab/window to another, or split the terminal so that one window occupies the upper half and the other the lower half.
| Best practices for job/process tracking and multi-tasking |
1,358,935,391,000 |
Gnome's system monitor has a "User" column in the Processes tab. There's also an "Owner" column, (that seems to be hidden by default).
Most of the processes have the same values on both columns. However, a few don't.
I was wondering what exactly does each column show, and what's the difference between the two.
|
systemd is a brand-spanking-new init system (it's about 4 years old, I believe). However, systemd encompasses much more than PID 1. Specifically, it happens to include a replacement for ConsoleKit, the old software that managed TTY sessions, X11 sessions, and really just logins in general. systemd's replacement for ConsoleKit is called logind, and has a number of advantages (e.g. multi-seat is finally possible, other things that I'm not really sure about, etc.).
Now, systemd <3 cgroups. A lot. cgroups, aka process Control Groups, are how systemd keeps track of what processes belong to which abstract "service"1. The key to understanding your question is that logind does this for users too: each user session gets its own kernel "session", which is backed by - you guessed it - a cgroup. Why? Because then the kernel is able to manage resources appropriately among users. Just because one user is running a lot of processes doesn't mean she should get more CPU time. But with cgroups, each cgroup gets equal time on the processor, and so every user gets equal resources.
Okay, now we're done with the background. Ready? The actual answer to your question is extremely undramatic given the above build-up: the process "owner" corresponds to whoever started the process, no matter what. On a technical level, this is kept track of by a user session, backed by a cgroup. The process "user" is the traditional sense of "user": the identity that the process is running under (and everything that is associated with that identity, most notably permissions).
Here's an example: you log into GNOME and start a terminal. The process that's running GNOME Shell and GNOME Terminal and gnome-session and everything else that makes up GNOME is running as user: you (because you've provided your credentials and logged on) and it's owned by you, too (because it was your fault, so to speak, that the processes got started). Now let's say you sudo -u to e.g. nobody. You are now running a process that has assumed the identity of nobody, but at a higher, abstract level, the process was still started by you and it's still attached to your session2. This level is kept track of by your user cgroup3, and that's what determines the fact that you are the "owner".
1: take Apache, for example. When Apache starts up, it has one main process to control everything, but it also spawns a bunch of subprocesses. The main Apache process doesn't actually do any work: it just directs the subprocesses, and those processes are the ones that do all the work. (It's done this way for various reasons.) The fact that the abstract concept of the Apache "service" cannot be directly mapped to a concrete concept of "the" Apache process creates problems for service managers like systemd. This is where cgroups come in: the main, original Apache process is placed into a Control Group, and then no matter what it does, it cannot ever escape that cgroup. This means that the abstract concept of the Apache service can now be directly mapped to the concrete concept of the "Apache cgroup".
2: look at /proc/$pid/sessionid to get some information about a process' kernel session, where $pid is the PID of the process in question.
3: you can find out more information about a process' cgroup by taking a peek at /proc/$pid/cgroup, where $pid is, again, the PID of the process in question.
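Putting the two footnotes together, here is a quick sketch for inspecting both notions for the current shell (the sessionid file requires kernel audit support, hence the hedged redirect):

```shell
pid=$$
ps -o user= -p "$pid"                 # the identity it runs as: the "user"
cat "/proc/$pid/cgroup"               # the session cgroup backing the "owner"
cat "/proc/$pid/sessionid" 2>/dev/null; echo   # audit session id, if enabled
```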
| Process owner vs process user (Gnome's system monitor) |
1,358,935,391,000 |
One of the idioms often used to check if process is running is to use kill -s 0 $pid.
My question is, does it have any upsides over using [[ -e /proc/$pid ]] construct?
The script I'm writing is both Linux and bash specific.
|
I would prefer kill -s 0 pid vs testing /proc/pid as the former is portable, being specified by POSIX. Even if your script is targeting Linux, there is still a (very slight) risk for /proc to be unmounted for some reason.
| Using `kill -s 0 $pid` vs `[[ -e /proc/$pid ]]` to detect if PID is running |
1,358,935,391,000 |
I seem to have multiple bash processes running that are taking up most of my CPU. This is the output of top -c:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20116 terdon 20 0 35288 14m 292 R 400.0 0.2 0:00.43 /bin/bash
20106 terdon 20 0 35992 15m 280 R 95.9 0.2 0:00.65 /bin/bash
20105 terdon 20 0 0 0 0 R 57.6 0.0 0:00.83 [bash]
This is the output of ps aux | grep bash | head -3:
terdon 7487 45.3 0.0 0 0 ? R 19:31 0:01 [bash]
terdon 7488 66.0 0.0 0 0 ? R 19:31 0:01 [bash]
terdon 7530 23.0 0.2 37984 17408 ? R 19:31 0:00 /bin/bash
The PIDs change every time I run the command so it looks like something is constantly respawning bash.
Details:
There are multiple [bash] entries. If I understand correctly [process name] means that the process was launched with no command line arguments.
The PIDs change so something is spawning these.
I have logged out and logged back in (I am working in Cinnamon) and the problem persists.
Now, I imagine this will go away if I restart; my main question is what I can use to track these processes down.
top -c does not help, pgrep bash just gives me different lists of PIDs, lsof /bin/bash just lists running bash instances and pstree shows them as independent processes.
In case it is relevant, I am running Linux Mint Debian, kernel 3.2.0-4-amd64, GNU bash, version 4.2.36(1)-release.
EDIT:
I have since rebooted (I had to) and, as expected, the problem has gone away. I am still interested in useful suggestions of how to track down such processes though.
|
Take a look at the output of lsof | grep 'bash.*cwd'. That will tell you the current working directories of the processes.
If you have pstree, take a look at its output. If not, take a look at the output of ps aux -H. That will tell you which processes own these mystery processes.
Start looking through configuration files for anything suspicious. Here's an incomplete list of ones you should check:
~/.bash*
~/.profile
/etc/profile
/etc/bash*
/etc/cron.*/*
The [process name] means that ps can't find that process' arguments, including argument 0 which contains the name of the file that was executed to create the process. That means lsof /bin/bash won't find these processes.
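As an aside, one hedged way to hunt for such name-blanked processes is to read /proc directly (Linux-specific; note that the stat size of /proc files is always 0, so the usual -s emptiness test can't be used on cmdline):

```shell
# Find processes whose command line is empty but whose kernel-reported
# name is "bash" -- these are the ones ps shows as [bash].
for d in /proc/[0-9]*; do
    if [ -z "$(tr -d '\0' < "$d/cmdline" 2>/dev/null)" ] \
       && grep -q '^Name:[[:space:]]*bash$' "$d/status" 2>/dev/null; then
        echo "possible blanked bash: PID ${d#/proc/}"
    fi
done
```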
| Mysterious bash instances using a lot of CPU, how can I debug? |
1,358,935,391,000 |
I have a script which starts a number of background processes, and it works fine when called from the command line.
However the same script is also called during my xsession startup and additionally on some udev events. In both cases the background processes disappear.
I had put a sleep 10 into the script and could see that the background processes are indeed started, but once the script exits it takes the background processes with it. I tried to solve this by invoking the background processes with start-stop-daemon --background, but this makes no difference. However, I can invoke the script from a console, exit the session, and the background processes are still running.
Other than fixing my immediate problem (though any help would be much appreciated), I am keen to understand the logic behind it all. I suspect something related to the absence of a terminal.
|
Protect your processes with nohup:
nohup command-name &
You can also use this technique if you want to avoid having stdout and stderr redirected to nohup.out:
command-name & disown
| Understanding when background process gets terminated |
1,358,935,391,000 |
This is more of a process management/signal handling question than a Bash question. It just uses Bash to explain the issue.
I'm running a Bash script in which I run a background process.
This is the script:
#!/bin/bash
# ...
# The background process
{
while :; do
sleep 1 && echo foo >> /path/to/some/file.txt
done
} &
# ...
exit 0
I do NOT run the script itself in the background. Simply ./script.
The "huponexit" shell option is enabled using shopt -s huponexit, therefore when the terminal is being closed I expect it to send a HUP signal to Bash, which will propagate it until it reaches the background process. If the background process will not trap and ignore the signal, it will be killed too - but this is not happening. The background process acts as if it was disown'ed.
This is a scheme I drew to illustrate the issue. The scheme, along with the description above, may be wrong, as I'm sure I don't understand the subject well enough. Please correct me if that is the case.
I want to know why the background process is not being killed after closing the terminal, as if it was invoked by an interactive shell like that:
rany@~/Desktop$ while :; do sleep 1 && echo foo >> /path/to/some/file.txt; done &
I'm not sure but I guess that the answer to my question lies in the fact that Bash fork()s a non-interactive shell to run the script which might have a different set of rules for job control and signal handling.
|
So what does the man page tell us about huponexit?
If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits.
EDIT: Emphasizing that it is a LOGIN shell.
EDIT 2: interactive deserves equal emphasis
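In other words, huponexit never fires for the non-interactive, non-login shell that runs a script. A quick sketch to check what kind of shell you are in:

```shell
shopt -q login_shell && echo "login shell" || echo "not a login shell"
[[ $- == *i* ]] && echo "interactive" || echo "non-interactive"
```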
| A script's background process is still alive after closing the terminal |
1,358,935,391,000 |
For a while now I have had the problem that a gzip process randomly starts on my Kubuntu system, uses up quite a bit of resources, and causes my notebook fan to go crazy. The process shows up as gzip -c --rsyncable --best in htop and runs for quite a long time. I have no clue what is causing this; the system is a Kubuntu 14.04 and has no backup plan set up or anything like that. Any idea how I can figure out what is causing this the next time the process appears? I have done a bit of googling already but could not figure it out. I saw some suggestions involving the ps command, but grepping all its lines did not really point to anything.
|
Process tree
While the process is running try to use ps with the f option to see the process hierarchy:
ps axuf
Then you should get a tree of processes, meaning you should see what the parent process of the gzip is.
If gzip is a direct descendant of init then probably its parent has exited already, as it's very unlikely that init would create the gzip process.
Crontabs
Additionally you should check your crontabs to see whether there's anything creating it. Do sudo crontab -l -u <user> where user is the user of the gzip process you're seeing (in your case that seems to be root).
If you have any other users on that system which might have done stuff like setting up background services, then check their crontabs too. The fact that gzip runs as root doesn't guarantee that the original process that triggered the gzip was running as root as well. You can see a list of all existing crontabs by doing sudo ls /var/spool/cron/crontabs.
Logs
Check all the systems logs you have, looking for suspicious entries at the time the process is created. I'm not sure whether Kubuntu names its log files differently, but in standard Ubuntu you should at least check /var/log/syslog.
Last choice: a gzip wrapper
If none of these lead to any result you could rename your gzip binary and put a little wrapper in place which launches gzip with the passed parameters but also captures the system's state at that moment.
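A sketch of that last option. To stay self-contained, this demo lives entirely under /tmp with a fake gzip.real; on the real system you would first do something like mv /bin/gzip /bin/gzip.real, and the log path is purely illustrative:

```shell
mkdir -p /tmp/wrapdemo

# stand-in for the real, renamed binary (in real life: mv /bin/gzip /bin/gzip.real)
printf '%s\n' '#!/bin/sh' 'echo "real gzip ran: $*"' > /tmp/wrapdemo/gzip.real
chmod +x /tmp/wrapdemo/gzip.real

# the wrapper: record who called us and with what, then hand off untouched
cat > /tmp/wrapdemo/gzip <<'EOF'
#!/bin/sh
{
  printf '%s pid=%s ppid=%s args=%s\n' "$(date '+%F %T')" "$$" "$PPID" "$*"
  tr '\0' ' ' < "/proc/$PPID/cmdline"; echo    # the parent's command line
} >> /tmp/wrapdemo/gzip.log
exec /tmp/wrapdemo/gzip.real "$@"
EOF
chmod +x /tmp/wrapdemo/gzip

/tmp/wrapdemo/gzip -c --rsyncable --best
cat /tmp/wrapdemo/gzip.log
```

Next time the mystery gzip appears, the log tells you exactly which process invoked it and with which arguments.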
| A gzip process regularly runs on my system, how do I figure out what is triggering it? |
1,358,935,391,000 |
I use recoll to index files and it kicks in a inopportune times.
When I use htop to change the view to a tree view using F5 and filter the process list I see a master process running and child processes underneath it. When I press F9 to choose a termination option it doesn't seem to respond to the SIGTERM option so I have to use the SIGKILL option.
Is there an option to pause or stop the parent process and all its children rather than kill it outright?
|
You can press Space to tag a process. The kill command applies to all tagged processes.
There's no easy way to tag a process and its children, but the tree view (t) should list them contiguously.
Depending on how recoll is run, the processes may be in their own process group. If they are, then you can use kill -STOP -1234 to suspend them all, where 1234 is the process group ID (usually but not necessarily the process ID of the initial process in the group). You can check with ps -o pid,ppid,pgid,comm -C recoll, then ps -e -o pid,ppid,pgid,comm | grep -v recoll to see if there are any other processes in the process group. Htop doesn't have an interface to process groups.
If all the processes are called recoll, then an easier method is to filter the processes by name. In htop, use the filter command, then you can easily tag the processes you want to kill. On the command line, run ps $(pgrep recoll) to list the matching processes. If you're happy with the list, run pkill -STOP recoll to suspend those processes.
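A minimal illustration of the process-group variant, with sleep standing in for recoll. setsid puts the command into a fresh session and group whose PGID equals its PID, and the negative argument to kill targets the whole group:

```shell
setsid sleep 100 > /dev/null 2>&1 &
pid=$!
sleep 1                            # let it settle into its own group/session

kill -s STOP -- "-$pid"            # negative target = the whole process group
sleep 1
s1=$(ps -o state= -p "$pid")
echo "stopped state: $s1"

kill -s CONT -- "-$pid"
sleep 1
s2=$(ps -o state= -p "$pid")
echo "resumed state: $s2"

kill -s TERM -- "-$pid"            # clean up the demo
```

With job control off (a plain script), the backgrounded setsid does not fork, so $! is both the PID and the new PGID; the first ps should report state T (stopped) and the second S (sleeping again).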
| How can htop be used to suspend a process and all its child processes? |
1,358,935,391,000 |
I start a new process from GNOME Terminal and then this process forks a child.
But when I kill the parent process, the orphaned child's parent PID becomes the PID of init --user rather than 1.
When I do this in a virtual terminal, the parent PID is 1, which is the init process.
How can I execute a new process from GNOME Terminal so that when it dies, the child process's parent PID becomes 1 and not the PID of the init --user process?
Thanks a lot.
|
I already answered a similar question a few months ago. So see that first for technical details. Here, I shall just show you how your situation is covered by that answer.
As I explained, I and other writers of various dæmon supervision utilities take advantage of how Linux now works, and what you are seeing is that very thing in action, almost exactly as I laid it out.
The only missing piece of information is that init --user is your session instance of upstart. It is started up when you first log in to a session, and stopped when you log out. It's there for you to have per-session jobs (similar, but not identical, to MacOS 10's user agents under launchd) of your own.
A couple of years ago, the Ubuntu people went about converting graphical desktop systems to employ upstart per-session jobs. Your GNOME Terminal is being started as a per-session job, and any orphaned children are inherited by the nearest sub-reaper, which is of course your per-session instance of upstart.
The systemd people have been, in recent months, working on the exact same thing, setting up GNOME Terminal to run individual tabs as separate systemd services, from one's per-user instance of systemd. (You can tell that your question is about upstart, not systemd, because on a systemd system the sub-reaper process would be systemd --user.)
How can I execute a new process from GNOME Terminal so that the child process's parent PID becomes 1 and not the PID of the ubuntu session init process?
This is intentionally hard. Service managers want to keep track of orphaned child processes. They want not to lose them to process #1. So the quick précis is: Stop trying to do that.
If you are asking solely because you think that your process ought to have a parent process ID of 1, then wean yourself off this idea.
If you erroneously think that this is an aspect of being a dæmon, then note that dæmons having parent process IDs of 1 has not been guaranteed (and on some Unices, not true across the whole system) since the advent of things like IBM's System Resource Controller and Bernstein's daemontools in the 1990s. In any case, one doesn't get to be a dæmon by double-forking within a login session. That's a long-since known to be half-baked idea.
If you erroneously think that this is a truism for orphaned child processes, then read my previous answer again. The absolutism that orphaned children are re-parented to process #1 is wrong, and has been wrong for over three years, at the time of writing this.
If you have a child process that for some bizarre reason truly needs this, then find out what that bizarre reason is and get it fixed. It's probably a bug, or someone making invalid design assumptions. Whatever the reason, the world of dæmon management changed in the 1990s, and Linux also changed some several years ago. It is time to catch up.
Further reading
"Session Init". upstart Cookbook. Ubuntu.
James Hunt, Stéphane Graber, Dmitrijs Ledkovs, and Steve Langasek (2012-11-12). "Respawning user jobs and PID tracking". Ubuntu Raring upstart user sessions. Ubuntu.
Nathan Willis (2013-04-17). Upstart for user sessions. LWN.
systemd. systemd manual pages. freedesktop.org.
| Orphan process's parent id is not 1 when parent process executed from GNOME Terminal |
1,358,935,391,000 |
I ssh into a virtual server and start up a web server. Everything runs as expected. But when I close my terminal on my laptop, the process dies on the virtual server. How do I fix this?
|
You can use disown, it is a bash builtin:
disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active
jobs. If the -h option is given, each jobspec is not removed from the
table, but is marked so that SIGHUP is not sent to the job if the
shell receives a SIGHUP. If no jobspec is present, and neither the -a
nor the -r option is supplied, the current job is used. If no jobspec
is supplied, the -a option means to remove or mark all jobs; the -r
option without a jobspec argument restricts operation to running jobs.
The return value is 0 unless a jobspec does not specify a valid job.
Try this:
$ <your command> &
$ disown
First, make your command run in background by typing <your command> &, then use disown, it will make your command keep running even if your ssh session is disconnected.
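A quick sketch of both variants, with sleep as a stand-in for the server process (disown is a bash/zsh builtin, so it's guarded here for plain POSIX shells):

```shell
# variant 1: start the job, then detach it from the shell's job table
sleep 30 > /dev/null 2>&1 &
pid1=$!
command -v disown > /dev/null && disown   # no-op in shells without the builtin

# variant 2: start it immune to SIGHUP from the beginning
nohup sleep 30 > /dev/null 2>&1 &
pid2=$!

kill -0 "$pid1" 2>/dev/null && alive1=yes || alive1=no
kill -0 "$pid2" 2>/dev/null && alive2=yes || alive2=no
echo "job1 alive: $alive1, job2 alive: $alive2"
kill "$pid1" "$pid2" 2>/dev/null          # clean up the demo
```

nohup also redirects output for you (to nohup.out) if you don't do it yourself, which matters once the terminal is gone.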
IMHO, you should use a tool to control your service, like supervisord or writing your own init script.
| How do I start a server via ssh and have it run after I log out? |
1,358,935,391,000 |
sudo ps o pgid,comm reports something like 3029 bash, but the command has parameters --arbitrary -other -searchword. Is there a way to display these arguments?
|
Rather than formatting the output of ps and then using grep, you can simply use pgrep with -a option:
pgrep -a bash
This will show the command name (bash) along with its arguments (if any).
From man pgrep :
-a, --list-full
List the full command line as well as the process ID.
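For completeness, ps itself can print the arguments too, and pgrep -f matches against the full command line rather than just the name. A quick demo (the sleep is just a stand-in for a command with arguments):

```shell
sleep 321 > /dev/null &
pid=$!
sleep 1

line=$(ps -o pid=,args= -p "$pid")   # PID plus full command line
echo "$line"

pgrep -af 321                        # -f: match on an argument, not the name

kill "$pid"
```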
| How to find and print arguments of a command in ps? |
1,358,935,391,000 |
For example:
$ cat foo.sh
#!/usr/bin/env bash
while true; do sleep 1 ; done
$ ./foo.sh &
$ pgrep foo.sh
$
Contrast with:
$ cat bar.sh
#!/bin/bash
while true; do sleep 1 ; done
$ ./bar.sh &
$ pgrep bar.sh
21202
The process started by env bash shows up in the output of ps aux as:
terdon 4203 0.0 0.0 26676 6340 pts/3 S 17:23 0:00 /bin/bash
while the one started with /bin/bash shows as
terdon 9374 0.0 0.0 12828 1392 pts/3 S 17:27 0:00 /bin/bash ./bar.sh
which probably explains why the first is not being caught by pgrep. So, the questions are:
Why does the name of the script not show up when called through env?
Does pgrep simply parse the output of ps?
Is there any way around this so that pgrep can show me scripts started via env?
|
Q#1
Why does the name of the script not show up when called through env?
From the shebang wikipedia article:
Under Unix-like operating systems, when a script with a shebang is run
as a program, the program loader parses the rest of the script's
initial line as an interpreter directive; the specified interpreter
program is run instead, passing to it as an argument the path that was
initially used when attempting to run the script.
So this means that the name of the script is what's known by the kernel as the name of the process, but immediately after it's invoked, the loader execs the argument to #! and passes the script's path in as an argument.
However env doesn't do this. When it's invoked, the Kernel knows the name of the script and then executes env. env then searches the $PATH looking for the executable to exec.
How does /usr/bin/env know which program to use?
It is then env that executes the interpreter. It knows nothing of the original name of the script; only the kernel knows this. env simply execs the interpreter it found, handing the script's path over as an argument, and the interpreter then reads the file itself.
Q#2
Does pgrep simply parse the output of ps?
Yes, kind of. It's calling the same C libraries that ps is making use of. It's not simply a wrapper around ps.
Q#3
Is there any way around this so that pgrep can show me scripts started via env?
I can see the name of the executable in the ps output.
$ ps -eaf|grep 32405
saml 32405 24272 0 13:11 pts/27 00:00:00 bash ./foo.sh
saml 32440 32405 0 13:11 pts/27 00:00:00 sleep 1
In which case you can use pgrep -f <name> to find the executable, since it will search the entire command line argument, not just the executable.
$ pgrep -f foo
32405
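Putting the pieces together, a throwaway reproduction of the effect (using sh instead of bash so it stays POSIX; the paths are illustrative):

```shell
cat > /tmp/envdemo.sh <<'EOF'
#!/usr/bin/env sh
sleep 30 > /dev/null 2>&1
EOF
chmod +x /tmp/envdemo.sh

/tmp/envdemo.sh &
pid=$!
sleep 1

# the kernel ran env, which ran sh, so the process *name* is sh ...
pgrep envdemo > /dev/null && byname=found || byname=missed
# ... but the script path survives in the command line
pgrep -f envdemo > /dev/null && bycmd=found || bycmd=missed
echo "by name: $byname, by command line: $bycmd"

kill "$pid"
```

Expected behaviour per the answer above: the plain pgrep misses, the -f variant finds it.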
References
#!/usr/bin/env Interpreter Arguments — portable scripts with arguments for the interpreter
Why is it better to use “#!/usr/bin/env NAME” instead of “#!/path/to/NAME” as my shebang?
| Why can't pgrep find scripts started via env? |
1,358,935,391,000 |
Looking through syslog, I see lines like dd invoked oom-killer.
Does this mean dd is being killed by the oom-killer or does it mean dd asked oom-killer to go kill another high memory process?
|
dd triggered the OOM killer, which, in turn, killed the process with the highest OOM score.
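The kernel's choice is driven by per-process badness scores, which you can inspect under /proc - a quick, unprivileged look at the current top candidates (Linux-specific):

```shell
scores=$(
  for p in /proc/[0-9]*; do
    s=$(cat "$p/oom_score" 2>/dev/null) || continue   # process may have exited
    printf '%s %s %s\n' "$s" "${p#/proc/}" "$(cat "$p/comm" 2>/dev/null)"
  done | sort -rn | head -5
)
echo "$scores"    # highest score = first in line to be killed
```

Writing to /proc/<pid>/oom_score_adj (range -1000 to 1000) shifts a process's score; lowering it below its current value generally needs privilege.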
| Does a process invoking oom-killer kill itself? |
1,358,935,391,000 |
I was trying to change Linux process priority using chrt. I changed the priority of one process from SCHED_OTHER to SCHED_FIFO. I could see some improvement in performance. I run the Angstrom Linux distribution on my embedded system.
So if I use SCHED_FIFO for one process, how will other processes be affected? What precautions should be taken? I couldn't notice an apparent change in processor utilization. Thanks in advance.
|
As explained in sched_setscheduler(2), SCHED_FIFO is RT-priority, meaning that it will preempt any and all SCHED_OTHER (ie. "normal") tasks if it decides it wants to do something.
So, you should be absolutely sure it is well written and will yield control periodically by itself, because if it decides not to (e.g. it wants CPU time) the rest of your system will come to a complete halt until such time as your RT process decides to sleep (which may be "never").
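Two practical notes, assuming a stock kernel and util-linux's chrt: by default the kernel throttles the realtime classes, so a runaway SCHED_FIFO task usually still leaves a small slice of each second for everything else - but don't rely on that as a design feature:

```shell
pol=$(chrt -p $$)       # scheduling policy of this shell
echo "$pol"             # normally SCHED_OTHER

# RT tasks may run sched_rt_runtime_us out of every sched_rt_period_us
# microseconds; the usual defaults are 950000 of 1000000 (i.e. 95%)
cat /proc/sys/kernel/sched_rt_period_us \
    /proc/sys/kernel/sched_rt_runtime_us 2>/dev/null

# switching a process would look like (needs root / CAP_SYS_NICE):
#   chrt -f -p 10 <pid>
```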
| SCHED_FIFO and SCHED_OTHER |
1,358,935,391,000 |
For long running processes like init, I can do things like
$ cat /proc/[pid]/io
What can I do if I want to see stats for a briefly running process such as a command line utility like ls? I don't even know how to see the pid for such a briefly running process...
|
Basically, it sounds like you want general advice on profiling an application's I/O at runtime. You've been doing this with /proc/$PID/io which will give you some sort of idea of how much bandwidth is being used between disk and memory for the application. Polling this file can give you a rough idea of what the process is doing but it's an incomplete picture since it only shows you how much data is being shoved to and from disk.
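For reference, the polling loop itself is short - here with a small throwaway writer standing in for the real workload, and the log path purely illustrative:

```shell
# stand-in workload: three small writes over ~3 seconds
sh -c 'i=0
       while [ "$i" -lt 3 ]; do
         dd if=/dev/zero of=/tmp/iodemo bs=64k count=1 2>/dev/null
         sleep 1
         i=$((i+1))
       done' &
pid=$!

: > /tmp/iodemo.log
while kill -0 "$pid" 2>/dev/null; do
  grep -E '^(rchar|wchar)' "/proc/$pid/io" >> /tmp/iodemo.log 2>/dev/null
  sleep 1
done
cat /tmp/iodemo.log    # cumulative bytes read/written, sampled once a second
rm -f /tmp/iodemo
```

The obvious limitation, as noted above, is that anything shorter than the sampling interval slips through - hence the tracing approaches below.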
To solve your stated problem, you basically have the following options:
Use platform instrumentation. On Linux writing a SystemTap script is the most feature complete solution but depending on how hardcore you're wanting to go that may be more work than you're really willing to expend for the desired benefit.
Use application-based instrumentation. There are a lot of ways to do this, but gprof profiling is a popular option. Some applications may also provide their own instrumentation, but you'd have to check.
Probably, the best alternative is to use already existing platform instrumentation tools together to achieve the desired effect and to get the most out of it.
I'm not aware of a program that will kick off an application and do all this for you (doesn't mean there isn't by any means, just that I haven't heard of it) so your best bet is to just start gathering system-wide information and just filter for the PID you're concerned about after the fact (so that you get a full sample).
First things first, I would enable auditing of execve calls so that you can save the PID of the application you're kicking off. Once you have the PID, you can remove auditing.
Run mount -t debugfs debugfs /sys/kernel/debug to get debugfs mounted, so you can run blktrace.
On my system I ran blktrace -d /dev/sda -a read -a write -o - | blkparse -i - but you can adjust accordingly. Here is some example blktrace output:
8,0 15 3 1266874889.709440165 32679 Q W 20511277 + 8 [rpc.mountd]
In the above output the fifth column (32679) is the PID associated with the application performing the write. The parts we care about are the Q (event type, queued), the W (RWBS field; W means it's a write, and since there's no S in that field the implication is that it was an asynchronous one) and the 20511277 + 8 (the operation starts at block number 20511277 and goes on for another eight blocks). Determining read/write sizes is just a matter of adding the blocks together and multiplying by the block size.
blktrace will also tell you about more than just throughput, it will also let you see if there's anything going on with merges that you care about.
Once you have the blktrace running, you can spawn the process using strace -c, which will give you a feeling for the average latency associated with each system call (including read and write operations). Depending on how reliable each invocation needs to be, latency can be important; it can also tell you more about what the application is doing (pointing out areas worth tuning) without any application instrumentation.
Between those two you should get a pretty good sampling of what your program is doing without losing any data or possibly including the I/O of other applications. Obviously there are more ways to do this than what I've described but this is how I would've solved the problem.
One should also be able to collect I/O-related latency measures by messing with blkparse's output options, for instance. I just didn't because I haven't played with them enough.
| How can I see i/o stats for a briefly running process? |
1,454,631,843,000 |
I am writing a wrapper application to bash scripts and want the application to keep a track of which tools/processes have been launched from user scripts. I would like to know what is the best way to determine the list of child processes that were spawned of this parent process.
I tried
Periodically invoking the ps command and building a process tree (like ps -ejH), but this misses processes that run to completion very quickly.
Using a tool like forkstat, which uses the proc connector interface, but that only runs with elevated privileges. While it gives the correct data, running as sudo would not work in my case.
Any suggestions how this can be achieved?
|
If you're using Linux, you can use strace to trace system calls used by a process. For example:
~ strace -e fork,vfork,clone,execve -fb execve -o log ./foo.sh
foo bar
~ cat log
4817 execve("./foo.sh", ["./foo.sh"], [/* 42 vars */]) = 0
4817 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4818
4818 execve("/bin/true", ["/bin/true"], [/* 42 vars */] <detached ...>
4817 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=4818, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
4817 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4819
4817 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4820
4820 execve("/bin/echo", ["/bin/echo", "foo", "bar"], [/* 42 vars */] <detached ...>
4817 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=4820, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
4817 +++ exited with 0 +++
4819 execve("/bin/sleep", ["sleep", "1"], [/* 42 vars */] <detached ...>
You can see that the script forked off three processes (PIDs 4818, 4819, 4820) using the clone(2) system call, and the execve(2) system calls in those forked off processes show the commands executed.
-e fork,vfork,clone,execve limits strace output to these system calls
-f follows child processes
-b execve detaches from a process when the execve is reached, so we don't see further tracing of child processes.
| Get list of processes that were forked off my currently running process? |
1,454,631,843,000 |
Say you're in a directory and you want to check something in another directory, so you type bash to spawn a new shell so you can cd and check that thing and then type exit to get out and go back to the original shell still in that old directory.
Is there any way to keep both bashes running and switch between the two, sort of like Ctrl+Z + fg/bg?
|
You can use the suspend command to background the child shell and return to the parent shell.
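(If the underlying goal is just a quick look at another directory, note that a second shell isn't strictly needed either: cd - jumps back to the previous directory, and bash additionally offers pushd/popd. A trivial sketch, with illustrative paths:)

```shell
cd /tmp            # where the work is happening
cd /etc            # wander off to check something
ls > /dev/null
cd -               # straight back; cd - also prints the directory it returns to
pwd
```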
| Intelligently switch between multiple BASH processes |
1,454,631,843,000 |
What is the multiprocessing model for Linux? Is there a default or most commonly used model? Is it similar to or very different from, say, BSD or even the MS Windows kernel?
If SMP is used normally, can asymmetric be used instead if desired?
|
From Wikipedia:
Asymmetric multiprocessing (AMP) was a software stopgap for handling
multiple CPUs before symmetric multiprocessing (SMP) was available.
Linux uses SMP.
| What is the default or most commonly used multiprocessing model in Linux? Symmetric or Asymmetric? |
1,454,631,843,000 |
How do you start a process that cannot do any file IO (opening / closing files, creating / deleting files, reading / writing files, etc.), except to read and write to pre-created FIFOs?
(chroot will not work because the process can break out of it, but messing with / modifying the kernel and such is okay)
BTW: I cannot modify the programs that are being run
|
If
the program can be modified to make a system call of your choice before any of the untrusted code (this might be done via LD_PRELOAD), and
the program doesn't need to do any system calls beyond exit(), sigreturn(), read() and write()
then you can use seccomp (Wikipedia article). To allow for more than just those system calls there's seccomp-bpf, which uses the Berkeley Packet Filter to determine which system calls to allow. The libseccomp library simplifies seccomp-bpf, so (for example) if you wanted to allow the close() system call:
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(close), 0);
Or for something similar to chroot, but which can't be broken out of, you could try Linux containers, OpenVZ or Linux VServer.
| Disallow File IO for a process except for FIFOs |
1,454,631,843,000 |
I was wondering how many processes I can create on my machine (x64 with 8 GB of RAM, running Ubuntu). So I made a simple master process which continuously created child processes, and those child processes just slept all the time. I ended up with just 11-12k processes. Then I switched from processes to threads and got exactly the same result.
My pid_max is set to 32768, and all per-user limits are disabled. Physical memory usage is just a couple of bytes. Could you tell me what prevents the system from creating new threads at that point?
p.s. here is my source code for multiprocessing test written in C
#include <stdio.h>
#include <unistd.h>
int main() {
    pid_t pid;
    int count = 0;
    while (1) {
        pid = fork();
        if (pid == -1) {
            printf("total: %d\n", count);
            return 0;
        }
        if (pid == 0) {
            while (1) sleep(10);
        }
        count++;
    }
}
|
I think you hit either a number of processes limit or a memory limit.
When I try your program on my computer and reach the pid == -1 state, fork() returns the error EAGAIN, with error message: Resource temporarily unavailable. As a normal user, I could create approx 15k processes.
There are several reasons this EAGAIN could happen, detailed in man 2 fork:
not enough memory,
hitting a limit like RLIMIT_NPROC,
deadline scheduler specifics.
In my case, I think I just hit the RLIMIT_NPROC limit, aka what ulimit -u usually shows. The best is to display this limit within the program, so you have the real value, not your shell's limits.
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main() {
    struct rlimit rlim;
    getrlimit(RLIMIT_NPROC, &rlim);
    printf("RLIMIT_NPROC soft: %lu, hard: %lu\n",
           (unsigned long)rlim.rlim_cur, (unsigned long)rlim.rlim_max);
Which yields:
RLIMIT_NPROC soft: 15608, hard: 15608
total: 15242
Which looks reasonable as I have other processes running, including a web browser.
Now, as root, the limits don't really apply anymore and I could fork() much more: I created more than 30k processes, close to my 32k pid_max.
Now, if I take my normal user shell's PID (echo $$), and as root in another shell, I do: prlimit --pid $SHELLPID --nproc=30000, and then launch your program in this shell, I can create almost 30k processes:
RLIMIT_NPROC soft: 30000, hard: 30000
total: 29678
Finally: you should also consider memory usage, because on my system, I used a lot of RAM and swap to create all those processes, and maybe it was the limit you hit. Check with free.
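The relevant ceilings can all be inspected from a shell; the values vary per system, so treat the commands rather than the numbers as the useful part:

```shell
ulimit -u 2>/dev/null                 # RLIMIT_NPROC for this shell
cat /proc/sys/kernel/pid_max          # largest PID the kernel hands out
cat /proc/sys/kernel/threads-max      # system-wide cap on tasks
free -m 2>/dev/null | head -2         # memory and swap headroom matter too
```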
| What is a limit for number of threads? |
1,454,631,843,000 |
I've noticed that all mate-terminal instances I start, be it inside a mate-terminal or via a link button, have the same PID.
For example, I got something like
$ wmctrl -lp
<omitted lines that don't matter>
0x03c0001f 1 7411 <hostname> Terminal
0x03c06b9f 1 7411 <hostname> Terminal
0x03c07349 1 7411 <hostname> Terminal
0x03c073f4 1 7411 <hostname> Terminal
0x03c0749f 1 7411 <hostname> Terminal
0x03c0754c 1 7411 <hostname> Terminal
0x03c075f9 1 7411 <hostname> Terminal
0x03c076a6 1 7411 <hostname> Terminal
0x0340000b 1 <pid1> <hostname> xeyes
0x0460000b 1 <pid2> <hostname> xeyes
which clearly shows that there are multiple Terminal windows, all with the same PID. As stated above, it didn't matter, whether or not the process was started inside a terminal or by clicking a menu bar link. Neither did it matter, whether or not I started the process in the background inside the terminal.
What is the applied rule here, or "why is this so"?
My understanding used to be that every command I start in a shell would obtain a unique PID.
Isn't it kind of impractical to have multiple terminals with the same PID?
I can't kill them individually by PID anymore.
Edit: Kernel version: 3.16.0-4-amd64
|
All the instances of Mate Terminal have the same PID because they are in fact a single process which happens to display multiple windows. Mate Terminal runs in a single process because that's the way the application is designed. When you run the command mate-terminal, it contacts the existing process and sends it an instruction to open a new window.
As of Mate Terminal 1.8.1, you can run mate-terminal --disable-factory to open a new window in a new process. Beware that this option has been removed from the Gnome version in 3.10; I don't know whether the Mate developers have decided to merge that change. See Run true multiple process instances of gnome-terminal for a similar question regarding Gnome-terminal.
| Why do multiple instances of Mate-terminal have the same PID? |
1,454,631,843,000 |
Circumstances
I'm having a lecture on "Cloud Security" at University, and am currently doing a project on virtual machine introspection.
The goal of my exercise was to escalate the privileges of some process to root privileges, which I accomplished by "borrowing" the pointer to struct cred from init, using volatility and libvmi. Basically just like some rootkits do, but from one VM to another. I can prove the effect of this method by escalating the privileges of some process which repeatedly tries to write to a protected file. Just like that:
#!/bin/bash
while true
do
echo 1 >> /etc/test
id
sleep 2
done
This leads to the following output (at the time, when privileges change):
# last time permission is denied
./test.sh: line 3: /etc/test: Permission denied
uid=1000(tester) gid=1000(tester) groups=1000(tester),24(cdrom),
25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),
111(scanner),115(bluetooth)
# tada, magic
# now I'm root
uid=0(root) gid=0(root) groups=0(root)
The Question
So I can prove some process (bash in above example) has root privileges now. But when I look at the process using ps u or directly via /proc/$PID, UID and GID don't seem to have changed.
Output of ps -A u | grep 2368:
Before:
# ...
tester 2368 0.0 0.9 23556 4552 pts/2 S+ 22:24 0:00 -bash
After:
# ...
tester 2368 0.0 0.9 23556 4552 pts/2 S+ 22:24 0:00 -bash
Nothing has changed here.
Also /proc/$PID/status hasn't changed:
~# cat /proc/2368/status | grep Uid
Uid: 1000 1000 1000 1000
~# cat /proc/2368/status | grep Gid
Gid: 1000 1000 1000 1000
So can you explain, why they don't change there, and where that information came from, when they are not taken from the process's struct cred, which has obviously been changed.
|
Tasks do not have a struct cred. They have two struct cred's:
* A task has two security pointers. task->real_cred points to the objective
* context that defines that task's actual details. The objective part of this
* context is used whenever that task is acted upon.
*
* task->cred points to the subjective context that defines the details of how
* that task is going to act upon another object. This may be overridden
* temporarily to point to another security context, but normally points to the
* same context as task->real_cred.
I checked which one /proc shows you, but you can probably guess :-P.
(See fs/proc/, using https://elixir.bootlin.com . The procfs "status" file is defined in fs/proc/base.c.)
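To see this yourself for a live process - /proc/<pid>/status reports four values each for Uid and Gid (real, effective, saved, filesystem):

```shell
grep -E '^(Uid|Gid):' "/proc/$$/status"   # the four credential columns
id -u                                     # effective UID as seen from inside
```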
| Where are UID/GID of a process from, when not from the process's cred pointer? |
1,454,631,843,000 |
I am kind of new to bash and have been playing with it on and off for about a month.
While trying to understand how nested command groups work, I tried the following command:
((ps j; ps j); ps j; ps j)
Now, what I was expecting is that the nested group would produce a separate process group with a new bash shell as the group leader.
A new bash shell is created but for some reason the nested bash shell is in the same process group as the bash shell above it.
Why is this? Is it perhaps because I am trying to view process information statically?
|
As a first guess, I would assume that subshells started with ( .. ) don't use job control, in the same way that noninteractive scripts don't. However, $- seems to contain the m for job control even inside parentheses (as well as i for interactive):
$ echo $-
himuBs
$ bash -c 'echo $-'
hBc
$ ( echo $-; )
himuBs
But I think that's a bit of a lie, since explicitly enabling job control makes some process groups appear.
These are all in one PG:
$ ( (ps j; ps j); ps j;) | awk 'NR == 1 || /[p]s/'
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
32524 32525 32522 32368 pts/23 32522 R+ 1000 0:00 ps j
32524 32526 32522 32368 pts/23 32522 R+ 1000 0:00 ps j
32522 32527 32522 32368 pts/23 32522 R+ 1000 0:00 ps j
These aren't:
$ ( set -m; (ps j; ps j); ps j;) | awk 'NR == 1 || /[p]s/'
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
32518 32519 32518 32368 pts/23 32516 R 1000 0:00 ps j
32518 32520 32518 32368 pts/23 32516 R 1000 0:00 ps j
32516 32521 32521 32368 pts/23 32516 R 1000 0:00 ps j
| Nested command groups do not produced nested process groups? |
1,454,631,843,000 |
I have been in a situation where my desktop has crashed and become unresponsive. (In my case it was the Cinnamon DE, and I have yet to try cinnamon --replace from the commandline, BTW)
I was using a download manager type GUI app to download a large file, and it was clear that the process was still running quite happily even though the GUI was borked. If I killed X I would kill all the child processes and be forced to restart my download etc.
Is it possible to create a surrogate X session, detach existing GUI processes and hitch them to the "dummy" session, restart the real X session and finally re-hitch the GUI process back to the new, healthy X session? If so, how?
|
In theory, a program that loses its connection to the X server could just try reconnecting until a new X server is available. In fact, I've written programs that do this. It requires extra code, because you have to re-run your GUI-initialization routine to re-create your resources (windows, bitmaps, fonts, etc) on the new X server, and refresh all your program's internal data structures to use these new resources.
Sadly, almost no X program I've ever seen is willing to do this. They just crash out because all the reconnect/re-setup is too much trouble. And more sadly, they can't be tricked into switching X servers because that code to re-init their graphic resources doesn't exist in that program. So for most programs, they're doomed if they lose the X connection.
As XTaran mentioned, there is a neat relay/shim/proxy program called xpra which acts like an X server to clients, and can then re-initialize their resources on any other X server, allowing you to move all the programs between X servers like you wanted. When I used it 10 years ago, it had a lot of bugs. I'm sure they've made progress since then, but you'll need to find out whether it's stable enough for everyday desktop use.
| Can I attach a GUI process to a "surrogate X server"? |
1,454,631,843,000 |
I have a shell-wrapper around a large executable. It does something like this:
run/the/real/executable "$@" &
PID=$!
# perform
# a few
# minor things
wait $PID
# perform some
# post-processing
One of the things it does after the wait is check for core dumps and handle the crashes; however, by then the process is already dead and some information is no longer available.
Can the fatal signal (SIGSEGV or SIGBUS) be intercepted by the shell script before it is delivered to the child itself?
I'd then be able to, for example, perform lsof -p $PID to get the list of files opened by the wrapped process before it dies...
Update: I tried using strace to catch the process receiving a signal. Unfortunately, there seems to be a race -- when strace reports the child's signal, the child is on its way out and there is no telling, whether the lsof will get the list of its files or not...
Here is the test script, which spawns off /bin/sleep and tries to get the files it has opened for writing. Sometimes /tmp/sleep-output.txt is reported as it should be; other times the list is empty...
ulimit -c 0
/bin/sleep 15 > /tmp/sleep-output.txt &
NPID=$!
echo "Me: $$, sleep: $NPID"
(sleep 3; kill -BUS $NPID) &
ps -ww $NPID
while read line
do
set -x
outputfiles=$(lsof -F an -b -w -p $NPID | sed -n '/^aw$/ {n; s,.,,; p}')
ps -ww $NPID
lsof -F an -b -w -p $NPID
break
done < <(strace -qq -p $NPID -e trace=signal 2>&1)
echo $outputfiles
wait $NPID
The above test requires use of ksh or bash (for the < <(...) construct to work).
|
As far as I know, there are no shell methods to do what you're trying, it will have to be done from a custom program.
Use ptrace() to monitor the process, similar to the way a debugger does. When the process receives a signal, it will be stopped, and the monitoring program will be notified (its call to wait() will return, and WIFSTOPPED(status) will be true).
It can then run lsof -p <pid> to list the open files of the process, and then call ptrace(PTRACE_CONT, pid, NULL, 0) to restart the process.
| Can a Linux-process intercept signals sent to its child? |
1,454,631,843,000 |
I am working on some batch scripts involving the following:
Run some non-terminating sub-processes (asynchronously)
Wait for t seconds
Perform other task X for some time
Terminate subprocesses
Ideally, I would like to be able to differentiate the stdout of the sub-processes which has been emitted before X from that which has been emitted after X.
A few ideas come to mind, although I have no idea as to how I would implement them:
Discard stdout for t seconds
Insert some text (for instance, 'Task X started') to visually separate the sections
Split stdout into various output streams
|
While you could complicate the matter with exec and extra file descriptor wrangling, your second suggestion is the simplest. Before starting X, echo a marker string into the log file.
All those commands would be appending to the same file, so it may be a good idea to prefix all output of X with a marker, so you can tell its output apart from that of the still-running earlier commands. Something along the lines of:
{ X; } | sed 's,^,[X say] ,'
This would make further analysis much simpler. It is not race-free, though: with very verbose programs, interleaved lines would happen often.
If you're willing to take the chance to break one log line and can interrupt the first batch of apps without consequence, this would work too:
{ Y; } >> log &
sleep $t
kill -STOP %% # the last job, same as %1 here
echo -e "\nX started" >> log
kill -CONT %%
{ X; } >> log2
| Discard stdout of a command for t seconds |
1,454,631,843,000 |
I want to write a script that will detect whether a particular desktop application is responding and kill it. Is this possible?
I know I've seen the GNOME desktop put up a "Application is not responding" dialog, and I figure it sends some sort of signal to the window and waits a certain amount of time for a response. If there's a way to do something like that, I'd appreciate some details. Thank you!
(This is on xfce, if it matters)
|
I can comment on Gnome's "Application is not responding" dialog, but not directly answer your question.
It seems that both Metacity and Mutter use meta_display_ping_window() function to determine the status of a window (read the doc comment in display.c).
The default timeout PING_TIMEOUT_DELAY is 5 s. Ping-timeout and response are handled internally by the window manager and at a glance I don't see a simple method to watch this ping-pong party from outside.
| How to detect a desktop application hanging |
1,454,631,843,000 |
Is it possible to run a Java process in Linux in a way that it could be seen in ps as some sort of alias? It would be easier to restart it when it is down.
|
Try the Java Virtual Machine Process Status Tool (jps):
[Tue Aug 30@17:02:14][prince@localhost ~]$ jps -l
30207 sun.tools.jps.Jps
29947 org.netbeans.Main
| How to run java process to be seen not as 'java...' in processes list? |
1,454,631,843,000 |
I am trying to set up a Thin Ruby application server on my Ubuntu VPS. I have created a specific account, installed rbenv under it along with all gems.
I am looking for a convenient way to obtain the following objectives:
Run my Thin Rack application under my non-privileged user account.
Make the application run as a daemon
Have the daemon run automatically whenever the system boots
Make the daemon restartable
Make the application accessible to Nginx through a unix domain socket.
Objectives two and three are the trickiest. Is it possible to define scripts for a user that are run as that user whenever the system boots?
|
For starting at boot time add a line to your users crontab file (using crontab -e):
@reboot /path/to/your/script with parameters
The actual contents of that script vary with your needs. It might just start the daemon, or it might start a somewhat more intelligent agent that you pass a configuration. That way you can have your service automatically restarted if it for some reason dies unexpectedly.
| Running a daemon as a non-privileged user |
1,454,631,843,000 |
From the book Advanced programming in the Unix environment I read the following line regarding threads in Unix like systems
All the threads within a process share the same address space, file
descriptors, stacks, and process-related attributes. Because they can
access the same memory, the threads need to synchronize access to
shared data among themselves to avoid inconsistencies.
What does the author mean by stacks here? I do Java programming and know each thread gets its own stack. So I am confused by the shared-stacks concept here.
|
In the context of a Unix or linux process, the phrase "the stack" can mean two things.
First, "the stack" can mean the last-in, first-out records of the calling sequence of the flow of control. When a process executes, main() gets called first. main() might call printf(). Code generated by the compiler writes the address of the format string, and any other arguments to printf() to some memory locations. Then the code writes the address to which flow-of-control should return after printf() finishes. Then the code calls a jump or branch to the start of printf(). Each thread has one of these function activation record stacks. Note that a lot of CPUs have hardware instructions for setting up and maintaining the stack, but that other CPUs (IBM 360, or whatever it's called) actually used linked lists that could potentially be scattered about the address space.
Second, "the stack" can mean the memory locations to which the CPU writes arguments to functions, and the address that a called-function should return to. "The stack" refers to a contiguous piece of the process' address space.
Memory in a Unix or Linux or *BSD process is a very long line, starting at about 0x400000, and ending at about 0x7fffffffffff (on x86_64 CPUs). The stack address space starts at the largest numerical address. Every time a function gets called, the stack of function activation records "grows down": the process code puts function arguments and a return address on the stack of activation records, and decrements the stack pointer, a special CPU register used to keep track of where in the stack address space the process's current variables' values reside.
Each thread gets a piece of "the stack" (stack address space) for its own use as a function activation record stack. Somewhere between 0x7fffffffffff and a lower address, each thread has an area of memory reserved for use in function calls. Usually this is only enforced by convention, not hardware, so if your thread calls function after nested function, the bottom of that thread's stack can overwrite the top of another thread's stack.
So each thread has a piece of "the stack" memory area, and that's where the "shared stack" terminology comes from. It's a consequence of a process address space being a single linear chunk of memory, and the two uses of the term "the stack". I'm pretty sure that some older JVMs (really ancient) in reality only had a single thread. Any threading inside the Java code was really done by a single real thread. Newer JVMs, JVMs who invoke real threads to do Java threads, will have the same "shared stack" I describe above. Linux and Plan 9 have a process-starting system call (clone() for Linux, rfork() in Plan 9) that can set up processes that share parts of the address space, and maybe different stack address spaces, but that style of threading never really caught on.
| What is meant by stack in connection to a process? |
1,454,631,843,000 |
I want to find the sessions which only have dead panes, to then kill them. How do I do this?
|
You can list all panes and then filter by dead panes:
tmux list-panes -a -F "#{pane_dead} #{pane_id}" | grep "^1"
And you can kill them with
tmux kill-pane -t PANE_ID
combine this into:
tmux list-panes -a -F "#{pane_dead} #{pane_id}" | \
awk '/^1/ { print $2 }' | xargs -l tmux kill-pane -t
| tmux: Find all sessions that only have dead panes |
1,454,631,843,000 |
I was wondering if the process table in the Linux OS has a limit.
Can it get full? And if so, what would I do to make space (maybe try deleting entries for zombie processes) ?
|
Run sysctl kernel.pid_max kernel.threads-max to see the current maximum limits for processes and threads respectively. (Each process occupies at least one thread; more if multithreaded.)
The "factory default" process limit might be 32768 on desktop-oriented distributions, or something rather higher in enterprise-oriented distributions. You can use /etc/sysctl.conf to increase the limits up to 4194304 (at least) in modern 64-bit systems.
(The maximum was 4194304 in kernel version 3.10.25; it may have been increased further since then.)
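For example, to raise the limits persistently you could put something like this in /etc/sysctl.conf and apply it with sysctl -p (the values are illustrative; pick ones that suit your workload):

```
# /etc/sysctl.conf -- illustrative values for a modern 64-bit system
kernel.pid_max = 4194304
kernel.threads-max = 2000000
```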
You cannot delete zombie processes, they are already dead. What you should do is kill the evil zombie master, i.e. the parent process of the zombies, because the presence of zombies indicates the parent is not doing its job properly. Once the negligent parent process has been killed, the zombies will get adopted by process #1 which will normally clean them up pretty much immediately.
The parent process should either always check the return code of its child processes when it gets notified that its child has died, or it should arrange for the child processes to get disowned at start-up, so the process #1 (usually /sbin/init) can adopt them. Process #1 has a special responsibility of adopting any otherwise parentless processes and taking care of their death notifications.
| Process table limit |
1,454,631,843,000 |
I have a bug in my Linux app that is reproducible only on single-core CPUs.
To debug it, I want to start the process from the command line so that it is limited to 1 CPU even on my multi-processor machine.
Is it possible to change this for a particular process, e.g. to run it so that it does not run (its) multiple threads on multiple processors?
|
You can use taskset from util-linux.
The masks
may be specified in hexadecimal (with or without a leading "0x"), or as
a CPU list with the --cpu-list option. For example,
0x00000001 is processor #0,
0x00000003 is processors #0 and #1,
0xFFFFFFFF is processors #0 through #31,
32 is processors #1, #4, and #5,
--cpu-list 0-2,6
is processors #0, #1, #2, and #6.
When taskset returns, it is guaranteed that the given program has been
scheduled to a legal CPU.
| Run process as if on a single-core machine to find a bug |
1,454,631,843,000 |
I am reading some Linux documentation about 'signals' and I still have these questions making noise in my mind:
1) A 'signal' handler execution is done when the 'target' process receives its execution token from the Scheduler?
2) Or 'signal' handler execution takes place in whatever process 'context' happens to be executing when the 'signal' was sent? (The same style as a hardware ISR).
3) What about process execution priorities? Are they swept away when dealing with 'signals'?
|
1) The signal handler is executed the next time the target process returns from kernel mode to user mode.
This occurs either when the process is scheduled to run again after a hardware interrupt (and it wasn't already running in kernel mode), or when the process returns from a system call (on some architectures, these are the same thing).
In normal operation, when leaving kernel mode, your process will simply return to the next instruction after the point where it originally left user mode.
However, if a signal is pending for your process, the kernel will re-write your process's context such that the return to user mode will instead go to the first instruction of your signal handler, and your stack will have been modified to look as if you had made a "special" subroutine call to the signal handler at the point where you originally left user mode (the return from this "special" subroutine call involves making a system call to restore the original state).
For details read this, this and this.
So the 'target' process may receive its execution token from the Scheduler any number of times before the signal handler is finally executed (if it happens to stay in kernel mode for some reason).
2) No - the signal handler will only ever execute in the user mode context of your process.
3) There aren't really any execution priorities in a time-shared system such as Linux, unless you count the nice value of a process, so you can't sweep away something that isn't there.
Things are complicated by threads and so-called real time scheduling policies, so the comments above are only valid for single-threaded processes running with non-real-time scheduling policies (the only sort of process that existed in the good old days :-).
| Signal execution details |
1,454,631,843,000 |
Is there an existing tool for Solaris/Unix that keeps a history trail of the list of running processes? I'd like to be able to review backwards in time what processes were active/running.
I can create a cron job that just regularly logs the output of ps into files, but this is crude and over a large server farm seems inefficient and can create many files.
And I need full command arguments so it has to be /usr/ucb/ps auxww output, ideally with cpu times, state, rss, pid, ppid, zone information.
Also, if possible the output should be easy to parse--e.g. in a consistent delimited format or some other.
|
Use auditing.
Solaris Auditing (Overview)
Auditing generates audit records when specified events occur. Most
commonly, events that generate audit records include the following:
System startup and system shutdown
Login and logout
Process creation or process destruction, or thread creation or thread destruction
Opening, closing, creating, destroying, or renaming of objects
Use of privilege capabilities or role-based access control (RBAC)
Identification actions and authentication actions
Permission changes by a process or user
Administrative actions, such as installing a package
Site-specific applications
Audit records are generated from three sources:
By an application
As a result of an asynchronous audit event
As a result of a process system call
A good blog article on Solaris auditing can be found here.
| Is there a process logging tool for solaris? |
1,454,631,843,000 |
If I ssh to a box and start a task that will take some time to complete, I usually press control+z to pause the process, and then immediately type bg 1 to run it in the background.
I can then type jobs and see it running.
If I disconnect (type exit, press control+d, etc) and then log back in, I can no longer type jobs to see it running - it won't show anything.
I know I can type something like
ps -u `whoami`
to see what items are running, but I'm not sure if I can pause them any longer. I know I can kill them, but is there a way to pause them or can I somehow get them to show back up in the jobs list?
Linux-fu tips regarding jobs and process management are also welcome and will be upvoted.
|
You can use kill -STOP pid to pause a job and kill -CONT pid to resume it. You get the proper pid from the ps command you already know.
| How can I manage jobs after I disconnect from my tty/ssh session? |
1,454,631,843,000 |
Possible Duplicate:
How to check how long a program has been running?
I am interested in doing this purely using bash.
|
ps can fit your needs:
ps -eo pid,command,etime
To get information for a specific process:
ps -o command,etime -p PID
| How to know how long a process has been running? [duplicate] |
1,454,631,843,000 |
I am running some tests on Amazon EC2 instances and we want to keep the CPU busy at above 80%.
What I have is a program main that needs to run in high priority and I want to launch another program, preferably some math C code or a bash script that loads the CPU to over 80%.
What programs are there for such a task, and how do I make my program run with the highest priority?
PS: Running Fedora.
|
Occupying one CPU at 100% (minus overhead) is easy in the shell:
while true; do :; done
If you want to reduce the load, introduce sleeps.
i=0; while [ $i -ne 0 ] || sleep 0.001; do i=$(( (i+1) % 10000 )); done
Tune 10000 up or down to get the desired load.
The scheduling priority is set by nice. You'll need to be root to set a higher-than-default priority. Note that a negative niceness means high-priority (a positive niceness means be nice, i.e. low priority).
nice -n -20 sh -c 'while …'
| Program to test CPU load and process priority |
1,454,631,843,000 |
I terminated the gnome-pty-helper process with
$ kill -9 9753
After a while it was running again with another process number.
There wasn't a restart of a system.
Why is it located under ~/.config/gnome-pty-helper?
$ file gnome-pty-helper
gnome-pty-helper: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, stripped
Why is it in the crontab?
*/10 * * * * sh -c " /home/xralf/.config/gnome-pty-helper"
Can I delete the file and the line from the crontab? If not, why?
|
gnome-pty-helper is automatically started by the VTE library when necessary. If you want to avoid its running (why?), you should avoid using anything built with the VTE library (libvte*).
The elements you’ve added recently make this look more like a compromise of some sort: ~/.config shouldn’t contain binaries, and gnome-pty-helper certainly doesn’t belong in your crontab. It’s a nice name for a virus of some sort though because it wouldn’t draw attention in a process listing...
You can delete the file and the crontab entry (but watch out for their coming back obviously). It might be worth keeping a copy of the file somewhere safe (on a noexec file system) to try to figure out what it was doing...
| Prevent gnome-pty-helper from running again |
1,454,631,843,000 |
I read that there is a 1:1 mapping of user and kernel threads in Linux.
What is the difference between PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM on Linux? If the kernel schedules every thread like a process, then there should not be any performance difference. Correct me if I'm wrong.
|
According to the man page:
Linux supports PTHREAD_SCOPE_SYSTEM, but not PTHREAD_SCOPE_PROCESS
And if you take a look at the glibc's implementation:
/* Catch invalid values.  */
switch (scope)
  {
  case PTHREAD_SCOPE_SYSTEM:
    iattr->flags &= ~ATTR_FLAG_SCOPEPROCESS;
    break;

  case PTHREAD_SCOPE_PROCESS:
    return ENOTSUP;

  default:
    return EINVAL;
  }
| Pthread scheduler scope variables? |
1,454,631,843,000 |
I have some confusion as to what's the correct value to use for the number of CPU's I can use to make a CPU_SET for a sched_setaffinity call on my system.
My /proc/cpuinfo file:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i5 CPU M 460 @ 2.53GHz
stepping : 5
microcode : 0x2
cpu MHz : 1199.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 5056.34
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i5 CPU M 460 @ 2.53GHz
stepping : 5
microcode : 0x2
cpu MHz : 1199.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 1
initial apicid : 1
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 5056.34
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i5 CPU M 460 @ 2.53GHz
stepping : 5
microcode : 0x2
cpu MHz : 1199.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 2
apicid : 4
initial apicid : 4
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 5056.34
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i5 CPU M 460 @ 2.53GHz
stepping : 5
microcode : 0x2
cpu MHz : 1199.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 2
apicid : 5
initial apicid : 5
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 5056.34
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
In this file there are processor lines numbered 0-3, for "physical" processors (4 processors total). I can get this value from sysconf(_SC_NPROCESSORS_ONLN) but, there is also a line for cpu cores and each processor has 2. I believe this represents the "logical" processors or hyperthreading that is accounted for. Should I be using only the "physical" value or can I use the "logical" count?
I'm not clear on this because if I go to /proc/PID/status there's the line Cpus_allowed_list and that can range from 0-7 (8 processors total), but I also wrote a script to call taskset -c -p PID for every PID running, and this shows every process as having an affinity list of 0-3 at most.
|
Your CPU is a dual core CPU with hyperthreading Intel® Core™ i5-460M Processor
This means you have 2 cores and they are physical CPUs.
You also have hyperthreading, so you have 4 logical CPUs.
taskset was designed because migrating tasks between the cores of a multicore CPU used to cost performance: tasks normally did not use hyperthreading, and CPUs had only separate caches. You have a hyperthreading CPU, so you'll never know which physical CPU is in use, and migrating tasks normally does not cost performance because they use the same cache. Intel's smart (unified) cache seems to make taskset obsolete. However, using taskset on a NUMA system still makes sense.
A benchmark can answer whether you can increase performance using taskset here.
| What's the correct value to base the maximum number of CPU's to sched_setaffinity to? |
1,454,631,843,000 |
After a process has been terminated, is it possible to retrieve any information about it, such as when it started and finished, &c.
Is it possible to enable logging of this information?
|
In general, only if you log it at the time. Probably the most straightforward way would be to use the kernel's auditing features using auditd, and configure the system calls that you care about logging.
| What information about a process is retrievable after it is terminated? |
1,454,631,843,000 |
I've been studying process management using shell scripts and I'm starting to realise how difficult it is to make sure that it's done right.
For example, you can record the PID of a program to a file, wait on it, and clean up the PID file after the program exits.
If you were to try and kill this daemon from an init script, for example, you might think of doing something like this:
do_stop() {
kill $(</var/run/program.pid)
}
This obviously doesn't work. Between obtaining the PID and sending the kill signal, another process could have died and taken its place.
The correct way seems to require using IPC in the parent of the program to send a kill signal to its child. This will ensure that the PID of the process hasn't been reused by another.
I've been trying to write my own init scripts that are as correct as possible. In this case, I've been writing one for NRPE. NRPE unfortunately daemonizes and disowns itself to init, which means I can't wait on it. Instead, I came up with the following solution:
do_stop() {
echo "Stopping (sending SIGTERM to) nrpe"
pkill -u nrpe || { echo >&2 "nrpe isn't running"; exit 1; }
}
The only process that the nrpe user runs is NRPE itself, and considering the system is under my control I consider this a relatively sane solution.
What I'm curious about is the atomicity of pkill (if that's the right word to use). I assume pkill follows these steps:
Looks up the PID in the process table after parsing the arguments for the process criteria.
Sends SIGTERM (by default) to the obtained PID
Let's say pkill -u nrpe gives a PID of 42 in step 1. Is it possible that nrpe's process could die and another one could spawn in its place before step 2 occurs?
|
You are correct to suspect that there is a (small!) atomicity problem.
No matter what method you use, whether it's a system-standard utility like start-stop-daemon, a roll-your-own PID file, using pkill to query and kill by user ID, by executable binary, or whatever, there is always an interval between finding what process you want to kill and giving that process ID to the kill system call to send it a signal.
Basically, you should just not worry about it. In order to run into trouble, both of the following would have to happen:
The target process dies between the time you identity its process ID and the time you actually kill it.
The process IDs for newly created processes would have to, during the same time interval, happen to cycle around and reuse the process ID just vacated.
It's just really unlikely.
Note that in the particular case you're studying, you actually do have a way of protecting yourself against this. The only process that the nrpe user runs is NRPE itself, so if you switch to the nrpe user (from root, probably) before issuing the kill command, you might under very unlikely circumstances try to kill a poor innocent process belonging to something else, but you won't have permission to do it and it won't have any effect.
| Process management and pkill |
1,454,631,843,000 |
I figured out that the number of context switches performed by a process can be found in /proc/$$/status. I have been trying to look for the total number of context switches performed since bootup.
I tried doing grep context * | grep switch while in /proc, and got the following output
...
kallsyms:0000000000000000 t xen_end_context_switch
kallsyms:0000000000000000 T paravirt_start_context_switch
kallsyms:0000000000000000 T paravirt_end_context_switch
kallsyms:0000000000000000 T nr_context_switches
kallsyms:0000000000000000 T rcu_note_context_switch
kallsyms:0000000000000000 r __ksymtab_rcu_note_context_switch
kallsyms:0000000000000000 r __kstrtab_rcu_note_context_switch
kallsyms:0000000000000000 D event_context_switch
kallsyms:0000000000000000 D event_class_ftrace_context_switch
kallsyms:0000000000000000 t ftrace_define_fields_context_switch
kallsyms:0000000000000000 T __event_context_switch
...
I could not make sense of this file upon opening it. I also tried grep -s -r context | grep switch, but it appeared to be taking too much time.
I could not find a man entry for kallsyms.
So, where can I find the total number of context switches made since bootup and what could I have done to find it out on my own?
|
The number of switches performed by each processor can be found in /proc/sched_debug.
The output of grep nr_switches * is
...
sched_debug: .nr_switches : 2652089
sched_debug: .nr_switches : 2677660
sched_debug: .nr_switches : 2778421
sched_debug: .nr_switches : 2467321
sched_debug: .nr_switches : 2527589
sched_debug: .nr_switches : 2511760
sched_debug: .nr_switches : 2528093
sched_debug: .nr_switches : 2584352
sched_debug: .nr_switches : 2570571
sched_debug: .nr_switches : 2678180
sched_debug: .nr_switches : 2381052
sched_debug: .nr_switches : 2535081
...
The number of such lines printed will obviously depend on the number of logical cores on your machine.
| Where does one find the total number of context switches performed since bootup? |
1,454,631,843,000 |
I have some doubts about setting the policy of a thread and how that policy is going to be followed while it is executing. Pthreads allows setting the scheduling policy of a thread to SCHED_FIFO/SCHED_RR/SCHED_OTHER. I am trying to understand how this user-set policy works, as the Linux kernel uses CFS as the default scheduler. Will the user-set policy be overridden by CFS while the thread is executing? If so, what is the use of the pthread scheduling policy?
|
A/ BASIC THEORY 3: CFS is not the default "scheduler policy" under Linux. CFS is the default scheduler under Linux.
A scheduler chooses among all existing threads those to which cpu time should be granted.
This choice is governed by miscellaneous parameters that are taken into account differently depending on the scheduling policy of the threads.
All threads get a scheduling policy.
Default scheduling policy under CFS is known as : SCHED_OTHER also sometimes labeled SCHED_NORMAL.
This policy actually instructs the scheduler to take the nice value into account and ensure fairness among all threads running under this policy.
B/ RUN TIME : 1 Every tick (or whatever dedicated interrupt) the scheduler maintains (reorders) a list (a queue) of runnable threads according to their associated scheduling policy and other parameters depending on that policy.
When the reordering is over, the thread on the top of the queue will be the one elected.
Threads belonging to "real-time" policies (SCHED_RR / SCHED_FIFO), if any are in a runnable state, will always be at the top of the list, the order among them being governed by the real-time priority setting.
C: YOUR QUESTION : If, in these conditions, you change the scheduling policy of some given thread (more precisely: if some running thread issues a system call requesting a change of its scheduling policy 2), then, provided it has the rights to do so, the scheduler will reorder its queue accordingly.
If, for example, some SCHED_OTHER thread changes to SCHED_RR, it will enter the top of the list; the scheduler will ignore its nice value and order it among the other SCHED_RR threads according to its given real-time priority.
BTW, if that was part of your questioning :
The scheduler never decides / forces the scheduling policy of threads.
The scheduler never changes depending on scheduling policies. If CFS has been chosen at boot time, CFS will always be THE scheduler.
One can always opt for other schedulers, some consisting of CFS patches, others written from scratch, each of them claiming lower overhead / better handling of nice values / more efficient handling of the SCHED_RR scheduling policy / more efficiency when MAX_CORES <= 4, etc. But whatever scheduler you boot with will be kept as the only program scheduling threads until shutdown.
In any case, the scheduler adapts its behaviour according to the scheduling policies afforded to threads by (most of the time) their parent and, more rarely, by themselves.
1 : This shall be considered in a single core environment.
It could be extended to whatever SMP / SMP + HT environment at the cost of extra complexity for the understanding because of the possibility to share (or not) queues between cores and to allow threads to run on all / some specific set of available cores.
2 : The family of system calls to use will depend on the API used.
sched_setscheduler() as the standard way, pthread_setschedparam() when using the POSIX API. (function names differ but results (the impact on CFS) are identical)
3 : For a detailed description of each available scheduling policy, please refer to the sched(7) Linux manual page (man sched.7), which, I make no doubt about it, is the most trustable/reputable source you are looking for.
| Scheduling policy of a POSIX thread Vs kernel's Completely Fair Scheduler when the thread is actually executing |
1,463,169,850,000 |
I have a javascript program, which runs on nodejs. If I run the following command:
node app.js
It keeps running but sometimes it exits. But I want to start it again automatically when it exits.
Is there any command to do so for Linux systems? Note that I don't want to use cron jobs.
|
Quick and dirty way : If using bash then what about a simple bash script like :
#!/usr/bin/bash
while true
do
node app.js
done
BTW, this is bad practice, since it ignores the reason why your task exited, and there might be very good reasons for not restarting it. Not to mention that it could also crash at startup, be OOM-killed…
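If you do want the quick way but with at least a minimal look at the exit reason, a sketch like the following stops restarting after a clean exit and gives up on rapid crash loops (the 5-second and 5-failure thresholds are arbitrary):

```shell
#!/bin/bash
# Restart "$@" until it exits cleanly, giving up on rapid crash loops.
run_with_restart() {
    fails=0
    while true; do
        start=$(date +%s)
        "$@"
        [ "$?" -eq 0 ] && break                 # clean exit: don't restart
        if [ $(( $(date +%s) - start )) -lt 5 ]; then
            fails=$((fails + 1))                # died within 5 s of starting
            [ "$fails" -ge 5 ] && break         # crash loop: stop trying
        else
            fails=0                             # ran for a while: reset count
        fi
        sleep 1
    done
}

run_with_restart node app.js
```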
More canonical way under a systemd-ized linux : (suggested as part of @Kusalananda note and inspired by The ultimate guide to deploying your node app on Linux) :
Assuming that app.js starts with the #!/usr/bin/env node declaration and the file app.js is marked executable,
Design a new service : Create a .service file in the /etc/systemd/system directory if it is needed system-wide, or in the ~/.config/systemd/user directory if it is only needed by your user, as follows :
[Unit]
Description= #(Whatever String You Want)
After= #(Any service that should be started before, ex. network.target)
[Service]
ExecStart= #(Key in the absolute path of app.js)
Restart=always #(This will ensure the automatic restart)
User= #(TBD depending on your system & wishes)
Group= #(TBD depending on your system & wishes)
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory= # (Absolute path of app.js working dir.)
[Install]
WantedBy= # multi-user.target or default.target according to your needs
You should now be able to start it : systemctl --user start service_filename
| Automatically restart a process after it exits |
1,463,169,850,000 |
I'm trying to obtain the processID of pcmanfm like this:
pgrep -f "pcmanfm"
When pcmanfm is not running, the command above returns nothing (as I expect).
However, when I run the command from python, it returns a process ID even when pcmanfm is not running:
processID = os.system('pgrep -f "pcmanfm"')
Furthermore, if you run the command above multiple times at a python3 prompt, it returns a different processID each time. All the while, pcmanfm has been closed prior to these commands.
>>> processID = os.system('pgrep -f "pcmanfm"')
17412
>>> processID = os.system('pgrep -f "pcmanfm"')
17414
>>> processID = os.system('pgrep -f "pcmanfm"')
17416
This is really messing up my ability to launch pcmanfm if it isn't currently running. My script thinks it is running when it isn't.
Why is this happening?
I'm actually encountering this issue in an Autokey script that I've attempted to write based on this video I watched. Here's my current script:
processID = system.exec_command('pgrep -f "pcmanfm" | head -1',True)
dialog.info_dialog("info",processID)
if (processID):
cmd = "wmctrl -lp | grep " + processID + " | awk '{print $1}'"
windowID = system.exec_command(cmd,True)
# dialog.info_dialog("info",windowID)
cmd = "wmctrl -iR " + windowID
#dialog.info_dialog("info",cmd)
system.exec_command(cmd,False)
else:
#os.system("pcmanfm /home/user/Downloads")
cmd = "/usr/bin/pcmanfm /home/user/Downloads"
system.exec_command(cmd,False)
The problem is, I keep getting processIDs even when pcmanfm isn't running. The script properly focuses pcmanfm if it is running, but it won't launch it if it isn't.
Update: I finally got this script to work by taking out -f and replacing it with -nx (from @they's advice). Also, I added some exception handling to ignore Autokey exceptions caused by expected empty output. Additionally, I converted it to a (more flexible) function so that it will serve a wider variety of commands/applications:
import re
def focusOrLaunch(launchCommand):
appName = re.findall('[^\s/]+(?=\s|$)',launchCommand)[0]
processID = None
try:
processID = system.exec_command('pgrep -nx "' + appName + '"',True)
except Exception as e:
#dialog.info_dialog("ERROR",str(e))
pass
#dialog.info_dialog("info",processID)
if (processID):
cmd = "wmctrl -lp | grep " + processID + " | awk '{print $1}'"
windowID = system.exec_command(cmd,True)
# dialog.info_dialog("info",windowID)
cmd = "wmctrl -iR " + windowID
#dialog.info_dialog("info",cmd)
system.exec_command(cmd,False)
else:
system.exec_command(launchCommand,False)
cmd = "/usr/bin/pcmanfm ~/Downloads"
focusOrLaunch(cmd)
|
Proposed solution:
Remove the -f option from your pgrep command.
Explanation:
You probably get the process ID of the shell that is executed to run your command. A new shell process with a new PID will be created for every system.exec_command. (Note that os.system returns the command's exit status, not a PID; the numbers shown at the Python prompt were printed by pgrep itself.)
Run e.g. sh -c 'pgrep -af nonexistent' and check the output. You will probably get something like
11300 sh -c pgrep -af nonexistent
With an existing command I also get a line for the shell
sh -c 'pgrep -af sshd'
695 /usr/sbin/sshd -D
11207 sshd: pi [priv]
11224 sshd: pi@pts/0
11331 sshd: [accepted]
11343 sh -c pgrep -af sshd
Depending on the PID values, your head command might extract the PID of a process you are looking for or the PID of the shell process.
With option -f you explicitly tell pgrep to search the whole command line instead of the process name only. This way it will find the string in the shell's command line argument.
Without -f you won't get the shell process.
$ sh -c 'pgrep -a sshd'
695 /usr/sbin/sshd -D
11207 sshd: pi [priv]
11224 sshd: pi@pts/0
11364 sshd: [accepted]
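If the goal is to get the matching PIDs into Python (rather than just an exit status), a sketch using subprocess avoids the intermediate shell entirely, so there is no extra process for pgrep to match even with -f. The function name is illustrative:

```python
import subprocess

def pids_of(name):
    """Return the PIDs of processes whose name matches `name` exactly."""
    result = subprocess.run(["pgrep", "-x", name],
                            capture_output=True, text=True)
    # pgrep prints one PID per line, and prints nothing (exit 1) on no match
    return [int(line) for line in result.stdout.split()]

print(pids_of("pcmanfm"))   # [] when pcmanfm is not running
```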
| Autokey - Focus App Window If Running, Launch App If Not |
1,463,169,850,000 |
I have some conditions for some background job to run:
condition-command && condition-command && background-job &
The problem is: I want the conditions to block until the job runs, as if I had run:
condition-command; condition-command; background-job &
But it isn't a condition, if previous command fails I do not want the job to run.
I realised it is asynchronous, but it should not, in my mind the both following scripts should be the same, but they do not:
sleep 2; echo foo & sleep 1; echo bar; wait # prints foo, then bar: correct
sleep 2 && echo foo & sleep 1; echo bar; wait # prints bar, then foo: bug
I know that if I tested the $? variable it would work, or if I put the last command inside a subshell (but then I would lose job control, and I want to avoid daemons), but I want to know why bash behaves this way. Where is it documented? Is there any way to prevent this behaviour?
Edit: Chained ifs are ugly; that is why I will not accept them as an alternative.
Edit 2: I know a subshell is possible, but it will not work for me. Imagine I want to run a bunch of commands and then wait at the end. It would be possible if I checked for the existence of the /proc/$PID directory, but that would be a pain in the neck if there are several jobs.
Edit 3: The main question is WHY bash does it, and where is that documented? Whether or not it has a solution is a bonus!
|
If you don't want the background to apply to the whole line, then use eval:
sleep 2 && eval 'sleep 10 &'
Now only the second command will be a background job, and it will be a proper background job that you can wait on.
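Applied to the question's own example, the & now binds only to the quoted part, so the ordering matches the ; version:

```shell
# `sleep 2` runs in the foreground; only `echo foo` is put in the background.
sleep 2 && eval 'echo foo &'
sleep 1; echo bar
wait            # prints foo (at ~2 s), then bar (at ~3 s): correct
```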
| Why do double-ampersand chained conditions run in the background together with the last one? |
1,463,169,850,000 |
I'm on an OpenVZ VPS and I created a background process as a non-root user then disowned it i.e.
user@server:~$node server.js &
user@server:~$disown
I SSH'ed out of the VPS and now I'm back in, but I can't seem to kill the process using its PID (pkill 1292). It even fails as root.
I know it's not dead because when I run top it's still running.
Also, when I run ps -l -p 1292 I can see that the process is still running.
I can tell that the process is not attached to any terminal session because the ps command displays a question mark in the TTY column.
How do I kill this process?
|
pkill (like pgrep which uses the same interface, initially a Solaris command, now found on many other unix-likes including Linux (procps package)) is to kill processes based on their name.
pkill regexp
kills (sends the SIGTERM signal) to all the processes whose name¹ matches the given regular expression.
So here pkill node would kill all the processes whose name contains node. Use pkill -x node (-x like in grep/pgrep for exact match) to kill processes whose name is exactly node.
To kill based on pid², it's just kill (a command built in most shells so it can also be used on shell jobs, but also as a stand-alone utility).
If kill 6806 (short for kill -s TERM 6806) fails, you can, as a last resort, try kill -s KILL 6806 which would terminate it non-gracefully.
¹ process name being a notion that varies a bit depending on the OS. On Linux, it's generally up to the first 15 bytes of the base name of the file that the process (or its closest ancestor) executed, though a process may change it to any arbitrary (but not longer than 15 bytes) value. See also pkill -f to match on the argument list.
² kill can also kill based on process group id. kill -- -123 sends the SIGTERM signal to all the processes whose process group id is 123. When using the job specification for the kill builtin of POSIX shells (as in kill %spec), kill generally also sends signals to a process group.
| How to kill a process that's not attached to any terminal |
1,463,169,850,000 |
I run both regular Chrome and Chrome Canary (from now on, Canary). Sometimes I want to kill all subprocesses, Google Chrome Helper, of Canary. The problem is that they have the same name as the subprocesses of regular Chrome so killall "Google Chrome Helper" would kill both Canary's and Chromes' subprocesses.
How can I, with a one-liner or similar, kill all subprocesses of Canary without killing the subprocesses of Chrome with the same name?
Mac OS X
|
Try using the -P option of pkill:
-P ppid Restrict matches to processes with a parent process ID in the
comma-separated list ppid.
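Since -P needs the parent's PID, one way to glue it together is pgrep -x on the exact parent name; on macOS, Canary's main process is presumably named "Google Chrome Canary" (verify with ps). A small sketch, with the helper name being my own:

```shell
# Send SIGTERM to the direct children of the process with the given name.
kill_children_of() {
    ppid=$(pgrep -x "$1" | head -n 1)   # -x: match the process name exactly
    [ -n "$ppid" ] || return 0          # nothing to do if it isn't running
    pkill -P "$ppid"
}

kill_children_of 'Google Chrome Canary'
```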
| How can I kill all child processes of a certain process from the command line? |
1,463,169,850,000 |
I have a process which created multiple PIDs. I want to kill all of those PIDs. I have tried
pkill <process_name>
but the PIDs did not get killed, as they were waiting for resources to be released.
I have managed to get the PID list with
ps -ef | grep <process_name> | awk '{print $2}'
which gives the process ID list, but how can I kill all of those listed PIDs?
Thank you.
|
You could pipe the output to xargs e.g.
ps -ef | grep <process_name> | awk '{print $2}' | xargs /bin/kill
But why doesn't your pkill command work?
| How to kill a list of PIDs? |
1,463,169,850,000 |
I am looking for a reliable cross-platform way to check if a process with a specific pid is running. Two possible solutions came up:
kill -0 $PID — exit status is 0 if the process exists and 1 if not; however, it also returns 1 for PIDs that require additional privileges to kill.
ps a | grep "^\s*${PID}" and similar which are plain ugly.
Is there a way to have something like #1, but without the owner limitation?
|
Can you write a small C program? The kill(2) system call does return -1 if your UID doesn't have permission to send a signal to a given process, but errno is set to EPERM in that case, as opposed to ESRCH for a non-existent PID. I'm reasonably certain you could make it portable across Solaris, HP-UX, Linux and the *BSDs. You would have to compile it for each platform.
| Cross-platform (Linux, BSD, Solaris) way to check if pid exists |
1,463,169,850,000 |
I want to run an interactive tool that can either exit by itself (when the tasks are done) or by me hitting Ctrl+C. In this example, the tool consists of an echo and a sleep (thus it is not really interactive any more).
I need some more monitoring around it, so I would do
echo "$(date) Starting!" | tee -a myLog.log; \
echo "I NEED SOME TIME"; \
sleep 10; \
echo "$(date) Ended!" | tee -a myLog.log
But this only works if I do not press Ctrl+C -- when I do, the last echo is not executed.
Can I somehow prevent the Ctrl+C from propagating "outwards" to the overall process?
Working in a sh on a FreeBSD.
|
If I understand you correctly:
#!/bin/sh
trap catchSigint INT
catchSigint(){
echo
echo "$(date) Interrupted" | tee -a myLog.log
exit 1
}
echo "$(date) Starting!" | tee -a myLog.log
echo "I NEED SOME TIME"
sleep 10
echo "$(date) Ended" | tee -a myLog.log
See that we are trapping SIGINT (triggered by Ctrl+C). If it is detected, then catchSigint terminates execution, logging "Interrupted". Else, "Ended" is logged.
| How can I prevent Ctrl+C from "going outwards"? |
1,463,169,850,000 |
How can I terminate a process upon specific output from that process? For example, running a Java program with java -jar xyz.jar, I want to terminate the process once the line "Started server on port 8000" appears on stdout.
|
That can be accomplished with the following script, given that grep -m1 doesn't work for you:
#!/bin/bash
java -jar xyz.jar &> "/tmp/yourscriptlog.txt" &
processnumber=$!
tail -F "/tmp/yourscriptlog.txt" | awk '/Started server on port 8000/ { system("kill '$processnumber'"); exit }'
Basically, this script redirects the stdout and stderr of your java code to a file with the command &> "/tmp/yourscriptlog.txt"; the trailing & on the first line makes your code run as a background process, and on the next line we capture the number of this process with $!. Having the number of the process and a log file to tail, we can finally kill the process when the desired line is printed; the exit makes awk quit at that point, so the tail pipeline does not keep running afterwards.
| Terminate process upon specific output |
1,463,169,850,000 |
As we know, lsof can show which files/directories are held open by which process. But I want to trace a command's process to determine which files/directories the command will access.
For example, useradd touches /etc/passwd and /etc/shadow, and lastb reads /var/log/btmp. Of course, some programs may open files conditionally, but I am only interested in the files/directories used during the command's invocation. Can this information be obtained by tracing the process produced by the command?
If that is indeed possible, how do I do it?
|
strace may be of interest.
# strace -fe open useradd bob
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libaudit.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap-ng.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/proc/filesystems", O_RDONLY) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 4
open("/proc/sys/kernel/ngroups_max", O_RDONLY) = 4
open("/etc/default/useradd", O_RDONLY) = 4
open("/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 5
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 5
open("/lib64/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 5
open("/etc/group", O_RDONLY|O_CLOEXEC) = 5
open("/etc/login.defs", O_RDONLY) = 4
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 4
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 4
open("/lib64/tls/x86_64/libnss_sss.so.2", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
[etc]
| How to capture a command process |
1,463,169,850,000 |
This is the situation:
I have a PHP/MySQL web application that does some PDF processing and thumbnail creation. This is done by using some 3rd party command line software on the server. Both kinds of processing consume a lot of resources, to the point of choking the server. I would like to limit the amount of resources these applications can use in order to enable the server to keep serving users without too much delay, because now when some heavy PDF is processed my users don't get any response.
Is it possible to constrain the amount of RAM and CPU an application can use (all processes combined)? Or is there another way to deal with these kinds of situations? How is this usually done?
|
Run it with nice -n 19 ionice -c 3 (19 is the lowest CPU scheduling priority).
That will make it use the remaining CPU cycles and access to I/O not used by other processes.
For RAM, about all you can do is cap the process's address space with ulimit -v, so that allocations beyond the limit fail and the process exits instead of choking the server.
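A sketch combining all three; the 512 MiB cap and the example command are placeholders, ulimit -v counts KiB, and the subshell keeps the limit from leaking into the calling shell:

```shell
# Run a command at the lowest CPU and I/O priority with a virtual-memory cap.
run_throttled() {
    (
        ulimit -v $((512 * 1024))           # cap virtual memory at ~512 MiB
        exec nice -n 19 ionice -c 3 "$@"    # lowest CPU and I/O priority
    )
}

# e.g. from the PHP wrapper:
# run_throttled convert input.pdf -thumbnail 200x200 thumb.png
```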
| How to constrain the resources an application can use on a linux web server |
1,463,169,850,000 |
Everybody knows that PID number 1 is systemd (or something equivalent). And every process after that takes another PID, counting up.
However when there are 50 processes running (up to PID 50) and the process with PID 2 terminates and a new process starts, it won't be PID 2, but it will be PID 51. Why is that?
I noticed that e.g. with file descriptors, it's not like that, but rather, when I close file descriptor 4 and open a new file descriptor, it will have the number 4.
|
Most Unix variants allocate process IDs sequentially: 1, 2, 3, 4, ... When the largest possible PID value is reached, they start again at 1, skipping PIDs that already exist.
This is not an obligation. For example, OpenBSD assigns PIDs randomly, not sequentially; this is also an option on FreeBSD. The goal is improved security, though the benefits are dubious.
There is a (dubious) advantage to this behavior: it makes it rare for a process ID to be reused immediately after the process dies. There are many programs out there that monitor processes and assume that after a process dies, the PID will not be in use — which breaks if the PID is used by a new process. Those programs do have an excuse: there are no good APIs to monitor a process except from its parent. But such programs are widespread enough that OpenBSD avoids reusing a PID for a little while (a few minutes, if I remember correctly) after a process dies.
The main reason for this behavior is that it's how it was done on traditional Unix systems, and there's no strong reason to change. For file descriptors, Unix historically used the first free fd number, and that behavior has been made an official standard so all Unix/POSIX systems have to do it this way.
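You can watch the sequential allocation, and the wrap-around point, from a shell (Linux-specific paths; the exact PIDs you see will differ):

```shell
sh -c 'echo $$'                  # PID of a short-lived shell
sh -c 'echo $$'                  # usually the next free number, not a reused one
cat /proc/sys/kernel/pid_max     # allocation wraps around once this value is hit
```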
| Why do processes not fill up empty process IDs |
1,463,169,850,000 |
I executed a command like this: nohup some_command &. Now this command is in the background. I can see it with the command jobs.
Example output:
[1]+ Running nohup some_command &
Is it somehow possible to have a live representation of the status that comes from the jobs command, similar to how top works? So that when nohup some_command & completes, it immediately disappears from the list?
|
If all you want is for job completion notifications to be printed immediately, even while you're typing at a prompt or some other job is in the foreground, then just run set -o notify.
If you want a foreground command that displays the status of background jobs from the current shell, you can run jobs in a loop. It's easy to do it in full screen:
tput clear
jobs
while sleep 1; do
tput clear
jobs
done
If you want to display the list below the prompt without clearing the screen, save the cursor position at the beginning and restore it on each run:
tput sc
jobs
while sleep 1; do
tput rc
jobs
done
| Live monitoring of background jobs |
1,463,169,850,000 |
I would like to spawn a persistent netcat server. My command of choice is the following:
echo "bash -c \"while [ 1 ]; do nc -l -p 1111 >> check; done\"" | at now
I am wondering how I can get the PID of the process created by at so that I can easily put the server to sleep when needed.
|
This is a tad tricky because of quoting; note the change from double quotes (") to single quotes (').
The following will work if you submit your at job via at -f file
cat nc.on
bash -c 'while [ 1 ]; do echo $$ > /var/run/atnc.pid; nc -l -p 1111 >> check; done'
at -f nc.on now
the file /var/run/atnc.pid will have the process id of the bash which is running nc
You can cat the file to get the bash process id and kill it, terminating nc. Then rm /var/run/atnc.pid (optional).
| Get pid of long running command executed via at |
1,463,169,850,000 |
A Linux thread or forked process may change its name and/or its commandline as visible by ps or in the /proc filesystem.
When using the python-setproctitle package, the same change occurs on /proc/pid/cmdline, /proc/pid/comm, the Name: line of /proc/pid/status and in the second field of /proc/pid/stat, where only cmdline is showing the full length and the other three locations are showing the first 15 chars of the changed name.
When watching multithreaded ruby processes, it looks like the /proc/pid/cmdline remains unchanged but the other three locations are showing a thread name, truncated to 15 chars.
man prctl tells that /proc/pid/comm is modified by the PR_SET_NAME operation of the prctl syscall but it does not say anything about /proc/pid/status and /proc/pid/stat.
man proc says /proc/pid/comm provides a superset of prctl PR_SET_NAME, which is not explained any further.
And it tells that the second field of /proc/pid/stat would still be available even if the process gets swapped out.
When watching JVM processes, all the mentioned locations give identical contents for all threads (the three places other than cmdline all showing java), but jcmd pid Thread.print still shows different thread names for the existing threads, so it looks like Java threads are using some non-standard mechanism to change their name.
Is /proc/pid/comm always identical to the Name: line of /proc/pid/status and the second field of /proc/pid/stat or are there circumstances where one of these three places is offering different contents ?
Please provide an (easy to reproduce) example if differences are possible.
|
All three entries are defined close together in the kernel source: comm, stat, and status. Working forwards from there, comm is handled by comm_show which calls proc_task_name to determine the task’s name. stat is handled by proc_tgid_stat, which is a thin wrapper around do_task_stat, which calls proc_task_name to determine the task’s name. status is handled by proc_pid_status, which also calls proc_task_name to determine the task’s name.
So yes, comm, the “Name” line of status and the second field of stat all show the same value. The only variations are whether the value is escaped or not: status escapes it (replacing special characters), the others don’t.
| Thread Name: Is /proc/pid/comm always identical to the Name: line of /proc/pid/status and the second field of /proc/pid/stat? |
1,463,169,850,000 |
I have a script that looks like this. Invoked with ./myscript.sh start
#!/bin/bash
if [[ "$1" == "start" ]]; then
echo "Dev start script process ID: $$"
cd /my/path1
yarn dev &> /my/path/logs/log1.log &
echo "started process1 in background"
echo "Process ID: $!"
echo "Logging output at logs/log1.log"
sleep 2
cd /my/path2
yarn start:dev &> /my/path/logs/log2.log &
echo "started process2 in background"
echo "Process ID: $!"
echo "Logging output at logs/log2.log"
sleep 2
cd /my/path2
yarn dev &> /my/path/logs/log3.log &
echo "started process3 in background"
echo "Process ID: $!"
echo "Logging output at logs/log3.log"
elif [[ "$1" == "stop" ]]; then
killList=`ps | grep node | awk '{print $1}'`
if [[ $killList != "" ]]; then
echo $killList | xargs kill
echo "Processes killed: $killList"
else
echo "Nothing to stop"
fi
else
echo "Invalid argument"
fi
When I run ps after running this script, I can see a bunch of node processes (more than 3) that I assume have been started by the yarn dev commands. I want to run ./myscript.sh stop and stop all the node processes that were spawned from my previous run of ./myscript.sh start.
How do I make this happen?
|
You could get the list of processes directly spawned from that bash and send SIGTERM to them:
pkill -P $$
$$ is the process number of the current bash script.
Depending on how the signal is treated by the child processes, that might or might not kill the grandchild processes (and so on, recursively).
So another way to do it is to get list the whole tree of processes starting at your bash and then kill them.
pstree -A -p $$ | grep -Eow "[0-9]+" | xargs kill
pstree outputs the tree (formatted for humans), grep extracts the process numbers and then you do what you need to do with them, for example running kill for each one.
Note that with the line above pstree will also list the grep and xargs processes, so you will have two No such process warnings from kill. Not sure if there is a race condition, that's just a proof of concept. You can code the right side of the pipeline to deal with the list of processes properly, filter, etc.
| How do I kill all subprocesses spawned by my bash script? |
1,463,169,850,000 |
If I have the following jobs running in a shell
-> % jobs -l
[1] 83664 suspended nvim
[2] 84330 suspended python
[3] 84344 suspended python
[4] 84376 suspended nvim
[5] - 84701 suspended python
[6] + 84715 suspended python
How can I return to the nth job? Suppose I want to return to job 4, or job 1; how can I do that without having to kill all the jobs which come before it?
|
To return to job 4, run:
fg %4
The command fg tells the shell to move job %4 to the foreground. For more information, run help fg at the command prompt.
When you want to suspend the job you are working on, press Ctrl+Z; to resume it in the background, run bg. For more information, run help bg at the command prompt.
For more detail than you'd likely want to know, see the section in man bash entitled JOB CONTROL.
| Return to a particular job in the jobs list |
1,463,169,850,000 |
I was wondering if there was a program like GNU/screen or tmux that would allow me to attach or detach a session with a running process but would not provide all of the other features such as windows and panes. Ideally the program would be able to run in a dumb terminal (a terminal without clear).
My use case is to use either the shell or the terminal that are built into Emacs to run a program and have that program keep running even if Emacs crashes. Tmux and screen are incompatible with shell because shell-mode does not support clear. And although they work in the terminal, the output is improperly formatted, in part because of the bottom bar and also because of the quirks of term-mode.
Thank you!
|
dtach is a wafer-thin terminal session manager, now forked on github, or doubtless easily installed via the ports or package system for your operating system.
(Of historical interest may also be the dislocate example script distributed with expect.)
| Is there a program like tmux or screen but only for attaching or detaching a session |
1,463,169,850,000 |
I know how to suspend a process and bring it into the foreground in the current shell, but what if I have run a process in shell A and now want to bring it into the foreground in the current shell B?
Without using third-party tools like screen, is there a way to do this?
My idea is that the process should be suspended and then resumed in the foreground of shell B, but I don't know how to do that.
|
There are usually 2 things you would need to do to get this effect:
Reparent the process to a new shell.
There is basically no way to do this as far as I know. In UNIX, there is only one way to reparent a process that I know about, and that happens only when its original parent dies without waiting for it. In that case the process (an orphan) gets reparented to init, process ID 1. But that doesn't help you here, because you neither want to kill the original parent nor do you want the new parent to become init.
Anyone (with permission) can still send signals to processes like SIGTSTP and SIGCONT and SIGINT so you can use the kill command to send those signals to the process (or process group) in order to simulate effects like suspending, continuing, and interrupting the job, but the new shell won't be aware of it and won't receive notifications about the status of the process group and therefore cannot track it with its job control feature.
Redirect stdio. Because the process' stdio is probably attached to the terminal where the process was started and you probably want to get it to go to the terminal where the new shell is running. Unless you had redirected output to a file or other location originally.
There do exist some ways to do that.
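Returning to the signal point above, this half is easy to exercise from any shell; in this sketch a sleep stands in for the job started in shell A:

```shell
sleep 60 &                       # stand-in for the job started in shell A
pid=$!
kill -TSTP "$pid"                # suspend it, as Ctrl+Z in its own shell would
sleep 1                          # give the signal time to be delivered
ps -o pid,state,comm -p "$pid"   # state T: stopped
kill -CONT "$pid"                # let it run again, still in its own session
kill "$pid"                      # clean up
```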
| Is there a way to suspend a process belonging to shell A and foreground it in shell B? |
1,463,169,850,000 |
I have running processes on my server that get killed every night at midnight. It's at work, I'm not around when it happens and I don't have remote access.
The kill occurs very predicably at 23:59 every night. I know this because when I arrive the next day:
Processes are up until 23:59
Logs of the process show last modified time of 23:59 (and new dated log is started right after).
Since the killing occurs at the same hour, I strongly suspected a batch job. I went through the crontabs of all our machines and couldn't find anything. Clearly I'm missing something.
I am thinking of laying out a surveillance script that would report the output of ps intermittently; it would be launched with at a few minutes beforehand and would loop for a little while. This idea seems weak and highly error-prone, so I'm wondering if anyone has a better one.
More details:
The universe is a very large and very old legacy system; no one in my team seems aware of such a process (if anyone knew of it, she'd be in our team), although the larger organization consists of thousands of employees, a lot of whom would theoretically have access to this (I don't see why they would). In other words, security isn't very tight.
Environment consists of multiple machines running Solaris 10.
It's not a production environment, so timeout or down time isn't critical.
I'm not excluding the possiblity that the killing might not be due to a batch job, although unlikely because of how accurate the timing is.
Clearly, there are defficiencies in our bookkeeping, so anything imagineable is possible.
My question is what's the best strategy to adopt? It falls under the greater umbrella of "the joy of working on legacy systems". I'm starting to work on my script that I'll post here shortly for feedback. In the meanwhile if anyone has a better idea, please say so.
|
It is common to rotate logs periodically, and rotating them at midnight is typical. Many applications will do this automatically.
For those that don't there are tools like logrotate that will do the rotation. Many programs are configured to reopen their logs when sent a HUP signal, and this is one of the techniques used by logrotate.
Things to check:
Do all the PIDs change? If not, then the programs may be rotating their own logs, or responding appropriately to having their logs rotated.
For programs which change PIDs, were they restarted at midnight? If not, check their parent to see what it does.
Check the crontab for root to see what processes run at the end of the day.
Check the crontab for the process userid to see what processes run at the end of the day.
Check to see if the log files are being written directly, or are being written by a log-writer which rotates the logs.
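If none of those checks turns anything up, the surveillance idea from the question can be made fairly robust: snapshot the whole process table at short intervals around the suspected time, then diff the snapshots afterwards. A minimal sketch (the output path, interval and iteration count are placeholders; on Solaris 10 you could schedule it with at or cron a few minutes before midnight):

```shell
#!/bin/sh
# snapshot the full process table a few times into a log file;
# compare the snapshots afterwards to see what appeared or vanished
out=/tmp/ps-watch-demo.log
: > "$out"
n=0
while [ "$n" -lt 3 ]; do             # for real surveillance: ~120 iterations, sleep 5
    n=$((n + 1))
    { date; ps -ef; echo "end of pass $n"; } >> "$out"
    sleep 1
done
```

Comparing consecutive snapshots (with diff, for instance) should reveal any short-lived process that shows up just before the kill, along with its parent PID.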
| What's the best strategy to catch mystery process? |
1,463,169,850,000 |
I'd like to set the PR_SET_CHILD_SUBREAPER process flag for the Bash process running my script, so that I can reap the tree of child processes that gets created (and can get killed in a non-orderly way) during its lifetime. Basically, I'd like Bash to call prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0); on itself, but I haven't found a way to achieve that. Any suggestion?
I'd be fine even with dodgy solutions that involve invoking libc.so.6 prctl() directly (a-la Python ctypes) using Bash builtins, if any exist.
|
Since you mention it's fine with you, I feel obliged to introduce bash's equivalent...
ctypes.sh, a foreign function interface for bash
It's a shared object plugin for bash that is loaded with bash's enable -f mechanism:
enable [-a] [-dnps] [-f filename] [name ...]
The -f option means to load the new builtin command name from shared object filename, on systems that support dynamic loading.
and implemented in C. It works at least on most Linux distributions and on FreeBSD.
You'll have to compile and install it first. The main feature is the ability to use almost any library call or system call from the shell. Though calls requiring structures might become way more complex to use when the builtin struct fails to reconstruct them automatically.
Example typed in current bash shell, on amd64 (x86_64) architecture and Linux kernel 5.6 (in some cases constants depend on architecture and (more uncommonly) kernel version):
$ source /usr/local/bin/ctypes.sh
$ dlcall -r int prctl int:36 ulong:1 ulong:0 ulong:0 ulong:0
int:0
$ echo $DLRETVAL # you can't use $() above to get the result since that would be a subshell
int:0
$ echo $$; bash -c 'echo $$; sleep 99 & echo $!; disown -a'
14767
16761
16762
$ pstree -p $$
bash(14767)─┬─pstree(16778)
└─sleep(16762)
The sleep process having lost its parent process (bash pid 16761), has been inherited by the current shell instead of the init process: it worked.
Note that PR_SET_CHILD_SUBREAPER had to be replaced by its value (and type) as found in /usr/include/linux/prctl.h on this system:
#define PR_SET_CHILD_SUBREAPER 36
You'll have to check the documentation to use it properly.
Also, the shell's standard wait might not work as expected for this: the shell didn't spawn that sleep command so the wait command won't do anything. You might have to invest into dlcalling wait(), waitpid() & co. This could be difficult, because bash itself alters settings and uses wait()-like calls each time it runs a command, so some unforeseen interactions to handle those inherited processes are likely.
Using gdb
This would achieve the same result as before (there must be some options to get it less verbose):
$ gdb -ex 'call (int)prctl((int)36,(long)1,(long)0,(long)0,(long)0)' -ex detach -ex quit -p $$
| How to set the CHILD_SUBREAPER flag for the Bash process running a script? |
1,463,169,850,000 |
If I have already started a job with GNU parallel in a similar fashion to:
$ cat jobs | parallel -j 70 "program {};"
is it possible, by e.g. some signal, to adjust the number of jobs of this parallel job? So that I could indicate to parallel that there should now be run at most 75 sub-jobs?
|
https://www.gnu.org/software/parallel/parallel_tutorial.html#Number-of-simultaneous-jobs
Number of simultaneous jobs:
--jobs can read from a file which is re-read when a job finishes:
echo 50% > my_jobs
/usr/bin/time parallel -N0 --jobs my_jobs sleep 1 :::: num128 &
sleep 1
echo 0 > my_jobs
wait
The first second only 50% of the CPU cores will run a job. Then 0 is put into my_jobs and then the rest of the jobs will be started in parallel.
I highly recommend spending an hour walking through the tutorial. Your command line will love you for it.
| Is it possible to adjust number of sub-jobs for GNU parallel after invocation? |
1,463,169,850,000 |
When I had loads of firefox windows open and wanted to close them quickly I did
killall firefox
using killall from the psmisc package in Ubuntu.
Nothing happened.
I looked in the list of my processes and there were many lines of the form
alle_meije 55061 7662 0 01:16 ? 00:00:31 /usr/lib/firefox/firefox -contentproc -childID 126 -isForBrowser -prefsLen 9704 -prefMapSize 254479 -jsInitLen 279340 -parentBuildID 20220106144528 -appDir /usr/lib/firefox/browser 7662 true tab
so, firefox being the 'basename' of the executable there, I would have expected these to be killed.
Sure enough, doing it by hand using
kill $( ps -fu $USER | grep firefox | awk '{print $2}' )
did close all these windows. Does anyone know why the same does not happen with killall?
|
killall firefox-bin works for me but then I use the official Firefox distribution.
As mentioned in the comments, pkill -f firefox should work as well.
-f The pattern is normally only matched against the process name. When -f is set, the full command line is used.
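The difference between name matching and full-command-line matching is easy to demonstrate with a throwaway process (the 987654 argument is just an arbitrary marker to make the command line unique):

```shell
marker=987654
sleep "$marker" &                            # the process *name* is just "sleep"
pid=$!
sleep 1                                      # give it a moment to start

by_name=$(pgrep -x "sleep $marker" || true)  # empty: matching is against the name only
by_cmdline=$(pgrep -f "sleep $marker")       # the pid: -f searches the whole command line

kill "$pid"
echo "by name: '$by_name', by command line: '$by_cmdline'"
```

killall has the same name-only behaviour by default, which is why a process whose visible name differs from what you typed (firefox vs firefox-bin) slips through, while pkill -f catches it.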
| killall firefox does not kill firefox |
1,429,872,458,000 |
I want to run a program so that it can only create new files and not overwrite existing ones.
Does something like this exist?
$ fsaccess --can read,write --not overwrite --command bash -c 'echo "stuff" > filetim; echo "Woohoo I did it"'
Now if filetim doesn't exist, then the command should run just fine, but if filetim did exist then fsaccess would exit with a message like
Killed child! Command tried to overwrite a file. It does not have permission to do that!
|
If you want a process to be able to create new files but not overwrite pre-existing files, run it as a dedicated user and don't give this user write access anywhere except in some initially-empty directories. That way the program will not have the permission to overwrite any pre-existing file.
If you want to run a program and let it pretend it's overwriting files when it isn't, give it write access only to dedicated directories as above, but in addition you can use a union filesystem such as aufs, funionfs, unionfs-fuse, …, to make another hierarchy appear to the program as well.
If you want to retain all the prior versions of the files overwritten by a process, restrict that process to a copyfs filesystem, which retains all past versions of all files.
If you really need to allow the process to create new files but not to overwrite files, even its own, I don't think you'll find anything preexisting. You could write your own FUSE filesystem.
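One narrow special case is worth mentioning: if the "program" is a shell script you control, the shell's noclobber option makes > redirections refuse to truncate existing files. This only constrains the shell's own redirections, not writes performed by arbitrary programs, so it is far weaker than the approaches above, but it is a quick sanity net:

```shell
set -C                            # noclobber: '>' refuses to overwrite existing files
dir=$(mktemp -d)

echo stuff > "$dir/filetim" && echo "created filetim"

if ! echo more > "$dir/filetim" 2>/dev/null; then
    echo "refused to overwrite filetim"
fi

echo forced >| "$dir/filetim"     # >| explicitly overrides noclobber
```

(Redirections to /dev/null and other non-regular files are still allowed under noclobber.)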
| Process overwrite access restriction |
1,429,872,458,000 |
What is the difference between the two commands nice and renice to manage process priority?
|
nice launches a new command with a modified nice level (lower priority than it would have otherwise had, or higher priority if you have permission). You specify which command to launch by providing it as an argument to nice itself. nice actually execs that command, so nice itself doesn't terminate until the command does.
renice changes the priority of an existing running process, then terminates immediately. You specify which process by giving its PID (as well as the new desired nice level) as an argument to renice.
Note also that nice takes a relative nice level (i.e. less or more than before) whereas renice requires an absolute level.
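Both behaviours can be seen on a throwaway process (sleep stands in for a real workload; raising the nice level of your own process needs no privileges, while lowering it does):

```shell
nice -n 10 sleep 60 &            # nice: start a new command 10 steps nicer
pid=$!
sleep 1                          # give nice a moment to exec the command

before=$(ps -o ni= -p "$pid" | tr -d ' ')

renice 15 -p "$pid"              # renice: set the absolute level of a running process

after=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "niceness was $before, is now $after"
kill "$pid"
```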
| What is the difference between nice and renice? |
1,429,872,458,000 |
I can easily start 3 processes on 3 different terminals, and kill each one by pressing Ctrl+C. Now, is there any way to start all 3 processes at once, and then finish them equally easily? Ideas:
If I could start 3 processes in such a way that they would run on the same terminal, and Ctrl+C would kill all 3, that would work.
If I could create two scripts, init.sh and kill.sh that would start/kill the 3 processes, that would work too.
Both of those work because they are easy. Having to spawn a process in the background, then finding its pid, then copying it, then killing it with yet another command isn't easy.
|
Using bash's job control:
$ sleep 10m & sleep 11m & sleep 12m &
[1] 1821
[2] 1822
[3] 1823
$ jobs
[1] Running sleep 10m &
[2]- Running sleep 11m &
[3]+ Running sleep 12m &
$ kill %1 %2 %3
$ jobs
[1] Terminated: 15 sleep 10m
[2]- Terminated: 15 sleep 11m
[3]+ Terminated: 15 sleep 12m
In bash, running command & sends it to the background. This way, you can start multiple commands in the same shell, running in the background. The kill builtin can be used to kill these background jobs. The first (oldest) active job is %1, the next is %2 and so on. Also see: Kill all background jobs
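If you prefer the script approach from the question (one init.sh whose Ctrl+C ends everything), the same idea works without an interactive shell: remember the PIDs and forward the signal with a trap. A sketch with short sleeps standing in for the three real programs (so the script also ends by itself when they finish):

```shell
#!/bin/bash
# start the three programs in the background, remembering their PIDs
sleep 1 & pids=$!
sleep 2 & pids="$pids $!"
sleep 3 & pids="$pids $!"

# Ctrl+C (SIGINT) or a plain kill (SIGTERM) takes all three down at once
trap 'kill $pids 2>/dev/null; exit 130' INT TERM

wait                             # stay in the foreground until the children exit
status="all children finished"
echo "$status"
```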
| How do I start 3 processes on the same terminal, and then exit all 3 easily? |
1,429,872,458,000 |
Suppose I have a situation in which some files need to be copied, and it takes a long time, so I have to go for parallel processing of the file copy.
for example it may look like this
for i in ipaddresslist
do
cp x y & //running in back ground or some other process in background
done
wait //will cause till all the files are copied or whatever process if all are finished
Now I have used wait so that all the background processes are completed,
but there are cases like these:
1) Some file copies may finish earlier; if I have to do some processing on those files,
I still have to wait till all the files are copied.
2) If the copy process (or any other program running in the background) writes to a log file,
the log file could be garbled with each background process trying to write to the file
at the same time.
Is there any workaround for such things?
1) If I can know that a particular process has finished, I could start the rest of the processing for that instance (say, the file copy) as soon as it completes.
2) Also, writing to the log files should happen in an orderly fashion.
Please make suggestions.
|
If some job needs to be started after some file has been copied, just make it part of the background job:
(cp this /there && start job that needs this in /there) &
(cp that /here && start job that needs that in /here) &
wait
(the last & is not necessary).
Now for more complex dependencies, you could use GNU make -j.
make -j2 -f /dev/fd/3 3<< 'EOF'
all: j1 j2 j3
.PHONY: cp1 cp2 cp3 j1 j2 j3 all
cp1:
cp this /there
cp2:
cp that /here
cp3:
cp this /here
j1: cp1
start job that needs this in /there
j2: cp2
start job that needs that in /here
j3: cp1 cp3
start job that needs this in /here and /there
EOF
-j2 would run up to 2 jobs at any given time, and dependencies would be respected.
Now to avoid garbling of log files you have two main options
don't interleave them, that is append the content of each job one after the other.
try to ensure they interleave nicely, possibly tagging each line of each job to make it easier to see what job which line belongs to.
For 1, the easiest is to store each job output in a separate file and to merge them afterwards:
(cp this /there && start job that needs this in /there) > j1.log 2>&1 &
(cp that /here && start job that needs that in /here) > j2.log 2>&1 &
wait
cat j1.log j2.log > jobs.log
Another option is to use pipes to gather the output of each job and have cat merge them. Shell process substitution as available in ksh, zsh or bash can help us with that and even take care of the backgrounding:
j1() { cp this /there && start job that needs this in /there; }
j2() { cp that /here && start job that needs that in /here; }
cat <(j1 2>&1) <(j2 2>&1) > jobs.log
j1, j2 and cat will be started concurrently and interconnected with pipes.
However note that cat will only start reading from the second pipe (that is written to by j2) after j1 has finished. That means that if j2 writes more logging than the size of the pipe (for instance, on Linux, typically 64kiB) j2 will be blocked until j1 finishes.
That can be avoided by using sponge from moreutils, like:
cat <(j1 2>&1) <(j2 2>&1 | sponge) > jobs.log
Though that would mean all the output of j2 would be stored in memory, and cat will only start writing the output of j2 in jobs.log after j2 has finished, in which case using pv -qB 100M for instance may be preferable:
cat <(j1 2>&1) <(j2 2>&1 | pv -qB 100M) > jobs.log
That way j2 would only get paused (if j1 has not finished yet) after 100M (plus two pipe contents) of logs have been output, and pv wouldn't wait for j2 to finish before outputting to stdout.
Note that for all the above, you need to beware that once you redirect the output of most commands to a file or pipe (anything but a tty), the behavior is affected. The commands, or rather the stdio API of the libc they call (printf, fputs, fwrite...) detects that the output is not going to a terminal and perform an optimisation by outputting in big chunks (several kilo-bytes), while they don't do that for standard error. That means the order of the output and error messages will be affected. If that's an issue, on GNU systems or FreeBSD (at least) and for dynamically linked commands, you can use the stdbuf command:
stdbuf -oL j1 > j1.log 2>&1
instead of:
j1 > j1.log 2>&1
to ensure that the stdio output is line-buffered (each line of output will be written separately as soon as they are complete).
For option 2, writes to a pipe of less than PIPE_BUF bytes (4096 bytes on Linux, so much larger than your average line of log) are guaranteed to be atomic; that is, if two processes write to the same pipe at the same time, their 2 writes are guaranteed not to be intertwined. There's no such guarantee on regular files, but I seriously doubt that 2 writes of less than 4kiB could end up intertwined on any OS or filesystem.
So if it weren't for that buffering described above and if log lines were output individually as a whole and separately, you'd have a guarantee that the lines of the output wouldn't have a piece of line of this job and a piece of line of that other job.
However, nothing prevents a command to do a flush in between two parts of a line being written (like printf("foo"); fflush(stdout); printf("bar\n");) and there's no buffering on stderr.
Another problem is that once the lines of all the jobs are interleaved, it's going to be hard to tell which line is for which job.
You can solve both problems by doing something like:
tag() { stdbuf -oL sed "s%^%$1: %"; }
{
j1 2>&1 | tag j1 &
j2 2>&1 | tag j2
} | cat > jobs.log
(note that we don't need the wait (and it wouldn't work anyway in most shells), because cat will not finish until there's nobody writing to the pipe anymore, so not until j1 and j2 have terminated).
Above we use | cat to have pipe with its atomicity guarantee. We pipe the output of each command to a command that tags every line with the job name. j1 and j2 can write their output however they want, sed (because of stdbuf -oL) will output the lines (with the tag prefix) as a whole and separately, which will guarantee the output not to be mangled.
The same note as above still applies: we're not applying stdbuf -oL to the commands in j1 and j2 so they will most likely buffer their output which may therefore be written long after it has been produced. That is even worse than in the previous case, because if you see:
j1: doing A
j1: doing B
j2: doing C
That does mean that j1 did A before doing B, but not that it did any of them before j2 doing C. So once again, you may need to apply stdbuf -oL to more commands if it's an issue.
Note that you can't apply stdbuf to shell functions like j1 or j2 above, but at least with GNU and FreeBSD stdbuf, you can use this to set stdbuf globally or on a per-subshell basis:
stdbuf_LD_PRELOAD=$(stdbuf sh -c 'export -p LD_PRELOAD')
line_buffered_output() {
eval "$stdbuf_LD_PRELOAD"
export _STDBUF_O=L
}
j1() (line_buffered_output; cp this /there && start...)
| multiple background processes in a script |
1,429,872,458,000 |
I have a script which starts 3 terminal emulator windows:
#!/bin/sh
terminator --role='terminator-left' 2>/dev/null &
sleep 0.1
terminator --role='terminator-center' 2>/dev/null &
sleep 0.1
terminator --role='terminator-right' 2>/dev/null &
(I am using the --role option, so that I can automatically place the windows on different monitors, defined in my openbox configuration file)
I start this script multiple times, on different virtual desktops.
How can I start these 3 terminals in some kind of a "process group", so that I can then kill all of the remaining processes, in case any one of the processes terminates?
Important: I don't want to kill all terminals that might have been launched in a different instance of my script (I cannot just use pkill or pgrep to kill any process matching a pattern)
So, in other words, lets say I launch my script on virtual desktop 1 and virtual desktop 2.
I have now 3 terminal windows on 2 different virtual desktops.
When I exit one terminal on virtual desktop 1, I would like the remaining 2 windows to automatically terminate, but without affecting the independent instance of 3 other windows running on my virtual desktop 2.
Is what I describe possible ?
How can I achieve this ?
|
Disclaimer:
I have had no opportunity to test my suggestion under openbox; I validated it under KDE Plasma only.
My solution relies on the wait shell built-in command as documented in the POSIX Programmer's Manual. Some particular implementations could make the script simpler, using the -n option.
#!/bin/sh
ownkill()
{
trap - CHLD
pkill --signal 15 -P $$
}
terminator --role='terminator-left' 2>/dev/null &
sleep 0.1
terminator --role='terminator-center' 2>/dev/null &
sleep 0.1
terminator --role='terminator-right' 2>/dev/null &
trap ownkill CHLD
wait
This script is based on the idea to :
Launch the 3 terminator processes as desired,
Await for their completion,
Handle the SIGCHLD signalling the termination of one of
the above processes in a way to terminate the remaining ones.
Nota 1 (reseting the handler) : As rightfully observed by aviro in comments, the signal handler must reset the handling of SIGCHLD to defaults since the killing of the remaining processes will trigger other (up to 2) SIGCHLD.
Failing to do so would force the shell in some sort of recursive situation (the handler having to handle a signal it generated itself ) more or less gracefully managed depending on shell implementations. (from a harmless warning to some maximum recursion depth exceeded or equiv. error.)
Nota 2 (Keep SIGCHLD default handling until wait) : As rightfully observed by Martin Vegter in comments, the trap ownkill CHLD instruction is to be placed immediately before the wait instruction.
Placing it as the first instruction of the script (as I had absurdly suggested in a preceding version) would trigger the dedicated handler as soon as the first sleep process terminates.
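With bash 4.3 or newer, the -n option mentioned in the disclaimer does make the script shorter: wait -n returns as soon as any one child terminates, so no CHLD trap is needed. A sketch, with sleeps of different lengths standing in for the three terminator invocations:

```shell
#!/bin/bash
a=1; b=900; c=901                # durations of the three stand-in children
sleep "$a" & pids=$!
sleep "$b" & pids="$pids $!"
sleep "$c" & pids="$pids $!"

wait -n                          # blocks until the *first* child terminates
kill $pids 2>/dev/null || true   # then take down the remaining ones
echo "remaining children terminated"
```

Running several instances of the script stays safe, since each instance only kills the PIDs it recorded itself.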
| start multiple terminal windows in a "process group" so that remaining processes can be killed, if any one of the processes terminates |
1,429,872,458,000 |
As part of a provisioning script for CentOS 7, I need a one-liner that performs the following. Unfortunately, I have no clue how to achieve it.
If httpd is running then stop it
If httpd is not running then check if httpd is installed at all & start it
ideally the result is logged into /log/httpd/ AND /&hostname/log/httpd/
Anyone able to help?
|
In CentOS7, you have systemctl that will pretty much do most of this for you. If Apache is installed via the standard packages, this should work for you out-of-the-box:
{ echo -n "$(date +'%s %F %T'): "; \
  if systemctl is-active httpd; then \
    systemctl stop httpd && echo "httpd stopped"; \
  elif systemctl enable httpd; then \
    systemctl start httpd && echo "httpd started"; \
  else \
    echo "httpd not installed"; false; \
  fi 2>&1 || echo "Failure: $?"; } | \
tee -a /var/log/httpd/status.log /some/other/location/log/httpd/status.log
I broke it into several lines for clarity. To collapse it to one line, remove the \'s and newlines. You can add more verbosity to the logging.
| One-Liner needed: Stop httpd if running already & Start httpd if not running |
1,429,872,458,000 |
How do I kill all processes with my username that were started within (past hour, past day) etc?
|
find your processes that are younger than an hour
extract the pids
kill the pids
process list:
$ ps -e -o pid,user,etimes,comm \
| awk -v me=$USER '$2 == me && $3 <= 3600 { print }'
Produces
661162 jaroslav 3006 chrome
667859 jaroslav 1711 chrome
669145 jaroslav 1471 chrome
671222 jaroslav 1016 chrome
675278 jaroslav 270 chrome
675578 jaroslav 207 sleep
676094 jaroslav 91 chrome
676102 jaroslav 91 chrome
676528 jaroslav 11 chrome
676529 jaroslav 11 chrome
676553 jaroslav 11 chrome
676602 jaroslav 3 top
676615 jaroslav 0 ps
676616 jaroslav 0 awk
extract pids:
$ ps -e -o pid,user,etimes,comm \
| awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }'
Kill pids:
$ ps -e -o pid,user,etimes,comm \
| awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }' \
| xargs -rt kill
The -rt arguments to xargs are optional: -r skips running kill if there is no input, and -t echoes every executed command line.
You can even test it with kill -0 which does nothing to stop the process, but will report an error if the process is no longer running.
$ ps -e -o pid,user,etimes,comm \
| awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }' \
| xargs -rt kill -0
kill -0 661162 667859 669145 671222 675278 676602 677310 677311 677883 677893 677965 677966 677967 677968
kill: (677966): No such process
kill: (677967): No such process
Realizing that this pipeline can kill itself (notice etimes=0 for ps and awk in the process list above), here is a revised version which ignores very recent processes:
ps -u "$LOGNAME" -o pid,etimes,comm \
| awk '$2 <= 3600 && $2 > 1 { print $1 }' \
| xargs -rt kill -0
This is probably not very portable, but should work on Linux (at least ubuntu 18). Hopefully this gives you some idea about how to approach this problem.
<mother-mode>
Do run the ps command without awk and xargs and kill first to see what would be killed and be careful if running as root. You could potentially shut down the system or kill some important service that has recently been restarted.
</mother-mode>
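The etimes column that the pipeline keys on is simply the elapsed time of the process in seconds, which is easy to sanity-check on a freshly started process:

```shell
sleep 300 &                      # a fresh process to inspect
pid=$!
sleep 2                          # let a couple of seconds elapse

elapsed=$(ps -o etimes= -p "$pid" | tr -d ' ')
echo "process $pid has been alive for about $elapsed seconds"
kill "$pid"
```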
| Kill all of my processes that were started within the past hour? |
1,429,872,458,000 |
Simple question. I'm trying to find the config file of pm2's logrotate module to edit it manually. Unfortunately this information is not provided in the Github repo's README. So where is this file?
Backstory: I accidentally added a config with the incorrect key using pm2 set pm2-logrotate:wrong-key. I don't want it to confuse me when I come back to it later. Since there's no way to remove the config line in console (that I am aware of), I would like to get rid of it manually.
|
Found it.
~/.pm2/module_conf.json
I believe this file stores configuration for all modules.
| Where on disk is the config file of pm2-logrotate module? |
1,429,872,458,000 |
I am trying my hand at Linux signals, and I have created the scenario mentioned below:
Initially block all SIGINT signals using sigprocmask().
If the sender sends a SIGUSR1 signal, then unblock SIGINT for the rest of the process's life.
The first step is successfully implemented, but I am not able to unblock (or change) the process mask using sigprocmask().
What am I doing wrong?
#include<stdio.h>
#include<signal.h>
#include<stdlib.h>
#include<unistd.h> /* for getpid() and sleep() */
sigset_t block_list, unblock_list;
void sigint_handler(int sig)
{
printf("Ouch!!\n");
}
void sigusr1_handler(int sig)
{
sigemptyset(&unblock_list);
sigprocmask(SIG_SETMASK, &unblock_list, NULL);
}
int main(int argc, char **argv)
{
int count;
signal(SIGINT, &sigint_handler);
signal(SIGUSR1, &sigusr1_handler);
sigemptyset(&block_list);
sigaddset(&block_list, SIGINT);
sigprocmask(SIG_SETMASK, &block_list, NULL);
for(count=1; ;++count)
{
printf("Process id: %ld\t%d\n", (long)getpid(), count);
sleep(4);
}
exit(EXIT_SUCCESS);
}
$ kill -s SIGINT <pid>
$ kill -s SIGUSR1 <pid> // This call should unblock SIGINT for the rest of the process's life, but it only unblocks it once. Every time, I have to send SIGUSR1 again to unblock SIGINT.
Note: Error handling has been removed for simplicity.
|
The kernel will restore the signal mask upon returning from a signal handler. This is specified by the standard:
When a thread's signal mask is changed in a signal-catching function
that is installed by sigaction(), the restoration of the signal mask
on return from the signal-catching function overrides that change
(see sigaction()). If the signal-catching function was installed
with signal(), it is unspecified whether this occurs.
On Linux, signal(2) is just a deprecated compatibility wrapper for sigaction(2), and the restoration also occurs when using signal(2).
| Why below code is not able to unblock SIGINT signal |
1,429,872,458,000 |
I am trying to make an AV bug out of a Raspberry Pi for a class.
I am using sox to record sound,
which is working fine.
The issue is that sox needs to be stopped with a Ctrl+C to finish and create the new file. If killall is sent from a different SSH session, it will drop the other session and sox will not create the file.
listen.sh
#! /bin/bash
NOW=$( date '+%F_%H:%M:%S' )
filename="/home/pi/gets/$NOW.wav"
sox -t alsa plughw:1 $NOW.wav;
sleep 6;
echo $filename
I have tried making a separate script for stopping it; pretty much
killlisten.sh
#! /bin/bash
sleep 5;
ps | grep sox | kill 0;
Then run a
superscript.sh
#! /bin/bash
./listen.sh;
./killlisten.sh;
Any advice on how to stop sox in a way that would still produce an output file would be great. This will ideally be set to run at set times so avoiding human interaction is essential.
|
Your pipeline
ps | grep sox | kill 0
will not do what you want to do. This is because kill won't ever read the input from grep (the result from grep will also contain a lot of other things than just the PID of the sox process).
If you have pkill, just do
pkill sox
instead (use pkill -INT sox to send the same signal as Ctrl+C does).
If you change your startup script to
#!/bin/bash
NOW=$( date '+%F_%H:%M:%S' )
filename="/home/pi/gets/$NOW.wav"
sox -t alsa plughw:1 "$NOW.wav" & sox_pid="$!"
printf 'sox pid is %d\n' "$sox_pid"
wait
# Alternatively (instead of "wait", if you want to kill sox after six seconds):
# sleep 6 && kill "$sox_pid"
echo "$filename"
You will get the PID of the sox process printed to the terminal and you could use that to do kill pid (with pid replaced by that number).
Using & after the sox invocation places it in the background. The PID of that background task is automatically stored in $! and the code above assigns it to sox_pid, which is later printed.
The wait command waits for the sox command (running in the background) to finish.
As we discussed in a previous session: Double-quote all variable expansions.
| Stopping a script process with a Control + C or something |
1,429,872,458,000 |
I have 5 processes running from one terminal (T1). All of them are running in background, but generate a lot of output.
Now from other terminal (T2), I want to kill one of them using KILL pid command. Then after 60 secs, I want to restart the same process (this will get a different pid obviously).
My script would look like
KILL 1524
sleep 60
myProcess
The problem is that after this, the terminal T2 also becomes unusable, due to the output of the process. If I want to do the same thing again, I will have to start another terminal. Is it possible to do something that would force the process to start in T1?
|
So, you can have the output appear in the other terminal—though I doubt you really want to. To do so:
Find the tty of the terminal you'd like the output to go to; the easiest way is to run tty. This should print something like: /dev/pts/42.
In the other terminal, run: command > /dev/pts/42 &. If you want to do stderr as well as stdout: command > /dev/pts/42 2>&1 &
That will only work for the same user (due to permissions), and it doesn't redirect input (and redirecting input won't really work, as you'll be fighting the shell for it).
A much better solution is to redirect the output to a file (command > outfile), then you can use less, tail, etc. to watch it. Or, use screen/tmux to run multiple sessions inside one terminal.
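The file-based variant looks like this in practice (the short loop stands in for the long-running command):

```shell
# run the noisy command in the background, capturing stdout and stderr in a file
(for i in 1 2 3; do echo "output line $i"; done) > /tmp/demo-out.log 2>&1 &
wait "$!"                        # in real use you would simply leave it running

# then inspect the log from any terminal (tail -f follows it live)
tail -n 2 /tmp/demo-out.log
```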
| How can I run a process in one terminal from another terminal |
1,429,872,458,000 |
I am trying to list the superuser's processes currently running in my Kali distro. Using "pgrep -f sbin", I figured that would do the trick; however, it only lists the PID numbers, not the actual names of the processes. How can I get it to do this?
Using "ps -ef | grep sbin" returns a very unformatted list. Is there a neat way to format this, or dump it to a text file with some proper formatting?
|
Solved it by adding the -l flag to pgrep:
pgrep -lf sbin
From man pgrep:
-l, --list-name
List the process name as well as the process ID. (pgrep only.)
| List superuser processes |
1,429,872,458,000 |
I am maintaining an application that currently consists of 4 processes that are dependent on each other in various ways. At the moment these processes are started, stopped and monitored via a rather "grown" bash script, which contains pearls like these:
# And another rather dirty way to identify the node Api
# process by knowing the filename that is given to node.js.
# Keep in mind not to kill the retrieved process directly, but
# check the running user and the used port to make sure you
# actually got the correct process.
findApiProcessId() {
local pid=`ps ax | grep node | grep api/server\.js | cut -c -5`
if [ $pid ]
then
echo $pid
else
echo -1
fi
}
The other processes are checked using similar means.
I am now allowed to completely rework this process management, but to be honest I have no real idea where to start. The processes I need to monitor are
lighttpd which manages a "stable" pid file
mongod which manages a pid file that doesn't seem to be quite reliable. Sometimes it points to a pid that belongs to an entirely different process.
At least two node.js instances (running with nohup), which try to do maintain some pid file but fail quite miserably.
I need to be able to start and stop single processes and to query their status, all from the command line. And I need to be able to start that program multiple times in different directories for different "groups" of our application. The child processes currently check the current working directory for configuration files and are able to "choose" their ports without blocking other running instances.
My first thought was to write up a simple process host using Python or C, but I guess that's kind of overkill. So I was looking for an existing tool but searches for "linux process host tool" don't reveal anything useful.
So are there any "standard" process host tools that are able to watch over multiple child processes?
|
How about runit, "a UNIX init scheme with service supervision"?
I think it matches your requirements, i.e.
"runit's service supervision resolves dependencies for service daemons designed to be run by a supervisor process automatically." (more)
checking a running service can be done via sv status service
there are already many service definitions, readily usable and a good resource for building your own ones
it is packaged for various distributions and quite well established (there's a lighttpd wiki page on runit, see also these run scripts including lighttpd and mongodb)
it can accommodate many different daemon variants (i.e. node.js doesn't pose a problem at all)
I cannot answer the "one service in some variants" issue in a clever way, you could of course define the services separately...
(there might be some neat symlink-and-examine-my-pwd-solution to this but I'm not sure if trying to be clever here is a good idea; thinking about maintainability)
Edit This ArchWiki page provides a quick overview that might be a better start than runit's page.
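For illustration, a minimal runit run script for one of the node.js instances could look like the sketch below. The service directory layout, the /srv/myapp path, and the appuser account are assumptions for the example, not part of the original setup:

```shell
#!/bin/sh
# runit executes this script to (re)start the service; using exec replaces
# the supervisor's child with the daemon itself, so signals sent by
# `sv stop`/`sv term` reach the daemon directly.
exec 2>&1                      # merge stderr into the logged stdout
cd /srv/myapp                  # the app reads config and picks ports from its cwd
exec chpst -u appuser node api/server.js
```

Placed as /etc/sv/myapp/run (and symlinked into the service directory), this gives you `sv up myapp`, `sv down myapp` and `sv status myapp` without any hand-rolled pid handling.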
| Host process for multiple processes? |
1,429,872,458,000 |
Under the folder /home/testing/scripts on a Linux machine, we have 234 different scripts that perform sanity checks and testing, such as:
/home/testing/scripts/test.network.py
/home/testing/scripts/test.hw.py
/home/testing/scripts/test.load.sh
.
.
.
In some cases we want to kill all running scripts,
so in order to find the running scripts' PIDs we do:
lsof /home/testing/scripts/
and to kill all pids we use:
for proccess in `lsof /home/testing/scripts/ | awk '{print $2}' | grep -v proccess`; do kill $proccess; done
Let's say we run only the script /home/testing/scripts/test.network.py,
and from ps -ef | grep "testing/scripts" we get
root 5793 17546 84 09:20 ? 00:00:00 python3 -u /home/testing/scripts/test.network.py
so we should get the same PID number from:
lsof /home/testing/scripts/
Now I just want to know if my approach
for proccess in `lsof /home/testing/scripts/ | awk '{print $2}' | grep -v proccess`; do kill $proccess; done
is good enough to kill all running scripts under /home/testing/scripts/
or whether there are better alternatives?
|
You could tidy up the command a little, but it seems to me that it's reasonably accurate already. I'd match the filename more tightly to reduce potential mismatches, and I'd ensure that each candidate PID is actually numeric:
lsof | awk -v p='^/home/testing/scripts/' '$9~p && $2+0 {print $2}' | sort -u | xargs echo kill
Notice the ^ at the beginning of the directory assignment to the awk variable p. You should also escape characters that could be interpreted as part of a Regular Expression (i.e. . should be represented as \., the * as \*, etc.)
Remove echo when you're ready to have the script really perform the kill operation.
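To illustrate how the filter behaves, here is the same awk program run against a small, hypothetical sample of lsof output (the PIDs and paths are invented for the demonstration):

```shell
# Fake lsof output: a header line, one file under the target directory,
# and one open file elsewhere. Column 9 is NAME, column 2 is PID.
sample='COMMAND  PID USER FD  TYPE DEVICE SIZE/OFF NODE NAME
python3 5793 root txt  REG 8,1  12345 99 /home/testing/scripts/test.network.py
bash    1234 root cwd  DIR 8,1   4096 11 /home/other/place'

# Only the line whose NAME starts with the anchored path, and whose
# second column is a real number, survives; the header ("PID"+0 == 0)
# and the unrelated file are dropped.
printf '%s\n' "$sample" |
  awk -v p='^/home/testing/scripts/' '$9 ~ p && $2+0 {print $2}' |
  sort -u
```

This prints only 5793, which is exactly the set of PIDs the kill loop should receive.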
| what is the right way to know all script PIDS that runs under folder |
1,429,872,458,000 |
I created a CGROUP on my desktop called background. The purpose of this group is to run all my sysadmin scripts within its CPU limit of 10%. The group is created on every reboot with the following cronjob:
@reboot /usr/bin/cgcreate -t jerzy:jerzy -a jerzy:jerzy -g cpu:background && /usr/bin/cgset -r cpu.cfs_period_us=1000000 background && /usr/bin/cgset -r cpu.cfs_quota_us=100000 background
Despite this limitation, I still want my sysadmin scripts, already limited by cgexec, never to take priority over the rest of my processes. Hence I decided to use the nice command, as in the example below:
cgexec -g cpu:background nice -19 prependPollen.py
Is cgexec in the above command limiting resources to prependPollen.py or only to nice?
More general: does using cgexec limit resources only to the one command placed immediately after cgexec command? Does the same apply to nice?
nice -19 cgexec -g cpu:background prependPollen.py
Would swapping the order, like in the above command, make any difference in the CPU usage/limiting?
Can both nice and cgexec be used in the same command/cronjob?
P.S. My environment: Bash, Debian 10 LTS.
|
Both commands are preparation commands exec-ing the following command while keeping the property they changed. So the order here won't matter as long as the changed properties don't have any side effect changing the other (it's fine for these two).
cgexec -> nice -> final executable
will place the process that follows (nice) in the relevant cgroup, and nice will then change the niceness of the final executable (while keeping the cgroup).
nice -> cgexec -> final executable
will change the niceness of the process that follows (cgexec), and cgexec will then place the final executable in the relevant cgroup (while keeping the niceness).
Both commands will yield the same result. Both properties (cgroup and niceness) are automatically propagated to all children of the following process. So whatever is spawned from prependPollen.py will be in cpu:background and have the changed niceness too.
Any other similar command can be inserted at any place in this pipeline. For example ionice -c 3 could be added in first, second or 3rd place to attempt to limit the I/O effect of the python program with regard to other processes (while there are also cgroups doing a better job for this, it usually requires cgroups v2 to work properly).
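The niceness half of this propagation is easy to verify from any shell; the following generic check (not specific to prependPollen.py) runs two nested shells and lets each report the niceness it inherited:

```shell
# Each bare `nice` call prints the caller's current niceness, so both
# the child shell and the grandchild shell report the same raised value
# (10 when starting from the default niceness of 0).
nice -n 10 sh -c 'nice; sh -c nice'
```

Both printed values match, confirming that the property set by the outermost preparation command survives through every level of spawned children, exactly as it would for a cgroup set by cgexec.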
| Using cgroups and nice with respect to the same process - does the order matter? What is the correct syntax? |
1,429,872,458,000 |
What I'm trying to do is create a new process from within bash as if nothing ever happened, except that my prompt is on the next line, where that process would normally take over my stdout...
Let's say I want to apt-get update right now, but I want to edit some configuration files instead of watching everything download, so I want to run vi /some/config. I should be able to do both at once, right?
Just to clarify, because I did a bit of reading before asking: I want this process to survive past the closing of the terminal [if it has not yet come to its end], so I guess that's not a child process, probably not a subshell, maybe a fork? Is fork what I mean? How do I fork like this?
|
I believe you are looking for the setsid command which starts a program in a new session. So you can:
setsid apt-get update
And, if you want the apt-get update to be silent:
setsid apt-get update >/dev/null 2>&1
The good thing with setsid is that the process started this way will continue after the terminal closes.
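A quick way to see what setsid actually does is to ask the spawned shell for its own session ID: inside the new session the shell is the session leader, so its SID equals its own PID. This sketch assumes a Linux system with setsid from util-linux and ps from procps:

```shell
# The child shell compares its session ID (column SID from ps) with its
# own PID; the $((...)) wrapper strips the padding ps adds to the number.
setsid sh -c 'sid=$(($(ps -o sid= -p $$))); [ "$sid" -eq "$$" ] && echo "new session"'
```

Because the process lives in its own session, it has no controlling terminal to receive SIGHUP from when you close the window, which is why it keeps running.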
| How can I run a program from bash ignoring its stdout so i can run more programs? [closed] |
1,429,872,458,000 |
I recently learned that I can use top with my keyboard to kill processes (k), show processes for a specific user only (u), etc. But I was wondering if there is a way of selecting processes from the menu without having to manually type their PID (e.g. using C-n and C-p to navigate the list would be ideal)
If top does not let me do this, are there any tools that can help with this task?
|
Maybe htop fits the bill. It is a nicer top.
Comparison between htop and top
In 'htop' you can scroll the list vertically and horizontally to see all processes and complete command lines.
In 'top' you are subject to a delay for each unassigned key you press (especially annoying when multi-key escape sequences are triggered by accident).
'htop' starts faster ('top' seems to collect data for a while before displaying anything).
In 'htop' you don't need to type the process number to kill a process, in 'top' you do.
In 'htop' you don't need to type the process number or the priority value to renice a process, in 'top' you do.
'htop' supports mouse operation, 'top' doesn't
'top' is older, hence, more used and tested.
| Task management tools with keyboard navigation that run in a terminal |
1,429,872,458,000 |
I'm looking for a way to run a process, which itself may spawn off more child / grandchild / etc. subprocesses. Then:
Be able to send a signal (e.g. TERM or KILL) to all descendants.
Be able to ensure that when the main process exits, all descendants are terminated (either by killing them or because we have waited for them to all exit voluntarily).
"all descendants" should include descendants that use the same method to create their own inner process group. So, whatever method we use, it should support nesting, otherwise processes have a way to escape our supervision by creating their own conclave.
Though this sounds like a simple task, various documented methods fail in different subtle ways.
setsid leaves orphaned descendants - it can only wait for the top-level process. I also don't think it can be nested, because sessions can't be nested.
Yelp's dumb-init can't be nested (without using --single-process).
bwrap can create a PID namespace and run an init inside it, which seems to be halfway there, however it can't wait until all subprocesses exited.
I tried combining dumb-init with bwrap --as-pid-1, which seems to get us a step further, however, the only way to send a SIGKILL to the entire group is to map another signal to KILL using dumb-init's --rewrite feature. This breaks nesting: mapping a signal to SIGKILL in the outer dumb-init will just kill the inner dumb-init, leaving other inner processes running.
Running the commands in a container (docker or podman) does achieve these goals. However, it is a heavyweight solution that requires extra work to set up, and to disable other types of isolation (e.g. of the filesystem or UIDs) and enable nesting (DinD etc.).
Anything I've overlooked?
|
As it turns out, my testing methodology was off and PID namespaces seem to be the best tool for this job. BubbleWrap is one tool that allows creating them, and does get the job done:
args=(
bwrap
# Create PID namespace
--unshare-pid
# Make the filesystem namespace a straight map to the outer one
--dev-bind / /
# Mount a working procfs
--proc /proc
# Let the kernel kill the entire namespace when the top-level process dies
--as-pid-1
# Kill the group when bwrap or the current script dies
# (needed for nesting)
--die-with-parent
# Pass in the top-level command of the process tree that we want to isolate
"$@"
) ; exec "${args[@]}"
Notes:
PID namespaces can be nested, which satisfies the nestability requirement.
When PID 1 dies, the kernel kills all processes from its PID namespace. Usually bwrap runs its own init as PID 1, but we can disable that to prevent orphan processes. Note that Bash will reap zombie processes reparented to it, so any Bash script should be usable as PID 1.
--die-with-parent makes it so that when bwrap or its parent dies, the process tree is killed as well. This allows reliably cleaning it up on demand.
| Reliably run, wait for, and kill a tree of processes |
1,429,872,458,000 |
If I do:
sleep 1
versus
sleep 1 & wait $!
will there be any difference in terms of CPU usage for spawning a foreground process versus a background process? Or will the performance of both lines be exactly equal?
|
Yes.
On my system the first takes 3.0 ms and the second takes 3.3 ms CPU time.
In practice I would never worry about the 0.3 ms CPU time if you are sleeping a full second.
The 0.3 ms is probably caused by the extra fork that is needed to put sleep in the background. In other words: It is a one-time cost, not a 10% extra cost for running jobs in the background.
| Is there a performance penalty for backgrounding a process? |
1,429,872,458,000 |
I want to understand the Process Control Block of Mac OS and Linux. For Linux it was pretty straightforward: there was a post here asking about the same thing, and someone replied to go take a look at "task_struct" in <linux/sched.h>. However, I am finding it more difficult to find the equivalent information for Mac OS. Someone in Apple's developer forum asked a similar question and was told to look at proc_info.h and proc.h, but I am lost as to which struct I should be looking at. Is there a task_struct equivalent in Mac OS?
|
I know nothing of Mac OS but… I know a couple of things about FreeBSD. Hope it will match.
You got it right looking at the task_struct in Linux because it's the basic unit of scheduling in Linux.
The basic unit of scheduling in FreeBSD is the thread.
Linux represents processes (and threads) by task_struct structures.
A single-threaded process in Linux has a single task_struct.
A single-threaded process in FreeBSD has a proc struct, a thread struct, and a ksegrp struct.
(The ksegrp is a "kernel scheduling entity group.")
At the end of the day both OSes schedule threads, where a thread is a thread structure in FreeBSD, and a task_struct in Linux.
Therefore, yes indeed, follow the advice and first have a look at proc.h
| What is the equivalent of "task_struct" in linux's <linux/sched.h> for Mac OS? |