1,336,906,003,000 |
This is a simple problem but the first time I've ever had to actually fix it: finding which specific files/inodes are the targets of the most I/O. I'd like to be able to get a general system overview, but if I have to give a PID or TID I'm alright with that.
I'd like to go without having to do a strace on the program that pops up in iotop. Preferably, using a tool in the same vein as iotop but one that itemizes by file. I can use lsof to see which files mailman has open but it doesn't indicate which file is receiving I/O or how much.
I've seen it suggested elsewhere to use auditd, but I'd prefer not to, since it would put the information into our audit files, which we use for other purposes; this seems like an issue I ought to be able to investigate some other way.
The specific problem I have right now is with LVM snapshots filling too rapidly. I've since resolved the problem but would like to have been able to fix it this way rather than just doing an ls on all the open file descriptors in /proc/<pid>/fd to see which one was growing fastest.
|
There are several aspects to this question which have been addressed partially through other tools, but there doesn't appear to be a single tool that provides all the features you're looking for.
iotop
This tool shows which processes are consuming the most I/O, but it lacks options to show specific file names.
$ sudo iotop
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/u:0]
6 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
7 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
By default it does for disk I/O what regular top does for processes vying for CPU time. You can coax it into a 30,000-foot view with the -a switch, which shows an accumulated total per process over time.
$ sudo iotop -a
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
258 be/3 root 0.00 B 896.00 K 0.00 % 0.46 % [jbd2/dm-0-8]
22698 be/4 emma 0.00 B 72.00 K 0.00 % 0.00 % chrome
22712 be/4 emma 0.00 B 172.00 K 0.00 % 0.00 % chrome
1177 be/4 root 0.00 B 36.00 K 0.00 % 0.00 % cupsd -F
22711 be/4 emma 0.00 B 120.00 K 0.00 % 0.00 % chrome
22703 be/4 emma 0.00 B 32.00 K 0.00 % 0.00 % chrome
22722 be/4 emma 0.00 B 12.00 K 0.00 % 0.00 % chrome
i* tools (inotify, iwatch, etc.)
These tools provide access to file-access events, but they need to be targeted at specific directories or files, so they aren't much help when trying to track down a rogue file access by an unknown process while debugging performance issues.
Also, the inotify framework doesn't provide any particulars about the files being accessed beyond the type of access, so no information about the amount of data being moved back and forth is available with these tools.
iostat
Shows overall performance (reads and writes) for a given device (hard drive) or partition, but doesn't provide any insight into which files are generating those accesses.
$ iostat -htx 1 1
Linux 3.5.0-19-generic (manny) 08/18/2013 _x86_64_ (3 CPU)
08/18/2013 10:15:38 PM
avg-cpu: %user %nice %system %iowait %steal %idle
18.41 0.00 1.98 0.11 0.00 79.49
Device:  rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm %util
sda        0.01   0.67 0.09 0.87   1.45  16.27    37.06     0.01  10.92   11.86   10.82   5.02  0.48
dm-0       0.00   0.00 0.09 1.42   1.42  16.21    23.41     0.01   9.95   12.22    9.81   3.19  0.48
dm-1       0.00   0.00 0.00 0.02   0.01   0.06     8.00     0.00 175.77   24.68  204.11   1.43  0.00
blktrace
This option is too low level: it shows only raw block numbers, with no visibility into which files or inodes are being accessed.
$ sudo blktrace -d /dev/sda -o - | blkparse -i -
8,5 0 1 0.000000000 258 A WBS 0 + 0 <- (252,0) 0
8,0 0 2 0.000001644 258 Q WBS [(null)]
8,0 0 3 0.000007636 258 G WBS [(null)]
8,0 0 4 0.000011344 258 I WBS [(null)]
8,5 2 1 1266874889.709032673 258 A WS 852117920 + 8 <- (252,0) 852115872
8,0 2 2 1266874889.709033751 258 A WS 852619680 + 8 <- (8,5) 852117920
8,0 2 3 1266874889.709034966 258 Q WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 4 1266874889.709043188 258 G WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 5 1266874889.709045444 258 P N [jbd2/dm-0-8]
8,0 2 6 1266874889.709051409 258 I WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 7 1266874889.709053080 258 U N [jbd2/dm-0-8] 1
8,0 2 8 1266874889.709056385 258 D WS 852619680 + 8 [jbd2/dm-0-8]
8,5 2 9 1266874889.709111456 258 A WS 482763752 + 8 <- (252,0) 482761704
...
^C
...
Total (8,0):
Reads Queued: 0, 0KiB Writes Queued: 7, 24KiB
Read Dispatches: 0, 0KiB Write Dispatches: 3, 24KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 0, 0KiB Writes Completed: 5, 24KiB
Read Merges: 0, 0KiB Write Merges: 3, 12KiB
IO unplugs: 2 Timer unplugs: 0
Throughput (R/W): 0KiB/s / 510KiB/s
Events (8,0): 43 entries
Skips: 0 forward (0 - 0.0%)
fatrace
fatrace relies on fanotify, a relatively new addition to the Linux kernel (and a welcome one), so it's only in newer distros such as Ubuntu 12.10. My Fedora 14 system was lacking it 8-).
It provides the same access that you can get through inotify without having to target a particular directory and/or files.
$ sudo fatrace
pickup(4910): O /var/spool/postfix/maildrop
pickup(4910): C /var/spool/postfix/maildrop
sshd(4927): CO /etc/group
sshd(4927): CO /etc/passwd
sshd(4927): RCO /var/log/lastlog
sshd(4927): CWO /var/log/wtmp
sshd(4927): CWO /var/log/lastlog
sshd(6808): RO /bin/dash
sshd(6808): RO /lib/x86_64-linux-gnu/ld-2.15.so
sh(6808): R /lib/x86_64-linux-gnu/ld-2.15.so
sh(6808): O /etc/ld.so.cache
sh(6808): O /lib/x86_64-linux-gnu/libc-2.15.so
The above shows you the process ID that's doing the file accessing and which file it's accessing, but it doesn't give you any overall bandwidth usage, so each access is indistinguishable from any other access.
So what to do?
The fatrace option shows the most promise for FINALLY providing a tool that can show you aggregate usage of disk I/O based on files being accessed, rather than the processes doing the accessing.
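Until such a tool arrives, you can approximate the aggregate view by post-processing fatrace's output. A sketch, with a few untested assumptions: paths contain no spaces, timeout(1) is available, and note this counts events per file, since fatrace doesn't report byte counts:

```shell
# Record 30 seconds of system-wide file-access events, then rank the
# files by how many events touched them (last field is the path).
sudo timeout 30 fatrace > /tmp/fatrace.log
awk '{ n[$NF]++ } END { for (f in n) print n[f], f }' /tmp/fatrace.log |
    sort -rn | head -20
```

This gives a crude "hottest files" list rather than true bandwidth, but that is often enough to spot the culprit.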
References
fatrace: report system wide file access events
fatrace - report system wide file access events
Another new ABI for fanotify
blktrace User Guide
| Determining Specific File Responsible for High I/O |
Is there a way to show the connections of a process? Something like that:
show PID
in which show is a command to do this, and PID is the ID of the process.
The output that I want is composed of all the connections of the process (in real-time). For example, if the process tries to connect to 173.194.112.151 the output is 173.194.112.151.
A more specific example with Firefox:
show `pidof firefox`
and with Firefox I go at first to google.com, then to unix.stackexchange.com and finally to 192.30.252.129. The output, when I close the browser, must be:
google.com
unix.stackexchange.com
192.30.252.129
(Obviously with the browser this output is not realistic, because there are a lot of other related connections, but this is only an example.)
|
You're looking for strace!
I found this answer on askubuntu, but it's valid for Unix:
To start and monitor a new process:
strace -f -e trace=network -s 10000 PROCESS ARGUMENTS
To monitor an existing process with a known PID:
strace -p $PID -f -e trace=network -s 10000
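To reduce the strace log to the bare list of addresses the question asks for, you can post-process it. A sketch, assuming strace's usual connect() formatting with sin_addr=inet_addr("A.B.C.D") — this varies between strace versions:

```shell
# List the distinct IPv4 addresses a running process connects to.
strace -p "$PID" -f -e trace=connect 2>&1 |
    grep -oE 'inet_addr\("[0-9.]+"\)' |
    grep -oE '[0-9]+(\.[0-9]+){3}' |
    sort -u
```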
Otherwise, but that's specific to Linux, you can run the process in an isolated network namespace and use wireshark to monitor the traffic. This will probably be more convenient than reading the strace log:
create a test network namespace:
ip netns add test
create a pair of virtual network interfaces (veth-a and veth-b):
ip link add veth-a type veth peer name veth-b
change the active namespace of the veth-a interface:
ip link set veth-a netns test
configure the IP addresses of the virtual interfaces:
ip netns exec test ifconfig veth-a up 192.168.163.1 netmask 255.255.255.0
ifconfig veth-b up 192.168.163.254 netmask 255.255.255.0
configure the routing in the test namespace:
ip netns exec test route add default gw 192.168.163.254 dev veth-a
activate ip_forward and establish a NAT rule to forward the traffic coming in from the namespace you created (you have to adjust the network interface and SNAT ip address):
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o YOURNETWORKINTERFACE -j SNAT --to-source YOURIPADDRESS
(You can also use the MASQUERADE rule if you prefer)
finally, you can run the process you want to analyze in the new namespace, and wireshark too:
ip netns exec test thebinarytotest
ip netns exec test wireshark
You'll have to monitor the veth-a interface.
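Note that ifconfig and route come from the deprecated net-tools package. On systems that only ship iproute2, the same setup can be sketched with ip commands alone (same made-up addresses and interface names as above):

```shell
# Namespace setup using iproute2 only; run as root.
ip netns add test
ip link add veth-a type veth peer name veth-b
ip link set veth-a netns test
ip netns exec test ip addr add 192.168.163.1/24 dev veth-a
ip netns exec test ip link set veth-a up
ip addr add 192.168.163.254/24 dev veth-b
ip link set veth-b up
ip netns exec test ip route add default via 192.168.163.254 dev veth-a
```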
| Show network connections of a process |
I have seen this answer:
You should consider using inotifywait, as an example:
inotifywait -m /path -e create -e moved_to |
while read path action file; do
echo "The file '$file' appeared in directory '$path' via '$action'"
# do something with the file
done
The above script watches a directory for creation of files of any type. My question is how to modify the inotifywait command to report only when a file of a certain type/extension is created (or moved into the directory). For example, it should report when any .xml file is created.
What I tried:
I have run the inotifywait --help command, and have read the command line options. It has --exclude <pattern> and --excludei <pattern> options to EXCLUDE files of certain types (by using regular expressions), but I need a way to INCLUDE just the files of a certain type/extension.
|
how do I modify the inotifywait command to report only when a file of
certain type/extension is created
Please note that this is untested code since I don't have access to inotify right now. But something akin to this ought to work with bash:
inotifywait -m /path -e create -e moved_to |
while read -r directory action file; do
if [[ $file =~ \.xml$ ]]; then # Does the file end with .xml?
echo "xml file" # If so, do your thing here!
fi
done
Alternatively, without bash,
inotifywait -m /path -e create -e moved_to |
while read -r directory action file; do
case "$file" in
(*.xml)
echo "xml file" # Do your thing here!
;;
esac
done
With newer versions of inotifywait you can directly create a pattern match for files:
inotifywait -m /path -e create -e moved_to --include '.*\.xml$' |
while read -r directory action file; do
echo "xml file" # Do your thing here!
done
| How to use inotifywait to watch a directory for creation of files of a specific extension |
Three files have suddenly appeared in my home directory, called "client_state.xml", "lockfile", and "time_stats_log". The last two are empty. I'm wondering how they got there. It's not the first time it has happened, but the last time was weeks ago; I deleted the files and nothing broke or complained. I haven't been able to think of what I was doing at the time reported by stat $filename. Is there any way I can find out where they came from?
Alternatively, is there a way to monitor the home directory (but not sub-directories) for the creation of files?
|
I don't believe there is a way to determine which program created a file.
For your alternative question:
You can watch for the file to be recreated, though, using inotify. inotifywait is a command-line interface for the inotify subsystem; you can tell it to look for create events in your home directory:
$ (sleep 5; touch ~/making-a-test-file) &
[1] 22526
$ inotifywait -e create ~/
Setting up watches.
Watches established.
/home/mmrozek/ CREATE making-a-test-file
You probably want to run it with -m (monitor), which tells it not to exit after it sees the first event.
| Is it possible to find out what program or script created a given file? |
How can I open a text file and let it update itself? Similar to the way top works.
I want to open a log file and watch it update itself on the fly.
I have just tried:
$ tail error.log
But I just realised that it only shows the existing lines in the log file.
I am using RHEL 5.10
|
You're looking for tail -f error.log (from man tail):
-f, --follow[={name|descriptor}]
output appended data as the file grows; -f, --follow, and --fol‐
low=descriptor are equivalent
That will let you watch a file and see any changes made to it.
| Open a text file and let it update itself |
We have one central server which functions as an internet gateway. This server is connected to the internet, and using iptables we forward traffic and share the internet connection among all computers in the network. This works just fine.
However, sometimes internet gets really slow. Most likely one of the users is downloading videos or other large files. I want to pinpoint the culprit. I'm thinking of installing a tool that can monitor the network traffic that passes through the server, by IP. Preferably in real time as well as an accumulated total (again by IP). Any tool that is recommended for this? Preferably something in the Ubuntu repositories.
|
ntop is probably the best solution for doing this. It is designed to run long term and capture exactly what you're looking for.
It can show you which clients are receiving/sending the most traffic, where they're receiving/sending to, what protocols and ports are being used etc.
It then uses a web GUI to navigate and display this information.
ntop is a fairly well-known tool, so I would be highly surprised if it's not in Ubuntu's package repository.
| Find out network traffic per IP |
I would like to see what's happening in my app server folders, i.e. which files are changed by process x or which *.war files have been changed (replaced/created) in the last x minutes.
Is there a tool in Linux to help with this?
|
Depending on your exact needs, you might want to look into inotify and/or FAM/GAMIN solutions.
| monitoring file changes + process access to files |
Is there some way I can check which of my processes the kernel has killed? Sometimes I log onto my server and find that something that should've run all night just stopped 8 hours in, and I'm unsure whether it was the application's doing or the kernel's.
|
If the kernel killed a process (because the system ran out of memory), there will be a kernel log message. Check in /var/log/kern.log (on Debian/Ubuntu, other distributions might send kernel logs to a different file, but usually under /var/log under Linux).
Note that if the OOM-killer (out-of-memory killer) triggered, it means you don't have enough virtual memory. Add more swap (or perhaps more RAM).
Some process crashes are recorded in kernel logs as well (e.g. segmentation faults).
If the processes were started from cron, you should have a mail with error messages. If the processes were started from a shell in a terminal, check the errors in that terminal. Run the process in screen to see the terminal again in the morning. This might not help if the OOM-killer triggered, because it might have killed the cron or screen process as well; but if you ran into the OOM-killer, that's the problem you need to fix.
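A quick check for OOM-killer activity might look like this (log locations and the exact message wording vary by distribution and kernel version):

```shell
# Search the kernel ring buffer and log files for OOM kills.
dmesg | grep -iE 'out of memory|killed process'
grep -iE 'out of memory|killed process' /var/log/kern.log
# On systemd-based systems:
journalctl -k | grep -i 'killed process'
```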
| Where can I see a list of kernel killed processes? |
There is a directory A whose contents are changed frequently by other people.
I have made a personal directory B where I keep all the files that have ever been in A.
Currently, I just occasionally run rsync to get the files to be backed up from A to B. However, I fear the possibility that some files will get added in A, and then removed from A before I get the chance to copy them over to B.
What is the best way to prevent this from occurring? Ideally, I'd like to have my current backup script run every time the contents of A get changed.
|
If you have inotify-tools installed you can use inotifywait to trigger an action if a file or directory is written to:
#!/bin/sh
dir1=/path/to/A/
while inotifywait -qqre "attrib,modify,close_write,move,move_self,create,delete,delete_self" "$dir1"; do
/run/backup/to/B
done
Where the -qq switch is completely silent, -r is recursive (if needed) and -e is the event to monitor. From man inotifywait:
attrib The metadata of a watched file or a file within a watched directory was modified. This includes timestamps, file permissions, extended attributes etc.
modify A watched file or a file within a watched directory was written to.
close_write A watched file or a file within a watched directory was closed, after being opened in writeable mode. This does not
necessarily imply the file was written to.
move A file or directory was moved from or to a watched directory. Note that this is actually implemented simply by listening for both
moved_to and moved_from, hence all close events received will be
output as one or both of these, not MOVE.
move_self A watched file or directory was moved. After this event, the file or directory is no longer being watched.
create A file or directory was created within a watched directory.
delete A file or directory within a watched directory was deleted.
delete_self A watched file or directory was deleted. After this event the file or directory is no longer being watched. Note that this
event can occur even if it is not explicitly being listened for.
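Putting this together with the rsync approach from the question might look like the following sketch (paths are placeholders; trim the event list to taste):

```shell
#!/bin/sh
# Re-run the backup whenever anything under A changes.
SRC=/path/to/A/
DEST=/path/to/B/
while inotifywait -qqre "close_write,move,create,delete" "$SRC"; do
    rsync -a "$SRC" "$DEST"
done
```

Because inotifywait blocks until an event arrives, the loop idles cheaply between changes.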
| How to run a command when a directory's contents are updated? |
Is there a Unix/Linux equivalent of Process Monitor, whether GUI or CUI?
If it makes a difference, I'm looking at Ubuntu, but if there's an equivalent for other systems (Mac, other Linux variants like Fedora, etc.) then knowing any of those would be useful too.
Edit:
Process Monitor is for monitoring system calls (such as file creation or writes), while Process Explorer is for monitoring process status (which is like System Monitor). I'm asking for the former, not the latter. :-)
|
The console standby for this is top, but there are alternatives like my favorite htop that give you a little more display flexibility and allow you a few more operations on the processes.
A less interactive view that is better for use in scripts would be the ps program and all its relatives.
Edit: Based on your clarified question, you might note that strace handles watching the system calls made by a given process, including all read/write operations and OS function calls. You can invoke it on the command line before the program you want to track, or attach it to a running process by hitting s on a process selected in htop.
| Process Monitor equivalent for Linux? |
I want to monitor memory usage of a process, and I want this data to be logged. Does such a tool exist?
|
I have written a script to do exactly this.
It basically samples ps at specific intervals, to build up a profile of a particular process. The process can be launched by the monitoring tool itself, or it can be an independent process (specified by pid or command pattern).
| Is there a tool that allows logging of memory usage? |
I don't understand iotop output: it shows ~1.5 MB/s of disk write (top right), but all programs have 0.00 B/s. Why?
The video was taken as I was deleting the contents of a folder with a few million files using perl -e 'for(<*>){((stat)[9]<(unlink))}', on Kubuntu 14.04.3 LTS x64.
iotop was launched using sudo iotop.
|
The information shown by iotop isn't gathered in the same way for individual processes and for the system as a whole. The “actual” global figures are not the sum of the per-process figures (that's what “total” is).
All information is gathered from the proc filesystem.
For each process, iotop reads data from /proc/PID/io, specifically the rchar and wchar values. These are the number of bytes passed in read and write system calls (including variants such as readv, writev, recv, send, etc.).
The global “actual” values are read from /proc/vmstat, specifically the pgpgin and pgpgout values. These measure the data exchanged between the kernel and the hardware (more precisely, this is the data shuffled around by the block device layer in the kernel).
There are many reasons why the per-process data and the block device layer data differ. In particular:
Caching and buffering mean that I/O happening at one layer may not be happening at the same time, or the same number of times, at the other layer. For example, data read from the cache is accounted as a read from the process that accesses it, but there's no corresponding read from the hardware (that already happened earlier, possibly on behalf of another process).
The process-level data includes data exchanged on pipes, sockets, and other input/output that doesn't involve an underlying disk or other block device.
The process-level data only accounts for file contents, not metadata.
That last difference explains what you're seeing here. Removing files only affects metadata, not data, so the process isn't writing anything. It may be reading directory contents to list the files to delete, but that's small enough that it may scroll by unnoticed.
I don't think Linux offers any way to monitor file metadata updates. You can monitor per-filesystem I/O via entries under /sys/fs for some filesystems. I don't think you can account metadata I/O against specific processes; it would be very complicated to do in the general case, since multiple processes could be causing the same metadata to be read or changed.
| iotop showing 1.5 MB/s of disk write, but all programs have 0.00 B/s |
Until recently I thought the load average (as shown for example in top) was a moving average on the n last values of the number of process in state "runnable" or "running". And n would have been defined by the "length" of the moving average: since the algorithm to compute load average seems to trigger every 5 sec, n would have been 12 for the 1min load average, 12x5 for the 5 min load average and 12x15 for the 15 min load average.
But then I read this article: http://www.linuxjournal.com/article/9001. The article is quite old but the same algorithm is implemented today in the Linux kernel. The load average is not a moving average but an algorithm for which I don't know a name. Anyway I made a comparison between the Linux kernel algorithm and a moving average for an imaginary periodic load:
[Graph: the Linux kernel's load-average algorithm vs. a true moving average for an imaginary periodic load]
There is a huge difference.
Finally my questions are:
Why was this implementation chosen over a true moving average, which has a clear meaning to anyone?
Why does everybody speak of a "1 min load average" when much more than the last minute is taken into account by the algorithm? (Mathematically, every measurement since boot; in practice, allowing for round-off error, still a lot of measurements.)
|
This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. The algorithm used instead has the advantage that all the kernel needs to keep is the result of the previous calculation.
Keep in mind the algorithm was a bit closer to the truth back when computer speeds and corresponding clock cycles were measured in tens of MHz instead of GHz; there's a lot more time for discrepancies to creep in these days.
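For reference, the kernel's calculation is an exponentially-damped average, not a windowed one: every 5 seconds it computes load = load·e^(−5/60) + n·(1 − e^(−5/60)) for the "1 minute" figure (with 300 and 900 in place of 60 for the 5- and 15-minute figures). A small awk sketch of that recurrence on made-up runnable-task counts:

```shell
# One sample per 5 seconds; the damped average never fully forgets
# old samples, unlike a true 12-sample moving average would.
printf '1\n1\n1\n0\n0\n0\n' |
awk 'BEGIN { e = exp(-5/60) }
     { load = load * e + $1 * (1 - e); printf "%.3f\n", load }'
```

Only the single previous value of load has to be kept between updates, which is exactly the memory-saving property the answer describes.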
| Why isn't a straightforward 1/5/15 minute moving average used in Linux load calculation? |
In a Ubuntu 14.04 server I am experiencing a massive hard disk activity which has no apparent justification: it comes as a burst, it lasts a few minutes and then disappears. It consumes system resources and slows down the whole system.
Is there a (command-line) tool which can be used to monitor the disk activity, listing the processes that are using the disk and the files involved? Something like htop for the CPU.
|
For checking I/O usage I usually use iotop.
It's not installed by default on the distro, but you can easily get it with:
sudo apt-get install iotop
Then launch it with root privileges:
sudo iotop --only
The --only option will show only the processes currently accessing the I/O.
| Monitor hard disk activity [duplicate] |
I started hosting sites a while back using Cherokee. For external sources (FastCGI, etc) it has an option to launch the process if it can't find one running on the designated socket or port. This is great because it means if PHP or a Django site falls over (as they occasionally do) it restarts it automatically.
On a new server using PHP-FPM I couldn't use Cherokee (it has a bug with PHP) so I've moved to NGINX. I really like NGINX (for its config style) but I'm having serious issues with processes falling over and never respawning. PHP does this sometimes but Django sites are more of a problem. I've created init scripts for them and they come up on boot but this doesn't help me if they conk out between reboots.
I guess I'm looking for a FastCGI proxy. Something that, like Cherokee, knows what processes should be running on which sockets/ports and respawns them on-demand. Does such a thing exist? Is there any way to build this into NGINX (for ease of config)?
|
How about daemontools and specifically the supervise tool
supervise monitors a service. It starts the service and restarts the service if it dies. Setting up a new service is easy: all supervise needs is a directory with a run script that runs the service.
| Ensure a process is always running |
I use Ubuntu Server 10.10 and I would like to see what processes are running. I know that PostgreSQL is running on my machine but I can not see it with the top or ps commands, so I assume that they aren't showing all of the running processes. Is there another command which will show all running processes or is there any other parameters I can use with top or ps for this?
|
From the ps man page:
-e Select all processes. Identical to -A.
Thus, ps -e will display all of the processes. The common options for "give me everything" are ps -ely or ps aux, the latter is the BSD-style. Often, people then pipe this output to grep to search for a process, as in xenoterracide's answer. In order to avoid also seeing grep itself in the output, you will often see something like:
ps -ef | grep [f]oo
where foo is the process name you are looking for.
However, if you are looking for a particular process, I recommend using the pgrep command if it is available. I believe it is available on Ubuntu Server. Using pgrep means you avoid the race condition mentioned above. It also provides some other features that would require increasingly complicated grep trickery to replicate. The syntax is simple:
pgrep foo
where foo is the process for which you are looking. By default, it will simply output the Process ID (PID) of the process, if it finds one. See man pgrep for other output options. I found the following page very helpful:
http://mywiki.wooledge.org/ProcessManagement
| How can I see what processes are running? |
First of all, I found a similar question but it doesn't really solve my problem. I am trying to discover if the USB bus for a device I am using is the bottleneck in my program.
How can I monitor a USB bus (similar to how gnome-system-monitor works) to show bus utilization? Basically I want to identify when the bus is being 'maxed' out. I guess what I am looking for is some interface for usbmon, as that appears like it would do what I need.
This came about from testing the USRP and GNU Radio. I am running into a situation where it appears that the USB bus could be a limiting factor, so I ask the more general question of USB performance monitoring.
|
Since usbmon provides the length of each packet transferred, I would approach this by writing a quick program to parse the 0u file (which has data for all USB devices.) It would pick out the USB bus and device numbers, then keep a running total of the packet length field in both directions for each device.
This will then give you the amount of data transferred per device, in each direction. If you print it once a second you'll get a pretty good idea of each device's throughput. Note that it won't include any USB overhead, but if you compare the figures to a device that is able to saturate the available bandwidth you'll know whether you're getting close to the limit.
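A sketch of that approach (untested; it assumes the documented usbmon text format, where the 4th field is Type:Bus:Dev:Ep and the 6th is the data length in bytes):

```shell
# Keep running byte totals per bus:device from the usbmon "0u" stream,
# reporting every 1000 events. Requires debugfs mounted and root.
sudo awk '{
    split($4, a, ":")                  # $4 is Type:Bus:Dev:Ep
    bytes[a[2] ":" a[3]] += $6 + 0     # $6 is the data length in bytes
    if (++n % 1000 == 0)
        for (d in bytes) print d, bytes[d], "bytes"
}' /sys/kernel/debug/usb/usbmon/0u
```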
| USB performance/traffic monitor? |
I find the output of the shell command top to be a simple and familiar way to get a rough idea of the health of a machine. I'd like to serve top's output (or something very similar to it) from a tiny web server on a machine for crude monitoring purposes.
Is there a way to get top to write its textual output exactly once, without formatting characters? I've tried this:
(sleep 1; echo 'q') | top > output.txt
This seems to be close to what I want, except that (1) there's no guarantee that I won't get more or less than one screenful of info and (2) I have to strip out all the terminal formatting characters.
Or is there some other top-like command that lists both machine-wide and process-level memory/CPU usage/uptime info?
(Ideally, I'd love a strategy that's portable to both Linux and Mac OS X, since our devs use Macs and our prod environment is Linux.)
|
In Linux, you can try this:
top -bn1 > output.txt
From man top:
-b : Batch-mode operation
Starts top in 'Batch' mode, which could be useful for sending
output from top to other programs or to a file. In this
mode, top will not accept input and runs until the iterations
limit you've set with the '-n' command-line option or until
killed.
....
-n : Number-of-iterations limit as: -n number
Specifies the maximum number of iterations, or frames, top
should produce before ending.
With OS X, try:
top -l 1
From top OSX manpage:
-l <samples>
Use logging mode and display <samples> samples, even if
standard output is a terminal. 0 is treated as infinity.
Rather than redisplaying, output is periodically printed in
raw form. Note that the first sample displayed will have an
invalid %CPU displayed for each process, as it is calculated
using the delta between samples.
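If you want a single snippet that covers both platforms, a sketch like this should work (assuming the stock top on each OS):

```shell
# Write one snapshot of top to output.txt on either Linux or OS X.
case "$(uname -s)" in
    Linux)  top -bn1 ;;
    Darwin) top -l 1 ;;
esac > output.txt
```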
| Is there a way to get "top" to run exactly once and exit? |
Instead of doing wc -l /proc/net/tcp, is there a faster way of doing it?
I just need a total count of tcp connections.
|
If you just want to get the number and don't need any details you can read the data from /proc/net/sockstat{,6}. Please keep in mind that you have to combine both values to get the absolute count of connections.
If you want to get the information from the kernel itself you can use NETLINK_INET_DIAG to get the information from the kernel without having to read it from /proc
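A sketch of reading those counters (the field layout is assumed from the usual "TCP: inuse N orphan N ..." format; /proc/net/sockstat6 may be absent if IPv6 is disabled, hence the suppressed error):

```shell
# Sum the in-use TCP socket counts over IPv4 and IPv6.
cat /proc/net/sockstat /proc/net/sockstat6 2>/dev/null |
    awk '/^TCP6?:/ { s += $3 } END { print s + 0 }'
```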
| Getting current TCP connection count on a system |
I've switched from GNU Screen to tmux. They're both similar, except that tmux is still maintained.
GNU Screen has a C-a _ ("silence") command. This command makes GNU Screen monitor the current window and alert me when there's been 30 seconds of inactivity. This is quite useful: for example, GNU Screen can watch a long apt-get dist-upgrade process and alert me when dpkg has a question for me.
Does tmux have an equivalent command? I tried searching the Web but didn't find an answer.
|
The manpage reveals the answer. You will need tmux 1.4 (released Dec. 2010) or better.
Press Ctrl+B then enter the command:
:setw monitor-silence 30
To identify all quiet windows in the session, apply the setting to all windows:
:setw -g monitor-silence 30
| How can I make tmux monitor a window for inactivity? |
Background : I need to receive an alert when my server is down. When the server is down, maybe the Sysload collector will not be able to send any alert. To receive an alert when the server is down, I have an external source (server) to detect it.
Question: Is there any way (I'd prefer a bash script) to detect when my server is down or offline and send an alert message (Email + SMS)?
|
If you have a separate server to run your check script on, something like this would do a simple Ping test to see if the server is alive:
#!/bin/bash
SERVERIP=192.168.2.3
NOTIFYEMAIL=admin@example.com   # replace with your address (or an email-to-SMS gateway)
ping -c 3 $SERVERIP > /dev/null 2>&1
if [ $? -ne 0 ]
then
# Use your favorite mailer here:
mailx -s "Server $SERVERIP is down" "$NOTIFYEMAIL" < /dev/null
fi
You can cron the script to run periodically.
If you don't have mailx, you'll have to replace that line with whatever command line email program you have and probably change the options. If your carrier provides an SMS email address, you can send the email to that address. For example, with AT&T, if you send an email to phonenumber@txt.att.net, it will send the email to your phone.
Here's a list of email to SMS gateways:
http://en.wikipedia.org/wiki/List_of_SMS_gateways
If your server is a publicly accessible webserver, there are some free services to monitor your website and alert you if it's down, search the web for free website monitoring to find some.
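One caveat: run from cron every minute, the script above will mail you once per interval for as long as the host stays down. A state-file variant that alerts only on the up -> down transition (a sketch; the IP, address, and mailer are placeholders, as before):

```shell
#!/bin/bash
# Alert only when the server changes state from up to down.
SERVERIP=192.168.2.3
NOTIFYEMAIL=admin@example.com
STATEFILE="/tmp/server-state-$SERVERIP"

# One probe with a 2 s cap (-W is Linux ping) keeps the cron job fast;
# raise -c for fewer false alarms on lossy links.
if ping -c 1 -W 2 "$SERVERIP" > /dev/null 2>&1; then
    new=up
else
    new=down
fi

old=$(cat "$STATEFILE" 2>/dev/null)
if [ "$new" != "$old" ]; then
    echo "$new" > "$STATEFILE"
    if [ "$new" = down ]; then
        # Use your favorite mailer here; `|| true` keeps cron quiet if it fails
        mailx -s "Server $SERVERIP is down" "$NOTIFYEMAIL" < /dev/null || true
    fi
fi
```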
| Bash script to detect when my server is down or offline |
1,336,906,003,000 |
There's an application on my system which keeps creating an empty ~/Desktop directory again and again. I can't stand capital letters in my home, nor can I stand this “desktop” thingy. So, as picky as I am, I remove the directory each time I see it. I'd really like to know which application is responsible for that (probably some application I won't use so often¹).
Any good ideas to track down the culprit?
—
1. Obviously I'd like to get rid of it, or maybe patch it if I can't live without it.
|
This directory might be created by any application that follows the Freedesktop userdirs standard. That potentially includes all Gnome or KDE applications.
If you want to know which application creates the file, you can use the LoggedFS filesystem or the Linux audit subsystem. See Is it possible to find out what program or script created a given file? for more information.
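If the culprit turns out to be the xdg-user-dirs machinery (which implements that standard and re-creates missing directories at login), you can redirect or disable it instead of patching each application; a sketch, assuming xdg-user-dirs is installed:

```
# ~/.config/user-dirs.dirs -- point "Desktop" at $HOME so nothing new is created
XDG_DESKTOP_DIR="$HOME"

# or disable the updater system-wide in /etc/xdg/user-dirs.conf:
enabled=False
```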
| Which application should I blame for compulsively creating a directory again and again? |
1,336,906,003,000 |
I have a hypothesis: sometimes TCP connections arrive faster than my server can accept() them. They queue up until the queue overflows and then there are problems.
How can I confirm this is happening?
Can I monitor the length of the accept queue or the number of overflows? Is there a counter exposed somewhere?
|
To check if your queue has been overflowing, use either netstat or nstat:
[centos ~]$ nstat -az | grep -i listen
TcpExtListenOverflows 3518352 0.0
TcpExtListenDrops 3518388 0.0
TcpExtTCPFastOpenListenOverflow 0 0.0
[centos ~]$ netstat -s | grep -i LISTEN
3518352 times the listen queue of a socket overflowed
3518388 SYNs to LISTEN sockets dropped
Reference:
https://perfchron.com/2015/12/26/investigating-linux-network-issues-with-netstat-and-nstat/
To monitor your queue sizes, use the ss command and look for SYN-RECV sockets.
$ ss -n state syn-recv sport = :80 | wc -l
119
Reference:
https://blog.cloudflare.com/syn-packet-handling-in-the-wild/
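Both tools read these counters from /proc/net/netstat, so you can also poll that file directly where neither is installed; a sketch (the first TcpExt: line is a header mapping names to columns, the second carries the values):

```shell
# Print TcpExt:ListenOverflows straight from /proc/net/netstat.
listen_overflows() {
    awk '
        $1 == "TcpExt:" && !seen { for (i = 2; i <= NF; i++) col[$i] = i; seen = 1; next }
        $1 == "TcpExt:"          { print $col["ListenOverflows"] }
    ' /proc/net/netstat
}
listen_overflows
```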
| How can I monitor the length of the accept queue? |
1,336,906,003,000 |
Using Fedora for a small Samba and development server.
|
They're kernel threads.
[jbd2/%s] are used by JBD2 (the journal manager for ext4) to periodically flush journal commits and other changes to disk.
[kdmflush] is used by Device Mapper to process deferred work that it has queued up from other contexts where doing so immediately would be problematic.
| What is [jbd2/dm-3-8] and [kdmflush]? And why are they constantly on iotop? |
1,336,906,003,000 |
Just for fun:
Is there a way to monitor/capture/dump whatever is being written to /dev/null?
On Debian, or FreeBSD, if it matters, any other OS specific solutions are also welcome.
|
Making /dev/null a named pipe is probably the easiest way. Be warned that some programs (sshd, for example) will act abnormally or fail to execute when they find out that it isn't a special file (or they may read from /dev/null, expecting it to return EOF).
# Remove special file, create FIFO and read from it
rm /dev/null && mkfifo -m622 /dev/null && tail -f /dev/null
# Remove FIFO, recreate special file
rm /dev/null && mknod -m666 /dev/null c 1 3
This should work under all Linux distributions, and all major BSDs.
| Monitor what is being sent to /dev/null? |
1,336,906,003,000 |
Searching for what one can monitor with perf_events on Linux, I cannot find what Kernel PMU event are?
Namely, with perf version 3.13.11-ckt39 the perf list shows events like:
branch-instructions OR cpu/branch-instructions/ [Kernel PMU event]
Overall there are:
Tracepoint event
Software event
Hardware event
Hardware cache event
Raw hardware event descriptor
Hardware breakpoint
Kernel PMU event
and I would like to understand what they are, where they come from. I have some kind of explanation for all, but Kernel PMU event item.
From perf wiki tutorial and Brendan Gregg's page I get that:
Tracepoints are the clearest -- these are macros on the kernel source, which make a probe point for monitoring, they were introduced with ftrace project and now are used by everybody
Software are kernel's low level counters and some internal data-structures (hence, they are different from tracepoints)
Hardware event are some very basic CPU events, found on all architectures and somehow easily accessed by kernel
Hardware cache event are nicknames to Raw hardware event descriptor -- it works as follows
as I got it, Raw hardware event descriptor are more (micro?)architecture-specific events than Hardware event, the events come from the Performance Monitoring Unit (PMU) or other specific features of a given processor, thus they are available only on some micro-architectures (let's say "architecture" means "x86_64" and all the rest of the implementation details are "micro-architecture");
and they are accessible for instrumentation via these strange descriptors
rNNN [Raw hardware event descriptor]
cpu/t1=v1[,t2=v2,t3 ...]/modifier [Raw hardware event descriptor]
(see 'man perf-list' on how to encode it)
-- these descriptors, which events they point to and so on is to be found in processor's manuals (PMU events in perf wiki);
but then, when people know that there is some useful event on a given processor they give it a nickname and plug it into linux as Hardware cache event for ease of access
-- correct me if I'm wrong (strangely all Hardware cache event are about something-loads or something-misses -- very like the actual processor's cache..)
now, the Hardware breakpoint
mem:<addr>[:access] [Hardware breakpoint]
is a hardware feature, which is probably common to most modern architectures, and works as a breakpoint in a debugger? (probably it is googlable anyway)
finally, Kernel PMU event, which I don't manage to find anything about by googling;
it also doesn't show up in the listing of Events in Brendan's perf
page, so it's new?
Maybe it's just nicknames to hardware events specifically from PMU? (For ease of access it got a separate section in the list of events in addition to the nickname.)
In fact, maybe Hardware cache events are nicknames to hardware events from CPU's cache and Kernel PMU event are nicknames to PMU events? (Why not call it Hardware PMU event then?..)
It could be just new naming scheme -- the nicknames to hardware events got sectionized?
And these events refer to things like cpu/mem-stores/, plus since some linux version events got descriptions in /sys/devices/ and:
# find /sys/ -type d -name events
/sys/devices/cpu/events
/sys/devices/uncore_cbox_0/events
/sys/devices/uncore_cbox_1/events
/sys/kernel/debug/tracing/events
-- debug/tracing is for ftrace and tracepoints, other directories match exactly what perf list shows as Kernel PMU event.
Could someone point me to a good explanation/documentation of what Kernel PMU events or /sys/..events/ systems are?
Also, is /sys/..events/ some new effort to systemize hardware events or something alike? (Then, Kernel PMU is like "the Performance Monitoring Unit of Kernel".)
PS
To give better context, not-privileged run of perf list (tracepoints are not shown, but all 1374 of them are there) with full listings of Kernel PMU events and Hardware cache events and others skipped:
$ perf list
List of pre-defined events (to be used in -e):
cpu-cycles OR cycles [Hardware event]
instructions [Hardware event]
...
cpu-clock [Software event]
task-clock [Software event]
...
L1-dcache-load-misses [Hardware cache event]
L1-dcache-store-misses [Hardware cache event]
L1-dcache-prefetch-misses [Hardware cache event]
L1-icache-load-misses [Hardware cache event]
LLC-loads [Hardware cache event]
LLC-stores [Hardware cache event]
LLC-prefetches [Hardware cache event]
dTLB-load-misses [Hardware cache event]
dTLB-store-misses [Hardware cache event]
iTLB-loads [Hardware cache event]
iTLB-load-misses [Hardware cache event]
branch-loads [Hardware cache event]
branch-load-misses [Hardware cache event]
branch-instructions OR cpu/branch-instructions/ [Kernel PMU event]
branch-misses OR cpu/branch-misses/ [Kernel PMU event]
bus-cycles OR cpu/bus-cycles/ [Kernel PMU event]
cache-misses OR cpu/cache-misses/ [Kernel PMU event]
cache-references OR cpu/cache-references/ [Kernel PMU event]
cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event]
instructions OR cpu/instructions/ [Kernel PMU event]
mem-loads OR cpu/mem-loads/ [Kernel PMU event]
mem-stores OR cpu/mem-stores/ [Kernel PMU event]
ref-cycles OR cpu/ref-cycles/ [Kernel PMU event]
stalled-cycles-frontend OR cpu/stalled-cycles-frontend/ [Kernel PMU event]
uncore_cbox_0/clockticks/ [Kernel PMU event]
uncore_cbox_1/clockticks/ [Kernel PMU event]
rNNN [Raw hardware event descriptor]
cpu/t1=v1[,t2=v2,t3 ...]/modifier [Raw hardware event descriptor]
(see 'man perf-list' on how to encode it)
mem:<addr>[:access] [Hardware breakpoint]
[ Tracepoints not available: Permission denied ]
|
Googling and ack-ing is over! I've got some answer.
But firstly let me clarify the aim of the question a little more:
I want to clearly distinguish the independent processes in the system and their performance counters. For instance, a core of a processor, an uncore device (learned about it recently), the kernel or a user application on the processor, a bus (= bus controller), and a hard drive are all independent processes; they are not synchronized by a common clock. And nowadays probably all of them have some Performance Monitoring Counter (PMC). I'd like to understand which processes the counters come from. (It also helps in googling: knowing the "vendor" of a thing narrows the search.)
Also, the gear used for the search: Ubuntu 14.04, linux 3.13.0-103-generic, processor Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz (from /proc/cpuinfo, it has 2 physical cores and 4 virtual -- the physical matter here).
Terminology, things the question involves
From Intel:
processor is one core device (it's 1 device/process) and a bunch of uncore devices; core is what runs the program (clock, ALU, registers, etc.), uncore are devices put on the die, close to the processor, for speed and low latency (the real reason is "because the manufacturer can do it"); as I understood it, it is basically the Northbridge, like on a PC motherboard, plus caches; and AMD actually calls these devices NorthBridge instead of uncore;
cbox, which shows up in my sysfs
$ find /sys/devices/ -type d -name events
/sys/devices/cpu/events
/sys/devices/uncore_cbox_0/events
/sys/devices/uncore_cbox_1/events
-- is an uncore device (a cbox, "cache box"), which manages a slice of the Last Level Cache (LLC, the last one before hitting RAM); I have 2 cores, thus 2 LLC slices and 2 cboxes;
Performance Monitoring Unit (PMU) is a separate device which monitors operations of a processor and records them in Performance Monitoring Counters (PMC) (it counts cache misses, processor cycles, etc.); PMUs exist on both core and uncore devices; the core ones are accessed with the rdpmc (read PMC) instruction; the uncore ones, since these devices depend on the actual processor at hand, are accessed via Model Specific Registers (MSR) with rdmsr (naturally);
apparently, the workflow with them is done via pairs of registers -- register 1 sets which events the counter counts, register 2 holds the value of the counter; the counter can be configured to increment only after a bunch of events, not just 1; + there are some interrupts/mechanisms for noticing overflows in these counters;
more can be found in Intel's "IA-32 Software Developer's Manual Vol 3B", chapter 18, "PERFORMANCE MONITORING";
also, the MSR's format, concretely for these PMCs, for version "Architectural Performance Monitoring Version 1" (there are versions 1-4 in the manual, I don't know which one is my processor) is described in "Figure 18-1. Layout of IA32_PERFEVTSELx MSRs" (page 18-3 in mine), and section "18.2.1.2 Pre-defined Architectural Performance Events" with "Table 18-1. UMask and Event Select Encodings for Pre-Defined Architectural Performance Events", which shows the events which show up as Hardware event in perf list.
From linux kernel:
kernel has a system (abstraction/layer) for managing performance counters of different origin, both software (kernel's) and hardware, it is described in linux-source-3.13.0/tools/perf/design.txt; an event in this system is defined as struct perf_event_attr (file linux-source-3.13.0/include/uapi/linux/perf_event.h), the main part of which is probably __u64 config field -- it can hold both a CPU-specific event definition (the 64bit word in the format described on those Intel's figures) or a kernel's event
The MSB of the config word signifies if the rest contains [raw CPU's or kernel's event]
a kernel event is defined with 7 bits for the type and 56 for the event's identifier; the types are enum-s in the code, which in my case are:
$ ak PERF_TYPE linux-source-3.13.0/include/
...
linux-source-3.13.0/include/uapi/linux/perf_event.h
29: PERF_TYPE_HARDWARE = 0,
30: PERF_TYPE_SOFTWARE = 1,
31: PERF_TYPE_TRACEPOINT = 2,
32: PERF_TYPE_HW_CACHE = 3,
33: PERF_TYPE_RAW = 4,
34: PERF_TYPE_BREAKPOINT = 5,
36: PERF_TYPE_MAX, /* non-ABI */
(ak is my alias to ack-grep, which is the name for ack on Debian; and ack is awesome);
in the source code of the kernel one can see operations like "register all PMUs discovered on the system" and structure types struct pmu, which are passed to something like int perf_pmu_register(struct pmu *pmu, const char *name, int type) -- thus, one could just call this system "kernel's PMU", which would be an aggregation of all PMUs on the system; but this name could be interpreted as a monitoring system of the kernel's own operations, which would be misleading;
let's call this subsystem perf_events for clarity;
as any kernel subsystem, this subsystem can be exported into sysfs (which is made to export kernel subsystems for people to use); and that's what those events directories in my /sys/ are -- the exported (parts of the?) perf_events subsystem;
also, the user-space utility perf (built into linux) is still a separate program and has its own abstractions; it represents an event requested for monitoring by the user as a perf_evsel (files linux-source-3.13.0/tools/perf/util/evsel.{h,c}) -- this structure has a field struct perf_event_attr attr;, but also a field like struct cpu_map *cpus; that's how the perf utility assigns an event to all or particular CPUs.
Answer
Indeed, Hardware cache event are "shortcuts" to the events of the cache devices (the cbox uncore devices of Intel), which are processor-specific and can be accessed via the Raw hardware event descriptor protocol. And Hardware event are more stable within an architecture and, as I understand, name the events from the core device. There are no other "shortcuts" in my kernel 3.13 to other uncore events and counters. All the rest -- Software and Tracepoints -- are the kernel's events.
I wonder if the core's Hardware events are accessed via the same Raw hardware event descriptor protocol. They might not be -- since the counter/PMU sits on the core, maybe it is accessed differently, for instance with that rdpmc instruction, instead of rdmsr, which accesses uncore. But it is not that important.
Kernel PMU event are just the events which are exported into sysfs. I don't know how this is done (automatically, by the kernel, for all PMCs discovered on the system, or just something hard-coded; and if I add a kprobe -- is it exported? etc.). But the main point is that these are the same events as Hardware event or any other in the internal perf_event system.
And I don't know what those
$ ls /sys/devices/uncore_cbox_0/events
clockticks
are.
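Whatever clockticks is, each alias file under an events directory is readable and contains the raw encoding that perf plugs into the config word. A quick way to dump all sysfs-exported aliases (output depends on the machine's PMU drivers and may be empty, e.g. in a VM):

```shell
# List every sysfs-exported PMU event alias with its raw encoding,
# e.g. "cpu/events/cache-misses: event=0x2e,umask=0x41".
for f in /sys/bus/event_source/devices/*/events/*; do
    [ -f "$f" ] || continue
    printf '%s: %s\n' "${f#/sys/bus/event_source/devices/}" "$(cat "$f")"
done
```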
Details on Kernel PMU event
Searching through the code leads to:
$ ak "Kernel PMU" linux-source-3.13.0/tools/perf/
linux-source-3.13.0/tools/perf/util/pmu.c
629: printf(" %-50s [Kernel PMU event]\n", aliases[j]);
-- which happens in the function
void print_pmu_events(const char *event_glob, bool name_only) {
...
while ((pmu = perf_pmu__scan(pmu)) != NULL)
list_for_each_entry(alias, &pmu->aliases, list) {...}
...
/* b.t.w. list_for_each_entry is an iterator:
* apparently, it takes a block of {code} and runs it over each entry --
* like some lost Ruby built into the kernel!
*/
// then there is a loop over these aliases and
loop{ ... printf(" %-50s [Kernel PMU event]\n", aliases[j]); ... }
}
and perf_pmu__scan is in the same file:
struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu) {
...
pmu_read_sysfs(); // that's what it calls
}
-- which is also in the same file:
/* Add all pmus in sysfs to pmu list: */
static void pmu_read_sysfs(void) {...}
That's it.
Details on Hardware event and Hardware cache event
Apparently, the Hardware event come from what Intel calls "Pre-defined Architectural Performance Events", 18.2.1.2 in IA-32 Software Developer's Manual Vol 3B. And "18.1 PERFORMANCE MONITORING OVERVIEW" of the manual describes them as:
The second class of performance monitoring capabilities is referred to as architectural performance monitoring.
This class supports the same counting and Interrupt-based event sampling usages, with a smaller set of available
events.
The visible behavior of architectural performance events is consistent across processor implementations.
Availability of architectural performance monitoring capabilities is enumerated using the CPUID.0AH. These events are discussed in Section 18.2.
-- the other type is:
Starting with Intel Core Solo and Intel Core Duo processors, there are two classes of performance monitoring capa-bilities.
The first class supports events for monitoring performance using counting or interrupt-based event sampling usage.
These events are non-architectural and vary from one processor model to another...
And these events are indeed just links to underlying "raw" hardware events, which can be accessed via perf utility as Raw hardware event descriptor.
To check this one looks at linux-source-3.13.0/arch/x86/kernel/cpu/perf_event_intel.c:
/*
* Intel PerfMon, used on Core and later.
*/
static u64 intel_perfmon_event_map[PERF_COUNT_HW_MAX] __read_mostly =
{
[PERF_COUNT_HW_CPU_CYCLES] = 0x003c,
[PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x4f2e,
[PERF_COUNT_HW_CACHE_MISSES] = 0x412e,
...
}
-- and exactly 0x412e is found in "Table 18-1. UMask and Event Select Encodings for Pre-Defined Architectural Performance Events" for "LLC Misses":
Bit Position CPUID.AH.EBX | Event Name | UMask | Event Select
...
4 | LLC Misses | 41H | 2EH
-- H is for hex. All 7 are in the structure, plus [PERF_COUNT_HW_REF_CPU_CYCLES] = 0x0300, /* pseudo-encoding */. (The naming is a bit different, the addresses are the same.)
Then the Hardware cache events are in structures like (in the same file):
static __initconst const u64 snb_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
{...}
-- which should be for Sandy Bridge?
One of these -- snb_hw_cache_extra_regs[LL][OP_WRITE][RESULT_ACCESS] is filled with SNB_DMND_WRITE|SNB_L3_ACCESS, where from the def-s above:
#define SNB_L3_ACCESS SNB_RESP_ANY
#define SNB_RESP_ANY (1ULL << 16)
#define SNB_DMND_WRITE (SNB_DMND_RFO|SNB_LLC_RFO)
#define SNB_DMND_RFO (1ULL << 1)
#define SNB_LLC_RFO (1ULL << 8)
which should equal 0x00010102, but I don't know how to check it against some table.
And this gives an idea how it is used in perf_events:
$ ak hw_cache_extra_regs linux-source-3.13.0/arch/x86/kernel/cpu/
linux-source-3.13.0/arch/x86/kernel/cpu/perf_event.c
50:u64 __read_mostly hw_cache_extra_regs
292: attr->config1 = hw_cache_extra_regs[cache_type][cache_op][cache_result];
linux-source-3.13.0/arch/x86/kernel/cpu/perf_event.h
521:extern u64 __read_mostly hw_cache_extra_regs
linux-source-3.13.0/arch/x86/kernel/cpu/perf_event_intel.c
272:static __initconst const u64 snb_hw_cache_extra_regs
567:static __initconst const u64 nehalem_hw_cache_extra_regs
915:static __initconst const u64 slm_hw_cache_extra_regs
2364: memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
2365: sizeof(hw_cache_extra_regs));
2407: memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
2408: sizeof(hw_cache_extra_regs));
2424: memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
2425: sizeof(hw_cache_extra_regs));
2452: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs,
2453: sizeof(hw_cache_extra_regs));
2483: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs,
2484: sizeof(hw_cache_extra_regs));
2516: memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
$
The memcpys are done in __init int intel_pmu_init(void) {... case:...}.
Only attr->config1 is a bit odd. But it is there, in perf_event_attr (same linux-source-3.13.0/include/uapi/linux/perf_event.h file):
...
union {
__u64 bp_addr;
__u64 config1; /* extension of config */
};
union {
__u64 bp_len;
__u64 config2; /* extension of config1 */
};
...
They are registered in the kernel's perf_events system with calls to int perf_pmu_register(struct pmu *pmu, const char *name, int type) (defined in linux-source-3.13.0/kernel/events/core.c):
static int __init init_hw_perf_events(void) (file arch/x86/kernel/cpu/perf_event.c) with call perf_pmu_register(&pmu, "cpu", PERF_TYPE_RAW);
static int __init uncore_pmu_register(struct intel_uncore_pmu *pmu) (file arch/x86/kernel/cpu/perf_event_intel_uncore.c, there are also arch/x86/kernel/cpu/perf_event_amd_uncore.c) with call ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
So finally, all events come from hardware and everything is OK. But here one could notice: why do we have LLC-loads in perf list and not cbox1 LLC-loads, since these are HW events and they actually come from the cboxes?
That's a thing of the perf utility and its perf_evsel structure: when you request a HW event from perf, you define which processors you want it from (the default is all), it sets up the perf_evsel with the requested event and processors, and then at aggregation it sums the counters from all processors in the perf_evsel (or does some other statistics with them).
One can see it in tools/perf/builtin-stat.c:
/*
* Read out the results of a single counter:
* aggregate counts across CPUs in system-wide mode
*/
static int read_counter_aggr(struct perf_evsel *counter)
{
struct perf_stat *ps = counter->priv;
u64 *count = counter->counts->aggr.values;
int i;
if (__perf_evsel__read(counter, perf_evsel__nr_cpus(counter),
thread_map__nr(evsel_list->threads), scale) < 0)
return -1;
for (i = 0; i < 3; i++)
update_stats(&ps->res_stats[i], count[i]);
if (verbose) {
fprintf(output, "%s: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
perf_evsel__name(counter), count[0], count[1], count[2]);
}
/*
* Save the full runtime - to allow normalization during printout:
*/
update_shadow_stats(counter, count);
return 0;
}
(So, for the utility perf a "single counter" is not even a perf_event_attr, which is a general form fitting both SW and HW events; it is one event of your query -- the same event may come from different devices, and they are aggregated.)
Also a note: struct perf_evsel contains only 1 struct perf_event_attr, but it also has a field struct perf_evsel *leader; -- it is nested.
There is a feature of "(hierarchical) groups of events" in perf_events, when you can dispatch a bunch of counters together, so that they can be compared to each other and so on. Not sure how it works with independent events from the kernel, core, and cbox. But this nesting of perf_evsel is it. And, most likely, that's how perf manages a query of several events together.
| What are Kernel PMU event-s in perf_events list? |
1,336,906,003,000 |
Often in network monitoring tools there are three values for one measure.
Ex
rx: 2.0 kb/s 40 kb/s 10 kb/s
Are these similar to how cpu load works, they are taken at different length time spans. So one is every two seconds, then four seconds..
Thanks in advance.
One example program would be iftop.
|
Generally speaking, you can't assume the output of different tools has the same meaning. You have to RTM.
Specifically, these three columns in iftop are the average traffic during the last 2, 10 and 40 seconds.
Some similar output on another software could mean something else (like, minimum, average and maximum).
| What do the three values mean on network monitoring tools |
1,336,906,003,000 |
I want to see a list of all outgoing HTTP requests from my desktop. I think it should be possible to monitor HTTPS hostnames as well for local clients using Server Name Indication (SNI).
OS X has a nice GUI utility called Little Snitch, which is a per-app HTTP monitor and firewall rule front-end.
I would settle for a nice terminal utility. tcpdump is overkill as I just want to see where the traffic is going in real-time and not the transmitted data. Ideally, I would like to see what process made the request as well, but just seeing what dials home would be a nice start.
|
You can use lsof and watch to do this, like so:
$ watch -n1 lsof -i TCP:80,443
Example output
dropbox 3280 saml 23u IPv4 56015285 0t0 TCP greeneggs.qmetricstech.local:56003->snt-re3-6c.sjc.dropbox.com:http (ESTABLISHED)
thunderbi 3306 saml 60u IPv4 56093767 0t0 TCP greeneggs.qmetricstech.local:34788->ord08s09-in-f20.1e100.net:https (ESTABLISHED)
mono 3322 saml 15u IPv4 56012349 0t0 TCP greeneggs.qmetricstech.local:54018->204-62-14-135.static.6sync.net:https (ESTABLISHED)
chrome 11068 saml 175u IPv4 56021419 0t0 TCP greeneggs.qmetricstech.local:42182->stackoverflow.com:http (ESTABLISHED)
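A similar view is available from iproute2's ss, whose filter syntax can match the ports directly; -p adds the owning process (root is needed to see other users' sockets):

```shell
# One shot; wrap it in watch for a live view:
#   watch -n1 "ss -tnp '( dport = :80 or dport = :443 )'"
ss -tnp '( dport = :80 or dport = :443 )'
```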
| Monitor outgoing web requests as they’re happening |
1,336,906,003,000 |
I run a web server (Debian Squeeze on a VPS), and the graphs provided by the hosting company show consistently that around twice as much traffic is incoming to the server compared to the outgoing traffic. I am a little confused by this, so I would like to run some kind of logging utility on the machine that will not only confirm the upload/download figures, but also split them up by the remote host involved, so I can see if a large proportion of the incoming traffic is from one particular source.
I suspect most of the outgoing traffic goes through Apache, but the incoming traffic may be mostly through Apache or could be dominated by other scripts and cron jobs, so I would prefer a tool that would monitor traffic at the interface level rather than something within Apache.
Ideally I would like a tool that I can leave running for a few days, then come back and get an output of "bytes per remote host" for both incoming and outgoing traffic.
Is this possible with a standard Linux tool and a bit of configuration (if so, how?), or does it need a specialist program (if so, which one)?
|
ntop is probably your best solution for doing this. It is designed to run long term and capture exactly what you're looking for.
It can show you which remote destinations are used the most, how much traffic is sent to/from them, which protocols and ports were used, etc. It can do the same for the source hosts if you run it on a router, so you can see the same stats for local clients as well.
It then uses a web GUI to navigate and display this information.
| What can I use to monitor and log incoming/outgoing traffic to/from remote hosts? |
1,336,906,003,000 |
In the iostat manpage I have found these two similar columns:
await
The average time (in milliseconds) for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing them.
svctm
The average service time (in milliseconds) for I/O requests that were issued to the device.
Warning! Do not trust this field any more. This field will be removed in a future sysstat
version.
Are these columns meant to represent the same thing? It seems that sometimes they agree, but sometimes not:
avg-cpu: %user %nice %system %iowait %steal %idle
4.44 0.02 1.00 0.36 0.00 94.19
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.07 0.96 0.28 1.28 8.98 47.45 72.13 0.02 11.36 11.49 11.34 5.71 0.89
avg-cpu: %user %nice %system %iowait %steal %idle
8.00 0.00 2.50 2.50 0.00 87.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 9.00 2.00 6.00 12.00 68.00 20.00 0.05 6.00 2.00 7.33 6.00 4.80
avg-cpu: %user %nice %system %iowait %steal %idle
4.57 0.00 0.51 0.00 0.00 94.92
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
13.93 0.00 1.99 1.49 0.00 82.59
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 29.00 0.00 4.00 0.00 132.00 66.00 0.03 7.00 0.00 7.00 7.00 2.80
Other than the obvious warning that svctm is deprecated, what is the difference between these two columns?
|
On linux iostat, the await column (average wait) is showing the average time spent by an I/O request computed from its very beginning toward its end.
The svctm column (service time) should display the average time spent servicing the request, i.e. the time spent "outside" the OS. It should be equal or smaller than the previous one as the request might have lost time waiting in a queue if the device is already busy and doesn't accept more concurrent requests.
Unlike most if not all other Unix / Unix like implementations, the Linux kernel doesn't measure the actual service time so iostat on that platform is trying to derive it from existing statistics but fails as this just cannot be done outside trivial use cases.
See this blog and the interesting discussions that follows for details.
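You can see the "derived, not measured" point in the samples above: as far as I can tell, Linux's iostat computes svctm as device busy time over throughput, roughly svctm ≈ (%util × 10) / (r/s + w/s) milliseconds. Recomputing the samples (awk used purely as a calculator):

```shell
# second sample: %util 4.80, r/s 2.00, w/s 6.00 -> iostat printed svctm 6.00
awk 'BEGIN { printf "%.2f\n", (4.80 * 10) / (2.00 + 6.00) }'
# first sample: %util 0.89, r/s 0.28, w/s 1.28 -> iostat printed svctm 5.71
awk 'BEGIN { printf "%.2f\n", (0.89 * 10) / (0.28 + 1.28) }'
```

Both reproduce the svctm column exactly, which is consistent with the field carrying no independent information and being scheduled for removal.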
| iostat: await vs. svctm |
1,336,906,003,000 |
So recently I found that someone has been using my computer without consent, browsing folders, etc....
I could change all my passwords straight away, but I'm curious as to what the intruding party was looking for. So I would like to set up a trap ( evil grin ).
What software will monitor any activity on my computer? While I know that capturing my screen would work here, I'd rather use a logfile.
For example:
/var/log/activity.log
[1 Aug 2010 20:23] /usr/bin/thunar accessed /multimedia/cctv-records/
[1 Aug 2010 20:25] /usr/bin/mplayer accessed /multimedia/cctv-records/00232.avi
[3 Aug 2010 02:34] /usr/bin/thunderbird was run
[3 Aug 2010 03:33] incoming ssh session from 12.32.132.123
Activities I would like to log is:
Access to files and folders on the filesystem
Commands run ( from console or otherwise )
User Sessions ( login's, ssh sessions and failed attempts )
|
You could use the in-kernel mechanism inotify for monitoring accessed files.
First you should check that inotify is enabled in your kernel:
pbm@tauri ~ $ zcat /proc/config.gz | grep CONFIG_INOTIFY
CONFIG_INOTIFY=y
CONFIG_INOTIFY_USER=y
Next thing to do is install inotify-tools. Instructions for various distributions you could find at project page - it should be in repositories of all major distributions.
After that inotify is ready to work:
inotifywait /dirs/to/watch -mrq
(m = do not exit after one event, r = recursive, q = quiet)
For example - output after ls /home/pbm
pbm@tauri ~ $ inotifywait /bin /home/pbm -mq
/bin/ OPEN ls
/bin/ ACCESS ls
/bin/ ACCESS ls
/home/pbm/ OPEN,ISDIR
/home/pbm/ CLOSE_NOWRITE,CLOSE,ISDIR
/bin/ CLOSE_NOWRITE,CLOSE ls
Important thing is to properly set directories for watch:
don't watch / recursively - there is a lot of read/write to /dev and /proc
don't watch your home dir recursively - when you use apps there is a lot of read/write to application configuration dirs and browsers profile dirs
In /proc/sys/fs/inotify/max_user_watches there is a configuration option that shows how many files can be watched simultaneously. The default value (for Gentoo) is not very high, so if you set a watch on /home/ you could exceed the limit. You can increase the limit using echo (root access needed).
echo 524288 > /proc/sys/fs/inotify/max_user_watches
But before doing that you should read about the consequences of that change.
Options that could be interesting for you:
-d = daemon mode
-o file = output to file
--format = user-specified format, more info in man inotifywait
-e EVENT = what event should be monitored (for example access, modify, etc, more info in man)
| Monitoring activity on my computer. |
1,336,906,003,000 |
If myfile is increasing over time, I can get the number of line per second using
tail -f myfile | pv -lr > /dev/null
It gives the instantaneous speed, not the average.
How can I get the average speed (i.e. the integral of the speed function v(t) over the monitoring time, divided by that time)?
|
With pv 1.2.0 (December 2010) and above, use the -a option:
Here with both current and average, line-based:
$ find / 2> /dev/null | pv -ral > /dev/null
[6.28k/s] [70.1k/s]
With 1.3.8 (October 2012) and newer, you can also use -F/--format with %a:
$ find / 2> /dev/null | pv -lF 'current: %r, average: %a' > /dev/null
current: [4.66k/s], average: [ 218k/s]
Note that tail -f starts by dumping the last 10 lines of the file. Use tail -n 0 -f file | pv -la to avoid that bias in your average speed calculation.
| How to get an average pipe flow speed |
1,336,906,003,000 |
I've found a nice monitor which allows me to log a variety of runtime data of a single process. I'm looking for an equivalent that does the same for bandwidth usage. Ideally, the command should look like bwmon --pid 1 --log init.log. Is there such? Can it run without admin privileges?
|
Something to get you started (in case you want to write it yourself):
#!/bin/bash
#
# usage: bwmon PID
IN=0; OUT=0; TIME=0
get_traffic() {
t=`awk '/eth0:/ { printf("%s,%d,%d\n",strftime("%s"),$2,$10); }' < /proc/$1/net/dev`
IN=${t#*,}; IN=${IN%,*}
OUT=${t##*,};
TIME=${t%%,*};
}
get_traffic $1
while true
do
_IN=$IN; _OUT=$OUT; _TIME=$TIME
get_traffic $1
echo "$TIME,$(( $TIME - $_TIME )),$IN,$(( $IN - $_IN )),$OUT,$(( $OUT - $_OUT))"
sleep 1
done
comments:
checks only eth0
checks every 1 second
works only under linux, but other unixes work similar (procfs or whatever)
the output could be stored in an sqlite db; the binary behind a PID can be obtained with stat --printf="%N\n" /proc/PID/exe | cut -d ' ' -f 3
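The comma-splitting in get_traffic above uses only shell parameter expansion, so no extra processes are spawned per sample. A standalone illustration of the same trick, with a made-up sample triple:

```shell
# Split a "TIME,IN,OUT" triple with parameter expansion only,
# exactly as get_traffic does -- no cut/awk subprocesses needed.
t="1402276955,123456,654321"
TIME=${t%%,*}      # strip longest ",..." suffix -> first field
IN=${t#*,}         # strip up to the first comma...
IN=${IN%,*}        # ...then strip the last field -> second field
OUT=${t##*,}       # strip up to the last comma -> third field
echo "$TIME $IN $OUT"
```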
| Is there a tool that can monitor bandwidth usage of a single process? |
1,336,906,003,000 |
A few years ago, a coworker came up with an elegant solution for a watchdog program. The program ran on Windows and used Windows Event objects to monitor the process handles (PIDs) of several applications. If any one of the processes terminated unexpectedly, its process handle would no longer exist and his watchdog would immediately be signaled. The watchdog would then take an appropriate action to "heal" the system.
My question is, how would you implement such a watchdog on Linux? Is there a way for a single program to monitor the PIDs of many others?
|
The traditional, portable, commonly-used way is that the parent process watches over its children.
The basic primitives are the wait and waitpid system calls. When a child process dies, the parent process receives a SIGCHLD signal, telling it it should call wait to know which child died and its exit status. The parent process can instead choose to ignore SIGCHLD and call waitpid(-1, &status, WNOHANG) at its convenience.
To monitor many processes, you would either spawn them all from the same parent, or invoke them all through a simple monitoring process that just calls the desired program, waits for it to terminate and reports on the termination (in shell syntax: myprogram; echo myprogram $? >>/var/run/monitor-collector-pipe). If you're coming from the Windows world, note that having small programs doing one specialized task is a common design in the Unix world, the OS is designed to make processes cheap.
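As a minimal sketch of that parent-watches-child pattern (sleep stands in for the monitored program):

```shell
# Spawn a child in the background, block in wait until it terminates,
# then inspect its exit status -- the core of any Unix watchdog.
sleep 1 &                  # "sleep" is a stand-in for the real program
child=$!
wait "$child"              # returns once the child has died
status=$?
echo "child $child exited with status $status"
```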
There are many process monitoring (also called supervisor) programs that can report when a process dies and optionally restart it and far more besides: Monit, Supervise, Upstart, …
| Linux: Writing a watchdog to monitor multiple processes |
1,481,570,115,000 |
I have these commands compiled and running but their contents are a bit of a mystery to me.
The processes from intel-gpu-overlay read something like: 15R, 16B, 41ms waits. What is an R, what is a B, what does that wait time indicate?
It has CPU: 152% (I'd guess this is the same as what I get from top), render: 32%, bitstream: 6%, blt: 6%. What kinds of code would cause these values to bottleneck, and what would be the behavior of the system when they did?
Here is a sample of intel-gpu-top:
render busy: 23%: ████▋ render space: 12/16384
task percent busy
GAM: 29%: █████▉ vert fetch: 1380772913 (5386667/sec)
CS: 23%: ████▋ prim fetch: 350972637 (1368891/sec)
GAFS: 9%: █▉ VS invocations: 1375586768 (5385212/sec)
TSG: 8%: █▋ GS invocations: 0 (0/sec)
VFE: 7%: █▌ GS prims: 0 (0/sec)
SVG: 3%: ▋ CL invocations: 677098924 (2648400/sec)
VS: 3%: ▋ CL prims: 682224019 (2663834/sec)
URBM: 2%: ▌ PS invocations: 9708568482932 (34396218804/sec)
VF: 2%: ▌ PS depth pass: 15549624948405 (58732230331/sec)
SDE: 0%:
CL: 0%:
SF: 0%:
TDG: 0%:
RS: 0%:
GAFM: 0%:
SOL: 0%:
|
Taken from the link given in the comments in OP.
I was curious as well, so here are just a few things I could grab from the reference manuals. Also of interest is the intel-gpu-tools source, and especially lib/instdone.c which describes what can appear in all Intel GPU models. This patch was also hugely helpful in translating all those acronyms!
Some may be wrong, I'd love it if somebody more knowledgeable could chime in! I'll come back to update the answer with more as I learn this stuff.
First, the three lines on the right:
The render space is probably used by regular 3D operations.
From googling, bitstream seems to be about audio decoding? This is quite a generic term, so hard to find with a query. It does not appear on my GPU though (Skylake HD 530), so it might not be everywhere.
The blitter is described in vol. 11 and seems responsible for hardware acceleration of 2D operations (blitting).
Fixed function (FF) pipeline units (old-school GPU features):
VF: Vertex Fetcher (vol. 1), the first FF unit in the 3D Pipeline responsible for fetching vertex data from memory.
VS: Vertex Shader (vol.1), computes things on the vertices of each primitive drawn by the GPU. Pretty standard operation on GPUs.
HS: Hull Shader
TE: Tessellation Engine
DS: Domain Shader
GS: Geometry Shader
SOL: Stream Output Logic
CL: Clip Unit
SF: Strips and Fans (vol.1), FF unit whose main function is to decompose primitive topologies such as strips and fans into primitives or objects.
Units used for thread and pipeline management, for both FF units and GPGPU (see Intel Open Source HD Graphics Programmers Manual for a lot of info on how this all works):
CS: Command Streamer (vol.1), functional unit of the Graphics Processing Engine that fetches commands, parses them, and routes them to the appropriate pipeline.
TDG: Thread Dispatcher
VFE: Video Front-End
TSG: Thread Spawner
URBM: Unified Return Buffer Manager
Other stuff:
GAM: see GFX Page Walker (vol. 5), also called Memory Arbiter, has to do with how the GPU keeps track of its memory pages, seems quite similar to what the TLB (see also SLAT) does for your RAM.
SDE: South Display Engine; according to vol. 12, "the South Display Engine supports Hot Plug Detection, GPIO, GMBUS, Panel Power Sequencing, and Backlight Modulation".
Credits:
StackOverflow User F.X.
| How do I interpret the output of intel-gpu-top and intel-gpu-overlay? |
1,481,570,115,000 |
Does the hard drive need to be accessed or is everything done in memory? Basically I would like to constantly get updated values from meminfo and cpuinfo.
Do I need to reopen the file and then reread in order to get an updated value or can I just reread? I don't have access to a Linux install at the moment.
|
When you read from /proc, the kernel generates content on the fly. There is no hard drive involved.
What you're doing is similar to what any number of monitoring programs do, so I advise you to look at what they're doing. For example, you can see what top does:
strace top >/dev/null
The trace shows that top opens /proc/uptime, /proc/loadavg, /proc/stat and /proc/meminfo once and for all. For all these files except /proc/uptime, top seeks back to the beginning of the (virtual) file and reads again each time it refreshes its display.
Most of the data in /proc/cpuinfo is constant, but a few fields such as the CPU speed on some machines are updated dynamically.
The proc filesystem is documented in the kernel documentation, in Documentation/filesystems/proc.txt. If you get desperate about some esoteric detail, you can browse the source.
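You can observe the on-the-fly generation directly: two reads of the same /proc file a moment apart return different content, with no disk access involved (assumes a Linux system):

```shell
# /proc/uptime is regenerated by the kernel on every read.
first=$(cut -d' ' -f1 /proc/uptime)
sleep 1
second=$(cut -d' ' -f1 /proc/uptime)
echo "first read: $first, second read: $second"
```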
| What happens when I open and read from /proc? [duplicate] |
1,481,570,115,000 |
My laptop (no VM, just plain Ubuntu with encrypted home) freezes for 3 minutes a few times per day. During these 3 minutes, the disk LED indicates intense disk activity, and I can't even move the mouse or press CTRL-ALT-F1.
I want to use iotop to find out which process is causing this.
The problem with iotop is that it shows disk usage for all processes (huge table).
How do I limit iotop's output to only the first row?
The idea is to make iotop more efficient so that it manages to compute and write to the log file even when the system is super-slow, so letting iotop display the whole table and then grepping is not a solution.
|
Not exactly what I was looking for but close: iotop -o
So I will use:
sudo nice -n -20 iotop -tbod10 > ~/iotop.log
(-t adds a timestamp to each line, -b runs iotop in batch/non-interactive mode, -o shows only processes actually doing I/O, and -d10 sets a 10-second delay between iterations.)
| Make iotop show only the most disk-intensive item |
1,481,570,115,000 |
Is it possible to forcibly add a timing alias (for lack of a better way to phrase it) to every command in bash?
For example, I would like to have a specific user who, whenever a command is run, it is always wrapped either with date before and after, or time.
Is this possible, and, if so, how?
|
You can record the time a command line is started and the time a prompt is displayed. Bash already keeps track of the starting date of each command line in its history, and you can note the time when you display the next prompt.
print_command_wall_clock_time () {
echo Wall clock time: \
$(($(date +%s) - $(HISTTIMEFORMAT="%s ";
set -o noglob;
set $(history 1); echo $2)))
}
PROMPT_COMMAND=print_command_wall_clock_time$'\n'"$PROMPT_COMMAND"
This only gives you second resolution, and only the wall clock time. If you want better resolution, you need to use an external date command that supports the %N format for nanoseconds, and the DEBUG trap to call date before running the command to time.
call_date_before_command () {
date_before=$(date +%s.%N)
}
print_wall_clock_time () {
echo "Wall clock time: $(date +"%s.%N - $date_before" | bc)"
}
trap call_date_before_command DEBUG
PROMPT_COMMAND=print_wall_clock_time
Even with the DEBUG trap, I don't think there's a way of automatically displaying processor times for each command, or being more discriminating than prompt to prompt.
If you're willing to use a different shell, here's how to get a time report for every command in zsh (this doesn't generalize to other tasks):
REPORTTIME=0
You can set REPORTTIME to any integer value; the timing information will only be displayed for commands that used more than this many seconds of processor time.
Zsh took this feature from csh where the variable is called time.
| Forcing an 'added' alias to every command |
1,481,570,115,000 |
When I run several jobs on a head node, I like to monitor the progress using the command top.
However, when I'm using PBS to run several jobs on a cluster, top will of course not show these jobs, and I have resorted to using qstat. But the qstat command needs to be run repeatedly to continue monitoring the jobs, whereas top updates in real time, which means I can have the terminal window open on the side and glance at it occasionally while doing other work.
Is there a way to monitor in real-time (as the top command would do) the jobs on a cluster that I've submitted using the PBS command qsub?
I was surprised to see so little, after extensive searching on Google.
|
If you want to be a super-boss, you can always use 'pbstop'
It's basically a PBS cluster version of what 'htop' is for local processes.
(Note that your cluster may not have this installed. Ask the admins for it!)
(Also, supports interactive filtering by user, queue, etc)
| PBS equivalent of 'top' command: avoid running 'qstat' repeatedly |
1,481,570,115,000 |
I have a script which produces a file 'Detail.out'. I know that the script is completed whenever the file contains a certain number of lines (roughly 21025). So I find myself sitting at the command prompt running:
[me@somewhere myDir]$ wc -l */Detail.out
21025 A/Detail.out
21025 B/Detail.out
21025 C/Detail.out
12995 D/Detail.out
10652 E/Detail.out
3481 F/Detail.out
21027 G/Detail.out
21025 H/Detail.out
21025 I/Detail.out
... ...
I've used tail -f to watch a specific file, but I'd like to follow the output of the wc -l */Detail.out command shown above. Is this possible? I'm currently using tcsh in Ubuntu 11.04 if that matters.
|
Try the watch command, although I suspect just about everyone has written their own version at one time or another. (The cheapie version is while :; do clear; "$@"; sleep 5; done.)
| Is it possible to follow a command (run repeatedly)? as one would follow a file using tail -f? |
1,481,570,115,000 |
I wonder how to log GPU load. I use Nvidia graphics cards with CUDA.
Not a duplicate: I want to log.
|
It's all there. You just didn't read carefully :) Use the following Python script, which takes an optional delay and repeat count like iostat and vmstat:
https://gist.github.com/matpalm/9c0c7c6a6f3681a0d39d
You can also use nvidia-settings:
nvidia-settings -q GPUUtilization -q useddedicatedgpumemory
...and wrap it up with some simple bash loop or setup a cron job or just use watch:
watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"
| How to log GPU load? [duplicate] |
1,481,570,115,000 |
I need to execute a script as soon as my Raspberry Pi gets connected to the Internet. However, I was wondering if there is a better way than just pinging Google every minute or so.
My problem is that my Internet connection drops 1-2 times during the day so I need a way to log such events.
It's just the ADSL dropping during the day; I was looking for some way to log when it occurs even when I don't notice it. I think I'll set up a script as suggested.
|
You can check the value of:
cat /sys/class/net/wlan0/carrier
where wlan0 is my internet interface. You can use whatever interface you are using for internet connectivity, such as eth0, eth1 or wlan0. If the output of that command is 1, you are connected; otherwise you are not. So you may write a script like this:
#!/bin/bash
# Test for network conection
for interface in $(ls /sys/class/net/ | grep -v lo);
do
if [[ $(cat /sys/class/net/$interface/carrier) = 1 ]]; then echo "online"; fi
done
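As a quick sanity check of the mechanism: the loopback interface is up on practically every running Linux system, so its carrier flag should read 1:

```shell
# Read the carrier flag of the loopback interface (lo); 1 means the
# link is up. A real check would use eth0/wlan0 etc. instead.
carrier=$(cat /sys/class/net/lo/carrier)
if [ "$carrier" = 1 ]; then
    echo online
else
    echo offline
fi
```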
you can also use the command:
#hwdetect --show-net
this script also works well:
#!/bin/bash
WGET="/usr/bin/wget"
$WGET -q --tries=20 --timeout=10 http://www.google.com -O /tmp/google.idx &> /dev/null
if [ ! -s /tmp/google.idx ]
then
echo "Not Connected..!"
else
echo "Connected..!"
fi
| How to log Internet connection drops |
1,481,570,115,000 |
There are scripts that will send an e-mail when a server process is finished.
However, I do not want to check my email every so often just to see whether a job has finished. Therefore I'd like to get an SMS message.
My question is similar to this one, just replace all occurrences of "e-mail" with SMS: Is there a program that can send me a notification e-mail when a process finishes?
Can you think of any workaround / app / script / whatever that would enable an SMS to be sent when a job is finished (or prematurely ended?)
|
There are 3 ways to accomplish the sending of an SMS message from a server and/or application.
Setup your own gateway
If you google for "sms gateway" you'll find a large list of applications that you can set up to provide this capability. You can also take an old cell phone and a PC and build your own gateway by following the tutorial Setting up an SMS Gateway.
Use a ready made service
There are service providers that offer this capability (typically for a fee). Here are a couple of them:
fee
http://www.esendex.co.uk/
http://www.twilio.com/
free
http://www.kannel.org/
These providers often provide a library and/or API, such as this one from twilio, so that you can integrate them more easily into your application if needed.
Send an email to your provider's SMS gateway
Most providers (Verizon, AT&T, etc.) provide the ability to send an SMS message to your phone using an address of the form <phonenumber>@provider.com. Wikipedia also has a pretty exhaustive list of SMS gateways.
| Possible to get SMS/text message notification when process ends or is killed? |
1,481,570,115,000 |
I have two Linux systems communicating over sockets (Desktop and ARM-based development board).
I want to restart (or reset) my client application (running on a development board) when the server sends a particular predefined message. I don't want to restart (reboot) Linux; I just want the client application to restart itself automatically.
I can't work out how this should be done.
|
The normal way to do this is to let your program exit, and use a monitoring system to restart it. The init program offers such a monitoring system. There are many different init programs (SysVinit, BusyBox, Systemd, etc.), with completely different configuration mechanisms (always writing a configuration file, but the location and the syntax of the file differs), so look up the documentation of the one you're using. Configure init to launch your program at boot time or upon explicit request, and to restart it if it dies. There are also fancier monitoring programs but you don't sound like you need them. This approach has many advantages over having the program do the restart by itself: it's standard, so you can restart a bunch of services without having to care how they're made; it works even if the program dies due to a bug.
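For example, with systemd (one of the init systems mentioned above) the restart-on-death policy is a couple of lines in the service's unit file; the unit description and program path here are placeholders:

```ini
[Unit]
Description=Example client application

[Service]
ExecStart=/usr/local/bin/myclient
Restart=always
RestartSec=1
```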
There's a standard mechanism to tell a process to exit: signals. Send your program a TERM signal. If your program needs to perform any cleanup, write a signal handler. That doesn't preclude having a program-specific command to make it shut down if you have an administrative channel to send it commands like this.
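A sketch of the signal-handler side in shell: the process traps TERM, runs its cleanup, and would then normally exit; here it signals itself so the effect is visible in one script:

```shell
# Install a TERM handler, then deliver TERM to ourselves to show the
# handler runs instead of the default "terminate immediately" action.
handled=no
cleanup() {
    handled=yes
    echo "TERM received, cleaning up"
}
trap cleanup TERM
kill -TERM $$       # stands in for an external "please shut down" request
echo "handled=$handled"
```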
| How to restart (or reset) a running process in linux |
1,481,570,115,000 |
I've got a few processes with a known name that all write to files in a single directory. I'd like to log the number of disk block reads and writes over a period (not just file access) to test whether a parameter change reduces the amount of I/O significantly. I'm currently using iostat -d -p, but that is limited to the whole partition.
|
I realize this is going to sound both simplistic and absurd, but if you have control over the apps in question (maybe in a test environment) you could mount ONLY that directory on a partition of its own; then iostat, etc. would tell you only about it, and nothing else on that spot.
If there are physical drives involved you could fake it up with a loopback mount à la
dd if=/dev/zero of=/bigdisk/LOOPFILE bs=1M count=1024   # 1 GB loopback file
mke2fs -j -F /bigdisk/LOOPFILE
mkdir /tmpcopy
mount -o loop /bigdisk/LOOPFILE /tmpcopy
cp -r -p "$SPECIALDIR2MONITOR/." /tmpcopy
umount /tmpcopy
mount -o loop /bigdisk/LOOPFILE "$SPECIALDIR2MONITOR"
That would not completely remove all competing disk I/O, but I'm pretty sure iostat's output would be more specific to your need.
| How can I monitor disk I/O in a particular directory? |
1,481,570,115,000 |
I have been having an overheating issue which makes my laptop shut down immediately. Is there any way to monitor the temperature from the sensor and scale down the CPU frequency to avoid that problem? Is there any existing software or shell script that can handle that job?
|
You should have a look at cpufreq-set and cpufreq-info. On Debian and derived distros they are in the cpufrequtils package. For example, on an old laptop with a bad fan that I use as a file server at home I have made these settings:
sudo cpufreq-set -c 0 -g ondemand -u 800000
sudo cpufreq-set -c 1 -g ondemand -u 800000
(-c selects the CPU core, -g the frequency governor, and -u the maximum frequency; with no unit suffix cpufreq-set interprets the value in kHz, so 800000 is 800 MHz.)
| Overheating results in system shutdown |
1,481,570,115,000 |
What software is there that will play an alert (PC speaker) when there isn't any internet connectivity for 5 minutes?
My switch/router seems to disconnect every few days, and I want to reset it when it happens.
PC -- TP-Link switch/router -- FO
192.168.x.1 -- 192.168.x.2 / x.y.z.a -- a.b.c.d
|
You can use a modified version of this script to do what you want:
#!/bin/bash
downTime=0
lastAccessTime=$(date +"%s")
while [ true ]; do
if ! ping -c1 google.com >& /dev/null; then
downTime=$(( $(date +"%s") - $lastAccessTime ))
else
downTime=0
lastAccessTime=$(date +"%s")
fi
sleep 15
if [ $downTime -ge 300 ]; then
echo "alert"
fi
done
We're "CONNECTED" Example
With debugging turned on so you can see what the script's doing.
set -x
Running with a valid hostname to demonstrate the "connection is up" state.
$ ./watcher.bash
+ downTime=0
++ date +%s
+ lastAccessTime=1402276955
+ '[' true ']'
The above initializes a couple of variables and determines the last time we went through the loop, $lastAccessTime. We now try to ping Google.
+ ping -c1 google.com
+ downTime=0
++ date +%s
+ lastAccessTime=1402276955
If ping fails, we calculate the accumulated down time, $downTime; otherwise we reset $downTime to zero and recalculate $lastAccessTime.
+ sleep 15
Now we wait 15 seconds.
+ '[' 0 -ge 300 ']'
Now we check if we've been down for > 5 minutes (300 seconds). Then we repeat going through the while loop.
+ '[' true ']'
+ ping -c1 google.com
+ downTime=0
++ date +%s
+ lastAccessTime=1402276970
+ sleep 15
....
As long as we're up, nothing will happen other than we check with the ping command every 15 seconds.
We're "DISCONNECTED" Example
Now to simulate a "connection is down" state, we'll swap out the hostname we're pinging and use a fake one, google1234567890.com. Repeating a run of our script with debugging enabled we now see some actual down time getting calculated.
$ ./watcher.bash
+ downTime=0
++ date +%s
+ lastAccessTime=1402277506
+ '[' true ']'
+ ping -c1 google1234567890.com
++ date +%s
+ downTime=0
+ sleep 15
+ '[' 0 -ge 300 ']'
+ '[' true ']'
+ ping -c1 google1234567890.com
++ date +%s
+ downTime=15
+ sleep 15
...
Notice above that $downTime is equal to 15 seconds so far. If we wait a while longer we'll see this:
+ '[' true ']'
+ ping -c1 google1234567890.com
++ date +%s
+ downTime=300
+ sleep 15
We've accrued 300 seconds of down time. So now when we check, we print the message, alert.
+ '[' 300 -ge 300 ']'
+ echo alert
alert
+ '[' true ']'
+ ping -c1 google1234567890.com
++ date +%s
+ downTime=315
+ sleep 15
This state will continue until the connection is restored and the ping is once again successful.
So what about a sound?
That's easy. You can use a variety of tools to do this. I would use something like sox or mplayer to play an audio file such as an .mp3 or .wav file with an appropriate sound you want to hear every 15 seconds, while the connection is down.
mplayer someaudio.wav
Simply replace the alert message above with this line to get audio feedback that the connection is down.
Timing out issues with ping
If you use ping in the manner above you'll likely encounter a slow lag time where it takes ping literally 10-20 seconds for it to fail when the connection is down. See my answer to this U&L Q&A titled: How to redirect the output of any command? for an example using the command line tool fing instead. This tool will fail more quickly than the traditional ping.
| Internet connection drop alert |
1,481,570,115,000 |
Is there a tool or a command that helps capture the bandwidth consumption of a specific process (PID), just like the System Monitor does but for a single process, as the following screenshot shows
I will be happy with a command line tool that at least exports such history to files. (I'm on Ubuntu 16.04)
Update 1
I want at least a tool like Nethogs that can output to files (Nethogs captures only TCP connections); a similar tool that targets both TCP and UDP would be great
Update 2
Any script, combination of other tools (like wireshark) would help too.
|
So since I didn't find any easy/clear/"hit the ground running" solution, I had to make a modest one; fixes, refactoring and more options to come.
-> https://github.com/AymenDaoudi/NeTraf
| Linux tool to monitor bandwidth consumption of a specific process (PID) |
1,481,570,115,000 |
From the question here, the OP wants to repeatedly poll the pid of a process using pidof in a shell script. Of course this is inefficient as a new process must be started for the pidof program multiple times per second (I don't know that this is the cause of the CPU spikes in the question, but it seems likely).
Usually the way around this kind of thing in a shell script is to work with a single program that outputs the data you need on stdout and then do some text processing if necessary. While this involves more programs running concurrently, it is likely to be less CPU intensive since new processes are not being continually created for polling purposes.
So for the above question, one solution might be to have some program which outputs the names and pids of processes as they are created. Then you could do something like:
pids-names |
grep some_program |
cut -f 2 |
while read pid; do
process-pid "$pid"
done
The problem with this is that it raises a more fundamental question, how can pids and process names be printed as they are created?
I have found a program called ps-watcher, though the problem is that it is just a Perl script which repeatedly runs ps, so it doesn't really solve the problem. Another option is to use auditd, which could probably work if the log was processed directly via tail -f. An ideal solution would be simpler and more portable than this, though I will accept an auditd solution if it is the best option.
|
Linux-specific answer:
perf-tools contains an execsnoop that does exactly this. It uses various Linux-specific features such as ftrace. On Debian, it's in the perf-tools-unstable package.
Example of me running man cat in another terminal:
root@Zia:~# execsnoop
TIME PID PPID ARGS
17:24:26 14189 12878 man cat
17:24:26 14196 14189 tbl
17:24:26 14195 14189 preconv -e UTF-8
17:24:26 14199 14189 /bin/sh /usr/bin/nroff -mandoc -Tutf8
17:24:26 14200 14189 less
17:24:26 14201 14199 locale charmap
17:24:26 14202 14199 groff -mtty-char -Tutf8 -mandoc
17:24:26 14203 14202 troff -mtty-char -mandoc -Tutf8
17:24:26 14204 14202 grotty
I doubt there is a portable way to do this.
| Print pids and names of processes as they are created |
1,481,570,115,000 |
I don't understand why the sum of % in the cpu column in top doesn't match the total CPU % row:
Text version with slightly different values:
ubuntu@server:~$ top
top - 23:20:21 up 5:18, 3 users, load average: 10.28, 10.36, 10.20
Tasks: 299 total, 11 running, 288 sleeping, 0 stopped, 0 zombie
%Cpu(s): 41.7 us, 0.0 sy, 0.0 ni, 58.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 99007376 total, 83451488 used, 15555892 free, 36212 buffers
KiB Swap: 0 total, 0 used, 0 free. 5139148 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5914 ubuntu 20 0 25784 3396 1452 S 1.3 0.0 0:05.33 htop
1473 root 20 0 373896 1444 1012 S 1.0 0.0 0:03.72 automount
263 root 20 0 0 0 0 S 0.3 0.0 1:37.69 kworker/7:1
6000 ubuntu 20 0 23812 1864 1176 R 0.3 0.0 0:00.41 top
1 root 20 0 33500 2908 1496 S 0.0 0.0 0:03.87 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.36 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.06 ksoftirqd/0
4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:03.48 kworker/u48:0
7 root 20 0 0 0 0 S 0.0 0.0 1:49.74 rcu_sched
8 root 20 0 0 0 0 S 0.0 0.0 0:01.74 rcuos/0
9 root 20 0 0 0 0 S 0.0 0.0 0:02.69 rcuos/1
10 root 20 0 0 0 0 S 0.0 0.0 0:01.87 rcuos/2
11 root 20 0 0 0 0 S 0.0 0.0 0:00.90 rcuos/3
12 root 20 0 0 0 0 S 0.0 0.0 0:00.58 rcuos/4
13 root 20 0 0 0 0 S 0.0 0.0 0:01.34 rcuos/5
14 root 20 0 0 0 0 S 0.0 0.0 0:00.79 rcuos/6
15 root 20 0 0 0 0 S 0.0 0.0 0:00.92 rcuos/7
16 root 20 0 0 0 0 S 0.0 0.0 0:00.77 rcuos/8
17 root 20 0 0 0 0 S 0.0 0.0 0:01.51 rcuos/9
What could explain this?
htop shows the same:
The computer has 24 cores. More exactly, it's a virtual machine on an OpenStack cluster.
|
It can be that top is reading the whole usage of the physical CPUs, not of the virtual CPUs.
It is also possible that there are some processes running that are hidden from the ubuntu user.
Also try running ps aux.
You can press the 1 key while running top to get detailed per-CPU usage information.
Here is the notation from man top, section 2b:
us, user : time running un-niced user processes
sy, system : time running kernel processes
ni, nice : time running niced user processes
id, idle : time spent in the kernel idle handler
wa, IO-wait : time waiting for I/O completion
hi : time spent servicing hardware interrupts
si : time spent servicing software interrupts
st : time stolen from this vm by the hypervisor
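All of these per-state figures are derived from /proc/stat; a minimal look at the aggregate cpu line that top summarizes (Linux assumed):

```shell
# The first line of /proc/stat holds cumulative jiffy counters for the
# aggregate CPU: user, nice, system, idle, iowait, irq, softirq, steal.
read -r label user nice system idle rest < /proc/stat
busy=$((user + nice + system))
echo "label=$label busy=$busy idle=$idle"
```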
| The sum of % in the CPU column doesn't match the total CPU % row [duplicate] |
1,481,570,115,000 |
My document management software is doing a lot of IO and I would like to know which files it is accessing the most.
Is there a Linux tool that would give me the list of the top IO consuming files, like iotop but for files, every few seconds?
That could look like:
$ thetool
THRPUT R/W/SWP FILE
40MB/s write /usr/alfresco/repo/1283421/1324928.doc
12MB/s read /usr/alfresco/cache/3928dh29f8if
11MB/s read /tmp/239398hf2f024f472.tmp
I looked in the man pages of iotop,lsof,strace and they do not seem to offer such a feature.
|
I think your "number of bytes" metric is the wrong one. Consider two accesses. One reads 10MB from a file. The other reads every 512th byte of the file for the first 10MB. The "number of bytes" will be 512 times higher for the first access compared to the second. Yet they will both put precisely the same load on the I/O subsystem.
If you can accept "number of operations", which is just about as good or as bad as "number of bytes", then you have something you can actually measure. The inotifywatch program does this, and it's likely part of your distribution's inotify-tools package.
It will immediately tell you which files comprise the bulk of the accesses, and it will likely allow you to solve your actual problem.
| How can I list the top IO consuming files? |
1,481,570,115,000 |
Is there a way to watch what commands are being executed in another shell, as they're being executed? Both shells are bash, and I have root access, if that matters. I can't install any extra software, and I don't want the other shell to have to do anything special like run screen.
Situation: I'm remotely logged into a Linux machine, and so is a coworker. I would like to see the commands she is running in her shell. I know that I could use some combination of watch and ps to see any commands that take longer than a second to run, but I don't believe that would help with very short commands.
|
Since you're root, you could always strace -f -e execve -p her_bash_pid. The -f is necessary because her shell will fork a new process before the exec, but this also means that you'll see anything that the child processes execute as well.
| How to see the commands executed in another shell? |
1,481,570,115,000 |
I want to move large file created by external process as soon as it's closed.
Is this test command correct?
if lsof "/file/name"
then
# file is open, don't touch it!
else
if [ 1 -eq $? ]
then
# file is closed
mv /file/name /other/file/name
else
# lsof failed for some other reason
fi
fi
EDIT: the file represents a dataset and I have to wait until it's complete to move it so another program can act on it. That's why I need to know if the external process is done with the file.
|
From the lsof man page
Lsof returns a one (1) if any error was detected, including the failure
to locate command names, file names, Internet addresses or files, login
names, NFS files, PIDs, PGIDs, or UIDs it was asked to list. If the -V
option is specified, lsof will indicate the search items it failed to
list.
So that would suggest that your lsof failed for some other reason clause would never be executed.
Have you tried just moving the file while your external process still has it open? If the destination directory is on the same filesystem, then there should be no problems with doing that unless you need to access it under the original path from a third process as the underlying inode will remain the same. Otherwise I think mv will fail anyway.
If you really need to wait until your external process is finished with the file, you are better to use a command that blocks instead of repeatedly polling. On Linux, you can use inotifywait for this. Eg:
inotifywait -e close_write /path/to/file
If you must use lsof (maybe for portability), you could try something like:
while err_str=$(lsof /path/to/file 2>&1 >/dev/null); do
    # lsof succeeded (exit 0): some process still has the file open,
    # so wait and check again
    sleep 1
done
if [ -n "$err_str" ]; then
    # lsof exited non-zero and printed an error string, so the file's
    # state is unknown; tricky to decide what to do here - you may want
    # to retry a number of times before giving up
    echo "lsof: $err_str" >&2
fi
if [ -z "$err_str" ]; then
# file has been closed, move it
mv /path/to/file /destination/path
fi
Update
As noted by @JohnWHSmith below, the safest design would still use an lsof loop as above, since it is possible for more than one process to have the file open for writing (an example case may be a poorly written indexing daemon that opens files read/write when it should really open them read-only). inotifywait can still be used instead of sleep, though: just replace the sleep line with inotifywait -e close /path/to/file.
| Move file but only if it's closed |
1,481,570,115,000 |
Is there a linux program that allows you to look into your current download traffic? Something that can list all the addresses I am currently connected to and downloading from.
|
check iftop and nload
iftop does for network usage what top(1) does for CPU usage. It listens to network traffic on a named interface and displays a table of current bandwidth usage by pairs of hosts. Handy for answering the question "why is our ADSL link so slow?".
nload is a console application which monitors network traffic and bandwidth usage in real time. It visualizes the in- and outgoing traffic using two graphs and provides additional info like total amount of transferred data and min/max network usage.
To peek into the data being downloaded/uploaded:
Wireshark
Wireshark is the world's foremost network protocol analyzer. It lets you capture and interactively browse the traffic running on a computer network. It is the de facto (and often de jure) standard across many industries and educational institutions.
| Linux program to look into what you're downloading |
1,481,570,115,000 |
I want to get the current Bandwidth of an interface say "eth0" from the terminal. It better be as simple as possible. Say
up 10 dn 30.
Instead of giving out a lot of text like "vnstat" does.
Edit: I need this for a command line program for auto-monitoring, not to view it manually.
|
There are several tools that can do this.
Bmon
One that should be in most repositories for various distros is bmon.
It can be run in a condensed view too.
If you're looking for something else I'd suggest taking a look at this Linuxaria article titled: Monitor your bandwidth from the Linux shell. It also mentions nload as well as speedometer.
Nload
Speedometer
Ibmonitor
If you're looking for something more basic then you could also give ibmonitor a go. Though basic it has most of the features one would expect when monitoring bandwidth.
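Since you want something scriptable for auto-monitoring rather than an interactive viewer, you can also sample the kernel's own byte counters directly. A minimal sketch under the assumption of a Linux system; lo is used as the default interface only so the example runs anywhere, substitute eth0:

```shell
# One-shot "up X dn Y" throughput sample from the cumulative
# per-interface byte counters in /proc/net/dev.
IFACE=${IFACE:-lo}

sample() {
    # After replacing ':' with a space, $1 is the interface name,
    # $2 the cumulative RX bytes and $10 the cumulative TX bytes.
    tr : ' ' < /proc/net/dev | awk -v i="$IFACE" '$1 == i {print $2, $10}'
}

set -- $(sample); rx1=$1 tx1=$2
sleep 1
set -- $(sample); rx2=$1 tx2=$2

echo "up $(( (tx2 - tx1) / 1024 ))KB dn $(( (rx2 - rx1) / 1024 ))KB"
```

Run from cron or a loop, this gives exactly the terse "up/dn" output asked for, with no extra text.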
| How do I get the current bandwidth speed of an interface from the terminal? |
1,481,570,115,000 |
Having a system for collecting performance statistics can be extremely useful. In the past, I've used Munin for this, and it has been invaluable in analyzing bottlenecks and various other issues. I was recently made aware of collectd, which seems very similar to Munin.
What monitoring applications are available and should be considered (other than Munin and Collectd), and how do you choose which one to use?
|
Munin is a data collector and visualizer (grapher) tool. It is easy to set up and easy to use, but it uses too many resources and does not scale well. The default collection interval is 5 minutes, and it is not easy to change that, because a shorter interval will overload your machine and some plugins have problems with it anyway. The plugins are executed (forked) every time data collection occurs, which is expensive. It uses a networked setup: you have to set up a local server and node even if you use only one machine.
Collectd is a data collector tool only. You can choose third-party solutions to graph the collected data, but it does not work out of the box. It has many plugins, mostly written as C modules which are started once when you start the daemon. You can change the collection interval and get fine-grained statistics. It can collect data locally or via the network.
| How do you choose which monitoring application to use? |
1,481,570,115,000 |
Sometimes, when I have numerous tabs open in Firefox, one of those tabs will start consuming a lot of CPU%, and I want to know which tab is the culprit. Doing this is a very manual process for which I'd like to find automation.
I wish I had an application that could monitor firefox exclusively in a manner that produces concise output of only the firefox-facts I want to know.
I'm looking for a command/application that will list the processes of each tab running in firefox filtered to only include the following info for each tab-process:
Process ID
Webpage Address of Tab
CPU % usage
Memory used
Additionally, I'd like the info sorted by CPU % descending.
Basically, I hoping there exists a program like htop, but that's exclusively dedicated to just the pertinent stuff I want to monitor in Firefox (while leaving out all the details I don't care about).
|
You can type about:performance in the address bar of Firefox. You will get a table showing the PID of each Firefox tab along with its Resident Set Size and Unique Set Size. Below the table, some lines explain how each tab is performing (e.g. "performing well"); if a tab is not performing well it will show up there, and you can close it from that page using the Close Tab option.
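From the shell you can approximate the per-process half of this (PID, CPU %, memory) with procps ps, though only about:performance can map a PID back to a tab's URL. A hedged sketch:

```shell
# All processes sorted by current CPU usage, highest first; pipe the
# output through `grep -i firefox` to keep only Firefox's processes
# (content processes may not all be literally named "firefox").
ps -e -o pid,pcpu,rss,args --sort=-pcpu | head -15
```

Note that ps reports %CPU over each process's lifetime, not the last instant, so it is a rougher signal than about:performance.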
| Monitoring CPU% of Tabs in Firefox |
1,481,570,115,000 |
I need to monitor the I/O statistics of a process that writes to disk. The purpose is to avoid write rates too high for long periods.
I know there's iostat tool to accomplish this task on a system-wide perspective.
Is there something similar to monitor single process disk usage?
|
What you want is iotop. Most distributions have a package for it, usually called (logically enough) iotop.
One very cool command (at least, on a system that isn't very busy) is iotop -bo. This will show I/O as it occurs. It also has options to only monitor specific processes or processes owned by specified users.
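If iotop is not available, the raw counters it reads are exposed per process in /proc/&lt;pid&gt;/io. A quick sketch (the PID here is a stand-in; point it at your target process):

```shell
# read_bytes/write_bytes count I/O that actually hit the block layer;
# sampling them twice and subtracting gives a per-process rate.
pid=$$    # stand-in: inspect the shell itself; use your target's PID
grep -E '^(read_bytes|write_bytes)' "/proc/$pid/io"
```

Comparing two samples a few seconds apart tells you whether a process is sustaining a high write rate.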
| Getting disk i/o statistics for single process in Linux |
1,481,570,115,000 |
I would like to figure out which processes are communicating with which websites over a period of time. All I have found are programs like ss that list the connections open at this instant and then exit.
What I actually want is something like wireshark, but one that would log process names.
Is there really no such program?
|
If you have a recent kernel (preferably at least 4.9, but apparently some things work as far back as 4.2), then you can take advantage of the kernel's eBPF tracing facility (used by the bcc tools), which allows you to intercept every tcp connect() call in the kernel and show the process id, remote IP address and port.
Since this does not poll, you will not miss any short-lived connections.
From the Brendan Gregg blog of 2016 typical output is
# tcpconnect
PID COMM IP SADDR DADDR DPORT
1479 telnet 4 127.0.0.1 127.0.0.1 23
1469 curl 4 10.201.219.236 54.245.105.25 80
1469 curl 4 10.201.219.236 54.67.101.145 80
1991 telnet 6 ::1 ::1 23
2015 ssh 6 fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22
Further examples are in the bcc-tools package source. Built packages to install are available for several distributions or you can follow the compilation instructions.
| Is there a program that can log network traffic by the process and domain names? |
1,481,570,115,000 |
I would like to use tcpflow to monitor https requests. I have read tutorials on how to monitor http traffic but when I connect to a host using https the output is garbled. I am using tcpflow in the following manner:
sudo tcpflow -s -c -i eth0 src or dst host api.linkedin.com
|
If you have a copy of the key you can use ssldump which uses a syntax almost identical to tcpdump.
It won't be quite as pretty as tcpflow, but you can get at the encrypted content.
| Monitoring HTTPS traffic using tcpflow |
1,481,570,115,000 |
In different screenshots of people's Linux desktops, I've seen different apps that overlay the desktop with information about their computer. Often this gadget/app shows CPU and HDD information. Sometimes it has network and temperature information as well. I've seen these a lot but they often have different looks and different information.
What program does this? Is it built-in to any Linux distribution?
|
I use conky to display date, battery, cpu, ram and swap information. You can find my conky file here or take a look at a thread about conky configs in the arch-linux forum. There you find many different configs and screenshots of conky in use.
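As a concrete starting point, a minimal conky configuration (1.10+ Lua syntax; every value below is just an example to adapt) might look like:

```lua
-- ~/.config/conky/conky.conf -- minimal example, values are placeholders
conky.config = {
    alignment = 'top_right',
    update_interval = 2.0,
    own_window = true,
    own_window_type = 'desktop',
}

conky.text = [[
${time %F %T}
CPU: ${cpu cpu0}%  RAM: $mem / $memmax
Top: ${top name 1} (${top cpu 1}%)
]]
```

Network, temperature and disk widgets are added the same way with variables like ${downspeed eth0} or ${fs_used /}.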
| Desktop overlay program showing CPU, HDD, etc. stats |
1,481,570,115,000 |
This is for academic purposes. I want to know which commands are executed when we do something in the GUI, for example creating a folder. I want to show that the mkdir shell command and the GUI's create-folder option do the same thing.
|
You can observe what a process does with the strace command. Strace shows the system calls performed by a process. Everything¹ a process does that affects its environment is done through system calls. For example, creating a directory can only be done by ultimately calling the mkdir system call. The mkdir shell command is a thin wrapper around the system call of the same name.
To see what mkdir is doing, run
strace mkdir foo
You'll see a lot of calls other than mkdir (76 in total for a successful mkdir on my system), starting with execve which loads the process binary image, then calls to load the libraries and data files used by the program, calls to allocate memory, calls to observe the system state, … Finally the command calls mkdir and winds down, finishing with exit_group.
To observe what a GUI program is doing, start it and only observe it during one action. Find out the process ID of the program (with ps x, htop or any other process viewer), then run
strace -o file_manager.mkdir.strace -p1234
This puts the trace from process 1234 in the file file_manager.mkdir.strace. Press Ctrl+C to stop strace without stopping the program. Note that something like entering the name of the directory may involve thousands or tens of thousands of system calls: handling mouse movements, focus changes and so on is a lot more complex at that level than creating a directory.
You can select what system calls are recorded in the strace output by passing the -e option. For example, to omit read, write and select:
strace -e \!read,write,select …
To only record mkdir calls:
strace -e mkdir …
¹ Ok, almost everything. Shared memory only involves a system call for the initial setup.
| How to know which commands are executed when I do something in GUI |
1,481,570,115,000 |
I was wondering if there are any tools to keep track of the access history of a file. I know of stat, but as far as I understand, it only returns information about the last time the file was accessed.
|
Logging access times is already a fairly heavy requirement (by filesystem performance standards), because it requires a write operation for every read operation. Logging other things would be even costlier. The feature is not present in typical filesystems.
LoggedFS is a stackable filesystem that provides a view of a filesystem tree, and can perform fancier logging of all accesses through that view. To configure it, see LoggedFS configuration file syntax.
On Linux, you can use the audit subsystem to log a large number of things, including filesystem accesses. Make sure the auditd daemon is started, then configure what you want to log with auditctl. Each logged operation is recorded in /var/log/audit/audit.log (on typical distributions). To start watching a particular file:
auditctl -w /path/to/file
If you put a watch on a directory, the files in it and its subdirectories recursively are also watched.
| Access history of a file [duplicate] |
1,481,570,115,000 |
It is occasionally useful for me to know when my machine (Debian wheezy) was last touched. To be precise, I mean the more recent of the times I typed on the keyboard or moved the mouse. When I currently try to do this, I adopt ad-hoc means, like checking the last modification times of files that I was editing. However, a more systematic way would be useful. If not the precise time, approaches to get a reasonably close estimate would be welcome. I would prefer methods that use information that is not easily destroyed, like file modification times.
I suppose the ultimate approach would be to install some kind of program that monitors my computers activity. I don't know if I would go so far, but would be willing to hear about it, at least.
Any software installed for this purpose must be free, and available in Debian, for preference. Having said that, solutions don't have to be Debian-specific, or even Linux specific. In fact, it is better if they are not.
It's ok if only activity in an X11 session is taken into account.
|
There is an xprintidle utility (available as a package, at least in Debian and Ubuntu) that will do this. It gives you the number of milliseconds since last keyboard or mouse activity. Of course, if you type that in a terminal and run it, the result will be near-0.
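To turn that millisecond count into a human-readable "last touched" timestamp, a small sketch (assumes GNU date and a reachable X display):

```shell
# xprintidle prints milliseconds since the last keyboard/mouse event in
# the current X session; subtracting that from "now" gives the time the
# machine was last touched. (Falls back to 0 here so the sketch degrades
# gracefully when no X session is available.)
idle_ms=$(xprintidle 2>/dev/null || echo 0)
date -d "-$(( idle_ms / 1000 )) seconds" '+last touched: %F %T'
```
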
Alternatively, there is a Perl module.
C code (apparently borrowed from that Perl module) can be found on Stack Overflow.
edit: You mentioned on chat possibly wanting it to be like a munin graph. Actually, you should be able to hook it into munin, but you'll need to get it access to your X display. The minimal requirement to do that is to set the DISPLAY=:0 environment variable (or whatever display you log in on) and also get it access to the magic-cookie, which will come from ~/.Xauthority or $XAUTHORITY. xauth is the command to manipulate xauthority files. See also Open a window on a remote X display (why "Cannot open display")? for some approaches on getting access to the X display.
| When was my machine last touched? |
1,481,570,115,000 |
I am one of the n users of a shared unix machine. For reasons unknown, the machine is not "responsive" enough. For example, it is slow on interactive commands: it takes a few noticeable moments for any action (e.g. mouse movement, editor (e.g. gvim) keystrokes) to become visible. The problem is, the people supposedly responsible for addressing the issue do not agree that the machine is unresponsive. They do a few simple things and say, "It works fine!"
How can responsiveness be quantified? What (all) can I measure?
I can run shell commands (e.g. top) periodically with cron and collect statistics, but I am clueless regarding what is a good statistic to go after.
EDIT
I connect to the machine over VMC.
|
This isn't strictly speaking the same as "responsiveness", but one metric you should probably check is the system load average; uptime will show the average over the last 1/5/15 minutes:
$ uptime
02:30:33 up 6 days, 6:30, 12 users, load average: 0.85, 0.65, 0.57
A high enough load will perceptibly slow the system down.
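For the cron-based collection you mention, /proc/loadavg is the cheapest thing to sample; a 1-minute load persistently above the number of CPUs suggests work is queueing. A minimal sketch:

```shell
# Emit a timestamped 1-minute-load / core-count pair, suitable for
# appending to a log file from cron.
read -r load1 _ < /proc/loadavg
cores=$(nproc)
printf '%s load1=%s cores=%s\n' "$(date +%s)" "$load1" "$cores"
```

Load average measures queued work, not interactive latency, so treat it as one signal among several (CPU, I/O wait, memory pressure).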
| Quantify unix responsiveness |
1,481,570,115,000 |
We have a server where another script sFTPs and downloads files everyday.
Question: Is it possible to detect that the file was downloaded and then for me to automatically archive the file after they are done?
To clarify: we host the files and someone else comes and downloads them.
This is the script they use:
let $command = 'sftp -b /usr/tmp/file.sftp someuser@myserver'
show 'FTP command is ' $command
call system using $command #status
##file.sftp##
# Set local directory on PeopleSoft server
lcd /var/tmp
# Set remote directory on the remote server
cd ar/in
# Transfer all remote files to PeopleSoft
get file.dat
get file2.dat
# quit the session
bye
|
There are 3 avenues that I can conceive of that might provide you with a solution.
1. Custom sftp Subsystem
You could wrap the sftp-server daemon via sshd_config and "override" it with your own script that could then intercept what sftp-server is doing, and then act when you see that a file was downloaded. Overriding the default sftp-server in sshd_config is easy:
Subsystem sftp /usr/local/bin/sftp-server
Figuring out what to do in the wrapper script would be the hard part. In /usr/local/bin/sftp-server:
#!/bin/sh
# ...do something...
chroot /my/secret/stuff /usr/libexec/openssh/sftp-server
# ...do something...
2. Watch the logs
If you turn up the debugging of sftp-server you can get it to show logs of when files are being opened/closed and read/written on the SFTP server. You could write a daemon/script that watches these logs and then backs the file up when needed. Further details on how to achieve this are already partially covered in my answer to this U&L Q&A titled: Activity Logging Level in SFTP, as well as in this blog post titled: SFTP file transfer session activity logging.
The SFTP logs can be enhanced so they look like this:
Sep 16 16:07:19 localhost sftpd-wrapper[4471]: user sftp1 session start from 172.16.221.1
Sep 16 16:07:19 localhost sftp-server[4472]: session opened for local user sftp1 from [172.16.221.1]
Sep 16 16:07:40 localhost sftp-server[4472]: opendir "/home/sftp1"
Sep 16 16:07:40 localhost sftp-server[4472]: closedir "/home/sftp1"
Sep 16 16:07:46 localhost sftp-server[4472]: open "/home/sftp1/transactions.xml" flags WRITE,CREATE,TRUNCATE mode 0644
Sep 16 16:07:51 localhost sftp-server[4472]: close "/home/sftp1/transactions.xml" bytes read 0 written 192062308
Sep 16 16:07:54 localhost sftp-server[4472]: session closed for local user sftp1 from [172.16.221.1]
You would then need to develop a daemon/script that would monitor the logs for the open/close event pairs. These represent a completed file transfer. You could also make use of syslog, which could monitor for the "CLOSE" log events and it could be used to perform the copying of the transferred files.
3. Incron
You could make use of Inotify events that the Linux kernel produces every time a file is accessed. There is a service called Incron which works similarly to Cron. Where Cron works based on time, Incron works based on file events. So you could setup a Incron entry that would monitor your SFTP upload directories, and any time a specific file event is detected, copy the file.
Have a look at the inotify man page for a description of the various events. I believe you'd want to watch for a read() (IN_ACCESS) followed by a close() (IN_CLOSE_NOWRITE, since a downloader opens the file read-only). These would be for files that were copied from the SFTP server.
Incron rules look like this:
<directory> <file change mask> <command or action> options
/var/www/html IN_CREATE /root/scripts/backup.sh
/sales IN_DELETE /root/scripts/sync.sh
/var/named/chroot/var/master IN_CREATE,IN_ATTRIB,IN_MODIFY /sbin/rndc reload
This article titled: Linux incrond inotify: Monitor Directories For Changes And Take Action shows much more of the details needed, if you want to try and go with this option.
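If installing incron is not an option, the same inotify events can be consumed from a plain shell loop with inotifywait (from inotify-tools). Every path below is an assumption for illustration:

```shell
# Archive each hosted file once a downloader closes it after reading.
# close_nowrite fires when a file that was opened read-only (i.e. a
# download) is closed.
WATCH_DIR=/srv/sftp/ar/in        # assumed directory we host files in
ARCHIVE_DIR=/srv/sftp/archive    # assumed archive destination

inotifywait -m -e close_nowrite --format '%w%f' "$WATCH_DIR" |
while read -r f; do
    mv -- "$f" "$ARCHIVE_DIR/"
done
```

Like option 3 itself, this cannot distinguish a partial download from a complete one on its own; pairing it with the log-watching of option 2 is safer.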
| Is it possible to detect when a file has been downloaded? |
1,495,746,973,000 |
vmstat 1
Above will print virtual memory statistics each seconds. It will also show the CPU utilization for last second.
I have a web server at hand which runs httpd and MySQL. I need to find out how much CPU httpd consumed in the last second, like vmstat but specifically for httpd.
I tried this :
ps -e -o %mem,%cpu,cmd | grep mysql | awk '{memory+=$1;cpu+=$2} END {print memory,cpu}'
But it will show me the ratio of CPU used since the start of the process.
So, with the above, if my process caused a spike and then went to sleep for a long time, I won't know it. I want something like the Windows task manager, which shows which process is using how much CPU right now. I hope I am making my question understandable; I will clarify if anything is missing.
|
You could use top -b -d 1 to achieve that for CPU usage: in batch mode with a one-second delay, top reports each process's CPU usage relative to its previous output, i.e. over the last second.
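To capture just the %CPU numbers in a scriptable form, a sketch (bounded to three samples; $9 is the %CPU column in procps top's default batch layout, which can vary between versions):

```shell
# One %CPU sample per second for a single PID. $$ is a stand-in so the
# sketch is self-contained; for httpd you might substitute
# "$(pgrep -d, httpd)" (an assumption that httpd is running).
top -b -d 1 -n 3 -p $$ | awk -v p=$$ '$1 == p {print $9}'
```

Each printed number is the CPU percentage the process used since top's previous refresh, which is exactly the per-second figure asked for.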
| Linux : See CPU usage by a process for the last second |
1,495,746,973,000 |
I use Ubuntu 12.04.1 Linux. I see a difference between %CPU and C output format of ps command for a process. It is not clearly noted in the ps man page.
Man pages says:
CODE HEADER DESCRIPTION
%cpu %CPU cpu utilization of the process in "##.#" format. Currently,
it is the CPU time used divided by the time the
process has been running (cputime/realtime ratio),
expressed as a percentage. It will not add up to 100%
unless you are lucky. (alias pcpu).
c C processor utilization. Currently, this is the integer
value of the percent usage over the lifetime of the
process. (see %cpu).
So basically it should be the same, but it is not:
$ ps aux | head -1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
$ ps aux | grep 32473
user 32473 151 38.4 18338028 6305416 ? Sl Feb21 28289:48 ZServer -server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./log
$ ps -ef | head -1
UID PID PPID C STIME TTY TIME CMD
$ ps -ef | grep 32473
user 32473 32472 99 Feb21 ? 19-15:29:50 ZServer -server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./log
The top shows 2% CPU utilization in the same time. I know the 'top' shows the current CPU utilization while ps shows CPU utilization over the lifetime of the process.
I guess the lifetime definition is somewhat different for these two format options.
|
The %cpu and C columns are showing almost, but not quite, the same thing. If you look at the source for ps in ps/output.c you can see the differences between pr_c and pr_cpu
C is the integer value for %cpu as you can guess. The odd difference is that C is clamped to a maximum of 99 while %cpu is not (there's a check for it for %cpu but it just changes the format from xx.x% to xxx%).
Now, I'm not really sure why C has this clamping; it seems a little arbitrary. It's been there since procps 3.2.7 (2006), so it probably dates from the era of single CPUs.
| What is the difference in CPU utilization between 'ps aux' and 'ps -ef'? |
1,495,746,973,000 |
I know how to monitor a process. Commands like top and so forth can monitor the CPU time and memory usage for a given process instance.
But say I expect a given executable to be run several times in the next hour, and I want to measure how many times it is run and the CPU time it has consumed. What's a command for that?
|
You could do something like:
mv my-executable my-executable.bin
And create my-executable as a wrapper script that does:
#! /bin/bash -
{ time "$0.bin" "$@" 2>&3 3>&-; } 3>&2 2>> /tmp/times.log
The script could add more information to the log like the time it was started, by whom, the arguments it was passed...
BSD process accounting, at least on Linux, does report CPU time (user + sys), though not in a cumulative way like time does (a child process's CPU time is not accounted to the parent).
| How to monitor all executions of an executable over a time period |
1,495,746,973,000 |
I have a hypothetical situation:
Let us say we have two strace processes S1 & S2, which are simply monitoring each other.
How can this be possible?
Well, in the command-line options for strace, -p PID is the way to pass the required PID, which (in our case) is not yet known when we issue the strace command. We could change the strace source code such that -P 0 means "ask the user for the PID", e.g. read() it from STDIN. Then we can run two strace processes in two shell sessions, find their PIDs from a third shell, provide that input to S1 & S2, and let them monitor each other.
Would S1 & S2 get stuck? Or, go into infinite loops, or crash immediately or...?
Again, let us say we have another strace process S3, with -p -1, which, by modifying the source code, we use to tell S3 to monitor itself. E.g., use getpid() without using STDIN.
Would S3 crash? Or, would it hang with no further processing possible? Would it wait for some event to happen, but, because it is waiting, no event would happen?
In the strace man-page, it says that we can not monitor an init process. Is there any other limitation enforced by strace, or by the kernel, to avoid a circular dependency or loop?
Some Special Cases :
S4 monitors S5, S5 monitors S6, S6 monitors S4.
S7 & S8 monitoring each other where S7 is the Parent of S8.
More special cases are possible.
EDIT (after comments by @Ralph Rönnquist & @pfnuesel) :
https://github.com/bnoordhuis/strace/blob/master/strace.c#L941
if (pid <= 0) {
error_msg_and_die("Invalid process id: '%s'", opt);
}
if (pid == strace_tracer_pid) {
error_msg_and_die("I'm sorry, I can't let you do that, Dave.");
}
Specifically, what will happen if strace.c does not check for pid == strace_tracer_pid or any other special cases? Is there any technical limitation (in kernel) over one process monitoring itself? How about a group of 2 (or 3 or more) processes monitoring themselves? Will the system crash or hang?
|
I will answer for Linux only.
Surprisingly, in newer kernels, the ptrace system call, which is used by strace in order to actually perform the tracing, is allowed to trace the init process. The manual page says:
EPERM The specified process cannot be traced. This could be because
the tracer has insufficient privileges (the required capability
is CAP_SYS_PTRACE); unprivileged processes cannot trace pro‐
cesses that they cannot send signals to or those running set-
user-ID/set-group-ID programs, for obvious reasons. Alterna‐
tively, the process may already be being traced, or (on kernels
before 2.6.26) be init(8) (PID 1).
implying that starting in version 2.6.26, you can trace init, although of course you must still be root in order to do so. The strace binary on my system allows me to trace init, and in fact I can even use gdb to attach to init and kill it. (When I did this, the system immediately came to a halt.)
ptrace cannot be used by a process to trace itself, so if strace did not check, it would nevertheless fail at tracing itself. The following program:
#include <sys/ptrace.h>
#include <stdio.h>
#include <unistd.h>
int main() {
if (ptrace(PTRACE_ATTACH, getpid(), 0, 0) == -1) {
perror(NULL);
}
}
prints Operation not permitted (i.e., the result is EPERM). The kernel performs this check in ptrace.c:
retval = -EPERM;
if (unlikely(task->flags & PF_KTHREAD))
goto out;
if (same_thread_group(task, current)) // <-- this is the one
goto out;
Now, it is possible for two strace processes to trace each other; the kernel will not prevent this, and you can observe the result yourself. For me, the last thing that the first strace process (PID = 5882) prints is:
ptrace(PTRACE_SEIZE, 5882, 0, 0x11
whereas the second strace process (PID = 5890) prints nothing at all. ps shows both processes in the state t, which, according to the proc(5) manual page, means trace-stopped.
This occurs because a tracee stops whenever it enters or exits a system call and whenever a signal is about to be delivered to it (other than SIGKILL).
Assume process 5882 is already tracing process 5890. Then, we can deduce the following sequence of events:
Process 5890 enters the ptrace system call, attempting to trace process 5882. Process 5890 enters trace-stop.
Process 5882 receives SIGCHLD to inform it that its tracee, process 5890, has stopped. (A trace-stopped process appears as though it received the SIGTRAP signal.)
Process 5882, seeing that its tracee has made a system call, dutifully prints out the information about the syscall that process 5890 is about to make, and the arguments. This is the last output you see.
Process 5882 calls ptrace(PTRACE_SYSCALL, 5890, ...) to allow process 5890 to continue.
Process 5890 leaves trace-stop and performs its ptrace(PTRACE_SEIZE, 5882, ...). When the latter returns, process 5890 enters trace-stop.
Process 5882 is sent SIGCHLD since its tracee has just stopped again. Since it is being traced, the receipt of the signal causes it to enter trace-stop.
Now both processes are stopped. The end.
As you can see from this example, the situation of two process tracing each other does not create any inherent logical difficulties for the kernel, which is probably why the kernel code does not contain a check to prevent this situation from happening. It just happens to not be very useful for two processes to trace each other.
| How can strace monitor itself? |
1,495,746,973,000 |
How can I see the raw memory data used by an application? For example, suppose I have a file named something.sh. I run ./something.sh, and then I want to see all the data it is accessing in RAM, all the files it is accessing in my filesystem, and the network data or connections it is using; maybe a hex dump of the memory used by the application. Can I do that in Ubuntu?
|
How can I see the raw memory data used by an application...
Once you have obtained the process' PID (using ps(1) or pidof(8) for instance), you may access the data in its virtual address space using /proc/PID/maps and /proc/PID/mem. Gilles wrote a very detailed answer about that here.
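A quick sketch of that maps + mem combination, hex-dumping the start of a process's heap. The usual ptrace permissions apply: run it as the process's owner, and expect Yama's ptrace_scope to block non-descendant targets.

```shell
# Find the [heap] address range in /proc/PID/maps, then read the first
# page of that range out of /proc/PID/mem and hex-dump it.
pid=$$                     # stand-in PID; use the target's in practice
range=$(awk '/\[heap\]/ {print $1; exit}' "/proc/$pid/maps")
start=$(printf '%d' "0x${range%-*}")
dd if="/proc/$pid/mem" bs=4096 skip=$(( start / 4096 )) count=1 \
    2>/dev/null | od -A x -t x1z | head -4
```

The same pattern works for any region listed in maps (stack, mapped libraries, anonymous mappings), which is how debuggers implement memory inspection.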
... and all the files its accessing in my filesystem, network data or connections
lsof can do just that. netstat may be more appropriate for network-related descriptors. For instance :
$ netstat -tln # TCP connections, listening, don't resolve names.
$ netstat -uln # UDP endpoints, listening, don't resolve names.
$ netstat -tuan # TCP and UDP, all sorts, don't resolve names.
$ lsof -p PID # "Files" opened by process PID.
Note: netstat's -p switch will allow you to print the process associated with each line (at least, your processes). To select a specific process, you can simply use grep:
$ netstat -tlnp | grep skype # TCP, listening, don't resolve (Skype).
For more information about these tools: netstat(8) and lsof(8). See also: proc(5) (and the tools mentioned in other answers).
| How do see the memory used by a program in ubuntu? |
1,495,746,973,000 |
I would like to know if there is a tool that would enable you to watch how a program's output changes live. Something like tail -f, but instead of monitoring file changes it would repeatedly call some executable and display the output live.
For example, if the tool were called foobar and I called foobar 'ps -Al', it would behave kind of like top, displaying the output in real time.
|
Try watch. From the manpage:
Name
watch - execute a program periodically, showing output fullscreen
Synopsis
watch [-dhvt] [-n <seconds>] [--differences[=cumulative]] [--help] [--interval=<seconds>] [--no-title] [--version] <command>
Description
watch runs command repeatedly, displaying its output (the first screenfull). This allows you to watch the program output change over time. By default, the program is run every 2 seconds; use -n or --interval to specify a different interval.
The -d or --differences flag will highlight the differences between successive updates. The --cumulative option makes highlighting "sticky", presenting a running display of all positions that have ever changed. [...]
watch will run until interrupted.
Note that "realtime" would have to be approximated by "once a second" (for example) here...
| Live program output monitoring tool |
1,495,746,973,000 |
I'm using the Linux "top" command to monitor %CPU of particular process. As the values keep on changing every few seconds, is there any way to keep track of values in a separate file or as a graphical representation? Is it possible to do it using any shell scripts?
|
The answer to this question can range from a simple command, to complex monitoring tools, depending on your needs.
You can start by simply running top -b -n 1 >> file.txt (-b for batch mode, -n 1 to run a single iteration of top) and store the output (appended) in file.txt. You can also filter top's output, e.g. top -b -n 1 | grep init to see only the data for the "init" process, or top -b -n 1 | grep "init" | head -1 | awk '{print $9}' to get the 9th column of the init process's data (the CPU value).
If you want to use in a shell script, you could:
CPU=$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $9}')
MEM=$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $10}')
Or, with a single execution of top:
read CPU MEM <<<$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $9 " " $10}')
(note that grep, head and awk could be merged into a single awk command, but for the sake of simplicity I'm using separate commands).
We used top in this example but there are alternate methods for other metrics (check sar, iostat, vmstat, iotop, ftop, and even reading /proc/*).
Now you have a way to access the data (CPU usage), and in our example we are appending it to a text file. But you can use other tools to store the data and even graph it: store it in CSV and graph with gnuplot/python/openoffice, or use monitoring & graphing tools like zabbix, rrdtool, cacti, etc. There is a big world of monitoring tools that allow you to collect and graph data like CPU usage, memory usage, disk I/O, and even custom metrics (number of mysql connections, etc).
EDIT: finally, to specifically answer your question, if you want to keep track of changes easily for a simple test, you can run top -b -n 1 >> /tmp/output.txt from your /etc/crontab file, running top every 5 minutes (or any other interval if you replace the /5 below).
0-59/5 * * * * root top -b -n1 >>/tmp/output.txt
(add a grep + head -1 to the command above if you're only interested in a single process's data).
Note that the output.txt will grow, so if you want to reset it daily or weekly, you can "rm" it with another crontab entry.
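The pieces above can be combined into a tiny sampler script to call from the crontab. A sketch, using ps -o instead of parsing full top output (an assumption for easier parsing; the top pipeline above works just as well), sampling its own shell as a stand-in for "myprocess":

```shell
#!/bin/sh
# Append one timestamped CPU/MEM sample for $TARGET_PID to a CSV file.
TARGET_PID=$$                         # demo: sample this shell itself
LOG=/tmp/usage.csv

sample=$(ps -o %cpu= -o %mem= -p "$TARGET_PID")
set -- $sample                        # $1 = %CPU, $2 = %MEM
echo "$(date +%s),$1,$2" >> "$LOG"
tail -1 "$LOG"
```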
| In Linux 'top' command, is there any way to keep track of values? |
1,495,746,973,000 |
I'm currently changing the permissions and ownership on a 4TB HDD filled with files. Is it even possible to monitor the progress of commands such as chmod and chown?
|
You can attach to the running process and see what it's doing now. This will give you an idea of where it's at.
strace -p1234
where 1234 is the process ID of the chmod process. Note that many systems restrict non-root users to monitoring child processes only, so you'd have to do this as root; see after upgrade gdb won't attach to process.
Knowing what file is currently being processed doesn't provide an easy way of knowing what has already been processed. chmod traverses the file tree in depth-first order, and traverses each directory in directory order (the order of ls -U, which is not the same as the order of ls in general).
It would be nice to know how many files the process has already processed, and that can be determined at least approximately by knowing how many system calls the process has made, but as far as I know Linux doesn't keep track of how many system calls a process has made.
| Monitor chmod progress |
1,495,746,973,000 |
I want to know the amount of the network traffic (inbound and outbound) in a time period, generated a specific process and all subprocesses that it spawns.
I have developed a software that contains a "job manager" that runs forever and generates no network traffic on its own. It instead spawns child "workers" that does the main work, including the majority of network traffic. The tricky point is, several "workers" may work simultaneously, and a single worker process is expected to exit after a short period (a few hours). Furthermore, these workers also spawns more subprocesses that generates traffic like git fetch that needs to be monitored as well.
There will be only one instance of "job manager" and it can be started or killed on-demand on my development and testing server, which runs Ubuntu Server 18.04, architecture amd64.
I want to monitor the network traffic of all the workers and the processes that workers spawn, for a prolonged period (one week or more). Is there a solution?
|
Probably the easiest way is to put the job manager in a network namespace. All child processes will also be in that namespace. Connect up the namespace via veth or macvlan, measure traffic on that interface.
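A sketch of that setup (assumes root; the namespace and interface names and the 10.200.0.0/24 subnet are arbitrary choices, and ./job_manager is a placeholder for your manager binary):

```
# Create the namespace and a veth pair bridging it to the host.
ip netns add jobmgr
ip link add veth-host type veth peer name veth-job
ip link set veth-job netns jobmgr
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec jobmgr ip addr add 10.200.0.2/24 dev veth-job
ip netns exec jobmgr ip link set veth-job up
ip netns exec jobmgr ip link set lo up
ip netns exec jobmgr ip route add default via 10.200.0.1

# NAT so the workers can reach the outside world.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -j MASQUERADE

# Start the manager inside the namespace; every child inherits it.
ip netns exec jobmgr ./job_manager &

# All traffic of the whole process tree crosses veth-host, and the
# kernel counts it for you:
cat /sys/class/net/veth-host/statistics/rx_bytes
cat /sys/class/net/veth-host/statistics/tx_bytes
```

Since the counters are cumulative, sampling them periodically gives per-interval totals for the prolonged monitoring you describe.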
| Monitor network traffic of a process and its entire subprocesses tree |
1,495,746,973,000 |
I have a process in a Linux installation that at some point has some kind of spike and passes the max allowed number of threads/processes allowed by the system. I found this by checking ps -elfT | wc -l repeatedly.
But what I don't know is what exactly is it that causes this spike.
The output of ps -elfT has a lot of information, but I cannot easily understand if there is some child process that does some kind of "blurp" in forking and makes a mess.
How could I figure that out?
Example: ps -elfT | cut -d' ' -f3 | sort | uniq gives me the processes running at the time. How could I add a count to see how much each contributes to the total?
|
ps -eo nlwp,pid,args --sort nlwp
Would show a list of processes sorted by their number of threads.
For a top-like view of that, you can always do:
watch -n 1 'ps -eo nlwp,pid,args --sort -nlwp | head'
Or you could use... top.
press f to select the fields to display.
locate nTH (the number of threads) and press d to display it and s to make it the sort order
you can adjust its display position with → and then ↑ and ↓ and ⏎.
q to get back to the process list
press H if you want to see all the threads.
d to adjust the delay.
? for help.
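The nlwp number that ps reports can also be read straight from /proc, if you want to script this without parsing ps output (a sketch, assuming Linux: each entry under /proc/<pid>/task is one thread):

```python
import os

# Count threads per process from /proc and print the five most
# thread-heavy processes (what ps's nlwp column reports).
counts = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        nthreads = len(os.listdir(f"/proc/{pid}/task"))
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
        counts.append((nthreads, int(pid), name))
    except OSError:
        continue  # process exited or is inaccessible; skip it

for nthreads, pid, name in sorted(counts, reverse=True)[:5]:
    print(f"{nthreads:5d} {pid:7d} {name}")
```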
| Figure out which process forks too many threads |
1,495,746,973,000 |
Recently I got a load-too-high issue on our server. I watched top for about half an hour to find out that it was Nagios forking a lot of short-lived processes. After bouncing Nagios, everything was back to normal.
My question here is, how to find out the root process that forks a lot like this more quickly?
Thanks.
|
If you run an OS that supports dtrace, this script will help you identify which processes are launching short-lived processes:
#!/usr/sbin/dtrace -qs
proc:::exec
{
self->parent=stringof((unsigned char*)curpsinfo->pr_psargs);
}
proc:::exec-success
/self->parent != NULL/
{
printf("%s -> %s\n",self->parent,curpsinfo->pr_psargs);
self->parent=NULL;
}
If you are on an OS without dtrace support, have a look at alternatives, e.g. systemtap or sysdig on Linux, ProbeView on AIX.
Here is a sysdig script that will show all commands' launch and exit times with their pid and ppid:
sysdig -p"*%evt.time %proc.pid %proc.ppid %evt.dir %proc.exeline" \
"( evt.dir=< and evt.type=execve ) or evt.type=procexit"
Another method would be to enable process accounting with your OS (if available; commonly the acct package under Linux) and have a look at the generated logs. There is also a top-like program that leverages process accounting: atop.
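As a quick first check before reaching for dtrace/systemtap, the kernel already counts every fork since boot in the processes line of /proc/stat; sampling it twice tells you whether a fork storm is happening at all (a sketch, Linux-specific):

```shell
#!/bin/sh
# Sample the cumulative fork counter twice and print the delta.
# A very large number here confirms something is spawning like mad.
before=$(awk '/^processes/ {print $2}' /proc/stat)
sleep 2
after=$(awk '/^processes/ {print $2}' /proc/stat)
echo "forks in the last 2s: $((after - before))"
```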
| How to find out the process(es) that forks a lot? |
1,495,746,973,000 |
I have a daemon that monitors various things using the GPIO ports. I have used python to write the code for this using the RPi.GPIO module.
I would like to ensure that the daemon is always running, i.e., restart it after a crash and start it when the system boots (crucially before any user logs in -- this Pi runs headless). There is a little flashing LED that tells me it's running, but that's not ideal.
I have read about using MONIT for this purpose but I'm having a few issues. My attempts so far have mainly been around this solution:
https://stackoverflow.com/questions/23454344/use-monit-monitor-a-python-program
This is my bash wrapper file; it's called /home/pi/UPSalarm/UPSalarm.bash
#!/bin/bash
PIDFILE=/var/run/UPSalarm.pid
case $1 in
start)
#source /home
#Launch script
sudo python /home/pi/UPSAlarm/UPSalarm.py 2>/dev/null &
# store PID value
echo $! > ${PIDFILE}
;;
stop)
kill `cat ${PIDFILE}`
# Process killed, now remove PID
rm ${PIDFILE}
;;
*)
echo "usage: scraper {start|stop}" ;;
esac
exit 0
This is my monit rule
check process UPSalarm with pidfile /var/run/UPSalarm.pid
start = "/home/pi/UPSalarm/UPSalarm start"
stop = "/home/pi/UPSalarm/UPSalarm stop"
I have three problems: firstly, I get the wrong PID number in UPSalarm.pid. I am wondering if I get the PID of sudo? (This is why I have posted the question here; I need sudo because I need access to the GPIO ports.) Secondly, it doesn't work. Thirdly, I am not sure what source does in the bash file.
I know monit has great documentation, but a worked example for python really would be helpful; I've been stuck for a good few days.
The following websites were also helpful:
https://www.the-hawkes.de/monitor-your-raspberrypi-with-monit.html (for setting up monit)
https://mmonit.com/monit/documentation/monit.html
And these two questions are related but don't solve my problem:
https://raspberrypi.stackexchange.com/questions/9938/monitoring-a-python-script-running-in-a-screen-session-with-monit
How to restart the Python script automatically if it is killed or dies
|
That shell wrapper looks like an init script, but apparently it isn't (hence you need to use sudo there; scripts run by init would not require this).
This seems to be a very clumsy way to do this; the shell wrapper does not serve any purpose that could not be better served by the python program itself. Get rid of that; if you want an init script specifically, write a minimal one, but I suggest you move the logic of controlling the daemon from the init script into the daemon (UPSalarm.py) itself.
Since you want only one instance, define a pid file that the process is to use. When UPSalarm.py start is run, it will check for the existence of this file. If it does not exist, it writes its own pid to this file and continues. If it does exist, it gets the pid and then checks with the OS to see if a process with the pid exists and if so, what it is called. This will prove that either UPSalarm.py is already running, or not. If it is, exit with an "Already running" message.
When UPSalarm.py stop is run, a similar sequence is involved -- check for the pid file, if it exists check the pid, if the pid is valid for a process named UPSalarm.py, signal it to stop, presumably with SIGINT. UPSalarm.py itself should implement a signal handler for SIGINT, such that it deletes the pid file before it exits.
I am not a python programmer and this is not a programming site (for that, see Stack Overflow), but I promise all this is easily possible with python.
To get the pid of the current process, use os.getpid().
For mapping a pid to a process name, read /proc/[pid]/cmdline and do a string search for UPSalarm.py (or better yet, the name of process as invoked, which would be sys.argv[0], see here).
For signal handling, start here and here.
To send a signal to another process, use os.kill().
It should be easy to then configure monit to handle this daemon. You also then have the option of instead just using cron (or your own script) to call UPSalarm.py start at intervals, say every 5-10 minutes.
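A minimal sketch of that pidfile logic (the pidfile path, function names, and structure here are all illustrative, not the one true implementation):

```python
import os
import signal
import sys

PIDFILE = "/tmp/UPSalarm.pid"   # illustrative location

def running_pid():
    """Return the PID from the pidfile if that process looks like ours, else None."""
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            cmdline = f.read()
        return pid if b"UPSalarm" in cmdline else None
    except (OSError, ValueError):
        return None               # no pidfile, stale pid, or garbage contents

def start():
    if running_pid():
        sys.exit("Already running")
    with open(PIDFILE, "w") as f:
        f.write(str(os.getpid()))         # os.getpid(): our own PID

    def on_sigint(signum, frame):
        os.remove(PIDFILE)                # clean up the pidfile before exiting
        sys.exit(0)
    signal.signal(signal.SIGINT, on_sigint)
    # ... GPIO monitoring loop would go here ...

def stop():
    pid = running_pid()
    if pid:
        os.kill(pid, signal.SIGINT)       # ask the daemon to clean up and exit

if __name__ == "__main__":
    if sys.argv[1:] == ["start"]:
        start()
    elif sys.argv[1:] == ["stop"]:
        stop()
```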
| Running daemon involving GPIO on Pi |
1,495,746,973,000 |
Say, I have a program, and I want to monitor its filesystem activity (what files/directories are created/modified/deleted etc.) This program may be capable of spawning further processes, and thus, I would like to get the activity of these spawned processes too.
How should I go about doing this?
|
You can use strace for this:
strace -f -e trace=file command args...
strace traces system calls and prints a description of them to standard error as they occur. The -f option tells it to track child processes and threads as well. -e lets you modify the calls it will track: -e trace=file will log every use of open, unlink, etc, but no non-file actions.
If you want to see what was read from and written to files, change it to -e trace=file,read,write instead; you can list out any additional calls you want to examine there as well. If you leave off that argument entirely you get every system call.
The output is like this (I ran mkdir /tmp/test in a traced shell):
[pid 1444] execve("/usr/bin/mkdir", ["mkdir", "/tmp/test4"], [/* 33 vars */]) = 0
[pid 1444] access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
[pid 1444] open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 1444] open("/usr/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
[pid 1444] open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
[pid 1444] mkdir("/tmp/test", 0777) = 0
[pid 1444] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1444, si_status=0, si_utime=0, si_stime=0} ---
You can log to a file instead of the terminal with -o filename, and make the output (even) more verbose with -v. It's also possible to attach to an already-existing process with -p PID, in case that's more useful.
If you're looking to do this programmatically, rather than to inspect yourself, look at the ptrace call, which is what strace is built on.
| Monitoring filesystem activity |
1,495,746,973,000 |
I'm configuring monit on Ubuntu 11.04. In monitrc, the following setting controls the interval at which the monit daemon monitors services...
set daemon 120
Is this a global setting? If I want to check different services, such as permissions on a directory and an http service, how can I configure monit to check directory permissions every week while pinging the http service every 5 minutes?
I understand it's possible to use the -d interval option when executing monit, but according to the documentation, this checks services only once, then exits, without repeating; not helpful for my needs since I need it to continuously execute.
|
You can set the per-test interval in cycles.
See this similar ServerFault.com question for some more information.
ie: if your interval is 300 seconds you could run an http check every cycle while running the weekly check every 2016 cycles.
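For example, a monitrc fragment along those lines (the paths and hostname are placeholders; with set daemon 300, one cycle is 5 minutes, and 604800 / 300 = 2016 cycles is one week):

```
set daemon 300

# checked every cycle (5 minutes)
check host mysite with address example.com
    if failed port 80 protocol http then alert

# checked once a week
check directory webdocs with path /var/www/docs
    every 2016 cycles
    if failed permission 0755 then alert
```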
| Monit daemon interval setting...global or service-level? |
1,495,746,973,000 |
Underneath the Mac OS X directory /audit I have certain files which users can access and chmod to their liking.
I need to audit any chmod done on any of these files by recording the time, user, and file being chmodded, especially the latter.
I can dtrace -n 'syscall::chmod:entry' and detect the events; how do I read the first argument to chmod?
man 2 chmod tells me the path is in the first argument:
chmod(const char *path, mode_t mode);
but how can I read args[0]? I think I am doing this the wrong way around... perhaps entry doesn't correspond to the actual syscall?
If I have a probe I can monitor, how can I check which parameters it offers for access and what types they are? I am assuming some pointers will need to be dereferenced based on their data layout..
|
The argument's in arg0, but that's the caller's userspace address rather than the actual string. You need to wrap it with a copyinstr() as well:
dtrace -n 'syscall::chmod:entry { printf("%d %s", uid, copyinstr(arg0)); }'
| DTrace to trap any chmod applied to certain files |
1,495,746,973,000 |
I have a machine on which files are uploaded by FTP. From this machine I would like to run a cronjob and scp/rsync (simply copy) them to a different machine on the same network.
The problem is I don't want to copy files which are not complete (still in transfer).
Is there a way to check whether a file is complete and only then copy it to the other server?
|
You can use lsyncd:
Lsyncd watches a local directory trees event monitor interface
(inotify or fsevents). It aggregates and combines events for a few
seconds and then spawns one (or more) process(es) to synchronize the
changes. By default this is rsync.
You can specify the time out after which a file has changed is to be synced. Set it to e.g. five times the typical upload time and you're probably fine.
| Operations only on complete files [duplicate] |
1,495,746,973,000 |
Kernel version: 2.6.31-22
I wish to monitor the USB traffic to and from a device. I've searched, but different sites seem to give different information and I'm confused.
Some sites suggest that I need to recompile the kernel, while others suggest that all I need to do is install the latest wireshark. Do I need to recompile?
Can someone suggest a website describing the most recent approach to USB sniffing?
|
You need to recompile the kernel or load a module. It is present in the 2.6.32 (LTS) kernel, and probably 2.6.31 as well; see less /usr/src/linux/Documentation/usb/usbmon.txt. The format is well known, and usbmon acts like a character device. It can dump in text format as well.
As far as the GUI is concerned, Wireshark can show a live stream from the USB monitor and/or read a capture file.
| Monitoring USB traffic |
1,495,746,973,000 |
I need to see the whole HTTP packets sent and received by an application for debugging purposes. How can this be done from the command line?
|
Use tcpdump.
tcpdump -w httpdebug.pcap -i eth0 port 80 will sniff all packets heading to or from port 80 on the eth0 interface and output them to httpdebug.pcap, which you can then read at your leisure: either with tcpdump again (with multiple -x options; refer to the tcpdump manpage) in the console if you're feeling masochistic, or with wireshark.
I really can't recommend the latter highly enough, as it will let you sort out packets and follow the exact stream you want to see.
| How can I see dumps of whole HTTP packets? |
1,495,746,973,000 |
How can I run a script-based tool which will process files continuously downloaded to a given directory as they arrive? I'd like to minimize delay (~1 second is OK); the script can have its own infinite loop.
I know a few ways, like:
autologin user with .bashrc or .profile calling my script
fork script from cron, then ignore if it is already running
use init scripts somehow (I guess it varies between distributions)
What method would work best?
|
Assuming your script is to run under Linux, you can use inotifywait from an init script. You will probably want a recursive search through the entire download tree (option -r). Bear in mind that each node to watch can eat up to 1kB of kernel memory.
The main advantage of inotify is to prevent a costly polling loop. It triggers an event as soon as a file operation takes place in the watched directory tree and consumes non-noticeable CPU resources otherwise.
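A minimal runnable sketch of the idea (one-shot for demonstration; a real daemon would use inotifywait -m to stream events forever, and the directory and handler here are placeholders). The close_write event only fires when the writer closes the file, so partially written files are not picked up:

```shell
#!/bin/sh
command -v inotifywait >/dev/null || { echo "inotifywait not installed"; exit 0; }

WATCHDIR=$(mktemp -d)                           # demo directory
( sleep 1; echo data > "$WATCHDIR/job1"; ) &    # simulate an arriving file

# Block until one file is completely written (or give up after 10 s).
file=$(inotifywait -q -t 10 -e close_write --format '%w%f' -r "$WATCHDIR")
[ -n "$file" ] && echo "processing $file"       # real handler goes here
wait
```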
| How can I start a file-processing daemon? [closed] |
1,495,746,973,000 |
Similar to my last question: Open a text file and let it update itself; is there a way I could do the same but for a folder instead?
As I have a log folder, can I use tail -f with a folder?
i.e.
$ tail -f /tmp/logs/
I know that this won't work, but is there an alternative?
I am using RHEL 5.10
|
Yes, there is an alternative; after a bit of research, I saw that you can use:
$ watch "ls -l"
You need to be in the folder you want to watch.
Also, you can use tail -10 at the end:
$ watch "ls -l | tail -10"
The command runs ls every 2 seconds and filters the output to the last 10 lines.
If you read the reference link, it has some great tips; also, if you can't remember the above command, you can add the following to your .bashrc file:
alias taildir='watch "ls -l | tail -10"'
So you can just type taildir instead of writing the full command out again.
Reference: How to Tail A Directory.
| Open a directory and let it update itself using "tail -f" |
1,495,746,973,000 |
Is there some performance monitoring tool which would run in the background gathering info about all system activity? Sometimes my system (Arch Linux, 32-bit) slows down terribly and the top utility doesn't show anything.
I imagine some daemon which would gather info and log it, so that after the slowdown passes I would be able to find out what the problem was.
|
How about sar?
| performance monitoring |
1,495,746,973,000 |
I'm basically looking for a utility that displays which processes are using how much bandwidth, similar to how top displays which processes use how much resources.
|
NetHogs is the best tool I have found so far that fulfills my need, but sadly needs to be run as root. (via)
| Network monitoring tool |
1,495,746,973,000 |
I would like to monitor a log file for errors and then send an email to administrators.
The log file contains data like below
11 Aug 02:30 Service1 restarted
11 Aug 05:35 Service1 restarted
11 Aug 08:43 Service2 restarted
11 Aug 11:20 Service1 restarted
11 Aug 14:53 Service2 restarted
I would like to create a script which runs every 5 minutes, checks the last occurrence of a service restart, and sends an email.
For example: if the script runs at 02:35, it sees that Service1 restarted, so it will send an email like "Service1 restarted at 02:30". Now when the script runs at 05:45, it should send an email that Service1 restarted at 05:35 only (it should not include the 02:30 restart).
Is there a way to achieve this requirement? I am basically new to Linux and shell scripting.
|
Start by making a five minute crontab:
*/5 * * * * myscript.sh
which runs myscript.sh (in your $HOME directory):
#!/bin/bash
tail -1 /path/to/file.log > /some/dir/after
if ! cmp -s /some/dir/after /some/dir/before
then
cat /some/dir/after | mail -s "restart" [email protected]
cp /some/dir/after /some/dir/before
fi
With the correct values (of course).
Note that this implies there will not be two restarts within five minutes.
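If several restarts within one interval must all be reported, a variant that remembers how far into the log it has already read works better. A sketch: the paths are placeholders, and notify() stands in for the mail -s "restart" [email protected] call so the sketch runs anywhere; the demo line simulates a new log entry.

```shell
#!/bin/sh
LOG=/tmp/service.log
STATE=/tmp/service.offset
notify() { cat; }               # replace with: mail -s "restart" [email protected]

echo "11 Aug 02:30 Service1 restarted" >> "$LOG"    # demo input

off=$(cat "$STATE" 2>/dev/null || echo 0)
size=$(wc -c < "$LOG")
if [ "$size" -gt "$off" ]; then
    tail -c +"$((off + 1))" "$LOG" | notify         # mail only the new lines
    echo "$size" > "$STATE"                          # remember how far we got
fi
```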
| Log monitoring using shell script |
1,495,746,973,000 |
We're moving websites from one server configuration to a new configuration and the websites will live in different paths than previously. We're planning to diligently go through and replace old paths with new paths, but in case we miss any, is there some way to monitor for any processes trying to access the old paths and also know what UID the process was owned by?
|
You can use this little systemtap script:
#!/usr/bin/stap
function proc:string() { return sprintf("PID(%d) UID(%d) PROC(%s)", pid(), uid(), execname()) }
probe syscall.open.return, syscall.stat.return,
syscall.open64.return ?, syscall.stat64.return ? {
filename = user_string($filename)
if ($return < 0) {
printf("failed %s on %s by %s\n", pn(), proc(), filename)
}
}
It will hook the syscalls open and stat (you can copy/paste the code; maybe I forgot some other syscalls) at their return. As syscalls are the only way to communicate with the kernel, you won't miss anything.
This script will produce this kind of output :
failed syscall.stat.return on PID(4203) UID(1000) PROC(bash) by /tmp/rofl
failed syscall.stat.return on PID(4203) UID(1000) PROC(bash) by /tmp/hihi
among the pros of using systemtap, we have:
less intrusive for the process
system-wide (not only the monitored process), but you can restrict its selection directly in the script
less resource-hungry (it only displays failed actions, rather than everything to be grepped afterwards)
you can improve the script to get details about the calling program (e.g. its backtrace, time of call, etc.). It depends on your application.
And for the cons :
not standard, you have to install it (but standard enough to be available on most distributions). On Red Hat & variants: sudo yum install systemtap
you need the debuginfo packages to build the module. On Red Hat & variants: sudo debuginfo-install kernel
Some useful links : The tapset (included functions) index, and a beginners guide
Good luck for your migration !
| Monitor processes trying to access non-existent file or directory |
1,495,746,973,000 |
I want to monitor memory usage for several processes and came up with a command like this:
ps aux |grep -e postgres -e unicorn -e nginx|cut -d' ' -f2|for i in $(xargs); do echo $i; done
16112
16113
...
How can I change the bit after the last pipe to feed arguments into top -p $i, so I get an overall idea of memory consumption for all PIDs? The final command would produce something like top -p<pid1> -p<pid2> and so on.
|
How about something like
pids=( $(pgrep 'postgres|unicorn|nginx') )
to put the PIDs in an array, and then
top "${pids[@]/#/-p}"
to spit them back out into top, prepending each with -p
| monitor multiple pids with top |
1,495,746,973,000 |
I'd like to monitor the network traffic of a specific interface and log it to a file.
Then I would like to stop the interface if the traffic totals over 60 MB.
Is there a way to do that?
|
dumpcap, the low-level traffic capture program of Wireshark, can be instructed to stop capturing after certain conditions with the option -a. You can stop capturing after writing 60MB. This isn't the same thing as measuring traffic, since it depends on the file encoding, but it should be close enough for most purposes (and anyway the exact traffic depends at which protocol level you measure it — Ethernet, IP, TCP, application, …).
dumpcap -i eth0 -a filesize:61440 -w capture.dump
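For the second half of the question, actually stopping the interface, the kernel's own byte counters under /sys can be polled without capturing anything. A sketch (run it from cron or a loop; lo is used here only so the demo works anywhere, so substitute your real interface, and the ip link command it suggests needs root):

```shell
#!/bin/sh
IFACE=${IFACE:-lo}                        # placeholder interface
LIMIT=$((60 * 1024 * 1024))               # 60 MB
STATS=/sys/class/net/$IFACE/statistics

# rx_bytes + tx_bytes are cumulative since the interface came up.
total=$(( $(cat "$STATS/rx_bytes") + $(cat "$STATS/tx_bytes") ))
echo "total so far: $total bytes"

if [ "$total" -gt "$LIMIT" ]; then
    echo "limit exceeded; as root you would now run: ip link set $IFACE down"
fi
```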
| Stop writing to a capture file after it reaches a specific size |
1,495,746,973,000 |
A tool such as this might on the surface appear to serve no real useful purpose, but people who take care of systems like to brag, and uptime is just one of those things they like to brag about, right after how much RAM or how many CPUs their systems have.
Additionally, how many times have you had a system mysteriously reboot, only to find out about it later? A tool such as this would help identify both the frequency of the reboots and how long the system stayed up between them: two potentially useful pieces of information when debugging badly behaving systems.
Is anyone aware of such a tool?
|
uptimed
One such tool that I came across many years ago is called uptimed. The project site is here: http://podgorny.cz/moin/Uptimed.
This is a pretty straightforward install, given uptimed appears to be in most of the major distros' repositories.
Installation
$ sudo yum install uptimed
Once installed, the service needs to be configured so that it starts on boot. The stats of differing uptimes can be seen using the uprecords command.
Example
uprecords
# Uptime | System Boot up
----------------------------+---------------------------------------------------
1 371 days, 06:08:04 | Linux 2.6.18-194.8.1.el5 Fri Jan 13 08:03:18 2012
2 322 days, 13:20:22 | Linux 2.6.18-194.8.1.el5 Wed Feb 23 21:17:19 2011
3 243 days, 13:42:00 | Linux 2.6.18-164.15.1.el Thu Jun 24 21:48:01 2010
4 120 days, 11:08:54 | Linux 2.6.18-194.8.1.el5 Sun Jun 2 08:43:41 2013
5 80 days, 21:27:49 | Linux 2.6.18-128.1.1.el5 Fri Jan 1 16:35:06 2010
6 73 days, 21:47:32 | Linux 2.6.18-194.8.1.el5 Sat Jan 19 13:23:17 2013
-> 7 49 days, 00:12:15 | Linux 2.6.18-194.8.1.el5 Mon Sep 30 19:20:13 2013
8 39 days, 06:12:06 | Linux 2.6.18-194.8.1.el5 Tue Apr 23 06:05:01 2013
9 29 days, 16:18:57 | Linux 2.6.18-92.1.13.el5 Thu Jan 1 00:31:43 2009
10 29 days, 12:41:08 | Linux 2.6.18-92.1.18.el5 Thu Feb 12 02:46:39 2009
----------------------------+---------------------------------------------------
1up in 24 days, 21:35:18 | at Fri Dec 13 19:07:32 2013
no1 in 322 days, 05:55:50 | at Tue Oct 7 04:28:04 2014
collectd
If you're looking for something more graphical then check out collectd. Main project page is here: http://collectd.org/. Again, should be in most major distros' repositories.
Example
Collectd can do way more than just collect uptimes. It has a sophisticated plugin API which has dozens of plugins for collecting data on a variety of services such as MySQL or other system related information.
References
uptimed source tree - in mercurial
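Even without installing anything, most systems already record every boot in wtmp, and /proc/uptime gives the current stretch, so a rough reboot history is available immediately (a sketch; output formats vary by distro):

```shell
#!/bin/sh
# Reboot history from wtmp (if the `last` tool is available)...
command -v last >/dev/null && last reboot | head -5

# ...and the current uptime, straight from the kernel.
awk '{printf "up for %.1f days\n", $1/86400}' /proc/uptime
```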
| Is there a tool for tracking uptimes across reboots? |
1,495,746,973,000 |
I've written a multi-threaded test, and now I want to verify that its peak CPU usage reaches 100 * CPU_NUMBER on the current machine. Is that possible?
UPD 0: I'm talking about Linux system.
|
I think that you're looking for sar. SAR stands for System Activity Report. It's used in unix-like operating systems to report on CPU, memory, and I/O usage, collected by sysstat.
sysstat can then be configured to
Monitor individual processes. Link
How often it collects, and how long sar keeps reports, is decided at first setup.
Just note that such data collection is not "free", so I wouldn't keep it enabled on production servers.
After it is configured, it will be easy for you to extract data from the reports in your script using the sar command, grep and awk.
You didn't specify which OS you are working on, so I encourage you to look up how to set up sar/sysstat on your distro.
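If sar's collection interval is too coarse for a short-lived test, the same numbers can be sampled directly from /proc/<pid>/stat (a sketch, Linux-specific; it watches its own PID here as a demo, and 100% corresponds to one fully busy core, so the maximum is 100 * CPU count):

```python
import os
import time

def cpu_ticks(pid):
    """utime + stime of a process, in clock ticks, from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()   # skip past "(comm)"
    return int(fields[11]) + int(fields[12])          # utime + stime

HZ = os.sysconf("SC_CLK_TCK")
pid = os.getpid()                                     # demo: watch ourselves
peak = 0.0
prev_ticks, prev_t = cpu_ticks(pid), time.time()

for _ in range(3):
    sum(i * i for i in range(200_000))                # some busy work to measure
    ticks, t = cpu_ticks(pid), time.time()
    pct = 100.0 * (ticks - prev_ticks) / HZ / (t - prev_t)
    peak = max(peak, pct)
    prev_ticks, prev_t = ticks, t

print(f"peak: {peak:.1f}% of a core (theoretical max: {100 * os.cpu_count()}%)")
```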
| How to detect process's highest cpu usage during it's life |
1,495,746,973,000 |
I have been looking through the documentation for /proc. The "stack" entry is a newish addition, and I have also looked through the kernel commit that created it, but the documentation does not detail exactly what is in the /proc/self/stack file. I intuitively expected it to be the actual stack of the process, yet the old pstack tool gives a different (and more believable) output.
So as an example of the stack for bash
$ cat /proc/self/stack
[<ffffffff8106f955>] do_wait+0x1c5/0x250
[<ffffffff8106fa83>] sys_wait4+0xa3/0x100
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
and, using pstack
$ pstack $$
#0 0x00000038cfaa664e in waitpid () from /lib64/libc.so.6
#1 0x000000000043ed42 in ?? ()
#2 0x000000000043ffbf in wait_for ()
#3 0x0000000000430bc9 in execute_command_internal ()
#4 0x0000000000430dbe in execute_command ()
#5 0x000000000041d526 in reader_loop ()
#6 0x000000000041ccde in main ()
The addresses are different, and obviously the symbols are not at all the same....
Does anybody have an explanation for the difference and/or a document which describes what is actually shown in /proc/self/stack?
|
The file /proc/$pid/stack shows kernel stacks. On your system, memory addresses of the form ffffffff8xxxxxxx are in the space that's reserved for the kernel. There's not much documentation; you can check the source code. In contrast, the pstack program shows user-space stacks (using its knowledge of executable formats).
| What is the difference between /proc/self/stack and output from pstack? |
1,495,746,973,000 |
Is it possible to automatically run "source .bashrc" every time I edit the bashrc file and save it?
|
One way, as another answer points out, would be to make a function that replaces your editor call to .bashrc with a two-step process that
opens your editor on .bashrc
sources .bashrc
such as:
vibashrc() { vi $HOME/.bashrc; source $HOME/.bashrc; }
This has some shortcomings:
it would require you to remember to type vibashrc every time you wanted the sourcing to happen
it would only happen in your current bash window
it would attempt to source .bashrc regardless of whether you made any changes to it
Another option would be to hook into bash's PROMPT_COMMAND functionality to source .bashrc in any/all bash shells whenever it sees that the .bashrc file has been updated (and just before the next prompt is displayed).
You would add the following code to your .bashrc file (or extend any existing PROMPT_COMMAND functionality with it):
prompt_command() {
# initialize the timestamp, if it isn't already
_bashrc_timestamp=${_bashrc_timestamp:-$(stat -c %Y "$HOME/.bashrc")}
# if it's been modified, test and load it
if [[ $(stat -c %Y "$HOME/.bashrc") -gt $_bashrc_timestamp ]]
then
# only load it if `-n` succeeds ...
if $BASH -n "$HOME/.bashrc" >& /dev/null
then
source "$HOME/.bashrc"
else
printf "Error in $HOME/.bashrc; not sourcing it\n" >&2
fi
# ... but update the timestamp regardless
_bashrc_timestamp=$(stat -c %Y "$HOME/.bashrc")
fi
}
PROMPT_COMMAND='prompt_command'
Then, the next time you log in, bash will load this function and prompt hook, and each time it is about to display a prompt, it will check whether $HOME/.bashrc has been updated. If it has, it runs a quick check for syntax errors (bash's -n option), and if the file is clean, sources it.
It updates the internal timestamp variable regardless of the syntax check, so that it doesn't attempt to load it until the file has been saved/updated again.
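The -n check by itself can be exercised at a prompt: a file missing its closing fi fails the syntax-only check, while a complete one passes (the /tmp file names here are just scratch examples):

```shell
# an unterminated "if" fails the syntax-only check ...
printf 'if true; then echo hi\n' > /tmp/broken.sh
bash -n /tmp/broken.sh 2>/dev/null || echo "syntax error detected"
# ... while the completed file passes
printf 'if true; then echo hi; fi\n' > /tmp/ok.sh
bash -n /tmp/ok.sh && echo "clean"
```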
| How to run "source .bashrc" automatically after I edit and save it? |
1,495,746,973,000 |
I want to create a service that monitors the existence of a certain process. I wrote the following shell script:
#!/bin/sh
while :
do
w=`ps u -A | grep -P ".+Sl.+/usr/local/MySource/endpoint" -c`
sleep 10
if [ $w -lt 2 ]
then
echo 0 > /sys/class/leds/alix\:2/brightness
killall -9 /usr/local/MySource/endpoint
nohup /usr/local/MySource/endpoint &> /dev/null &
last_endpoint_m=`date`
echo $last_endpoint_m > /tmp/endpoint_msleep
echo $w >> /tmp/endpoint_msleep
else
echo 1 > /sys/class/leds/alix\:2/brightness
fi
sleep 10
done
If the process dies, the script switches off a LED on my machine and restarts the process. The LED should be ON while the process is running.
I then run this script by adding the following line to /etc/rc.local:
nohup /usr/local/MyTools/additions/XR50_endpoint_m &> /dev/null &
When I run ps, I find that the XR50_endpoint_m process is there.
My Machine is an ALIX board with limited resources (embedded) running Debian.
The question is:
The variable $w is always zero (I verified this from the output file /tmp/endpoint_msleep), even though the process exists. The script works fine if I run it manually ($w=2)!
What do you think the reason is and what is the best way to monitor a process?
|
It is failing because you run ps u. From man ps:
u Display user-oriented format.
This means that ps will only list processes owned by the current user. When you run the script manually, that user is you so your process is listed correctly.
(As @Gilles very correctly pointed out, the use of -A will cause all processes to be printed so that explanation is just wrong. pgrep is still better though).
Anyway, a better way of doing this would be to use pgrep:
pgrep, pkill - look up or signal processes based on
name and other attributes
Change
w=`ps u -A | grep -P ".+Sl.+/usr/local/MySource/endpoint" -c`
to
w=`pgrep -c endpoint`
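As a quick illustration of pgrep's matching behaviour with a throwaway process (the sleep here is just a stand-in for the real daemon):

```shell
# start a dummy process so there is something to count
sleep 300 & pid=$!
# -c counts matches against the process *name* ...
pgrep -c sleep
# ... while -f matches against the full command line, which helps
# when the binary is started through an interpreter or wrapper
pgrep -cf 'sleep 300'
kill "$pid"
```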
| Monitor a process in Debian |
1,495,746,973,000 |
I have a Monitor connected to my machine through HDMI.
Now if anyone were to switch off the Monitor, through either the Soft Buttons on it, or by removing its Power Cord, I wish to be notified and run a Shell Script.
I tried many ways to identify when a monitor is switched on or off (It's always connected). The only technique that comes close is:
# ddccontrol -p
When the external monitor is connected, this returns all kinds of details about the monitor. I could write a script to parse the output for that. However this seems like an unreliable technique for unsupervised usage.
Is there any way through which I could obtain a Yes/No answer to whether the Monitor is Switched On/Off?
EDIT: It would be preferable if I can get a message on status change. Since this will be running continuously for days, I do not wish to poll for the status of the monitor. Instead in case it is switched off, I would like to be informed through a message.
|
I don't see anything wrong with parsing the output of ddccontrol. DDC is the right way to get the information you want. Unlike with VGA, where DDC was created, the HDMI connector was designed to include DDC from the start. They even went back and modified the DDC standard to add more features for HDMI, calling it E-DDC.
On Linux, the userland tool for accessing DDC info is ddccontrol, so the fact that it doesn't have a flag that makes it do what you want out of the box is no reason to avoid using what's currently provided. If anything, it's an invitation to crack the code open and provide a patch.
Meanwhile, here's a short Perl script to limp by with:
#!/usr/bin/perl
# monitor-on.pl
open(my $CMD, '-|', 'ddccontrol -p') or die "Could not run ddccontrol: $!\n";
local $/ = undef; # slurp command output
my $out = <$CMD>;
if ($out =~ m/> Power control/) {
if ($out =~ m/id=dpms/) {
print "asleep\n";
}
elsif ($out =~ m/id=on/) {
print "on\n";
}
elsif ($out =~ m/id=standby/) {
print "off\n";
}
else {
print "missing?\n";
}
}
else {
# Monitor is either a) not DDC capable; or b) unplugged
print "missing!\n";
}
This script is untested. I don't have any non-headless ("headed"?) Linux boxes to test with here. If it doesn't work, the fix should be obvious.
It could be made smarter. It won't cope with multiple monitors right now, and it's possible its string parsing could be confused, since it doesn't check that the power status strings it searches for are within the > Power control section.
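Since the asker's edit wants a notification on change rather than continuous output, a thin wrapper could poll the script above and act only on transitions. A sketch (monitor-on.pl is the script from this answer; the 10-second interval is arbitrary):

```shell
# Act only when the state differs between two consecutive polls.
report_change() {  # $1 = previous state, $2 = new state
    if [ -n "$1" ] && [ "$1" != "$2" ]; then
        echo "monitor: $1 -> $2"   # hook your shell script here
    fi
}

# Intended use (runs forever, so it is left commented out here):
#   prev=""
#   while :; do
#       state=$(perl monitor-on.pl)
#       report_change "$prev" "$state"
#       prev=$state
#       sleep 10
#   done
```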
| Detect if HDMI Monitor is switched off |
1,500,035,950,000 |
According to Monit link :
No environment variables are used by Monit. However, when Monit executes a start/stop/restart program or an exec action, it will set several environment variables which can be utilised by the executable to get information about the event, which triggered the action.
Is it possible to use those variables on custom actions?
For example, for notification I don't use mail service, then rather custom script which should receive that ENV monit variable and provide output.
This is a basic example to test env variables.
check process dhcp with pidfile "/var/run/dhcpd.pid"
start = "/etc/init.d/isc-dhcp-server start"
stop = "/etc/init.d/isc-dhcp-server stop"
if does not exist program then exec "/bin/echo $MONIT_EVENT > /tmp/monittest"
depends on lan
And when I intentionally make the program fail, like
check process dhcp with pidfile "/var/run/unexisting.pid"
I get no output in /tmp/monittest. Am I doing something wrong?
|
Yes, there be wrongness. The monit exec appears to perform an exec(3) style execution of the given string, and not a system(3) call; this means shell syntax (redirections and whatnot) are not supported as the supplied data is not being run through a shell. Instead, write suitable code that uses the monit environment variables (which will be exported to the code thus execed):
# cat /root/blah
#!/bin/sh
echo "$MONIT_EVENT" > /root/woot
# chmod +x /root/blah
#
And then call that code from the monit configuration:
# tail -2 /etc/monitrc
check process itsdeadjim with pidfile "/nopenopenope"
if does not exist then exec "/root/blah"
#
This populates the /root/woot file for me:
# rm /root/woot
# rcctl restart monit && sleep 10
monit(ok)
monit(ok)
# cat /root/woot
Does not exist
#
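To see which MONIT_* variables actually arrive, the exec'd script can simply dump its environment; this is a throwaway diagnostic, not part of monit itself:

```shell
#!/bin/sh
# Log every MONIT_* variable monit exported to this process;
# "|| true" keeps the exit status clean when none are set.
env | grep '^MONIT_' >> /tmp/monit-env.log || true
```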
| How to use Monit Environment variables? |
1,500,035,950,000 |
I am working on software that communicates with a PCI card through direct memory access (DMA) transactions. My programs use a suite of drivers and a library that handles the DMA. Everything runs on Red Hat Linux.
To test and measure the performance of my programs I would like to trace the start and end of the DMA transactions. Now I do this by looking at a couple of functions in the library:
dma_from_host and dma_to_host that initiate the transactions by configuring the values in the registers of the card and writing 1 to a register called DMA_DESC_ENABLE
dma_wait that waits until the transaction has finished by continuously checking the value of the DMA_DESC_ENABLE register.
But I would like to have a more robust confirmation that a transaction has started and a more precise signal when the transaction has ended. Something from Linux or hardware itself would be the best.
I understand that in principle it is a cumbersome situation. The idea of DMA is that the hardware (the PCI card or the DMA controller on the motherboard) copies things directly into the memory of the process, bypassing the CPU and the OS. But I hope that it does not just copy things into RAM without notifying the CPU somehow. Are there some standard ways to trace these transactions or it is very platform-specific?
Are there some special interrupts that notify the CPU about the start and end of the DMA? I could not spot anything like that in the drivers that I use. But I am not experienced with drivers, so I could have easily looked at wrong places.
Another idea, are there any PMU-like hardware monitors that could provide this information? Something that just counts transactions on PCI lanes?
Also an idea, do I understand right that one could write a custom DMA-tracer as a Linux module or a BPF program that would continuously check the value of that DMA_DESC_ENABLE register? Is this a viable approach? Are there known tracers like that?
|
Encouraged by the comment from @dirkt, I looked better at the drivers and found the PCI MSI interrupts that correspond to these DMA transactions.
The driver enables these interrupts with a call
pci_enable_msix(.., msixTable,..)
that sets up the struct msix_entry msixTable[MAXMSIX]. Then it assigns them to the handler static irqreturn_t irqHandler() by calling request_irq() in a loop:
request_irq(msixTable[interrupt].vector, irqHandler, 0, devName,...)
The handler just counts the interrupts in a local int array. These counters are exported in the /proc/<devName> file that this driver creates for diagnostics etc. In fact, the proc file is from where I started the search for the interrupts.
But there is a better way: the /proc/interrupts file. The enabled MSI-X interrupts show up there in lines like these:
$ cat /proc/interrupts
CPU0 CPU1 ... CPU5 CPU6 CPU7
66: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
67: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
68: 33 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
69: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
70: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
71: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
72: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
73: 0 0 ... 0 0 0 IR-PCI-MSI-edge <devName>
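For a quick numeric check without a tracer, the per-CPU columns of one /proc/interrupts line can be summed with a small awk helper and sampled twice. IRQ 68 is just this machine's example number; the ${...:-0} defaults keep the arithmetic safe if the line is absent, and the optional second argument only exists to make the helper testable on a sample file:

```shell
# Sum the per-CPU counters for a given IRQ line.
irq_count() {
    awk -v irq="$1:" '$1 == irq {
        s = 0
        for (f = 2; f <= NF && $f ~ /^[0-9]+$/; f++) s += $f
        print s
    }' "${2:-/proc/interrupts}"
}

before=$(irq_count 68); before=${before:-0}
sleep 1
after=$(irq_count 68); after=${after:-0}
echo "IRQ 68 fired $((after - before)) time(s) in 1s"
```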
And one more way is to find the PCI address of the card in the lspci output and to check the interrupts assigned to the card in the /sys directory:
$ ls /sys/bus/pci/devices/0000:17:00.0/msi_irqs
66 67 68 69 70 71 72 73
# but these are empty
$ cat /sys/bus/pci/devices/0000:17:00.0/irq
0
The interrupt number 68 fires at the end of the transactions. Interrupt handlers have a static tracepoint, irq:irq_handler_entry, in Linux. The tracepoint parameters in /sys/kernel/debug/tracing/events/irq/irq_handler_entry/format include the interrupt number in the int irq field. Hence, this interrupt can be traced with the standard Linux facilities via this tracepoint with a filter condition:
# setup the ftrace
trace-cmd start -e irq:irq_handler_entry -f "irq == 68"
# for live stream
cat /sys/kernel/debug/tracing/trace_pipe
# or just
trace-cmd stop
trace-cmd show
trace-cmd reset
# with perf
perf record -e "irq:irq_handler_entry" --filter "irq == 68"
What's good is that you get a timestamp of the interrupt. For example:
$ sudo trace-cmd start -e irq:irq_handler_entry -f "irq == 99"
$ sudo trace-cmd stop
$ sudo trace-cmd show | head -n 20
# tracer: nop
#
# entries-in-buffer/entries-written: 860/860 #P:12
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
<idle>-0 [009] d.H. 6090.224339: irq_handler_entry: irq=99 name=xhci_hcd
...
With this, one thing still worth confirming is that these interrupts are essential to the DMA, to be sure that I monitor something relevant to the mechanism rather than just a handy counter for the proc file that might not be implemented in another setup. But I could not spot any other relevant interrupts by watching how they increment in /proc/interrupts. There are interrupts for the devices dmar[0123] that seem like something about DMA, but they have never incremented. And that is to be expected, as in this case the DMA engine must be implemented as an FPGA core in the PCI card itself.
Moreover, interrupts of course do not give access to information about the transaction itself, like the amount of memory transferred. And you need to be sure that there is no bug in the card that could suppress the interrupts, and that they fire without delay after the transaction completes.
| How to trace DMA? |