1,500,035,950,000
I want to check which process is doing the most I/O. To be exact, I want to check which process is doing the most write operations, and how much. I know there are tools like iotop, but since I have to work without sudo, in a foreign environment with very limited privileges, I want to know how I can achieve this with built-in tools like ps. I want something like the following, which I use to find CPU/memory usage:

$ ps -eo pid,command,%cpu,%mem --sort=-%cpu

Update: After trying several ways, I found that I can't read the /proc/[pid]/io files due to lack of privileges, so I guess there is no way to get I/O without the proper permissions:

$ cd /proc/; for i in $(ls | egrep -o '^[0-9]+'); do cat $i/io; done
cat: 1/io: Permission denied
cat: 10/io: Permission denied
cat: 10284/io: Permission denied
cat: 11/io: Permission denied
cat: 1174/io: Permission denied
cat: 12/io: Permission denied
........
The problem is that, as an ordinary user, you do not have access to this information for other users' processes.
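While other users' io files are off-limits, a process's own counters are readable, which at least lets you instrument jobs you start yourself (assuming the kernel was built with I/O accounting enabled, which is typical):

```shell
# The read_bytes / write_bytes counters for the current shell itself --
# readable without any extra privilege, unlike other users' /proc/<pid>/io.
cat /proc/self/io
```

The same file under /proc/&lt;pid&gt; works for any process you own.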
Disk I/O per Process
I would like to know how to monitor TCP traffic between my localhost and a given IP address, keeping a record of the activity in a file. I tried iftop and tcptrack, but I cannot log the activity to a file. These tools don't target a specific IP address; they only monitor an interface:

iftop -i eth2 -f "dst port 22"

I tried to put the IP address in place of dst, but it doesn't work. The idea is to detect any suspect traffic. Thanks for the help.
As @blametheadmin mentioned in a comment, you can use tshark. Another option is tcpdump:

$ tcpdump -w trace.out host <hostname-or-ip>

Then later, you can examine that trace with:

$ tcpdump -r trace.out
How to monitor TCP traffic between my localhost and an IP address
I am wondering if someone might point me in the right direction. I have little experience working with the Linux command line, and recently, due to various factors at work, I've been required to gain knowledge. Basically I have two PHP scripts that reside in a directory on my server. For the purposes of the application, these scripts must be running continuously. Currently I do that this way:

nohup sh -c 'while true; do php get_tweets.php; done' >/dev/null &

and

nohup sh -c 'while true; do php parse_tweets.php; done' >/dev/null &

However, I've noticed that despite the infinite loop the scripts stop periodically, and I'm forced to restart them. I'm not sure why, but they do. That has made me look into the prospect of a cron job that checks whether they are running and, if not, runs/restarts them. Would anyone be able to provide me with some information on how to go about this?
I'd like to expand on Davidann's answer, since you are new to the concept of a cron job. Every UNIX or Linux system has a crontab stored somewhere. The crontab is a plain text file. Consider the following (from the Gentoo wiki on cron):

#Mins  Hours  Days  Months  Day of the week
10     3      1     1       *   /bin/echo "I don't really like cron"
30     16     *     1,2     *   /bin/echo "I like cron a little"
*      *      *     1-12/2  *   /bin/echo "I really like cron"

This crontab will echo "I really like cron" every minute of every hour of every day, every other month. Obviously you would only do that if you really liked cron. It will also echo "I like cron a little" at 16:30 every day in January and February, and "I don't really like cron" at 3:10 on January 1st.

Being new to cron, you probably want a comment naming the columns so that you know what each column is used for. Every cron implementation that I know of has always used this order.

Now, merging Davidann's answer with my commented file:

#Mins  Hours  Days  Months  Day of week
*      *      *     *       *   lockfile -r 0 /tmp/the.lock && php parse_tweets.php; rm -f /tmp/the.lock

A * in every column means: every minute of every hour of every day of every month, all week long — that is, every minute, all year long. As Davidann states, using a lockfile ensures that only one copy of the PHP interpreter runs; php parse_tweets.php is the command that "runs" the file, and the last bit of the line deletes the lock file to get ready for the next run. I don't like deleting a file every minute, but if this is the behavior you need, it is very acceptable; writing and rewriting to disk is just personal preference.
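Since the goal is a cron job that restarts the script only when it is not already running, the check itself can be sketched like this (the names and paths are placeholders; here a sleep stands in for the PHP process so the snippet is self-contained):

```shell
#!/bin/sh
# Start a stand-in long-running job (in real use: nohup php parse_tweets.php &).
sleep 60 &
pid=$!

# The cron job's check: is a matching process alive? If not, (re)start it.
if pgrep -f 'sleep 60' >/dev/null; then
    echo "already running, nothing to do"
else
    echo "not running, restarting"   # real job: nohup php parse_tweets.php &
fi

kill "$pid"    # clean up the stand-in
```

The `if` line is what you would put in the crontab, e.g. as `pgrep -f parse_tweets.php >/dev/null || nohup php /path/to/parse_tweets.php &`.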
Cron job to check if PHP script is running, if not then run
If I run iostat -x 1, I occasionally see large 5MB to 10MB writes. What files are being written? I want to check recently created files with a size over 5MB, for example. How would I do so?
Find files modified within X minutes under /path:

find /path -cmin -X

Sign before the minutes:
+   more than X minutes / over X minutes
-   less than X minutes / within X minutes
(no sign)   exactly X minutes

Example: find all files in /var/log (including sub-directories) modified within the last 30 minutes:

find /var/log -cmin -30

Find files with size bigger than X under /path:

find /path -size +X<unit>

Sign before the size:
+   larger than
-   less than
(no sign)   exactly

<unit>:
b = block (default, 512 bytes)
c = byte
w = word (2 bytes)
k = kbyte
M = Mbyte
G = Gbyte

Example: find all files in /var/log (including sub-directories) bigger than 50k:

find /var/log -size +50k

Combined example: find all files in /var/log (including sub-directories) bigger than 50k, modified within the last 30 minutes:

find /var/log -cmin -30 -size +50k

If you want to include exactly-50k files in your result, change it to:

find /var/log -cmin -30 -size +49k

PS: Avoid doing find / ..... — not only will it take a long time, it also includes directories (/dev, /sys, /proc, ...) generally not suitable for this kind of search.
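To see the combination in action, here is a self-contained demo using a throwaway temporary directory instead of /var/log:

```shell
#!/bin/sh
# Create a temporary directory with one 60k file (just created, so -cmin -30 matches).
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big" bs=1k count=60 2>/dev/null

# Files modified within the last 30 minutes AND bigger than 50k:
find "$dir" -cmin -30 -size +50k

rm -rf "$dir"
```

This prints the path of the `big` file, since it satisfies both the age and the size condition.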
How to know recently updated files
When the HDD indicator is blinking (for a long period), how could I know which process is taking most disk bandwidth?
Use iotop. Iotop is a Python program with a top-like UI that shows on behalf of which process the I/O is happening. It requires Python ≥ 2.5 (or Python ≥ 2.4 with the ctypes module) and a Linux kernel ≥ 2.6.20 with the TASK_DELAY_ACCT, CONFIG_TASKSTATS, TASK_IO_ACCOUNTING and CONFIG_VM_EVENT_COUNTERS options enabled.
Determine which process is taking most of disk bandwidth?
In a networked environment, such as a SOHO, are there any tools that can monitor network bandwidth usage by each computer's IP or MAC address? That way we could know which user has the highest bandwidth usage. If possible, the tool should also produce per-computer usage statistics by IP or MAC address.
iftop for a top-like interface giving an instant view.
ntop for a lot of statistics, with a web interface.
argus — ditto, with a CLI interface.
See also iptstate on Linux to get the information tracked by the connection tracker.
Tools that monitor the network bandwidth based on IP?
Currently I need to have a program running all the time, but when the server is rebooted I have to run the program again manually, and sometimes I'm not available when that happens. I can't use a normal init configuration to restart my program when the server starts, because I don't have root access and the administrator doesn't want to install one.
I posted this on a similar question. If you have a cron daemon, one of the predefined cron time hooks is @reboot, which naturally runs when the system starts. Run crontab -e to edit your crontab file, and add the line:

@reboot /your/command/here

I'm told this isn't defined for all cron daemons, so you'll have to check whether it works on your particular one.
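As a sketch, @reboot can also be combined with a periodic liveness check in the same crontab, so the program is both started at boot and respawned if it dies (the program name and path here are placeholders):

```
# start at boot
@reboot /home/user/bin/myprog
# every 5 minutes, restart it if it is no longer running
*/5 * * * * pgrep -f myprog >/dev/null || /home/user/bin/myprog
```

Both entries go in your own crontab (crontab -e), so no root access is needed.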
How to ensure a program is always running, without root access?
I have a Linux server that apparently has two Intel Xeon X5670 CPUs. /proc/cpuinfo shows 12 CPUs, but dmidecode shows only one CPU, and the other one has Unpopulated status, which makes it look as if the second "CPU" is really just hyper-threading. My server is an HP ProLiant DL380 G7 and it can take up to two CPUs. My question is whether my server has one or two physical CPUs, or whether there is a setting powering down the second CPU so that it shows as Unpopulated in the socket.

root@linux:~ # cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
processor : 8
processor : 9
processor : 10
processor : 11

root@linux:~ # dmidecode --type processor | egrep "Version|Family|Manufacturer|Socket|Status"
Socket Designation: Proc 1
Family: Xeon
Manufacturer: Intel
Signature: Type 0, Family 6, Model 44, Stepping 2
Version: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
Status: Populated, Enabled
Upgrade: Socket LGA1366
Socket Designation: Proc 2
Family: Xeon
Manufacturer: Intel
Signature: Type 0, Family 0, Model 0, Stepping 0
Version:
Status: Unpopulated
Upgrade: Socket LGA1366
If I read the datasheet correctly, you have one socket filled with a six-core CPU, which shows as 12 processors because of hyper-threading. (Also, /proc/cpuinfo should tell you the processor and physical id fields. The two halves of a hyper-threaded core share the same physical id.) This seems like a good read on the matter.
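If dmidecode and /proc/cpuinfo seem to disagree, a quick cross-check (assuming a Linux /proc/cpuinfo with the usual fields) is:

```shell
# Logical CPUs vs distinct physical packages, as reported by the kernel.
echo "logical CPUs: $(grep -c '^processor' /proc/cpuinfo)"
echo "physical packages: $(grep '^physical id' /proc/cpuinfo | sort -u | wc -l)"
```

On a single-socket six-core X5670 with hyper-threading you would expect 12 logical CPUs but only 1 physical package.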
How to identify if there is some power saving setting that powered down a processor in Linux
Sometimes, I have a rogue Java process which takes up 100% of my CPU and makes it jump about 30°C in temperature (usually resulting in a crash if not killed). The problem is, I can never really identify it (it's got a long list of parameters and stuff) or analyze it, because I have to kill it so quickly. Is there some sort of log I can look at to see the identity of past processes I have killed? If not, is there a way for me to catch that process the next time it shows up? If it matters, I'm on openSUSE 11.4.
No, not by default. There is such a thing as too much logging (especially when you start risking logging the action of writing a log entry…).

BSD process accounting (if you have it, run lastcomm), if active, records the name of every command that is executed and some basic statistics, but not the arguments. The audit subsystem is more general and more flexible. Install the audit package and read the SuSE audit guide (mostly the part about rules), or try:

auditctl -A exit,always -F path=/usr/bin/java -S execve

Or: instead of killing it, kill -STOP it. STOP suspends the process, no questions asked. You get the option to resume (kill -CONT) or terminate (kill -KILL) later. As long as the process is still around, you can inspect its command line (/proc/12345/cmdline), its memory map (/proc/12345/maps) and so on.

Or: attach a debugger to the process and pause it. It's as simple as gdb --pid 12345 (there may be better options for a Java process); attaching a debugger immediately pauses the process (if you exit the debugger, the process receives a SIGCONT and resumes).

Note that all this only catches OS-level processes, not JVM threads. You need to turn to JVM features to debug threads.
Is there a log of past threads that are now closed?
Is there a command that relaunches an application once it finishes, from the command line? Letting you do something like:

> relaunch python myapp.py

If not, what's my best option? I know I could cron it, but I'd be more interested in something I could just execute from the terminal and that restarts the app at once. I'm on Debian, if that matters.
You can try a simple infinite loop:

while true; do
    python myapp.py
done

Edit: the above is just a simple, generic example. Most probably, modifications are needed to take exit codes etc. into account. For example, to keep relaunching the app only as long as it exits with an error:

until python myapp.py; do
    echo "exited with error $?, restarting" >&2
    sleep 1
done
Relaunch application once finished
I am looking for a convenient way to monitor my Ubuntu server from my laptop (running Ubuntu Desktop). I understand this is easily achievable by SSHing to the server and running commands, but I'd like to do it on the fly, without going through the process. I'd like to see load graphs, processes, users, etc... just by opening a window. A good example would be MySQL Workbench, it allows to connect via SSH to the server and watch what's happening with MySQL in realtime, graphs provide a nice 'feel' of the loads. Is anyone familiar with a Linux software that does that? Or an alternative solution perhaps? UPDATE: Just to clarify, the monitoring I am looking for is a really basic one, similar to what you would get in your task manager. Just a general graph of CPU, RAM and network and a list of processes. For convenience I am looking for something that is installed and connects to the server (SSH preferably), not a web interface. I'd like to be able to take a look at it on the fly during a lecture and such.
You can SSH to the server and run any console or GUI utility. To run a GUI program, for example gnome-system-monitor, you need to ssh with the -X option:

-X      Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file.

For instance:

ssh -X user@server

and then:

gnome-system-monitor

It shows the following info: Processes, Resources, File Systems.

You can also install and run lots of other programs. For example, Glances and Pysensors are described well in this closed thread: https://askubuntu.com/questions/293426/system-monitoring-tools-for-ubuntu

You may also consider these applications to monitor bunches of things on your server (read more on each): wmctrl, iotop, bum, smartmontools, top, hardinfo
Remote monitoring Linux server from Ubuntu workstation via SSH [closed]
Is there an equivalent to the amazing systat command in Linux-based operating systems? For those who don't know about it, the BSD's systat command is just amazing. It displays live graphs of network traffic, I/O, ICMP, IP, TCP, network sockets (like netstat), swap usage and so on. But the most amazing of all, is the -vmstat display. I'll paste a snapshot of the live display here: 2 users Load 0.10 0.12 0.13 Apr 30 22:50 Mem:KB REAL VIRTUAL VN PAGER SWAP PAGER Tot Share Tot Share Free in out in out Act 79096 5336 210828 9572 112208 count 5 All 144196 16988 2355132 30104 pages 19 Proc: Interrupts r p d s w Csw Trp Sys Int Sof Flt 535 cow 1313 total 2 58 2923 1665 2493 1313 999 1094 299 zfod 999 clk irq0 16 ozfod uart0 irq4 20.0%Sys 3.7%Intr 29.7%User 0.0%Nice 46.6%Idle 5%ozfod 101 vr1 irq5 | | | | | | | | | | | daefr irq7: ==========++>>>>>>>>>>>>>>> 487 prcfr stray irq7 38 dtbuf 786 totfr 128 rtc irq8 Namei Name-cache Dir-cache 35088 desvn 1 react vr2 irq9 Calls hits % hits % 31092 numvn pdwak 52 vr0 irq11 3254 3238 100 8647 frevn pdpgs 27 vr3 irq12 intrn 6 ata0 irq14 Disks ad0 86200 wire ata1 ohci0 KB/t 14.90 89816 act tps 6 209168 inact MB/s 0.08 56 cache %busy 7 112152 free The manpage goes through great lengths to explain all the different parts of this arguably "crowded" display but what I quite miss in Linux about this are: the interrupt-per-second summary (on the right) - sure i can watch -n 1 cat /proc/interrupts, but it's hard to tell what's really going on there... the disk usage (on the bottom left) - just plain and simple MB/s and how busy the disk is (in percentage!) 
Before you answer, understand that I know very well:

top — pales in comparison: only looks at some of those aspects, in too broad strokes
vmstat — a classic, but is more useful for drawing trends over time than figuring out "what's going on right now"
iftop — useful for diagnosing network bottlenecks, but that's it
iotop — same, for I/O
dstat — interesting, but doesn't have the same per-interrupt granularity

I could mention a lot more of those: basically, I am not aware of a single tool that shows that much of a complete snapshot of the state of a machine in a single 24x80 terminal screen, on any Linux-based distribution. Please prove me wrong. :)
Someone just pointed me to Glances, and while it still doesn't replace systat, it's still pretty awesome. It collects the outputs of top, free, and disk and network I/O, and shows disk space usage, among other things. It can also run in client/server mode, both through a web interface and a dedicated remote command-line client. It can also export data points to other systems like StatsD, RabbitMQ and much more. Quite interesting. What still seems to be missing compared to systat:

VM/swap page in/out
interrupt usage
disk % usage
and more FreeBSD-specific counters

At this point, I am not sure all those other counters are necessary, but it would be great to have the first three.
Is there an equivalent to systat in Linux?
I monitor a value from the /proc/meminfo file, namely the MemTotal: number. It changes if a RAM module breaks, roughly by the size of the memory module — this is obvious. I know the definition of the field from the kernel documentation:

MemTotal: Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code)

The dmesg output also lists kernel data. What other particular actions can make the MemTotal number change, hardware failure of a memory module aside? This happens on both physical and virtual systems. I monitor hundreds of physical and thousands of virtual systems. Although the change is rather rare, it does happen.
I was not comfortable with assuming a bug in the kernel or a module, so I dug further and found out... that MemTotal can indeed change regularly, downwards or upwards. It is not a constant, and this value is definitely modified by kernel code in many places, under various circumstances. E.g. the virtio_balloon kernel module can decrease MemTotal as well as increase it back again. And of course mm/memory_hotplug.c exports [add|remove]_memory, both of which are used by a lot of drivers too.
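Given that MemTotal is legitimately variable, a trivial way to catch the changes across a fleet is to log it with a timestamp (e.g. from cron) and diff the samples over time — a minimal sketch:

```shell
#!/bin/sh
# Append one timestamped MemTotal sample per run; correlate later with
# ballooning / memory-hotplug events in dmesg.
echo "$(date '+%F %T') $(grep '^MemTotal' /proc/meminfo)"
```

Redirect the output to a per-host log file and any change in the second column marks a ballooning or hotplug event worth investigating.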
Why does MemTotal in /proc/meminfo change?
I've got an AMDGPU on Linux and want to be able to see which processes are utilising my precious 4GB of VRAM I need for gaming. I'd like this to be presented in a similar manner to top listing all processes utilising VRAM by usage. radeontop only shows total VRAM usage.
One tool to accomplish this task is https://gitlab.freedesktop.org/tomstdenis/umr

sudo umr -t

will start it in a top-like view. You can then hit v to see per-process VRAM information.
How can I list AMDGPU VRAM usage by process?
I have set up an ELK server in a testing environment. I intend to send log messages from different clients to ELK, but first I want to test it from localhost to verify it is running properly. Previously I directly used a Python library to interact with Elasticsearch (since there was a problem using urllib2: 400 bad request), but this time I want to send the message to Logstash and let Logstash deal with it before it goes to Elasticsearch. I used netcat, but there is some problem with the port number:

echo "access denied" | nc localhost 5514
Ncat: Connection refused.

It seems like there is nothing on this port. The logstash service is running.
You could use logger with the -n and -P switches to send to a network destination on port 5514 (without a destination option such as -n, logger writes to the local syslog socket and -P has no effect). Check man logger for other suitable switches, e.g. -t for a tag or -T for TCP.

echo "access denied" | logger -t myservice -n localhost -P 5514

To check whether port 5514 is currently associated with logstash, run lsof -i :5514, or check the logstash startup logs (meta!). Are you certain your logstash is using that particular port?
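Given the "Connection refused" from nc, it is worth confirming that anything is listening on the port at all before sending messages. A sketch using ss from iproute2 (port 5514 assumed, as in the question):

```shell
# List TCP sockets listening on port 5514; an empty result (header only)
# means nothing is bound there, which matches the "Connection refused" error.
ss -ltn 'sport = :5514'
```

If this comes back empty, the fix is on the logstash input configuration side, not on the sending side.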
How to pipe a sample log message manually to logstash for processing
We have ten servers in two groups, and there is a lot of traffic between those groups — too much, we think. We know this from our provider's stats, but we don't know how this traffic is made up, or which servers cause it. There is no cluster or failover or anything — each server stands on its own. I would like to know which server(s) cause this traffic, but don't know how I can monitor this. What program or service can do this? Preferably it would monitor all local network traffic in the 10.x.x.x IP range.
I use a tool called IPTraf for quick monitoring of network connections. It runs in the terminal and can give you diverse stats on the current connection status of the machine it's installed on. You can also get a breakdown based on the services (or ports) used, which can be useful if your server only serves a particular function. The downside is that you need to install it on each host. I'm no network expert, but if you need a wider view of all traffic going between your sites, you would do better to check at the switch/router level with appropriate software.
Lots of traffic between servers - where does it come from?
I need to collect data about disk utilization for selected disks. I can use the glance-plus monitoring tool to display the current data in percent (it looks similar to top), but I need to collect these values into a file so that I can create graphs from it. Unfortunately this isn't possible in glance, so I wanted to create my own script for this purpose. I managed to create a script which collects the number of blocks read/written per second, but I don't know how I could easily convert this to a percentage, because I don't really know what the maximum utilization could be. The script is below:

#!/bin/sh
list=`iostat 10 2 | grep -v ' 0' | grep -v 'device' | grep -vE '^ *$' | sed 's/^........ *//' | sed 's/ .*//'`
value=0
for rt in `echo $list`
do
    value=`expr $rt + $value`
done
echo `expr $value / 10`

Is there any easier way to do this on HP-UX, preferably using some free/default tools?
Note that glance can be scripted:

# cat /opt/perf/examples/adviser/disk_sar
#The following glance adviser disk loop shows disk activity comparable
#to sar -d data.
#Note that values will differ between sar and glance because of differing
#data sources, calculation methods, and collection intervals.
headersprinted = 0
# For each disk, if there was activity, print a summary:
disk loop {
  if BYDSK_PHYS_IO_RATE > 0 then {
    # print headers if this is the first active disk found this interval:
    if headersprinted == 0 then {
      print "-------- device %util queue r+w/s KB/s msecs-avserv"
      headersprinted = 1
    }
    print GBL_STATTIME, " ", BYDSK_DEVNAME|15, BYDSK_UTIL|7|2,
      BYDSK_REQUEST_QUEUE|8|2, BYDSK_PHYS_IO_RATE|8|0,
      BYDSK_PHYS_BYTE_RATE|8|0, BYDSK_AVG_SERVICE_TIME|16|2
  }
}
if headersprinted == 0 then
  print GBL_STATTIME, " (no disk activity this interval)"

To use that script:

glance -aos /opt/perf/examples/adviser/disk_sar -j 5

Here BYDSK_UTIL is the % of time the disk is busy during the collection interval. Read /opt/perf/paperdocs/gp/C/gp-metrics.txt and /opt/perf/paperdocs/ovpa/C/methp.txt to see the available metrics.

If you prefer other tools, you can use sar (present by default on HP-UX) with egrep -f filters to filter on your disks. For instance (the awk is there to put a timestamp on each disk line):

sar -d 5 10 | awk '/^[0-9]/ {t=$1} {sub("^........",t,$0); print }' | egrep -f myfilter
11:56:15 device   %busy  avque  r+w/s  blks/s  avwait  avserv
11:57:17 disk1680 23.76  0.50   200    3200    0.00    1.19
11:57:17 disk1689 0.99   0.50   1      507     0.00    5.45
11:57:17 disk1694 41.58  0.50   237    3786    0.00    1.75
11:57:17 disk1696 0.00   0.50   1      16      0.00    2.07
11:57:17 disk1707 0.99   0.50   1      16      0.00    5.82
11:57:17 disk1709 4.95   0.50   2      2044    0.00    24.10
11:57:17 disk1712 3.96   0.50   2      1980    0.00    23.69
...

With myfilter containing the disks you want to watch:

# cat myfilter
disk1680
disk1689
...

Add a blank character " " after each disk name, otherwise disk1 would also match disk10.
How can I retrieve disk I/O utilization in percent on HP-UX
The main server at my company has recently been having a lot of downtime. For reasons that neither I nor the other admins can determine, it has random (VERY sudden) explosions in memory. It becomes unresponsive because it exhausts all the memory, and then we have to reboot it. Very annoying. It's a Debian system, we haven't upgraded to Squeeze or anything, it's been perfectly stable for a long time. The problem is that the logs are totally useless. They don't seem to indicate that anything is going wrong. I'm guessing that some process is buggy and hogging all of the memory, but I have NO way of proving that at the moment. Remote logging is no help, because it's not complaining about anything -- it thinks everything is peachy. So my question is: how would you approach this problem? Any insight is appreciated. Thanks.
atop is pretty good at monitoring and logging resource usage. It can be used interactively or as a service; the Debian package sets it up to log to /var/log/atop.log every ten minutes (edit /etc/init.d/atop for something more precise). You can then replay the logs with atop -r /var/log/atop.log -b hh:mm -mM; the m and M flags select a view and a sort appropriate for memory problems, and hh:mm should be a few minutes before the incident. Use t and T to navigate forwards and backwards, and also try the A sort.
Unpredictable memory explosions
I do a lot of work in the cloud running statistical models that take up a lot of memory, usually on Ubuntu 18.04. One big headache for me is when I set up a model to run for several hours or overnight, and I check on it later to find that the process was killed. After doing some research, it seems this is due to something called the Out Of Memory (OOM) killer. I would like to know as soon as the OOM killer kills one of my processes, so I don't spend a whole night paying for a cloud VM that is not even running anything. OOM events are logged in /var/log/, so I suppose I could write a cron job that periodically looks for new messages there, but this seems like a kludge. Is there any way to set up the OOM killer so that after it kills a process, it runs a shell script that I can configure to send me notifications?
You can ask the kernel to panic on OOM:

sysctl vm.panic_on_oom=1

or, to persist across reboots:

echo "vm.panic_on_oom=1" >> /etc/sysctl.conf

You can adjust a process's likelihood of being killed, but presumably you have already removed most other processes, so this may not be of use. See man 5 proc for /proc/[pid]/oom_score_adj.

Of course, you can test the exit code of your program. If it is 137, it was killed by SIGKILL, which is what an OOM kill would produce.

If you are using rsyslogd, you can match the OOM message (I don't know exactly what shape that has) in the data stream and run a program:

:msg, contains, "oom killer..." ^/bin/myprogram
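For the "test the exit code" route, here is a self-contained sketch; the inner sh -c simulates a job that gets SIGKILLed, as the OOM killer would do, and the notification command is a placeholder:

```shell
#!/bin/sh
# Run the workload, then check its exit status: 137 = 128 + 9, i.e. the
# process died from SIGKILL -- the signal the OOM killer sends.
sh -c 'kill -KILL $$'   # stand-in for: python3 run_model.py
status=$?
if [ "$status" -eq 137 ]; then
    # replace with a real notification: mail, or curl to a webhook, ...
    echo "job was SIGKILLed (likely OOM)"
fi
```

Wrapping the model run this way means the notification fires the moment the job dies, with no log polling needed.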
Trigger a script when OOM Killer kills a process
I am trying to monitor the CPU, RAM and computing time of a process, and all the child processes that it generates, using the Linux top command. I found that I could store the output of the top command using this syntax:

$ top -b > top.txt

I am then parsing the results with a Python script, but I am having trouble identifying the specific process that I am monitoring and its child processes. I found that I could add the PPID field in top by typing f while top is running, but this won't work in batch mode with the -b option. Is there a way to display the PPID and store the output of the top command, so that I can find the processes I am interested in when parsing the results? My specific question is about including the PPID in the output file when using top in batch mode, but if you have a better suggestion for monitoring the CPU, RAM and computing time of a process, it would also be welcome.
After adding the PPID (or any other field) in the interactive top display, you only need to save the configuration using W (uppercase w). Then exit (q) and use top -b: it will include and show the fields according to the changes you made to top interactively.
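If you would rather avoid the saved-configuration step entirely, ps can print the PPID directly; note this is a point-in-time snapshot rather than top's continuous sampling:

```shell
# PID, parent PID, CPU/memory usage and elapsed time, sorted by memory use.
ps -eo pid,ppid,%cpu,%mem,etime,comm --sort=-%mem | head -5
```

Run from a loop (or watch), this gives a parse-friendly table where parent/child relationships can be reconstructed from the pid/ppid columns.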
Display PPID and output to file using top
I have a port 0x80 BIOS POST debug card installed in a PCI slot. I want to use it purposefully after booting, by having the CPU temperature displayed on the card. The port takes one byte, which is displayed in hex. How do I convert the two-digit decimal Celsius temperature values into a single byte for writing to the card? Remember, the display is hex, so the byte needs to be chosen so that its hex rendering is readable as the base-10 value — although just getting the byte written would be helpful at this point. Googling is driving me nuts. E.g.

echo d | dd of=/dev/port bs=1 count=1 seek=128

gives a display of 64, the ASCII code of the letter d.

cat /sys/class/hwmon/hwmon0/temp2_input | cut -c1-2

gives the CPU temperature in °C as two bytes of ASCII: 58

A bash command string would be preferable, as it could be called from a cron job or a systemd timer. Thanks!!!
You want to convert the temperature, read as a decimal value, into a character corresponding to the hexadecimal value which, when displayed, reads the same as the temperature... The request sounds more complex than it really is; printf can be used to print a character corresponding to a given character code:

$ printf "\x64\n"
d

So you'll get the result you're after with:

printf "\x$(cut -c1-2 < /sys/class/hwmon/hwmon0/temp2_input)" | dd of=/dev/port bs=1 count=1 seek=128
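The trick can be sanity-checked without writing to /dev/port by dumping the generated byte with od (env printf is used here because the \xHH escape is a coreutils/bash extension, not plain POSIX printf):

```shell
# 58 (degrees C) becomes the single byte 0x58, which a hex display renders as "58".
temp=58
env printf "\x$temp" | od -An -tx1
```

od shows the byte value 58 in hex, which is exactly what the POST card will display.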
Writing CPU Temperature to Port 0x80 Bios Debug Card with Bash Script
We have several hundred Linux VMs on EC2 and Google compute engine. We want to monitor basic things like disk free space and memory consumption, in the easiest and lightest way possible. Expectedly, VMs come and go pretty often, as load changes, etc. Currently we use simple scripts that pull such information via SNMP. We don't need fancy app-specific monitoring since it is already being provided by app-specific means. We tried Zenoss, and found it hard to use, and its documentation lacking. We considered Nagios and its forks. We considered Sensu (but my boss is not a fan of RabbitMQ) and Ganglia, but all of them seem a bit too complicated for our very basic needs. SaaS solutions like Circonus would be too expensive with the number of hosts we have. Am I missing some obvious easy solution here? What would you recommend [against]?
If you are looking more in the open-source direction, OpenNMS might suit your needs. I have not used it myself, but I have heard good things about it (especially from people who dislike Nagios). From what I have read about it, it is also SNMP-based.
Monitor hundreds of hosts for basic parameters [closed]
In Unix based systems, is there a log file that stores user's executed command(s)?
Given that you want to track all user commands, you should look at the acct package on your system (on some systems this is also called "process accounting" or psacct). Once it's been turned on, you can run the lastcomm command to show what programs have been run, by whom, when, and for how long. Search "linux acct" for more details:

http://beginlinux.com/blog/2010/01/monitoring-user-activity-with-psacct-or-acct/
http://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
User's executed commands log file
I use a script that writes all mountpoints of my devices to a text file using df. How can I execute my script every time any device (especially USB) is mounted?

Script to execute:

#!/bin/bash
# save all mountpoints to a text file
df -h /dev/sd* | grep /dev/sd | awk '{print $6}' > /home/<user>/FirstTextfile
# do something
while read line
do
    echo "mountpoint:${line%/*}/ devicename:${line##*/}" >> /home/<user>/AnotherTextfile
done < /home/<user>/FirstTextfile

Debian 8.0 (jessie), Linux 3.16.0, GNOME 3.14.
Write a udev rule which first mounts the USB drive and second runs my-script:

# cat /etc/udev/rules.d/11-media-by-label-with-pmount.rules
KERNEL!="sd[a-z]*", GOTO="media_by_label_auto_mount_end"
ACTION=="add", PROGRAM!="/sbin/blkid %N", GOTO="media_by_label_auto_mount_end"

# Get label
PROGRAM=="/sbin/blkid -o value -s LABEL %N", ENV{dir_name}="%c"
# use basename to correctly handle labels such as ../mnt/foo
PROGRAM=="/usr/bin/basename '%E{dir_name}'", ENV{dir_name}="%c"
ENV{dir_name}=="", ENV{dir_name}="usbhd-%k"

ACTION=="add", ENV{dir_name}!="", RUN+="/bin/su YOURUSERNAME -c '/usr/bin/pmount %N %E{dir_name}'", RUN+="/etc/udev/scripts/my-script.sh"
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/su YOURUSERNAME -c '/usr/bin/pumount /media/%E{dir_name}'"
LABEL="media_by_label_auto_mount_end"

Note: the drive is mounted by root but can be unmounted by the given user. In the ACTION lines you have to change YOURUSERNAME to your username and /etc/udev/scripts/my-script.sh to the path of your script.

Source and more scripts: https://wiki.archlinux.de/title/Udev#USB_Ger.C3.A4te_automatisch_einbinden

Another solution is to use a udisks wrapper like devmon.
how to execute a script every time any USB get mounted
1,500,035,950,000
I'm pretty sure the Linux kernel has a feature which allows to track all the reads and writes (IO) of an application and all its children however I haven't seen any utilities which can calculate it and show it. For instance for CPU time you could simply use time and get neat CPU use information: $ time cat --version > /dev/null real 0m0.001s user 0m0.001s sys 0m0.000s I'm looking for something similar in regard to IO, e.g. $ calc_io task Bytes read: 123456 Bytes written: 0 Of course, we have /proc/$PID/io which contains runtime information but tracking it for applications which spawn and destroy children dynamically, e.g. web-browsers seems like a daunting task. I guess if you run strace -fF firefox then monitor all children being spawned and try to track in real time /proc/$PID/io - nah, seems like too difficult to implement and then how often will you poll this file for information? Children may exist for a split second. Another idea is to use cgroups but then what if I don't want to use them? Also I've checked /sys/fs/cgroup and I don't see any relevant statistics.
I came across this post and found it very interesting. I thought this problem was not that difficult since the question you are asking is quite natural after all. I could only find an imperfect and incomplete solution. I decided to post it anyway, as the question was not answered yet. This requires a system with systemd and cgroups2 (I read what you said about it but it might be interesting to see this solution). I learned about both, I don't master them. I tested only on an arch-based linux distribution. ~]$ cat /etc/systemd/system/user\@1000.service.d/override.conf [Service] Delegate=pids memory io It seems that you need to "delegate" io controller to your "user systemd sub tree" to use this as an unprivileged user (I can't point one specific place. man systemd.resource-control. https://systemd.io/CGROUP_DELEGATION . https://wiki.archlinux.org/title/cgroups#As_unprivileged_user ) ~]$ cat ~/.config/systemd/user/my.slice [Slice] IOAccounting=true Then create a slice with IOAccounting enabled to run you processes in. reboot ~]$ cat foo.sh #!/bin/sh dd if=/dev/random of=/home/yarl/bar bs=1M count=7 dd if=/dev/random of=/home/yarl/bar bs=1M count=3 ~]$ systemd-run --user --slice=my.slice /home/yarl/foo.sh ~]$ systemctl --user status my.slice ● my.slice - Slice /my Loaded: loaded (/home/yarl/.config/systemd/user/my.slice; static) Active: active since Sun 2021-11-07 20:25:20 CET; 12s ago IO: 100.0K read, 10.0M written Tasks: 0 Memory: 3.2M CPU: 162ms CGroup: /user.slice/user-1000.slice/[email protected]/my.slice nov. 07 20:25:20 pbpro systemd[1229]: Created slice Slice /my.
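For what it's worth, the totals that systemctl status reports come from the slice's io.stat file in the cgroup2 hierarchy, so you can also read them programmatically. A minimal Python sketch (the parser and the sample values are mine; the one-line-per-device, key=value layout follows cgroup v2's io.stat format):

```python
def parse_io_stat(text):
    """Parse cgroup-v2 io.stat: one line per device,
    'MAJ:MIN rbytes=... wbytes=... rios=... wios=... ...'.
    Returns the counters summed over all devices."""
    totals = {}
    for line in text.strip().splitlines():
        _dev, *pairs = line.split()
        for pair in pairs:
            key, _, val = pair.partition("=")
            totals[key] = totals.get(key, 0) + int(val)
    return totals
```

Point it at the io.stat under your delegated slice (the exact path depends on your user id and slice name) and you get the same read/written byte counts without shelling out to systemctl.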
Utility to show disk IO read/write summary for a task/command/program
1,500,035,950,000
While an application is running, I can monitor disk bandwidth usage using linux tools including dstat. Now I'd like to know how many sequential or random disk I/Os are occurring in the system. Does any one know any ways to achieve this?
You can write your own FUSE filesystem (which you can do in almost any scripting/programming language, even bash) that would just proxy filesystem calls to the target filesystem (and possibly translate paths), plus monitor whatever you want to monitor. Otherwise you might investigate the output of strace for the programs performing the I/O calls of interest, if possible.
Is there a way to monitor disk i/o patterns? (i.e. random or sequential i/o?)
1,500,035,950,000
The problem: Given a process, limit the resources it and its child processes can use, i.e. set CPU time and virtual memory quotas. When the process group exceeds one of the limits, terminate it; otherwise print the amount of CPU time and virtual memory it has used. The concrete use case: Basically I must execute a couple of binaries, which expect input from a file, but I must ensure that their execution is strictly limited. For example the binary must not allocate more than 256 MB of memory and it should run for less than 0.5 seconds. However I need information about the amount of memory and CPU it has used. What I have tried: For a couple of days I have been dealing with this perl script, which is the best solution I have found so far. Unfortunately its memory accounting is buggy and it is not very precise. Also there is an official author post about this script here. I have tried using both /usr/bin/timeout and timeout Linux tools, which of course help me with the CPU time quota, but not with the termination of the process due to violation of the virtual memory limit. Using ulimit was attempted as well, but as I have said earlier I need not only limitation but feedback on the resource consumption too. The question: What can solve this issue?
The setrlimit(2) syscall is relevant for limiting resources (CPU time, an integral number of seconds so at least 1 sec, with RLIMIT_CPU; file size with RLIMIT_FSIZE; address space with RLIMIT_AS; etc.). You could also set up disk quotas. The wait4(2) syscall tells you about, and gives feedback on, some resource usage. And proc(5) tells you a lot more, as does getrusage(2) (you might code a monitor which periodically stops the entire process group using SIGSTOP, calls getrusage or queries /proc/$PID/, then sends SIGCONT (to continue) or SIGTERM (to terminate) to that process group). The valgrind tool is very useful on Linux to help find memory leaks. And strace(1) should be helpful too. If you can recompile the faulty software, you could consider passing -fsanitize=address, -fsanitize=undefined and other -fsanitize=... options to a recent version of the GCC compiler. Perhaps you have some batch processing; look for batch monitors, or simply code your own thing in C, Python, OCaml, Perl, ... (which forks the command and loops on monitoring it). Maybe you want some process accounting (see acct(5) & sa(8)). Notice that "amount of memory used" (a program generally allocates with mmap & releases memory to the kernel with munmap while running) and "CPU time" (see time(7); think of multi-threaded programs) are very fuzzy concepts. See also PAM and configure things under /etc/security/; perhaps inotify(7) might also be helpful (but probably not). Read also Advanced Linux Programming and syscalls(2).
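To make the fork/setrlimit/exec/wait4 sequence concrete, here is a minimal Python sketch (the helper name and default limits are illustrative, not a complete grader; wait4 gives you back the rusage feedback the question asks for):

```python
import os
import resource


def run_limited(argv, cpu_seconds=1, mem_bytes=1 << 32):
    """Fork, apply RLIMIT_CPU/RLIMIT_AS in the child, exec argv,
    then collect the child's resource usage via wait4(2)."""
    pid = os.fork()
    if pid == 0:  # child: set hard limits, then exec the target program
        try:
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            os.execvp(argv[0], argv)
        except (OSError, ValueError):
            os._exit(127)
    # parent: reap the child and get its rusage in one call
    _pid, status, rusage = os.wait4(pid, 0)
    return status, rusage
```

The returned rusage exposes ru_utime/ru_stime (CPU) and ru_maxrss (peak resident set), and the exit status tells you whether the child was killed, e.g. by SIGXCPU when the CPU quota was exceeded.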
Resource (CPU time and memory) limitation and termination of a process upon violation in Linux
1,420,556,101,000
I am using command nc -lu <port no.> to find on given port any data is receiving or not. I am getting data if there is transmission going on (but don't know from where!). Is there any way that should provide me the transmitters IP address?? wireshark and nmap are there, but I want a shorter way, if possible. UPDATED: I think nc -luv is what I want, But at a time it is showing only one IP. I want to know if more than one system is transmitting through that port??
I like the answers posted so far. Here are some other options: Add the -v option to nc. This will show (only!) the first source address that a UDP packet is received from. Also, netstat -nu seems to have some connection-ish state information for UDP conversations.
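Since nc -v only shows the first peer, a few lines of Python can show every transmitter, because recvfrom(2) returns the source address of each datagram individually (the helper name is mine):

```python
def udp_sources(sock, count):
    """Return the set of (ip, port) source addresses of the next
    `count` datagrams arriving on an already-bound UDP socket."""
    seen = set()
    for _ in range(count):
        _data, addr = sock.recvfrom(65535)
        seen.add(addr)
    return seen
```

Bind a SOCK_DGRAM socket to the port in question and every system sending to it shows up, not just the first one.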
find which system is transmitting through a particular port
1,420,556,101,000
I just need to get how much bandwidth is used in 3 or 4 days. Do you have any application in the terminal to do it? I'd prefer if it didn't use SNMP. I found iptraf, wireshark, cacti, but they were not what I am looking for. Of course I need to save my results; for a single computer, not a network. It's very important that I can see the total size of inbound and outboud traffic. What solutions are there for me?
You know you already have that with ifconfig, right? Ifconfig keeps counters for your incoming and outgoing bandwidth on each interface by default. Usually you can't reset the counters without rebooting (with a few exceptions). From the console you can easily leave a cron job running every three days, saving the results to a file for later checking. Something like this: date >> ~/bw.log && ifconfig eth0 | grep byte >> ~/bw.log will produce this kind of output per run in the file bw.log in the user's home: Thu Oct 18 03:44:05 UTC 2012 RX bytes:414910161 (395.6 MiB) TX bytes:68632105 (65.4 MiB) My two cents...
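If you later want totals out of that bw.log, the grep leaves lines like "RX bytes:414910161 (395.6 MiB)", which are easy to parse; a small sketch (the regex and function name are mine, matched against the sample output above):

```python
import re


def parse_ifconfig_bytes(text):
    """Extract RX/TX byte counters from classic `ifconfig` output lines."""
    return {direction: int(value)
            for direction, value in re.findall(r"(RX|TX) bytes:(\d+)", text)}
```

Subtracting the counters from two log entries gives the bytes transferred in between, which is exactly the 3-4 day total the question asks for.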
Bandwidth monitoring in Linux
1,420,556,101,000
I have two folders called: A and B, in different paths on the same computer. When I add any new file(s) into folder A, I want to copy it to folder B automatically. My folders: /auto/std1/nat1/A /auto/std2/nat2/B What I currently do to copy the files: cp -r A B But I want this process to run automatically in the background for every new file and folder in A into B. Added question/problem While copying files I would like specific actions to be performed on certain files types, example: when I have a zip file in folder A, I would like it to unzip this file in folder B automatically. This is on a CentOS 7` system.
Per your bonus question, add the following line below the rsync command in the shell script I provided below. I wrote this in the comment but I'll officially add it to my answer here: find /auto/std2/nat2/B -name '*.zip' -exec sh -c 'unzip -d `dirname {}` {}' ';' This will handle unzipping all the zip files that are copied via rsync from folder /auto/std2/nat2/A to /auto/std2/nat2/B If you have rsync installed why not just cron it and have rsync manage the file mirroring? Create script myrsyncscript.sh Don't forget to make it executable: chmod 700 myrsyncscript.sh #!/bin/sh LOCKFILE=/tmp/.hiddenrsync.lock if [ -e $LOCKFILE ] then echo "Lockfile exists, process currently running." echo "If no processes exist, remove $LOCKFILE to clear." echo "Exiting..." # mailx -s "Rsync Lock - Lock File found" [email protected] <<+ #Lockfile exists, process currently running. #If no processes exist, remove $LOCKFILE to clear. #+ exit fi touch $LOCKFILE timestamp=`date +%Y-%m-%d::%H:%M:%s` echo "Process started at: $timestamp" >> $LOCKFILE ## Run Rsync if no Lockfile rsync -a --no-compress /auto/std1/nat1/A /auto/std2/nat2/B echo "Task Finished, removing lock file now at `date +%Y-%m-%d::%H:%M:%s`" rm $LOCKFILE Options breakdown: -a is for archive, which preserves ownership, permissions etc. --no-compress as there's no lack of bandwidth between local devices Additional options you might consider man rsync: --ignore-existing skip updating files that exist on receiver --update This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file’s, it will be updated if the sizes are different.) Note that this does not affect the copying of symlinks or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. 
In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a transfer rule, not an exclude, so it doesn’t affect the data that goes into the file-lists, and thus it doesn’t affect deletions. It just limits the files that the receiver requests to be transferred. Add it to cron like so, and set the frequency to whatever you feel most comfortable with: Open cron with crontab -e and add the below: ### Every 5 minutes */5 * * * * /path/to/my/script/myrsyncscript.sh > /path/to/my/logfile 2>&1 # * * * * * command to execute # │ │ │ │ │ # │ │ │ │ │ # │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0) # │ │ │ └────────── month (1 - 12) # │ │ └─────────────── day of month (1 - 31) # │ └──────────────────── hour (0 - 23) # └───────────────────────── min (0 - 59)
Run command automatically when files are copied into a directory
1,420,556,101,000
I would like to find files accessed by specific user (even just read) within a folder tree. I thought the find command had this option, but it actually just searches for owner user. Is there any other command, or command combinations? The stat command offers access information, but doesn't display the user who made access.
This information is not stored by traditional filesystems. You have three main options: See who is accessing it in real time using lsof/fuser or similar; Set up auditing (take a look at auditd); Use something like LoggedFS.
Unix command to find files read by specific user
1,420,556,101,000
I'm running Fedora 31 Security Lab, updated to the latest, on an Acer, with wireless driver ath10k_pci. The case is that when I run airmon-ng there are no captured packets. Is the hardware problematic, or the driver? I've stopped the Network Manager, then ran airmon-ng check kill and then airmon-ng; it shows that wlp3s0mon is started but nothing is captured. Also tried without airmon. Checked iwlist and it does not show monitor, but when I run iwconfig wlp3s0 mode monitor and check again with iwconfig it shows Mode:Monitor, but still no captured packets. No errors in dmesg, rfkill is 'unblocked' and the adapter is detected and properly running when not in monitor mode. I've read in Qualcomm's forum that the QCA9377 can't operate in monitor mode, but I was not sure whether that is because of the driver or the hardware.
After some days of research and testing on a second distro (Ubuntu), the conclusion is that this adapter does not support monitor mode (or at least not with the default drivers), so I bought a TL-WN823N USB adapter. It is cheap and monitor mode works like a charm. So if anyone encounters this problem, this is my solution.
No monitor mode on Atheros QCA9377?
1,420,556,101,000
Playing around with some low level functions to monitor my system stats. I would like to get the current network utilization the same way like I can get cpu temp cat /sys/class/thermal/thermal_zone0/temp or fan speed cat /sys/class/hwmon/hwmon6/fan1_input Looking at /sys/class/net/my_network_adapter/ I didn't find a way to see the actual bandwidth consumption, rx_bytes just gives the total amount of data downloaded.
To get the rate in B/s, you need nothing but your shell: simply read the rx_bytes file each second and compare the current value with the value one second before. rx1=$(cat /sys/class/net/wlp3s0/statistics/rx_bytes) while sleep 1; do rx2=$(cat /sys/class/net/wlp3s0/statistics/rx_bytes) printf 'Download rate: %s B/s\n' "$((rx2-rx1))" rx1=$rx2 done Of course, substitute wlp3s0 with the interface you want to monitor.
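The same arithmetic in Python, with one extra guard the shell loop skips: the counter can go backwards if the interface driver reloads, in which case a single delta is meaningless (the function name is mine):

```python
def rate_bps(prev_bytes, cur_bytes, interval_s):
    """Bytes per second between two rx_bytes samples taken interval_s apart.
    Returns None if the counter went backwards (driver reload/reset)."""
    if cur_bytes < prev_bytes:
        return None
    return (cur_bytes - prev_bytes) / interval_s
```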
Get current network utilization via /sys/class/net
1,420,556,101,000
I have a command running for a long time, which I don't want to disturb. However, I would like to keep check on the process (most of the time remotely). I am constantly monitoring the process via commands like top, iotop, stat etc. The process is a terminal based process which wasn't started via screen or tmux or similar. So the only way to check the output is using physical access. I know that /proc contains lot of info about the process. So I was wondering if it also can display the output (or even just the last batch of output -- char/word/line). I searched in /proc/<pid>/fd, but couldn't find anything useful. Below is the output of ls -l /proc/26745/fd/* lrwx------ 1 user user 64 Oct 28 13:19 /proc/26745/fd/0 -> /dev/pts/17 lrwx------ 1 user user 64 Oct 28 13:19 /proc/26745/fd/1 -> /dev/pts/17 lrwx------ 1 user user 64 Sep 27 22:27 /proc/26745/fd/2 -> /dev/pts/17 Any pointers?
I would use strace for that: strace -qfp PID -e trace=write -e write=1,2 That will trace all write(2) system calls of PID and its child processes, and hexdump the data written to file descriptors 1 and 2. Of course, that won't let you see what the process has already written to the tty, but will start monitoring all writes from a point on. Also, strace is not amenable to change its output format -- you should explore using gdb(1) or write a small program using ptrace(2) if you need more flexibility.
Get latest output of a running command
1,420,556,101,000
I just found some odd behaviour on one of our servers that I can't explain myself. It is about both middle lines. I would assume that the timespans for the boot user must not overlap, however, they do: $ last reboot -F reboot system boot 4.4.44-39.55.amz Wed Feb 15 09:16:30 2017 - Wed Feb 15 09:36:53 2017 (00:20) reboot system boot 4.4.41-36.55.amz Fri Feb 10 20:16:26 2017 - Wed Feb 15 09:16:00 2017 (4+12:59) reboot system boot 4.4.41-36.55.amz Fri Feb 10 14:33:56 2017 - Wed Feb 15 09:16:00 2017 (4+18:42) reboot system boot 4.4.35-33.55.amz Fri Jan 20 17:06:05 2017 - Wed Feb 15 09:16:00 2017 (25+16:09) Does this mean the machine was not properly shutdown before rebooted, so there is no logout entry of the boot user in wtmp? Thanks for any hints.
These are not entries of a boot user logging in or out, it's the system writing an entry upon reboot. The entries are written when a reboot occurs, however, if the system was brought down in some other way (by unplugging the power or whatever), an entry would not have been written. I presume that the next orderly shutdown would therefore produce the effect that you are seeing. Rebooting with reboot -d will also not update the wtmp database.
last reboot -F shows overlapping timespans
1,420,556,101,000
I'm writing an AI personal assistant. One part of the software is a monitor daemon. A small process that monitor's user's active windows. I'm using python( with libwnck and psutils for obtaining info on active windows). One thing I'd like my monitor to do is to keep track of music that the listener often listen's to. Is there anyway I could 'monitor' opening and closing of files? psutils.Process has a function that returns a list of open files, but I need some way to notify it to check for it. Currently it only checks process data when window switches, or a window is opened or closed.
You can monitor the opening/closing of files using the inotify subsystem. pyinotify is one interface to this subsystem. Note that if you have a lot of events going to inotify, some can be dropped, but it works for most cases (especially your case in which user interaction will drive the opening/closing of files). pyinotify is available via easy_install/pip and at https://github.com/seb-m/pyinotify/wiki MWE (based on http://www.saltycrane.com/blog/2010/04/monitoring-filesystem-python-and-pyinotify/): #!/usr/bin/env python import pyinotify class MyEventHandler(pyinotify.ProcessEvent): def process_IN_CLOSE_NOWRITE(self, event): print "File closed:", event.pathname def process_IN_OPEN(self, event): print "File opened::", event.pathname def main(): # Watch manager (stores watches, you can add multiple dirs) wm = pyinotify.WatchManager() # User's music is in /tmp/music, watch recursively wm.add_watch('/tmp/music', pyinotify.ALL_EVENTS, rec=True) # Previously defined event handler class eh = MyEventHandler() # Register the event handler with the notifier and listen for events notifier = pyinotify.Notifier(wm, eh) notifier.loop() if __name__ == '__main__': main() This is quite low-level information - you might be surprised how often your program uses these low-level open/close events. You can always filter and coalesce events (for example, assume events received for the same file in a certain time period correspond to the same access).
How to monitor the opening and closing of files?
1,420,556,101,000
For some unknown reason there is no space left on my /, even 5 minutes after removing 300 MBs of junk packages, there is no space left again. So I've come to conclusion that there is some process that floods my disk space. (--> Recently I installed docker). How can I find which process produces most data on /?
Your best bet is probably iotop: iotop watches I/O usage information output by the Linux kernel (requires 2.6.20 or later) and displays a table of current I/O usage by processes or threads on the system. At least the CONFIG_TASK_DELAY_ACCT, CONFIG_TASK_IO_ACCOUNTING, CONFIG_TASKSTATS and CONFIG_VM_EVENT_COUNTERS options need to be enabled in your Linux kernel build configuration. Assuming your process is doing a lot of I/O operations, it should show up pretty high in that list.
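If you can read /proc/<pid>/io (it needs the same privileges discussed in the first question of this page), you can rank writers without iotop. A best-effort sketch; the helper names are mine and the parser follows the file's "key: value" per-line layout:

```python
import os


def parse_proc_io(text):
    """Parse /proc/<pid>/io, which is one 'key: value' pair per line."""
    out = {}
    for line in text.strip().splitlines():
        key, _, val = line.partition(":")
        out[key.strip()] = int(val)
    return out


def top_writers(n=5):
    """(write_bytes, pid) for the n biggest writers among readable processes."""
    try:
        pids = [p for p in os.listdir("/proc") if p.isdigit()]
    except OSError:
        return []
    rows = []
    for pid in pids:
        try:
            with open(f"/proc/{pid}/io") as f:
                rows.append((parse_proc_io(f.read())["write_bytes"], int(pid)))
        except (OSError, KeyError, ValueError):
            continue  # process exited, or io not readable without privileges
    return sorted(rows, reverse=True)[:n]
```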
How to find which process is currently writing to the disk?
1,420,556,101,000
I would like to use a bash script (python would be second best) to monitor regularly (hourly) if my mailserver is online and operating. I know that there are dedicated solutions for this task (Nagios, ...) but I really need something simple that I can use as a cronjob. Only to see the mailserver is alive. I know how to talk with a mailserver with telnet, ie: telnet mail.foo.org 25 EHLO example.com mail from: rcpt to: ... but this is interactive. Is it possible to check with a script that the mailserver is communicating? Obviously, I don't want to go the whole way and actually send an email. I just want to test that the mailserver is responding.
You can use nc to test a SMTP mail server like so: $ nc -w 5 mail.mydom.com 25 << EOF HELO mail.mydom.com QUIT EOF NOTE: The options -w 5 tell nc to wait at most 5 seconds. The server to monitor is mail.mydom.com and 25 is the port we're connecting to. You can also use this form of the above if you find your server is having issues with the HELO: $ echo "QUIT" | nc -w 5 mail.mydom.com 25 NOTE: This form works well with both Postfix and Sendmail! Example Here I'm connecting to my mail server. $ echo "QUIT" | nc -w 5 mail.bubba.net 25 220 bubba.net ESMTP Sendmail 8.14.3/8.14.3; Sat, 19 Apr 2014 16:31:44 -0400 221 2.0.0 bubba.net closing connection $ If you check the status returned by this operation: $ echo $? 0 However if nothing at the other ends accepts our connection: $ echo QUIT | nc -w5 localhost 25 Ncat: Connection refused. $ Checking the status returned from this: $ echo $? 1 Putting it together Here's my version of a script called mail_chkr.bash. #!/bin/bash echo "Checking Mail Server #1" echo "QUIT" | nc -w 5 mail.bubba.net 25 > /dev/null 2>&1 if [ $? == 0 ]; then echo "mail server #1 is UP" else echo "mail server #1 is DOWN" fi echo "Checking Mail Server #2" echo "QUIT" | nc -w 5 localhost 25 > /dev/null 2>&1 if [ $? == 0 ]; then echo "mail server #2 is UP" else echo "mail server #2 is DOWN" fi Running it: $ ./mail_chkr.bash Checking Mail Server #1 mail server #1 is UP Checking Mail Server #2 Ncat: Connection refused. mail server #2 is DOWN
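If nc isn't available on the monitoring host, the same check fits in a few lines of Python with the standard library's smtplib, which sends the QUIT for you (the function name and timeout are illustrative):

```python
import smtplib


def smtp_alive(host, port=25, timeout=5):
    """True if host:port answers with an SMTP banner and a successful NOOP."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as s:
            code, _msg = s.noop()
            return 200 <= code < 400
    except (OSError, smtplib.SMTPException):
        return False
```

This could replace the nc calls in the mail_chkr.bash script above: exit status 0 when the server answers, nonzero otherwise.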
simple script for monitoring a mailserver
1,420,556,101,000
I am working on a unix server and I guess during some time in past the file system had been full. However, I need some solid data to prove it. Will there be any OS logs or something of that sort to confirm my assumption? It's an AIX system.
On AIX, you will get entries in the standard error log if a filesystem operation fails due to a filesystem being full. You can view that error log with the errpt command. You will see something like this, IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION xxxxxxxx xxxxxxxxx I O SYSJ2 UNABLE TO ALLOCATE SPACE IN FILE SYSTEM
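errpt's summary output is columnar, so if you need to search for the full-filesystem event programmatically, a simple split is enough; the column order below is taken from the sample header above (DESCRIPTION is free text, so it absorbs the remainder of the line):

```python
def parse_errpt_line(line):
    """Split an AIX errpt summary line into its six columns."""
    ident, ts, typ, cls, res, desc = line.split(None, 5)
    return {"identifier": ident, "timestamp": ts, "type": typ,
            "class": cls, "resource": res, "description": desc}
```

Filtering parsed rows for descriptions containing "UNABLE TO ALLOCATE SPACE" gives you the solid evidence, with timestamps, that the filesystem was full.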
What logs would be written if file system is full in UNIX?
1,420,556,101,000
On a Linux (Debian) box, I have an NFS server which seems to be overloaded by requests. In order to identify the problem, I'm trying to monitor, with auditd/auditctl, the files accessed in the partition exported by the NFS server. The problem is that our disk or NFS problem prevents auditd from writing logs to /var/log/auditd/auditd.log. What I really need is to send all logs somewhere other than a local file. Can I simply redirect all logs from 192.168.1.1 to 192.168.1.2 (the network is working correctly)?
I'm assuming you're on Linux by how you phrased your question. Should that be the case, then yes there is, look into audisp-remote and audispd. These are standard components in the current audit tools on Linux.
Can I send auditd logs to another computer?
1,420,556,101,000
Is there an existing tool for solaris/unix that keeps a history trail of the list of running processes. I'd like to be able to review backwards in time what processes were active/running. I can create a cron job that just regularly logs the output of ps into files, but this is crude and over a large server farm seems inefficient and can create many files. And I need full command arguments so it has to be /usr/ucb/ps auxww output, ideally with cpu times, state, rss, pid, ppid, zone information. Also, if possible the output should be easy to parse--e.g. in a consistent delimited format or some other.
Use auditing. Solaris Auditing (Overview) Auditing generates audit records when specified events occur. Most commonly, events that generate audit records include the following: System startup and system shutdown Login and logout Process creation or process destruction, or thread creation or thread destruction Opening, closing, creating, destroying, or renaming of objects Use of privilege capabilities or role-based access control (RBAC) Identification actions and authentication actions Permission changes by a process or user Administrative actions, such as installing a package Site-specific applications Audit records are generated from three sources: By an application As a result of an asynchronous audit event As a result of a process system call A good blog article on Solaris auditing can be found here.
Is there a process logging tool for solaris?
1,420,556,101,000
There are times as a system administrator, you might not be sure of the log file paths of a new application. Depending on the system, there may be multiple ways to find the same. Please share the different ways we can get a list of open log files on a system.
User X files If you need to see just a single user's open files: $ lsof -u<user> Or only files with a text file descriptor (typically real files): $ lsof -a -u<user> -d txt Example All files in use by user saml. $ lsof -usaml COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME vim 1341 saml cwd DIR 253,2 4096 10370078 /home/saml/mp3s vim 1341 saml rtd DIR 253,0 4096 2 / vim 1341 saml txt REG 253,0 2105272 1215334 /usr/bin/vim vim 1341 saml mem REG 253,0 237616 393586 /lib64/libgssapi_krb5.so.2.2 Only files using a text descriptor and are owned by user saml. $ lsof -a -usaml -d txt Output information may be incomplete. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME vim 1341 saml txt REG 253,0 2105272 1215334 /usr/bin/vim bash 1468 saml txt REG 253,0 940312 2490450 /bin/bash gvfsd-htt 1777 saml txt REG 253,0 179528 1209465 /usr/libexec/gvfsd-http gnome-key 2051 saml txt REG 253,0 953664 1214068 /usr/bin/gnome-keyring-daemon ... lsof as root Typically though you'll want to run lsof with elevated privileges so you can see all the files on a system owned by an Apache process or root, for example. $ sudo lsof You can also use lsof backwards and find out what process opened a particular file. $ sudo lsof /var/log/messages Output information may be incomplete. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rsyslogd 1266 root 4w REG 253,0 372306 1973825 /var/log/messages lsof as top You can also use lsof similarly to top where it will poll every number of seconds and show you what's going on on your system. $ sudo lsof -u saml -c sleep -a -r5 Example The -c ... argument only shows processes with the string ... in their name. Here I'm using the command sleep to show this. I run the lsof command which polls every 5 seconds, and shows any files opened by any processes with the string sleep in them. I then ran sleep 5 in another terminal. $ sudo lsof -u saml -c sleep -a -r5 Output information may be incomplete. 
======= ======= ======= ======= COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sleep 10780 saml cwd DIR 253,2 32768 10354689 /home/saml sleep 10780 saml rtd DIR 253,0 4096 2 / sleep 10780 saml txt REG 253,0 27912 2490470 /bin/sleep sleep 10780 saml mem REG 253,0 151456 393578 /lib64/ld-2.13.so sleep 10780 saml mem REG 253,0 1956608 393664 /lib64/libc-2.13.so sleep 10780 saml mem REG 253,0 99158752 1209621 /usr/lib/locale/locale-archive sleep 10780 saml 0u CHR 136,59 0t0 62 /dev/pts/59 sleep 10780 saml 1u CHR 136,59 0t0 62 /dev/pts/59 sleep 10780 saml 2u CHR 136,59 0t0 62 /dev/pts/59 ======= ======= ======= ======= log files You can use lsof to find log files by simply grepping any of the above output for the names of the log files that you're interested in seeing what's going on with. $ lsof .... | grep "log file name"
List all open '.log' files in *nix
1,420,556,101,000
sensors-detect has this warning : Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. What kind of trouble does it refer to ?
From sensors-detect manpage: sensors-detect needs to access the hardware for most of the chip detections. By definition, it doesn't know which chips are there before it manages to identify them. This means that it can access chips in a way these chips do not like, causing problems ranging from SMBus lockup to permanent hardware damage (a rare case, thankfully.) The authors made their best to make the detection as safe as possible, and it turns out to work just fine in most cases, however it is impossible to guarantee that sensors-detect will not lock or kill a specific system. So, as a rule of thumb, you should not run sensors-detect on production servers, and you should not run sensors-detect if can't afford replacing a random part of your system. Also, it is recommended to not force a detection step which would have been skipped by default, unless you know what you are doing. There is (very) low chance of actually breaking your hardware, generally by overwriting some EEPROMs by accident. Some (old) problems that happened with lm_sensors: Thinkpad laptops not booting Asus laptop display issues (thread on lm-sensors mailing list) These issues are very rare, but it can happen, so I'd just listen to the warning and skip the I2C/SMBus scan. Btw. lm_sensors isn't the only thing that can destroy (or damage) your hardware -- Linux (kernel) was bricking LG CD drives in 2003 :-)
sensors-detect warning on I2C/SMBus
1,420,556,101,000
My router died today, so I used nm-applet on xfce to make a wireless network using my computer and modem. It only appeared to support WEP security. I felt I should keep track of who is connected, but I couldn't find out how, and Google only came up with results for iPhones and Android. How could I tell who's connected and how much bandwidth they're using?
A GUI program I personally like is EtherApe, which has a nice graph showing current network activity with protocol and traffic amount.
How do I tell who is connected to my network and how much bandwidth they're using?
1,420,556,101,000
I'm configuring Zabbix to monitor our servers. Zabbix is new to me. It is up and running, and monitoring works for some services. One of our Centos servers has http running, so it seems logical to monitor that. I've added the "Template App HTTP Service" to the host. I used all default settings, didn't change anything. Now Zabbix reports that this service is down. The httpd service is running however, and I can open webpages. How can I get Zabbix to monitor the HTTP service normally?
This is probably a firewall issue. We maintain the Linux systems here, and most of the time these kinds of issues are due to the network team forgetting default firewall rules for new networks or new servers. To debug the situation, the best strategy is to try to telnet from the command line of the Zabbix/monitoring server to the web server in question.
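The suggested telnet test, scripted so it can run unattended from the Zabbix server (helper name and timeout are mine):

```python
import socket


def port_open(host, port, timeout=3):
    """True if a TCP connection succeeds, i.e. what a successful telnet shows."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from the Zabbix host while the web server answers locally, a firewall between the two is the likely culprit.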
Zabbix claims that HTTP service is down
1,420,556,101,000
I'd like to know if I can track commands entered by a user in a bash shell, in real time. What I'm trying to do is something similar to thefuck, but I need to prompt the user as and when he enters new commands into the shell. Is there any way I could write a hook into bash that kind of lets me wrap my code around it? Alternatively: is there a way to pull updated bash history? AFAIK bash writes to history when the shell is exited, unless you run the 'history' command in the same terminal.
Put export PROMPT_COMMAND='history -a' in /etc/profile or another profile file. This causes history -a to execute before every command prompt is displayed, and history -a appends any new history lines to .bash_history immediately.
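A small self-contained demonstration of why this works, using a throwaway history file (set -o history and history -s are only needed here because the demo runs in a non-interactive shell; in an interactive shell PROMPT_COMMAND does the flushing for you):

```shell
# Show that `history -a` flushes new entries to the history file immediately,
# without waiting for the shell to exit.
HISTFILE=$(mktemp)
export HISTFILE
set -o history                    # history recording is off by default in scripts
history -s 'echo "demo command"'  # record a command in the in-memory history
history -a                        # append the new entry to $HISTFILE right now
grep -c 'demo command' "$HISTFILE"
```

From another terminal you can then run tail -F on the history file (e.g. ~/.bash_history for a real user) to watch commands appear as they are entered.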
Is it possible to track bash commands in real time?
1,420,556,101,000
I know conky can monitor my personal computer, but can it monitor the other Linux servers I have on the network? I'd like to see data on CPU and memory usage and some critical processes each server uses. For instance, one server is our MySQL server, so I'd like to display the CPU and memory usage for this server, how many resources the mysqld process consumes, and the network consumption. For another server, some other information should be displayed according to its use.
I wrote a program for this very purpose: Conkw. It stands for web-based conky. This is a program that, like conky, can monitor many vitals on your system, as well as all sorts of other stuff (stocks, weather, etc.), and exposes a REST API with all the data. It can also monitor a Windows or a macOS machine. There is also an HTML UI to display your stuff in any browser (even quite an old one). The goal was to find some use for the old iPads/tablets we all have lying around. You can put them next to your screen and have your metrics displayed there. More real estate for the real work on your main screens! But the network-based communication between UI and API makes it trivial to monitor another computer. In fact, you can build a mesh network and have metrics from plenty of different machines on the same UI. It's still very much under development, but I've been using it constantly for about 6 months now, so it works well.
Can conky monitor other Linux computers on the network?
1,420,556,101,000
For example, the IDE I'm using at the moment (Aptana Studio) notifies me as soon as a file's contents it has open have been changed by some external program. I can imagine having a periodic loop run stat() on a file and check the time of last data modification. Is this how it's normally done or is there a blocking interrupt-like mechanism used instead?
The inotify subsystem on Linux (see inotify(7)), or the kqueue system on BSD/macOS, gives you an event-driven ("interrupt-like") mechanism to do this: instead of polling with stat(), your process blocks until the kernel delivers a modification event for the watched file.
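For comparison, here is the polling approach from the question reduced to a shell sketch (GNU stat assumed; on BSD/macOS the equivalent is stat -f %m). On the event-driven side, the inotifywait tool from the inotify-tools package gives you the same check as a blocking one-liner, e.g. inotifywait -e close_write FILE:

```shell
# Detect an external modification by comparing mtimes (the polling approach).
file_mtime() { stat -c %Y "$1"; }   # seconds since the epoch, GNU stat

f=$(mktemp)
before=$(file_mtime "$f")
sleep 1
touch "$f"                          # stands in for "some external program"
after=$(file_mtime "$f")
[ "$after" -gt "$before" ] && echo "externally modified"
```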
Efficient mechanism to determine if open file has been externally modified?
1,420,556,101,000
This is rather a basic Linux administration question. We have a CentOS Linux machine running a production application. There are 10 application-specific processes running on that machine. Once every 3 or 4 days, the Linux machine freezes, and the only way to get it back is to hard-reboot it from the Amazon AWS console. We have Amazon CloudWatch enabled, which captures the CPU usage every 5 minutes. We see that the CPU reaches 100% (8 cores) within 10-15 seconds just before it freezes. And unfortunately we could not figure out anything from the process log files. How do we really pinpoint which process out of those 10 processes is causing the Linux server to freeze? Are there any simple CPU/memory monitoring applications that can record the top CPU/memory hoggers to disk, say every 2 seconds? Appreciate any other ideas to figure out the culprit process.
You could simply run top in batch mode and save the output to a file:

$ top -b -d 2 > /your/log/file &

-d 2 is the sampling period. Be warned that this will generate quite a bit of data. You might want to use the -u option to only list processes for a given user, or even the -p option to explicitly list your application's processes.
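A lighter-weight variant, assuming only procps ps is available: log just the top CPU consumers with a timestamp every couple of seconds, which keeps the file small and makes it easy to see what was running right before a freeze (the log path and interval are placeholders):

```shell
# Append a timestamped snapshot of the five busiest processes every 2 seconds.
log_cpu_hogs() {
  while true; do
    date '+--- %F %T'
    ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 6
    sleep 2
  done
}
# log_cpu_hogs >> /var/log/cpu-hogs.log &   # run it in the background
```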
find out the culprit process
1,420,556,101,000
We are running virtual machines in KVM and I am trying to collect metrics and send them to InfluxDB + Grafana for graphing. I can see CPU stats using virsh, but they are expressed as time spent in seconds; how do I convert this value into proper usage in % or other human-readable metrics?

[root@kvm01 ~]# virsh cpu-stats --total instance-0000047a
Total:
	cpu_time      160808730.755660547 seconds
	user_time        148000.880000000 seconds
	system_time    85012531.050000000 seconds
Calculating a CPU percentage is dependent on the time window you are looking at. So if you call virsh cpu-stats once, then call it again 10 seconds later, you would need to do something like:

(cpu_time2 - cpu_time1) / (10 * vcpus)

That will tell you what fraction of the total time window the VM's CPUs were running for; multiply by 100 for a percentage.
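A sketch of that calculation as a script. The sampling commands assume virsh is available and that $DOM holds your guest name; the arithmetic helper itself is plain awk and computes exactly the formula above:

```shell
# cpu_pct T1 T2 INTERVAL VCPUS  ->  percentage of the window the vCPUs were busy
cpu_pct() {
  awk -v t1="$1" -v t2="$2" -v dt="$3" -v n="$4" \
      'BEGIN { printf "%.1f", (t2 - t1) / (dt * n) * 100 }'
}

# Typical use (hypothetical guest, 16 vCPUs as in the question):
#   t1=$(virsh cpu-stats --total "$DOM" | awk '/cpu_time/ {print $2}')
#   sleep 10
#   t2=$(virsh cpu-stats --total "$DOM" | awk '/cpu_time/ {print $2}')
#   cpu_pct "$t1" "$t2" 10 16

cpu_pct 100 105 10 2   # -> 25.0
```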
libvirt kvm cpu/memory stats collection
1,420,556,101,000
To take a look at the temperature history on my SSD, I used the smartctl -l scttemphist command. The output says that it is skipping lots of entries. Why is it doing this? I don't think that it is because it does not have them saved.

=== START OF READ SMART DATA SECTION ===
SCT Temperature History Version:     2
Temperature Sampling Period:         10 minutes
Temperature Logging Interval:        10 minutes
Min/Max recommended Temperature:      ?/ ? Celsius
Min/Max Temperature Limit:            ?/ ? Celsius
Temperature History Size (Index):    128 (0)

Index    Estimated Time   Temperature Celsius
   1    2017-06-26 18:20    30  ***********
 ...    ..( 3 skipped).    ..  ***********
   5    2017-06-26 19:00    30  ***********
   6    2017-06-26 19:10    31  ************
   7    2017-06-26 19:20    30  ***********
 ...    ..(60 skipped).    ..  ***********
  68    2017-06-27 05:30    30  ***********
  69    2017-06-27 05:40    29  **********
  70    2017-06-27 05:50    30  ***********
  71    2017-06-27 06:00    30  ***********
  72    2017-06-27 06:10    29  **********
  73    2017-06-27 06:20    30  ***********
  74    2017-06-27 06:30    30  ***********
  75    2017-06-27 06:40    29  **********
  76    2017-06-27 06:50    30  ***********
  77    2017-06-27 07:00    30  ***********
  78    2017-06-27 07:10    29  **********
  79    2017-06-27 07:20    30  ***********
 ...    ..( 2 skipped).    ..  ***********
  82    2017-06-27 07:50    30  ***********
  83    2017-06-27 08:00    31  ************
  84    2017-06-27 08:10    30  ***********
 ...    ..( 4 skipped).    ..  ***********
  89    2017-06-27 09:00    30  ***********
  90    2017-06-27 09:10    31  ************
  91    2017-06-27 09:20    30  ***********
  92    2017-06-27 09:30    31  ************
 ...    ..(35 skipped).    ..  ************
   0    2017-06-27 15:30    31  ************

Is there a way to get smartctl to display the entire history instead of hiding some entries? I didn't see anything about it in the man page.
You can download the sources and see that smartctl is merely optimising the output by removing groups of identical temperatures. If you want to have all the values, recompile after removing this while loop at line 2216 (keep the initialisation line):

// Find range of identical temperatures
unsigned n1 = n, n2 = n+1, i2 = (i+1) % tmh->cb_size;
while (n2 < tmh->cb_size && tmh->cb[i2] == tmh->cb[i]) {
    n2++;
    i2 = (i2+1) % tmh->cb_size;
}
Why are there "skipped" temperature entries in the SCT Temperature History output?
1,420,556,101,000
I've been using Linux for 99% of the time on my Dell XPS15 9550. It has an Intel i5-6300HQ (Skylake) CPU. On Windows, I can monitor the voltage of the CPU using a plethora of different software: Intel XTU, HWinfo, CPU-Z, AIDA64 and many more. On Linux, my only shot seems to be with LM-Sensors... which unfortunately does not find any voltage sensor even with a deep search from sensors-detect. Other tools such as turbostat, powerstat or i7z also do not read CPU voltages. No voltage sensor is found by any of the generic monitoring software I have tried (such as KSysGuard). Is there any way to read Skylake CPU voltages (directly from the CPU) in Linux, something that is so trivial in Windows? Is there a module which I am not loading, maybe?
I found out i7z actually does report the Vcore on my system. My terminal was simply not wide enough to show the last column, which was indeed Vcore. So a partial answer is: use i7z. However, it would be even better to have this data collected by lm-sensors too. Currently it does not collect it, so most monitoring programs that use lm-sensors as a backend do not show the data.
How to monitor CPU voltage for a Dell XPS15 9550 (Skylake i5-6300HQ) under Linux
1,420,556,101,000
On a server hosting a wide range of websites, I often see IO becoming a bottleneck without being able to identify the processes responsible of IO operations with tools such as iotop, iostat or sar. I suspect that those processes are performing a lot of IO on metadata (reading and/or writing attributes, creating or removing a lot of empty files, etc). Unfortunately, it seems that those operations are not accounted "per processes", nor are IO performed using memory-mapped files (mmap). My question is: Is there a way to monitor and/or account (for instance, using cgroups and blkio) IO per process or (maybe even better) per file, including io on metadata and memory-mapped files? Currently, I'm trying to account "which requests hit the disk" using systemtap, probing handle_mm_fault() (mm/memory.c in the kernel) for major page faults, but I haven't been able to verify if manipulation of filesystem metadata generates page faults handled by this function. Thank you for your insights!
I came up with a systemtap script which is close to what I wanted to do, but it does not track writes. The code is on a gist: https://gist.github.com/Martiusweb/10633360
Monitor when read/write on metadata or mmaped files hit the disk
1,420,556,101,000
Just starting to look into mcelog for the first time (I've enabled it and seen syslog output before, but this is the first time I'm trying to do something non-default). I'm looking for information on how to write triggers for it. Specifically, I'm looking for what kinds of events mcelog can react to, how it decides which scripts to execute, and so on. The best I can make out from the example trigger is that it sets a bunch of environment variables before invoking the script. So does it just try to execute everything in the trigger directory (which is /etc/mcelog on RHEL) and let the script decide what it wants to act on? I've seen other trigger scripts with names that look like MCE events; is that a convention, or does that have a special function? I created a trigger called /etc/mcelog/joel.sh which just sends a basic email to my gmail account. A few days ago apparently the trigger went off, because I got an email from the script without manually running it. I didn't think to pipe env output to the mailx command in joel.sh, so I don't know what hardware event triggered the script execution or why mcelog picked joel.sh as the script to execute for it. Basically, I'm looking for an answer that will give me a basic orientation with mcelog and its triggering system, and how I can use it to monitor my hardware health. I'm pretty sure I can figure out the more advanced stuff once I get my bearings.
Looking at the sample mcelog.conf config file, it appears to contain most if not all of the types of triggers mcelog can deal with.

DIMMs

[dimm]
#
# execute these triggers when the rate of corrected or uncorrected
# errors per DIMM exceeds the threshold
# Note when the hardware does not report DIMMs this might also
# be per channel
# The default of 10/24h is reasonable for server quality
# DDR3 DIMMs as of 2009/10
#uc-error-trigger = dimm-error-trigger
uc-error-threshold = 1 / 24h
#ce-error-trigger = dimm-error-trigger
ce-error-threshold = 10 / 24h

Sockets

[socket]
# Threshold and trigger for uncorrected memory errors on a socket
# mem-uc-error-trigger = socket-memory-error-trigger
mem-uc-error-threshold = 100 / 24h
# Threshold and trigger for corrected memory errors on a socket
mem-ce-error-trigger = socket-memory-error-trigger
mem-ce-error-threshold = 100 / 24h

Cache

[cache]
# Processing of cache error thresholds reported by Intel CPUs
cache-threshold-trigger = cache-error-trigger

Page

[page]
# Memory error accounting per 4K memory page
# Threshold for the corrected memory errors trigger script
memory-ce-threshold = 10 / 24h
# Trigger script for corrected errors
# memory-ce-trigger = page-error-trigger

Triggers

Triggers can be controlled in this section.

[trigger]
# Maximum number of running triggers
children-max = 2
# execute triggers in this directory
directory = /etc/mcelog

Sample triggers

There are some sample triggers on the mcelog GitHub page.
Sample trigger script, dimm-error-trigger:

#!/bin/sh
# This shell script can be executed by mcelog in daemon mode when a DIMM
# exceeds a pre-configured error threshold
#
# environment:
# THRESHOLD       human readable threshold status
# MESSAGE         human readable consolidated error message
# TOTALCOUNT      total count of errors for current DIMM of CE/UC depending on
#                 what triggered the event
# LOCATION        consolidated location as a single string
# DMI_LOCATION    DIMM location from DMI/SMBIOS if available
# DMI_NAME        DIMM identifier from DMI/SMBIOS if available
# DIMM            DIMM number reported by hardware
# CHANNEL         channel number reported by hardware
# SOCKETID        socket ID of CPU that includes the memory controller with the DIMM
# CECOUNT         total corrected error count for DIMM
# UCCOUNT         total uncorrected error count for DIMM
# LASTEVENT       time stamp of event that triggered threshold (in time_t format, seconds)
# THRESHOLD_COUNT total number of events in current threshold time period of specific type
#
# note: will run as mcelog configured user
# this can be changed in mcelog.conf

logger -s -p daemon.err -t mcelog "$MESSAGE"
logger -s -p daemon.err -t mcelog "Location: $LOCATION"

[ -x ./dimm-error-trigger.local ] && . ./dimm-error-trigger.local

exit 0

References
- mcelog / mcelog.conf
- mcelog - memory error handling in user space at Linux Kongress 2010 - paper
- mcelog - memory error handling in user space at Linux Kongress 2010 - slides
- Andi's recent papers and presentations
- andikleen/mcelog - github page
Writing triggers for mcelog
1,516,637,501,000
I am using Linux as a desktop, and keep getting seemingly random I/O spikes. The machine becomes unusably slow. At first I thought it was just that I don't have enough memory, but looking at the output of free and top there was nothing out of the ordinary. Same thing for CPU load. If I don't kill the offending process right away, the machine grinds quickly (in about 10 min) to a near complete halt and I have to hard-reset. A coworker told me he had similar issues and that he noticed I/O spikes. We have the same machines, by the way (they were provided by the company). I have also noticed that those spikes often occur when opening a new tab in Chrome, but I think it has happened on other occasions as well, like opening a tab in Firefox, or just randomly out of the blue. I decided to run dstat and have a look at the output, but changed my mind and used KSysGuard, simply because the change is easier to see. A screenshot of the monitor: As you can see, there is a spike in disk I/O which coincides with a spike in system load. Strangely, the memory usage goes down at that moment. Could it be related to swap? There are two spikes. The first one is the one I felt immediately, and it coincided with clicking on a link in Chrome (not even opening a tab, but triggering JavaScript code). I immediately clicked the "close" button on that tab and the machine became responsive again. The second one had no noticeable effect. It is possible that the memory usage went down because I closed the tab, as it happens slightly after the I/O spike. The whole spike (the first one) lasted for about 10 s. Any idea what to look out for?
I have found the culprit. It was indeed due to a faulty swap setup. My fstab listed /dev/mapper/cryptswap as swap space. This was nonexistent. My guess is that as soon as the system needed to swap, it saw swap space defined, but that device did not exist anywhere. For testing I simply created empty files as swap space. Since then, the machine seems to run a lot more stably. I have had no spikes/crashes ever since, and I do see the new swap files being used.
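For reference, a hedged sketch of setting up such a file-backed swap space properly (sizes and paths are illustrative; the file must be initialised with mkswap before swapon will accept it, and the fstab line replaces the dead /dev/mapper/cryptswap entry):

```shell
# Create and initialise a swap file. For a real /swapfile run this as root;
# SWAPFILE is parameterised here so the first three steps can be tried anywhere.
SWAPFILE=${SWAPFILE:-$(mktemp)}
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64   # use e.g. count=2048 for 2 GiB
chmod 600 "$SWAPFILE"                           # swap must not be world-readable
mkswap "$SWAPFILE"                              # write the swap signature

# Then, as root, enable it and make it permanent:
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
```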
How to track down the source of I/O spikes?
1,516,637,501,000
I've used yum install nagios on an Amazon Linux instance, and it created a nagios user with shell /sbin/nologin and homedir /var/spool/nagios. This is the behaviour on EC2. I want to use check_by_ssh running locally as user nagios to execute a command on the remote host as some user, without typing in a password. So using ssh-keygen seems logical, but how do I generate a public key for user nagios if that user doesn't have a shell? Is the answer to change the default shell (e.g. to bash) and perhaps the homedir of local user nagios so I can generate the key, or is there another way? Question: Is it bad practice to allow the nagios user to login? Question: Is it bad practice to change the homedir, e.g. to /home/nagios? Question: What is the recommended way of doing this?
Run su -s /bin/bash -c 'ssh-keygen -N ""' nagios to generate the key pair (the -s option overrides the account's /sbin/nologin shell for this one command), or alternatively generate the key pair as another user then copy it into place in ~nagios/.ssh. Then run su -s /bin/bash -c 'ssh-copy-id someuser@remote-host' nagios to install the public key on the remote machine. You can change the nagios user's home directory if you like, but I don't see the point. There's no need to permanently change the nagios user's shell for what you require here.
How to enable nagios user created by yum to use check_by_ssh
1,516,637,501,000
I am trying to use inotifywait to monitor a folder:

inotifywait -m -r /home/oshiro/Desktop/work_folder

The command works, and if I create files in that folder, all seems to work correctly. While the folder is being monitored, if I delete it, I get the following output:

/home/oshiro/Desktop/work_folder/ MOVE_SELF
/home/oshiro/Desktop/work_folder/ OPEN,ISDIR
/home/oshiro/Desktop/work_folder/ CLOSE_NOWRITE,CLOSE,ISDIR
/home/oshiro/Desktop/work_folder/ MOVE_SELF
/home/oshiro/Desktop/work_folder/ ATTRIB,ISDIR
/home/oshiro/Desktop/work_folder/ OPEN,ISDIR
/home/oshiro/Desktop/work_folder/ DELETE Untitled Document
/home/oshiro/Desktop/work_folder/ DELETE Untitled Document 2
/home/oshiro/Desktop/work_folder/ CLOSE_NOWRITE,CLOSE,ISDIR
/home/oshiro/Desktop/work_folder/ DELETE_SELF

If I then re-create that folder again, while the monitoring is still taking place, inotifywait doesn't seem to continue monitoring it, unless I run the command again. How do I get around this issue? I basically want to monitor a USB stick which will be plugged in and removed many times during a day. When it's unplugged and plugged back in, I think inotifywait will stop monitoring it, the same way the folder above was deleted and re-created and inotifywait wasn't able to continue monitoring it, unless I run the above command again, i.e.

inotifywait -m -r /home/oshiro/Desktop/work_folder

Should I be using something more appropriate for such tasks and not use inotifywait? cron is not suitable for my needs, as I am not after time-based actions, I am after event-based actions.
First off, if you delete a folder that inotifywait is watching, then, yes, it will stop watching it. The obvious way around that is simply to monitor the directory one level up (you could even create a directory especially to monitor, and put your work_folder in there). However, this won't work if you have a folder underneath which is unmounted/remounted rather than deleted/re-created; the two are very different processes.

I have no idea if using something other than inotifywait is the best thing here, since I have no idea what you are trying to achieve by monitoring the directory. However, perhaps the best thing to do is to set up a udev rule to call a script which mounts the USB stick and starts the inotifywait process when it is plugged in, and another to stop it again when it is unplugged. You would put the udev rules in a .rules file in the /etc/udev/rules.d directory. The rules would look something like:

ENV{ID_SERIAL}=="dev_id_serial", ACTION=="add", \
    RUN+="/path/to/script add '%E{DEVNAME}'"
ENV{ID_SERIAL}=="dev_id_serial", ACTION=="remove", \
    RUN+="/path/to/script remove '%E{DEVNAME}'"

where ID_SERIAL for the device can be determined by:

udevadm info --name=/path/to/device --query=property

with the script something like:

#!/bin/sh

pid_file=/var/run/script_name.pid
out_file=/var/log/script_name.log

# try to kill the previous process even on add, in case something
# went wrong with the last remove
if [ "$1" = add ] || [ "$1" = remove ]; then
    pid=$(cat "$pid_file" 2>/dev/null)
    [ -n "$pid" ] && [ "$(ps -p "$pid" -o comm=)" = inotifywait ] && kill "$pid"
fi

if [ "$1" = add ]; then
    /bin/mount "$2" /home/oshiro/Desktop/work_folder
    /usr/bin/inotifywait -m -r /home/oshiro/Desktop/work_folder \
        </dev/null >"$out_file" 2>&1 &
    echo $! >"$pid_file"
fi

Also, make sure that the mounting via the udev rule does not conflict with any other process which may try to automatically mount the disk when it is plugged in.
inotifywait not working when folder is deleted and re-created
1,516,637,501,000
I have a process running on a Raspberry Pi. After SSHing in, the process is started like this:

nohup .../blah/blah &

IIUC this allows me to log out of the Pi and the process keeps running. However it dies sometimes, and I have to log in and manually restart it. Is there a way to monitor it and have it restart itself?
Run it in an infinite loop:

#!/bin/sh
while true; do
    .../blah/blah
done

This would be a script that you start with nohup in the background. When blah dies, it is immediately restarted, until the script is killed. A variation that ends the loop if a file called stopme appears in the working directory of the script (a check is only made before (re-)starting blah):

#!/bin/sh
while true; do
    [ -e stopme ] && break
    .../blah/blah
done
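One refinement worth considering, sketched below: if blah ever starts failing instantly (missing file, bad config), the bare loop will respawn it as fast as the machine allows. Pausing briefly between restarts avoids that; the function name and the one-second delay are just illustrative choices.

```shell
# Restart the given command whenever it exits, with a short pause between runs.
supervise() {
  while true; do
    "$@"
    sleep 1
  done
}

# Usage, matching the original nohup invocation:
#   nohup supervise .../blah/blah >/dev/null 2>&1 &
```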
Automatically restarting a process when it dies?
1,516,637,501,000
I am running a "kiosk" computer - one for general use for anybody in the room - and I want to know if anyone is actually using it. Is there a log that tells me when firefox, chromium or other programs are run? I am not asking for logs of what they are doing in the programs, just whether or not they are being used. Any other ideas are welcome.
You should probably look into process accounting. See acct(2), acct(5) and sa(8); once accounting is enabled, lastcomm(1) will list which commands were run, by whom, and when.
Log when program is run
1,516,637,501,000
I would like to monitor total CPU utilization percentage as a counter. The reason I would like it as a counter is that data won't be lost between samples (and I can have the graphing side calculate the rate). My initial approach was to use /proc/uptime with the formula (uptime-(idle_time/num_core))*100. This generally seems to be accurate across a large number of servers (something like 98% of the time), but sometimes I seem to get erroneous results. For example, the following seems to suggest that there was negative CPU usage, which doesn't really make sense:

[root@ny-lb05 ~]# echo -e "scale=10\n ($(cut -f1 -d' ' /proc/uptime)-($(cut -f2 -d' ' /proc/uptime)/16))*100" | bc
5646895.3750000000
[root@ny-lb05 ~]# echo -e "scale=10\n ($(cut -f1 -d' ' /proc/uptime)-($(cut -f2 -d' ' /proc/uptime)/16))*100" | bc
5646891.5625000000

On this server I'm running:

Linux ny-lb05.ds.stackexchange.com 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Does someone see an error in this method of calculation? Is there a better way to get CPU utilization as a counter?

Update: What I'm after is the total utilization time as a monotonically increasing counter. I would expect that total utilization should never decrease, but it does, as seen in the following:

[root@ny-lb05 ~]# read uptime idle </proc/uptime; echo -e "scale=1000\n ($uptime*16-($idle))" | bc
903874.23
[root@ny-lb05 ~]# read uptime idle </proc/uptime; echo -e "scale=1000\n ($uptime*16-($idle))" | bc
903870.29

Also, according to /proc/cpuinfo, cores=siblings, so I believe HT is not enabled.

Update 2: TL;DR: /proc/uptime is bugged; use /proc/stat instead.
(uptime-(idle_time/num_core)) may give an idea of how long the system has been busy, in seconds. Multiplying that by 100 makes it centiseconds -- is that your intention?

IMO it would make more sense to consider how many processor seconds in total were available, and subtract the idle time from that:

uptime * num_core - idle_time = total active processor seconds

A utilization metric might be:

active seconds / (uptime * num_core)

E.g., if the system has been up for 10 seconds on 4 cores with 5 seconds of idle_time:

(10 * 4 - 5) / (10 * 4) = 0.875

87.5% utilization. Or:

(10 - 5 / 4) / 10 = 0.875

Same thing, saves an operation.

Is there a better way to get CPU utilization as a counter?

I've done this in a system diagnostics C++ library by parsing the first line of /proc/stat, which is a combined total for all cores. The first three fields are user time, low priority (aka nice) time, and system time. The total of these is the amount of active time (note the unit here is not seconds; see /proc/stat under man proc). If you poll this over 5 seconds, assuming a USER_HZ of 100, where total_a is the first sample (user + nice + sys) and total_b is the second sample:

(total_b - total_a) / 5 / 100 / num_cores = usage ratio

If you multiply that by 100, you have a percentage indicating an average over the 5 second interval. Here's the logic:

- total_b - total_a = active time between samples
- divided by the duration of the sample, 5 seconds
- divided by the units per second of the measurement (USER_HZ)
- divided by the number of cores

USER_HZ is almost certainly 100. To check:

#include <stdio.h>
#include <unistd.h>

int main (void) {
    printf ("%ld\n", sysconf(_SC_CLK_TCK));
    return 0;
}

Compile with gcc whatever.c and run ./a.out. It will be hard to get an accurate duration for this with shell tools, so you could either keep an increasing measure of the total active time (I think that is your intention) or use a fairly long interval, e.g. 30+ seconds.
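The /proc/stat counter described above can be sampled from the shell as well. A minimal sketch (field layout per proc(5); the value is in USER_HZ ticks, usually hundredths of a second, and only ever increases, which is exactly what a counter metric needs):

```shell
# Emit total active CPU time (user + nice + system) as a monotonic counter.
active_ticks() {
  awk 'NR == 1 { print $2 + $3 + $4 }' /proc/stat
}

active_ticks
```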
How to get CPU Percentage as a Counter?
1,516,637,501,000
I'm about to set up Tenshi in order to get important log excerpts. Everything works fine except when I add some apache2 logs:

# /etc/tenshi/tenshi.conf
...
set logfile /var/log/apache2/error.log

I also created a Tenshi include for apache2, but it doesn't work.

# /etc/tenshi/includes-active/apache2
group ^apache2:
trash ^apache2: \[client \d+\.\d+\.\d+\.\d+\] client denied by server configuration:
trash ^apache2: \[client \d+\.\d+\.\d+\.\d+\] Directory index forbidden by Options directive:
group_end

Question: What is wrong with the config, and how do I create an include for non-standard logs?
I finally managed this. In tenshi.conf use:

set logprefix ^(\[[^\]]+\]\s)

This prevents the parsed logfiles from reaching the noprefix queue. My apache2 error logfile looks like this:

[Thu Jan 23 23:36:40 2014] [error] [client 255.255.255.255] File does not exist: /var/www/whatever.html

So in includes-active/apache2 you need to have the same regex, i.e.:

group ^\[[^\]]+\]\s
trash \[error\].*? File does not exist:
group_end

Thank you guys, close but no cigar.
tenshi and logfiles
1,516,637,501,000
So I have a script that emails me if a login is from anything besides an IP address that starts with "10.1.":

#!/usr/bin/python
import smtplib, os

server = "10.10.10.10"
From = "[email protected]"
to = ["[email protected]"]  # must be a list
subject = "SSH Login from outside network"

ip = os.environ['SSH_CONNECTION'].split()[0]
user = os.environ['USER']

if '10.1.' in ip:
    print "---SSH IP Check---"
    print 'Inside address, no alert will be sent.'
    exit(0)

text = user + " just logged in from " + ip

# Prepare actual message
message = """\
From: %s
To: %s
Subject: %s

%s
""" % (From, ", ".join(to), subject, text)

# Send the mail
server = smtplib.SMTP(server)
server.sendmail(From, to, message)
server.quit()

I've added this to /root/.bashrc, and when I log in as root to this remote server, it runs, checks the $SSH_CONNECTION variable, and emails if it doesn't start with 10.1. But what if someone logs in as user, or another username? I originally had a file /etc/ssh/sshrc, which I think is a bash script (no #!/bin/sh on the first line though), and it worked OK, but I wanted a check of the IP, so that's why I did this in Python; bash didn't like the double [[ brackets, and I was just piping the output to sendmail. So the question is: how can I make this script run on any SSH login? Should I keep trying with the sshrc file? I've tried replacing the sshrc file with this Python script, but I get this when I log in:

/etc/ssh/sshrc: 3: /etc/ssh/sshrc: import: not found
/etc/ssh/sshrc: 5: /etc/ssh/sshrc: server: not found
/etc/ssh/sshrc: 6: /etc/ssh/sshrc: From: not found
/etc/ssh/sshrc: 7: /etc/ssh/sshrc: to: not found
/etc/ssh/sshrc: 8: /etc/ssh/sshrc: subject: not found
/etc/ssh/sshrc: 10: /etc/ssh/sshrc: Syntax error: "(" unexpected
You could have the script triggered when a login session is opened. pam-script is a PAM module that allows you to execute scripts within the PAM stack during authorization, password changes, and on session opening or closing. In Debian-based Linux distributions it is provided by the libpam-script package. In Fedora the package is simply called pam-script.

The following scripts can be triggered by pam-script:

- pam_script_auth - executed during authentication
- pam_script_acct - invoked during account management
- pam_script_passwd - invoked when changing passwords
- pam_script_ses_open - invoked when a session is opened
- pam_script_ses_close - invoked when a session is closed

To run a script on session open, add this to /etc/pam.d/common-session:

# Attempt to run pam_script_ses_open and pam_script_ses_close.
# Report success even if script is not found.
session optional pam_script.so onerr=success

In Debian, by default, pam-script will execute /usr/share/libpam-script/pam_script_ses_open. The location of the scripts can be configured with the dir=/path/to/scripts/ option.

With pam-script it is also convenient to access the IP address of the remote host in a shell script. Each script will be passed the following environment variables (all will exist, but some may be null if not applicable):

- PAM_SERVICE - the application that's invoking the PAM stack
- PAM_TYPE - the module-type (e.g. auth, account, session, password)
- PAM_USER - the user being authenticated
- PAM_RUSER - the remote user, the user invoking the application
- PAM_RHOST - the remote host
- PAM_TTY - the controlling tty
- PAM_AUTHTOK - the password in readable text
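Putting the two together, here is a sketch of what /usr/share/libpam-script/pam_script_ses_open could look like for the alerting use case in the question. The 10.1. prefix and recipient address mirror the original script, sendmail is assumed to be configured on the host, and the helper function name is just illustrative; a real deployment would end the script with exit 0 so PAM never blocks the login:

```shell
#!/bin/sh
# pam_script_ses_open sketch: mail an alert for logins from outside 10.1.0.0/16.

outside_net() {  # true when the address is set and not inside the network
  case "$1" in
    10.1.*|"") return 1 ;;   # inside the network, or a local/non-remote login
    *)         return 0 ;;
  esac
}

if outside_net "$PAM_RHOST"; then
  printf 'Subject: SSH Login from outside network\n\n%s just logged in from %s\n' \
    "$PAM_USER" "$PAM_RHOST" | sendmail [email protected]
fi
```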
SSH alerts for outside IP addresses
1,516,637,501,000
In bash, .bashrc (and various other scripts) can load into memory at shell startup. These can be 10 lines long, but can be hundreds (if not thousands) of lines long. Each export will consume a tiny amount of memory, and each function and each alias also a little resources to be held in memory. Another consideration is that we can't just look at the size of the .bashrc and other scripts as they could have lots of comments which consume no memory. I would like to remove all startup scripts, start the system, wait a few minutes for things to settle down and then take some kind of baseline, then put the startup scripts back in place, restart the system and perform the same exercise to try and get some kind of resource / performance diff. Can you suggest what tools might help to determine this? I have a relatively large set of startup scripts, about 15k with many functions and aliases defined, so I'm really curious what impact (if any, as a modern system with 16 GB of memory and a fast modern Core i5, the effect could be negligible) this has upon the system in terms of consumed resources? Even if the impact of my startup scripts is low, I would still love to be able to take a baseline and then a later 'load test' to get some assessment of how systems handle running a given set of applications.
The impact of your startup scripts and the resulting setup will mostly affect interactive shells; to determine the resulting resource consumption, you don’t need to go to huge lengths. Open a terminal window, so that your default shell starts with its default setup, then start a shell without loading the startup scripts, and from that shell, run ps -F:

$ bash --norc
$ ps -F
UID          PID    PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
steve    3922819 3921628  0  2307  4812   7 20:49 pts/14   00:00:00 bash
steve    3922883 3922819  0  2276  4688   5 20:49 pts/14   00:00:00 bash --norc
steve    3922884 3922883  0  2892  4244   0 20:49 pts/14   00:00:00 ps -F

Looking at the RSS column shows that my bash setup (which is rather minimal) uses 124KiB more than a no-frills bash. My Zsh setup is more complex:

$ zsh -f
$ ps -F
UID          PID    PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
steve    3921244   18008  0  3341  8296   2 20:43 pts/14   00:00:00 zsh
steve    3921628 3921244  0  2829  5856   5 20:44 pts/14   00:00:00 zsh -f
steve    3923250 3922883  0  2892  4132   7 20:51 pts/14   00:00:00 ps -F

The difference there is larger, 2440KiB.

Non-interactive shells don’t load the same startup scripts, and they don’t survive long anyway; if you run ps -FC sh, ps -FC bash etc. you should see that there aren’t many (if any at all).

What you load in your environment can have a bigger impact; to get some idea of that, look at the real size of /proc/.../environ:

$ sudo wc -c /proc/*/environ | tail -n 1
758799 total

That’s 741KiB in total, for nearly a thousand running processes.
In bash, is there a way to see how much memory .bashrc and any startup scripts are consuming?
1,516,637,501,000
I need to display, in a GUI for an operator, used memory vs available memory on a Linux server. So what would be the logically correct value to display as usage?

[root@host ~]# free
              total        used        free      shared  buff/cache   available
Mem:      131753676   110324960     1433296     4182648    19995420    16240640
Swap:       2097148      652076     1445072

used or (total-available)? The difference is: 110324960 vs 115513036, or 5188076 kB ~= ~5 GB.

So what are these 5 GB: are they effectively used, or available, or neither available nor used? What is more correct to display for used in memory usage %?

This is for a CentOS 7.3 PC, running 2 Java services. But there is a totally different picture on a PostgreSQL server:

[root@postgres_server1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:      131753684     7364056    77736740    15598120    46652888   107942020
Swap:       2097148           0     2097148

where the difference between used and (total-available) is much larger: 16447608 kB ~= 15.7 GB.
It depends what your usage is supposed to reflect. In free’s output:

“used” is calculated as “total – free – buffers – cache”, so it reflects the amount of memory currently storing useful data, excluding cache;

“available” is supposed to be the amount of physical memory which can be immediately made available for other uses.

“Total – available” would mean something along the lines of “physical memory currently in use, and non-replaceable”. The difference between that and “used”, the 5GB and 15GB you mention, is the amount of physical memory currently storing data which is not yet available elsewhere, i.e. dirty buffers (so your PostgreSQL has more data waiting to hit the disk).

“Available” reflects the maximum amount of physical memory a program can request without being forced to swap (although nothing guarantees that, if a program were to use that much memory, it wouldn’t swap anyway, given the rest of the system’s behaviour at the time).

So your two values both reflect used memory, with slightly different definitions. Which one you use is up to you (or your requirements). “Available” is more accurate than “used”, so it’s probably a more useful value if you only want to keep one.

Another way to think about it is to consider what questions the values answer:

“used” tells you whether the 128GiB of RAM are useful for your current workload;

“available” tells you how much capacity you still have left;

“total – available” tells you how much physical RAM you really need (i.e. how much smaller your next server or VM could be, if you’re willing to lose the performance gains from cache).

There’s no perfect compromise, which is why free shows both values.

Memory immediately available in linux memory management system and How can I get the amount of available memory portably across distributions? provide more detail on the exact meaning of “available”.
It isn’t equivalent to “buff/cache” because the latter includes memory which isn’t reclaimable (because it hasn’t been written to disk yet), and there are other memory pools which are reclaimable but aren’t counted in “buff/cache”.
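To make the two definitions concrete, here is a small awk sketch that computes both percentages from the free output shown in the question (the here-doc replays the question's numbers; on a real system, pipe a live `free -k` through the same awk instead):

```shell
# Compute "used/total" and "(total-available)/total" from free(1) columns
# of the Mem: line ($2=total, $3=used, $7=available), values in KiB.
awk '/^Mem:/ {
    total = $2; used = $3; avail = $7
    printf "used/total          : %.1f%%\n", 100 * used / total
    printf "(total-avail)/total : %.1f%%\n", 100 * (total - avail) / total
}' <<'EOF' | tee /tmp/mem_pct
              total        used        free      shared  buff/cache   available
Mem:      131753676   110324960     1433296     4182648    19995420    16240640
Swap:       2097148      652076     1445072
EOF
```

For the first server this prints 83.7% and 87.7% respectively; the gap between the two numbers is the ~5 GB of dirty/unreclaimable data discussed above.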
What is "effective used" memory on Linux? "used" or ("total"-"available")?
1,516,637,501,000
I have set up a basic udev rule to detect when I connect or disconnect a mDP cable. The file is /etc/udev/rules.d/95-monitor-hotplug.rules:

KERNEL=="card0", SUBSYSTEM=="drm", ENV{DISPLAY}=":0", ENV{XAUTHORITY}="/var/run/gdm/auth-for-vazquez-OlbTje/database", RUN+="/usr/bin/arandr"

It should just launch arandr when a mDP cable is connected or disconnected, but nothing happens. I have also reloaded the rules with:

udevadm control --reload-rules

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This is how the problem was solved, with the links provided by @Gilles. I added the following code to my .profile, then pointed ENV{XAUTHORITY}="/home/user/.Xauthority" and also added ACTION=="change" to the rules file. After that everything was working as it should. Thanks Gilles.

case $DISPLAY:$XAUTHORITY in
  :*:?*)
    # DISPLAY is set and points to a local display, and XAUTHORITY is
    # set, so merge the contents of `$XAUTHORITY` into ~/.Xauthority.
    XAUTHORITY=~/.Xauthority xauth merge "$XAUTHORITY";;
esac
An udev rule applies to the add action by default. The udev rule is on a graphics card, not on a monitor; so it runs when a graphics card is added to the system, which in practice means at boot time. Plugging in a monitor results in a change action, not an add action. You can observe this by running udevadm monitor and plugging a monitor in. So the udev rule should specify a change action.

KERNEL=="card0", SUBSYSTEM=="drm", ACTION=="change", \
  ENV{DISPLAY}=":0", ENV{XAUTHORITY}="/var/run/gdm/auth-for-vazquez-OlbTje/database", RUN+="/usr/bin/arandr"

Examples found on the web corroborate my understanding, e.g. codingtony, whose monitor-hotplug.sh script may be of interest to you.

The file name under /var/run changes each time you reboot, so you should determine it automatically inside your script. This answer should help.
udev monitor hotplug rule not running
1,516,637,501,000
I want to run sudo airodump-ng -w myfile every ten minutes or so, for m minutes. It does not matter if the running time shifts (that is, if it runs m minutes later each time). Notice that this is a monitoring program, which won't just output and exit. I suppose the solution for this question is also valid for similar monitoring programs.

I was thinking about putting something like:

*/10 * * * * airodump-ng mon0 -w myfile

into crontab. There is no need to change the myfile name; airodump can correctly check whether myfile exists and create a myfile-02 and so on.

However, how should I stop it running after s secs? pkill airodump is the only thing I can think of. Is this the best way to run it for 1 minute twice an hour?

20,40 * * * * airodump-ng mon0 -w myfile
21,41 * * * * pkill airodump-ng
Don't use pkill. Instead, run your app under the timeout command from the coreutils package: */10 * * * * timeout 5m airodump-ng mon0 -w myfile (Where here 5m means to run for 5 minutes.) Use --signal if you need something other than TERM.
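As a quick sanity check of the mechanism: timeout exits with status 124 when it had to terminate the command, which a wrapper script could use to distinguish "ran to completion" from "was cut off". A sketch, with sleep standing in for airodump-ng:

```shell
# timeout kills the command after 1 second; its exit status of 124
# signals that the time limit (not the command itself) ended the run.
status=0
timeout 1 sleep 10 || status=$?
echo "exit status: $status" | tee /tmp/timeout_status
```

In the crontab above, `timeout 5m airodump-ng …` behaves the same way: airodump-ng receives SIGTERM after five minutes and cron's next invocation starts fresh.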
Run and stop a monitoring command as sudo for s seconds every m minutes
1,516,637,501,000
How can I know which application is using the most network bandwidth? I saw some graph from KDE's network monitor, but don't know which process did that.
As far as I know, iftop can not show which processes are using the bandwidth. If you need this information, you should check out nethogs.
How to identify the program that uses most bandwidth?
1,516,637,501,000
I just installed atop, waited half an hour, and looked at the logs with atop -r /var/log/atop/atop_20180216. Why does my systemd --user instance show hundreds of megs of disk usage, including tens of megs of writes, during one ten minute interval? What can systemd possibly be doing?

  PID   TID   RDDSK   WRDSK  WCANCL  DSK  CMD        1/285
 2831     -  333.8M  25556K   1196K  87%  systemd
[RDDSK / WRDSK] When the kernel maintains standard io statistics (>= 2.6.20): the [read / write] data transfer issued physically on disk (so writing to the disk cache is not accounted for). This counter is maintained for the application process that writes its data to the cache (assuming that this data is physically transferred to disk later on). Notice that disk I/O needed for swapping is not taken into account.

Unfortunately, the kernel aggregates the data transfer of a process into the data transfer of its parent process when it terminates, so you might see transfers for (parent) processes like cron, bash or init that were not really issued by them.

https://www.systutorials.com/docs/linux/man/1-atop/

(I agree this is unfortunate, especially given atop's advertised feature of showing resources used even by processes which exited at some point during the monitoring interval, implemented using process accounting aka psacct.)
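The per-process counters atop reads come from /proc/&lt;pid&gt;/io, so you can inspect them directly; reading your own process sidesteps the permission errors you get on other users' processes. A quick look at the current shell's counters:

```shell
# /proc/<pid>/io exposes the same accounting atop aggregates:
# rchar/wchar (all read/write syscall bytes) and read_bytes/write_bytes
# (bytes that actually hit the block layer). $$ is this shell's PID.
cat /proc/$$/io | tee /tmp/io_counters
```

Comparing wchar against write_bytes here is exactly the cache-vs-disk distinction the man page excerpt describes.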
systemd shows as reading 300M in atop?
1,516,637,501,000
When upgrading from Jessie to Stretch, at the end of dist-upgrade, it ends with an error:

Errors were encountered while processing:
 nagios-nrpe-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

I have tried running apt upgrade, install, and reinstall without correcting this. What should I do?
To finish installing nagios-nrpe-server, I ended up inspecting the post-install scripts. In nagios-nrpe-server.postinst:

#!/bin/sh
set -e
# Automatically added by dh_installinit
if [ -x "/etc/init.d/nagios-nrpe-server" ]; then
    update-rc.d nagios-nrpe-server defaults >/dev/null
    invoke-rc.d nagios-nrpe-server start || exit $?
fi
# End automatically added section

As I have nagios-nrpe being invoked by (x)inetd and not running as a daemon, it failed to start up, hence the apt dist-upgrade error. For the moment I commented out the start line, and am considering whether to file a bug and/or change from xinetd to running it as a daemon. I use xinetd because I also use it to invoke the backup daemon.
Strange nagios-nrpe-server error upgrading from Jessie to Stretch
1,516,637,501,000
Is there a way to monitor interface up/downs, especially to check if a route is setup or removed, with udev?
I am really not sure (and highly doubt) that udev provides an interface for this, but you can easily monitor it without udev. You just have to use a netlink socket with NETLINK_ROUTE to get notifications about changed addresses, changed routing tables, etc.
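If writing netlink code yourself is more than you need, note that iproute2's `ip monitor` opens a NETLINK_ROUTE socket for you and prints the events as text. A sketch that logs whatever link/route/address changes happen within a couple of seconds (no root needed just to listen; the log may simply be empty if nothing changes during the window):

```shell
# Listen on rtnetlink for 2 seconds and record any link, route or
# address events; `|| true` because timeout's 124 exit is expected.
timeout 2 ip monitor link route address > /tmp/net-events.log || true
wc -l < /tmp/net-events.log
```

Left running (e.g. under a supervisor, piping into a script that greps for the route you care about), this gives you the same notifications a hand-written NETLINK_ROUTE listener would.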
Monitoring interface changes with udev, especially if a route is set
1,516,637,501,000
The dedicated server I'm looking after started to crash occasionally, I suspect because of overload, so I need some performance/resource based monitoring software, preferably with a web interface, something like OpenNMS, which I have tried but did not like. The OS is Linux CentOS 5.3.

P.S. There are over 50 websites running on the server; if the monitoring software could show which one is consuming most resources, that would be most helpful.
There are a lot of answers. I personally use Zenoss, but there's a big list here: http://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems
Linux server monitoring software
1,516,637,501,000
Can I make top show info about only the web and db servers? Can it be done by piping the PIDs to top? There can be many processes for each of them. Or is there any better method for this?
In the case of web and database servers, they usually run as their own user. You can use the -u flag to show only processes running as a certain user, like this: top -u mysql. You may also use the -p flag with a comma-separated list of PIDs you want to follow, like this: top -p 123,234,345.

You might also find htop more useful in this situation. Besides the options above, it has much more flexible display options, including following selected items so you don't lose them in the list. Once you set up your columns and sort order, turn on tree mode and then follow the parent of the server you are interested in. All its children should show up just under the cursor.
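If the server runs many processes under one name, pgrep -d, can build top's -p list for you by name. A sketch (the name "nginx" is just an example; PID 1 is used as a fallback so the command still runs on a machine where the name matches nothing):

```shell
# Follow every process named "nginx" at once; -b/-n1 make top print a
# single batch-mode snapshot instead of an interactive display.
pids=$(pgrep -d, -x nginx || echo 1)
top -bn1 -p "$pids" | head -n 12 | tee /tmp/top_sample
```

Dropping -b/-n1 gives you the usual live top display restricted to just those PIDs.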
Make top shows only server process
1,516,637,501,000
My business purpose is to monitor the remote file system on Linux, and if there are any new files, SFTP them to another machine and delete them. However, the limitation is that I cannot install any libraries on the remote machine. So, I am considering implementing interval SSH command polling to the remote machine. My questions: Is interval polling implementable? Or do you have any better ideas? What kind of SSH command should I use to monitor the remote machine?
The simple and lazy way: a crontab entry for your regular user on the local system (every 15 min):

*/15 * * * * /bin/bash /path/to/script.sh

The code:

#!/bin/bash

source ~/.bashrc
ssh-add /home/me/.ssh/id_rsa

ssh user@remote-server printf '%s\n' '/path/to/new_files/*' > ~/.$$_remote-files

if ! cmp ~/.$$_remote-files ~/.remote-files &>/dev/null; then
    echo 'new file(s) or dir(s) detected !'
    # ssh user@remote-server rm -rf '/path/to/new_files/*'
    mv ~/.$$_remote-files ~/.remote-files
fi

To implement automatic ssh login, you have to generate an ssh key pair without a passphrase:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/me/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/me/.ssh/id_rsa
Your public key has been saved in /home/me/.ssh/id_rsa.pub
The key fingerprint is:
123456789ABCDEF
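If you later need to know which files are new, rather than just that something changed, comparing the two listings isolates the additions. A self-contained sketch with stand-in listings (comm requires sorted input, which a shell glob listing already is; the file names here are made up for illustration):

```shell
# comm -13 prints lines unique to the second file, i.e. entries present
# in the new listing but not the old one: the newly created files.
printf '%s\n' fileA fileB       > /tmp/old_list
printf '%s\n' fileA fileB fileC > /tmp/new_list
comm -13 /tmp/old_list /tmp/new_list | tee /tmp/new_files
```

In the script above you would feed the saved ~/.remote-files and the fresh listing to comm, then loop over the output to sftp and delete each new file individually.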
How do you monitor remote file system on Linux?
1,516,637,501,000
I have an Ubuntu instance on AWS and I want an email when 80% of disk space is consumed. I have checked CloudWatch but there is no such option to monitor disk space. There is only one option, which is a custom metric (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html), but I am not sure that it will give an email alert. Please guide me on this.
Update:

1. Create an instance and attach an IAM role with:

AmazonEC2RoleforSSM
CloudWatchAgentAdminPolicy
CloudWatchAgentServerPolicy
AmazonSSMManagedInstanceCore

2. Install the CloudWatch agent:

In Run Command, choose AWS-ConfigureAWSPackage to install it on the desired target.

3. Run the CloudWatch agent wizard:

Start the CloudWatch agent configuration wizard by entering the following:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

At one stage, you will be prompted by the wizard to choose which default predefined metrics you want and whether you want to store the config in the SSM Parameter Store. I chose Advanced to include all metrics and Yes to store the config. Once completed, the entire config is available in the Parameter Store in AWS Systems Manager. My config snippet has:

"disk": {
    "measurement": [
        "used_percent",
        "inodes_free"
    ],
    "metrics_collection_interval": 60,
    "resources": [
        "*"
    ]
},

4. Start the CloudWatch agent

There are 2 ways to start the agent:

a. From Run Command
b. From the command line with the Systems Manager Parameter Store

The Run Command failed for some reason, but the command line worked:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:configuration-parameter-store-name -s

c. If you encounter the error "No package collectd available", install the necessary package and restart the agent.

For Amazon Linux:

sudo amazon-linux-extras install epel
sudo yum install collectd

For Ubuntu:

sudo apt-get install collectd collectd-utils

5. CloudWatch console

Create a dashboard to monitor the instance metrics, which are now available as custom namespaces via the CWAgent. You can also set the necessary alarms to notify/email recipients.

CloudWatch custom metrics

Previous post:

You may want to deploy the Systems Manager (SSM) agent on your instance to monitor and alert you on disk space usage. For this use, you will need to create a role in IAM for the EC2 instance to send data to CloudWatch. Once that's complete, you can use a JSON script to monitor disk space from within the Run Command of SSM. More details are available at https://blog.justinworrell.com/2017/09/30/monitoring-free-disk-space-on-a-windows-ec2-instance-with-cloudwatch/
How to set email alert for disk space usage for ubuntu instance on AWS?
1,512,548,446,000
I have atop logs stored on a daily basis with a 10 min interval, and I can read them with atop -r <path_to_log>, but how can I find the peak memory usage in such a log?
The command to analyze the recorded data is atopsar. For example:

# atopsar -r /var/log/atop/atop_20170511 -m -R 1 | head
trucka  3.4.113-sun7i+  #1 SMP PREEMPT Fri Oct 28 16:54:21 CEST 2016  armv7l  2017/05/11
-------------------------- analysis date: 2017/05/11 --------------------------
00:00:01  memtotal memfree buffers cached dirty slabmem  swptotal swpfree _mem_
00:10:01     1888M    604M    381M   422M    0M    185M     2047M   2047M
00:20:01     1888M    604M    381M   422M    0M    185M     2047M   2047M
00:30:01     1888M    604M    381M   422M    0M    185M     2047M   2047M
00:40:01     1888M    604M    381M   422M    0M    185M     2047M   2047M

You have to consider what memory is important for you in your case. It may make sense for you to sort by the third column (memfree) to find the lowest point of free memory. You could also consider looking at swpfree (9th column) to find the point where most memory is used, which causes the memory management to page out to swap.

As an example, I sort the output for lowest free memory with the sort command:

# atopsar -r /var/log/atop/atop_20170511 -m -R 1 | sort -b -k 3,3 | head
trucka  3.4.113-sun7i+  #1 SMP PREEMPT Fri Oct 28 16:54:21 CEST 2016  armv7l  2017/05/11
06:40:01     1888M    416M    400M   612M    9M    164M     2047M   2047M
06:30:01     1888M    543M    423M   483M    4M    141M     2047M   2047M
03:10:01     1888M    551M    376M   480M    0M    184M     2047M   2047M
03:20:01     1888M    551M    376M   480M    0M    184M     2047M   2047M
03:30:01     1888M    551M    376M   480M    0M    184M     2047M   2047M

Just to beautify the output, I will avoid sorting the first 7 header rows of atopsar's output in the following example:

# atopsar -r /var/log/atop/atop_20170511 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,3"}' | head -11
trucka  3.4.113-sun7i+  #1 SMP PREEMPT Fri Oct 28 16:54:21 CEST 2016  armv7l  2017/05/11
-------------------------- analysis date: 2017/05/11 --------------------------
00:00:01  memtotal memfree buffers cached dirty slabmem  swptotal swpfree _mem_
06:40:01     1888M    416M    400M   612M    9M    164M     2047M   2047M
06:30:01     1888M    543M    423M   483M    4M    141M     2047M   2047M
03:10:01     1888M    551M    376M   480M    0M    184M     2047M   2047M
03:20:01     1888M    551M    376M   480M    0M    184M     2047M   2047M
atop peak memory usage from log
1,512,548,446,000
I have a monitor with resolution 1440x900, 19 inch, whose PPI is 89.37. I can set the screen DPI with the command:

# xrandr --dpi 100

But the problem is my monitor's PPI is only 89.37, so how can the xrandr command set the DPI bigger than the monitor's PPI? (From my understanding, PPI is a property of the monitor and cannot be changed, but DPI is something that can be tuned to get a better display; am I right?)

So, my problem is: what happens beneath the OS (or how does the OS handle this) if the DPI is set bigger than the PPI?
The “DPI” setting that you can set with xrandr is purely an indication to applications, it doesn't configure the hardware. Normally the monitor reports to the computer what its resolution and pixel density is. You can replace the reported value with a fake one if you want, or set a value if the monitor doesn't report any. The OS doesn't care as such, the DPI settings are only used by applications that want to draw something at a certain size, as a multiplier to convert from a distance unit to pixels. For example, if an application wants to typeset text that's 1/6 in tall, then it would multiply 1/6 by the DPI setting to find that it should use a 15px font with the monitor's default setting and a 17px font with a 100 DPI setting.
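The arithmetic in that last sentence, spelled out: pixels = size_in_inches * DPI, so the same 1/6 in of text maps to different pixel sizes under the two settings. A quick check of the answer's numbers:

```shell
# 1/6 inch of text rendered at the monitor's ~89 DPI vs. the forced
# 100 DPI setting; %.0f rounds to the nearest whole pixel.
awk 'BEGIN { printf "%.0f px at 89 DPI, %.0f px at 100 DPI\n", 89/6, 100/6 }' \
    | tee /tmp/dpi_calc
```

This is why raising the reported DPI above the physical PPI simply makes everything render slightly larger on screen: the hardware is unchanged, only the multiplier applications use has grown.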
What if I set DPI bigger than monitor's PPI in gnome using xrandr?
1,512,548,446,000
I have some daemon scripts that run in an infinite loop that do some action if a detected node has failed. For example; in AWS to move an Elastic IP. How can I integrate this script that runs in an infinite loop to push an alert to sensu? The traditional Sensu documentation about checks does not apply (this script runs forever). I need a way to send a custom event directly to sensu. I thought the API might be it, but it doesn't seem like I can push an event.
Sensu has documentation on how to do this here: https://sensuapp.org/docs/latest/clients#client-socket-input

Basically, each sensu client (client.json) has an internal socket that you can send external data to; by default this socket only listens on 127.0.0.1:3030, so the config for the client has to be adjusted:

{
  "client": {
    "name": "my.host",
    "address": "x.x.x.x",
    "subscriptions": [ "all" ],
    "socket": {
      "bind": "0.0.0.0",
      "port": 3030
    }
  }
}

The external script then needs to send data to that client's socket via TCP or UDP as JSON in the following format:

{
  "name": "some_name",
  "output": "ITS DOWN OH NO!",
  "status": 2
}
How to integrate daemon scripts with sensu?
1,512,548,446,000
I am using an Ubuntu machine as a NAT router. How can I find out the following:

the ports on which the LAN machines are listening or communicating (both TCP and UDP);
which local machines have established connections with which WAN IPs on those ports; and
the size of data that has been transferred on those ports.
I think you want to try netstat-nat I seem to remember using it when I had a Slackware box set up as a NAT-server.
Port monitoring on GNU/Linux based NAT router
1,512,548,446,000
What I need

I want to monitor system resources (namely memory and CPU usage) by application, not just by process. Just as the Windows Task Manager groups resources by the "calling mother process", I would like to see them grouped that way as well. Nowadays, applications like Firefox and VS Code spawn many child processes and I want to get a quick and complete overview of their usage.

The solution can be a GUI or TUI, a bash script or a big one-liner; I do not really care. For it to work, I imagine I could feed it the PID of the mother process or the name of an executable as a means of filtering.

Example

Task Manager groups/accumulates Chrome browser system resources

What I Tried

I tried htop, but it only shows me a tree where the calling process has its own memory listed, not the ones it called. I tried gnome-system-monitor, but it's the same. I tried a bit with ps and free but have not found the correct set of arguments / pipes to make them do what I want.

It stumped me that I could not google a solution for this. Maybe there is a reason for it? Does anybody have an idea? I would very much appreciate it!
The script below requires a lot of additional improvements, but I think it can serve as a basis. I started to write comments but haven't been able to finish them yet; I will edit my answer to add new comments and fix bugs when I get more free time. In my environment it works fine.

I called this script mytop and put it in /usr/local/bin so I have bash command tab completion on it. You can put mytop in your ~/bin directory (if ~/bin is not in your $PATH, add it), or wherever you like on your machine. Of course the execute bit must be set, with chmod u+x mytop.

#!/bin/bash

# mytop -ver 1.0

# script name (default is: 'mytop')
s_name=$(basename $0)
# version
ver="1.0"
# set default time between mytop iterations
sec_delay=3
# set default mytop repetitions/iterations
mt_rep=1000000

# Help function explaining syntax, options, ...
Help()
{
    # Display Help
    echo
    echo "Show Totals of %CPU and %MEM using 'top' command."
    echo
    echo "Syntax:"
    echo " $s_name [-h|-V]"
    echo " $s_name [[-d <S>][-n <N>] <APP_NAME>]"
    echo
    echo "Options:"
    echo " -h    Print this Help."
    echo " -d S  Delay/wait S seconds between iterations (default: 3 seconds)."
    echo " -n N  Run/iterate 'mytop' N times (default: 3 times)."
    echo " -V    Print version."
    echo
    echo "Examples:"
    echo " mytop -V"
    echo " mytop -d1 -n5 chromium"
    echo
    echo 'Use CTRL+C for exit!'
    echo
}

# Handling options from command line arguments
while getopts ":hn:d:V" option; do
    case $option in
        h) # display Help
            Help
            exit;;
        V) # print version
            echo "$s_name $ver"
            exit;;
        n) # set how many times 'mytop' will repeat/iterate
            mt_rep=$OPTARG;;
        d) # set delays in seconds
            sec_delay=$OPTARG;;
        \?)
            echo "$s_name: inappropriate: '$1'."
            echo "Usage:"
            echo " $s_name [-h|-V|-d<S> -n<N> <APP_NAME>]"
            exit;;
    esac
done

# If no arguments given just display Help function and exit
if [[ $# -eq 0 ]]; then
    Help
    exit
else
    # If last argument starts with '-' exit from app
    if [[ ${@:$#} =~ -+.* ]]; then
        echo "${s_name}: error: Last argument must be the name of the application that you want to track." >&2
        exit 1
    else
        app_name=${@:$#}
    fi
fi

# Set 'dashes' literally
#t_dsh='-----------------------------------------------------------'
# or set them with printf command
t_dsh=$(printf '%0.s-' {1..59})

# Not in use
#if [[ -z $mt_rep ]] 2>/dev/null; then
#    r_endless=1
#    mt_rep=1000
#else
#    r_endless=0
#fi

i=0
while [[ $i -lt $mt_rep ]]; do
    #if [[ "$r_endless" == "0" ]]; then ((i++)); fi
    ((i++))

    # Handle pids of app you want to track by removing 'mytop' pids
    # get s_name (mytop) pids
    pgrep $s_name > /tmp/mt_pids
    # get app_name pids - all of them --not desired behaviour
    pgrep -f $app_name > /tmp/app_name_pids
    # get app_name without mytop pids --desired behaviour
    for e in $(cat /tmp/mt_pids); do sed -i "/$e/d" /tmp/app_name_pids; done
    if [[ ! -s "/tmp/app_name_pids" ]]; then echo "1000000" > /tmp/app_name_pids; fi

    # top -b -n1 -p; -b for output without ANSI formating; -n1 for just one
    # iteration of 'top'; -p for feeding processes from 'pgrep' command.
    # Use LC_NUMERIC if your 'top' command outputs 'commas' instead of 'dots' -
    # with LC_NUMERIC you will get 'dots' during this script.
    LC_NUMERIC=en_US.UTF-8 top -b -n1 -p $(cat /tmp/app_name_pids | xargs | tr ' ' ,) > /tmp/pstemp

    wc_l=$(wc -l < /tmp/pstemp)

    cpu_use=$(tail -n +8 /tmp/pstemp | tr -s ' ' | sed 's/^ *//' | cut -d' ' -f9 | xargs | tr ' ' + | bc)
    if [[ "$cpu_use" == "0" ]]; then
        cpu_use="0.0"
    else
        if (( $(bc <<< "$cpu_use < 1") )); then cpu_use="0$cpu_use"; fi
    fi

    mem_use=$(tail -n +8 /tmp/pstemp | tr -s ' ' | sed 's/^ *//' | cut -d' ' -f10 | xargs | tr ' ' + | bc)
    if [[ "$mem_use" == "0" ]]; then
        mem_use="0.0"
    else
        if (( $(bc <<< "$mem_use < 1") )); then mem_use="0$mem_use"; fi
    fi

    echo -en "\033[2J\033[0;0f"
    # Use 'echo ...' above or 'tput ...' below (choose the one that works for you)
    #tput cup 0 0 && tput ed

    # Align Totals under %CPU and %MEM columns
    if (( $(bc <<< "$cpu_use < 1") )); then
        sed "${wc_l}a \\\n\nTotal (%CPU/%MEM): $(printf " %29s")$cpu_use $mem_use\n${t_dsh}" /tmp/pstemp
    elif (( $(bc <<< "$cpu_use < 100") )); then
        sed "${wc_l}a \\\n\nTotal (%CPU/%MEM): $(printf " %28s")$cpu_use $mem_use\n${t_dsh}" /tmp/pstemp
    else
        sed "${wc_l}a \\\n\nTotal (%CPU/%MEM): $(printf " %27s")$cpu_use $mem_use\n${t_dsh}" /tmp/pstemp
    fi

    if [[ $i -lt $mt_rep ]]; then sleep $sec_delay; fi
done
Get memory/cpu usage by application
1,512,548,446,000
I'm managing a few compute servers w/ roughly 20 users each. I'm using htop to view current resource usage, however it would be very helpful to have a log of a specific user's cumulative memory and cpu usage. Is there any way to view/log this via htop or bash?
You could use top -bn1 -U {user} to create a file which you can then do additional processing on to gain a cumulative usage. The argument -bn1 makes top run in non-interactive mode, simply outputting once when finished. You can then pipe that output anywhere for additional processing. For example:

top -bn1 -U {user} > user_log.txt

for additional processing in another script, or you could do something like:

top -bn1 -U {user} | awk {file_processing_script} >> user_log.txt

file_processing_script in that case is an awk script that processes the data in whatever way you want. One idea could be:

awk 'NR>7{cpu += $9; mem += $10} END {printf "%.2f\t%.2f\n", cpu, mem}'

which will simply output the total CPU and memory usage from a specific user at the moment it is run. Append several of these together, and you get a nice table showing CPU and memory usage for a user.
Cumulative resource usage
1,512,548,446,000
I have a Raspberry Pi (running Raspbian) that is booting from a microSD card. Since it's acting as a home server, naturally I want to monitor the microSD card for errors. Unfortunately though, microSD cards don't support SMART like other disks I have, so I am unsure how to monitor the disk for errors. How can I monitor / check disks that do not support SMART for errors when they are still in use / have partitions mounted?
You can replace smartctl -t long self-tests with badblocks (no parameters). It performs a simple read-only test. You can run it while filesystems are mounted. (Do NOT use the so-called non-destructive write test.)

# badblocks -v /dev/loop0
Checking blocks 0 to 1048575
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)

Note you should only use this if you don't already suspect there are bad sectors; if you already know it's going bad, use ddrescue instead. (badblocks throws away all data it reads; ddrescue makes a copy that may come in useful later.)

Other than that, you can do things that SMART doesn't do: use a checksumming filesystem, or a dm-integrity layer, or backups & compare, to actually verify contents. Lacking those, just run regular filesystem checks.

MicroSD cards also have failure modes that are hard to detect. Some cards may eventually discard writes and keep returning old data on reads. Even simple checksums might not be enough here: if the card happens to return both older data and older checksums, it might still match even if it's the wrong data...

Then there are fake capacity cards that just lose data once you've written too much. Neither return any read or write errors, and it can't be detected with badblocks, not even in its destructive write mode (since the patterns it writes are repetitive). For this you need a test that uses non-repetitive patterns, e.g. by putting an encryption layer on it (a badblocks write test on LUKS detects fake capacity cards when a badblocks write test on the raw device does not).
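The "backups & compare" idea can be reduced to a checksum pass: record a digest of the data as you write it, then re-hash on later reads to catch silent corruption or discarded writes. A sketch on a small file standing in for the card (the random data doubles as the non-repetitive pattern mentioned above):

```shell
# Write a non-repetitive pattern, record its SHA-256, then re-read and
# compare. On a real card you would re-read after unmount/remount (or
# after filling the card) so stale cached data can't mask a bad medium.
dd if=/dev/urandom of=/tmp/testpattern bs=1M count=4 2>/dev/null
before=$(sha256sum < /tmp/testpattern)
after=$(sha256sum < /tmp/testpattern)
[ "$before" = "$after" ] && result="data intact" || result="DATA CHANGED"
echo "$result" | tee /tmp/verify_result
```

Scaled up to the whole device (hashing the raw block device, or per-file with a manifest), this detects both the stale-read and fake-capacity failure modes that badblocks' repetitive patterns miss.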
How to test a disk that does not support SMART for errors?
1,512,548,446,000
--- The context ---

I'm using Openbox to create a dedicated workspace/desktop for my browser -- the browser only opens on that workspace, and any other windows created on it get kicked onto a different workspace. This works for most of my browsing, but with fullscreen-capable content such as streaming videos or Flash apps, switching to 'fullscreen mode' actually creates a new window for the content to play in... so Openbox kicks it off the dedicated workspace. Meaning, when I exit fullscreen mode I'm on my random clutter workspace instead of back in my browser. I need to write an exception so Openbox lets the fullscreen content stay on the dedicated workspace.

--- The problem ---

I would like to use xprop (or just wmctrl -l) to get some info on the window that the fullscreen web content plays in, but the window automatically closes as soon as it loses focus (as far as I can tell) so I can't just switch to my terminal and do it manually. I need some way to log the info in the background. Ideally I'd like some kind of background monitor that logs the name of every window that gets opened. Is there a relatively simple way to script this? I'm sure I could find a monitoring software package that could do this, but it seems like overkill for what I need.

EDIT to add answer: Just using a timer (sleep 10; xprop) as suggested by Gilles worked. For future reference, the relevant line was:

_OB_APP_CLASS(UTF8_STRING) = "Plugin-container"
A program that monitors window creation doesn't come to mind, but you don't need that. You can run wmctrl -l in a loop or on a timer (e.g. sleep 10; wmctrl -l) and then start the fullscreen application and record its window properties. If you want more information, you can do something like:

sleep 10; xprop

After 10 seconds, the mouse cursor will change; clicking should make xprop display information about the foreground window.

Alternatively, use xdotool (again on a timer or in a loop) and its window matching capabilities to find the window ID, e.g. xdotool search --pid if the full-screen window is in a pre-existing process, or xdotool getwindowfocus or getactivewindow to get the window ID of the window that has the focus. Note that the foreground window may in fact not have the input focus (some full-screen applications display an additional full-screen window in the foreground but keep the focus in their “normal” window); you can query the window at some screen location instead, or simply xdotool getmouselocation for a full-screen window (on a multi-monitor setup, provided the mouse cursor is already on the right monitor).

Alternatively, on Linux, switch to a text console (e.g. Ctrl+Alt+F1), log in, run export DISPLAY=:0, and then you can access the GUI (run xprop, xdotool, etc.). With some setups you may need to set XAUTHORITY as well.
How do I find out the window name of fullscreen internet content (e.g. Flash)?
1,512,548,446,000
We have a server (camera) sending RTSP video packets via UDP. At a customer site it travels over several hops, one of which may be an unreliable WiFi link that drops the odd packet or five. Usually this goes unnoticed but sometimes it kills the stream for some seconds and causes customer displeasure (I know, their cr*p network is somehow our problem...) On testing using tc to simulate a dodgy connection we have found an odd situation: If we break the connection in the return direction (packets silently discarded), after some seconds the flow of UDP packets from our camera stops, even though the RTSP client (Live555 Wis-Streamer) still believes it's merrily squirting UDP packets up the pipe. This is odd, as obviously UDP packets are not ACK'd and the physical link never drops, so our system has no way of knowing that the packets are dropping into the bit bucket further up the chain and the streamer has no way of knowing that no-one is listening to it (the streamer session timeout does not expire until later). EDIT: We see ARPing (Who has <client>) at the moment the UDP packets stop coming but none prior to that which would tell the stack the connectivity has dropped. So I have two questions: Is there some other mechanism by which the networking stack can tell the connection has issues? Does the network stack silently drop packets under certain circumstances? To demonstrate our testing setup: Normal state, connectivity both ways: Our server <==> Switch <==> TC <==> Switch <==> PC | | Wireshark <-- TAP | | Wireshark <----------------------- TAP Fault state, TC dropping packets going back to our server: Our server --> Switch --> TC <==> Switch <==> PC | | Wireshark <-- TAP | | Wireshark <----------------------- TAP
Well, it looks like it was the ARP table going stale: even though we're streaming UDP like crazy, that doesn't refresh the ARP timeouts, and TCP traffic is much sparser under normal operation. Upping the timeouts stopped the issue from appearing for "breaks" of less than ~2 min (at which point the RTSP client session times out anyway): ARP gc_stale_time extended from 60 s to 360 s ARP base_reachable_time extended from 30 s to 240 s Unfortunately this took a fair bit of poking about as we're on BusyBox without an arp command available, but it now seems reliable for the situation we're trying to handle. I'm still keen to understand how the network stack works — at the moment the ARP entry goes stale it stops sending out packets, yet seemingly doesn't raise errors further up the chain in the code that's trying to send packets.
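For reference, a sketch of how the change can be applied. The sysctl names (gc_stale_time, base_reachable_time_ms) are the standard Linux ones under /proc/sys/net/ipv4/neigh; the values are simply what worked for us, not recommendations. PROC_ROOT is overridable so the script can be dry-run without root.

```shell
#!/bin/sh
# Apply longer ARP timeouts to every neighbour table. Note that
# base_reachable_time_ms is in milliseconds, hence 240000 for 240 s.
# Writing to the real /proc needs root.
PROC_ROOT=${PROC_ROOT:-/proc/sys/net/ipv4/neigh}

set_arp_timeouts() {                 # set_arp_timeouts STALE_S REACHABLE_MS
    stale=$1 reachable_ms=$2
    for dev in "$PROC_ROOT"/*; do
        [ -d "$dev" ] || continue
        echo "$stale"        > "$dev/gc_stale_time"
        echo "$reachable_ms" > "$dev/base_reachable_time_ms"
    done
}

# set_arp_timeouts 360 240000     # 360 s stale, 240 s base reachable
```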
What is stopping our UDP packets (how does it know the route is down)?
1,512,548,446,000
What is the difference between real and actual memory usage in xymon ?
According to Henrik Stoerner at http://lists.xymon.com/oldarchive/2006/02/msg00115.html , real is the physical memory, actual is the amount of memory in use not including buffers and cache, all based on the output of the free command.
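If you want to reproduce the two figures yourself from free, the arithmetic is just "used minus buffers minus cache". A sketch, with the caveat that column positions differ between procps versions, so the commented free line (which assumes the classic layout with separate buffers and cached columns) should be treated as a starting point:

```shell
#!/bin/sh
# "Real" is total physical RAM; "actual" is used memory minus buffers and
# cache, since those are reclaimable by the kernel on demand.

actual_used() {       # actual_used USED BUFFERS CACHED -> bytes really used
    echo $(( $1 - $2 - $3 ))
}

# Classic procps layout ("Mem:" total used free shared buffers cached):
# free -b | awk 'NR==2 {print "real:", $2, "actual:", $3 - $6 - $7}'
```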
Real vs Actual memory in Xymon/Hobbit
1,512,548,446,000
Based on this question, I would like to log the performance of a specific process, with a frequency of say one second, to a csv (comma separated value) log file. Something like: timestamp(unix),cpu_activity(%),mem_usage(B),network_activity(B) 1355407327,24.6,7451518,345 1355407328,27.6,7451535,12 1355407329,31.6,7451789,467 ...
I tried to get rx_bytes and tx_bytes but had no luck; everything else works. So you can use the script below for the rest: #!/bin/bash # Network counters would come from: # /sys/class/net/eth0/statistics/rx_bytes # /sys/class/net/eth0/statistics/tx_bytes Process="$1" [[ -z $2 ]] && InterVal=1 || InterVal=$2 show_help() { cat <<_EOF Usage : $0 <ProcessName> <Interval (Default 1s)> _EOF } Show_Process_Stats() { pgrep "${Process}" >/dev/null 2>&1 || { echo "Error: Process(${Process}) is not running.."; exit 1; } while : do # timestamp(unix),cpu_activity(%),mem_usage(KiB) timestamp=$(date +%s) read cpu_activity mem_usage < <( ps --no-headers -o %cpu,rssize -C "${Process}" ) echo "${timestamp},${cpu_activity},${mem_usage}" sleep "$InterVal" done } Main() { case $1 in ""|-h|--help) show_help ;; *) Show_Process_Stats ;; esac } Main "$@"
Monitor single process to logfile periodically
1,512,548,446,000
I am looking for a versatile tool that can write either to a MySQL db, or possibly simply even just write some image charts (that i can display on a PHP dashboard) to monitor my network health. Is the best approach to run some type of daemon on my Debian box and somehow capture data from remote hosts via SSH? Also any CPU and process info would be useful in a tool like this. I've seen it done on routers' web interfaces, and I would like to be able to capture this data.
It wasn't clear how 'free' you wanted this tool to be. Intermapper does this: http://www.intermapper.com/ But it isn't free(but does run on UNIX). As far as open source goes, Network Weather map and Weathermap4rrd can do this. Here is an example Weathermap4rrd image: http://weathermap4rrd.tropicalex.net/images/w4_example.png More info on Weathermap4rrd: http://weathermap4rrd.tropicalex.net/whatisw4rrd.php Weathermap: http://netmon.grnet.gr/weathermap/ I've built displays using OpenNMS and Nagios to gather data for this. Anything that can output to rrds can be used as inputs. Some gathering tools: OpenNMS: http://www.opennms.org/ Cacti: http://www.cacti.net/
Reporting Network and Vital Statistics and writing to db and image
1,512,548,446,000
If I run: ssh -fND localhost:6000 USERNAME@IPADDRESS -p PORTNUMBER and I set my webbrowser to use 127.0.0.1:6000 SOCKS5 proxy, is there a way for the remote SSH server to monitor my web traffic? I've seen this post that allows traffic to be monitored on a per-user basis, but what can they do if there is only 1 SSH user, and that 1 SSH user is used by many people behind 1 public IP address/NATed network? I know that if I don't set network.proxy.socks_remote_dns to true in Firefox then they can see my DNS traffic, because it's resolved in my side. So the Q: What are the methods to monitor traffic on the remote SSH server if there is only 1 ssh user with many "real" users using it?
"They" can correlate the ssh session with both your real IP and the traffic coming out of the ssh server. This method of tunneling traffic over ssh is great for encrypting the contents of the traffic between the ssh client and the ssh server, but it won't help you avoid monitoring on the ssh server.
How to monitor traffic when SSH tunneling?
1,424,409,229,000
I have come across vnstat recently, and am enjoying it's simplicity, low resource usage, and its ability to record network history long term. However, I am looking for a similar tool (for long term archival history), which can record the amount of traffic through network ports. Ultimately, I'd like to be able to view the data in a way that shows me: most used TCP/UDP port (e.g. Ports sorted by most bytes TX, or RX) most used TCP/UDP port (e.g. Ports sorted by most number of packets) amount of bytes and/or packets transmitted on that port over "X" period of time (days, months, hours) ability to exclude certain ports (e.g. http:80) I would prefer a non-GUI tool. Wireshark and similar are too bulky for my needs. Progs I've tried bandwidthd bmon bwm, bwm-ng dstat ifstat ifstatus iftop iperf/netperf/uperf iptrack nethogs nload strobe tcptrack
Darkstat serves my purposes nicely. However, I have found that version 3.0.715 works better than 3.0.717.
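If darkstat ever falls short, one bare-bones alternative is plain iptables accounting: a rule with no -j target matches, updates its packet/byte counters, and does nothing else. A sketch — the port list and the parsing are assumptions to adapt, and installing the rules needs root:

```shell
#!/bin/sh
# Accounting-only iptables rules: match, count, do nothing.

install_counters() {                 # install_counters PORT [PORT...]
    for port in "$@"; do
        iptables -A INPUT  -p tcp --dport "$port"
        iptables -A OUTPUT -p tcp --sport "$port"
    done
}

# Turn "iptables -nvx -L INPUT" output into "port bytes" pairs
# (the byte counter is field 2, the port sits in a "dpt:NNN" token).
port_bytes() {
    awk '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^dpt:/) { sub("dpt:", "", $i); print $i, $2 }
    }'
}

# install_counters 22 80 443
# iptables -nvx -L INPUT | port_bytes >> /var/log/port-bytes   # from cron
```

Appending the cron output with a timestamp gives you the long-term, per-port history to sort and graph however you like.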
How to record network throughput per port for long period of time
1,424,409,229,000
I have some questions about viewing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online, according to here: http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html it says that to 'watch/monitor' a file, I have to set a watch by using a command like: # auditctl -w /etc/passwd -p war -k password-file So if I create a new file or directory, do I have to use an audit/inotify command to 'set' a watch first in order to 'watch' who accessed the new file? Also, is there a way to know if a directory is being 'watched' through the audit subsystem or inotify? How/where can I check the log of a file? edit: from further googling, I found this page saying: http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html The inotify API provides no information about the user or process that triggered the inotify event. So I guess this means that I can't figure out which user accessed a file? Only the audit subsystem can be used to figure out who accessed a file?
Logs from the audit subsystem are based on paths. You can put a watch on a file name even if that file doesn't exist yet; you'll get log entries once the file is created and accessed. All logs from auditd are saved in one file (generally /var/log/audit/audit.log). You can list the audit rules with auditctl -l.
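To tie it together, here's a sketch of the watch-then-query workflow. The auditctl/ausearch commands need root and a running auditd, so they're shown commented out; the key name password-file is arbitrary. The small helper below just pulls uid= fields out of ausearch's text output and can be tried without auditd.

```shell
#!/bin/sh
# Watch a path, then ask "who touched it?":
# auditctl -w /etc/passwd -p war -k password-file   # set the watch
# auditctl -l                                       # confirm it is active
# ausearch -k password-file                         # raw matching events

# Pull the acting uid out of ausearch output. Note this also catches the
# "uid=" embedded in "auid="/"euid=" fields; refine the pattern if needed.
extract_uids() {
    grep -o 'uid=[0-9]*' | sort -u
}

# ausearch -k password-file | extract_uids
```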
How to know who accessed a file or whether a file has an 'access' monitor in Linux
1,424,409,229,000
My web server is running CentOS and every time a certain page is accessed on my forum, httpd locks up and I can never seem to pinpoint the exact file. Is there any way to view the pages that currently have requests open on a CentOS/UNIX-based server?
You can try using ApacheTop. It shows a continuously updated, top-style view of the requests your server is handling, read from the Apache access log.
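For example (the log path is a guess for a stock CentOS/Apache layout). And if ApacheTop isn't available, the access log itself answers "which pages are being hit": the helper below counts requests per URL, assuming the common log format where the request path is field 7.

```shell
#!/bin/sh
# With ApacheTop installed, point it at your access log:
# apachetop -f /var/log/httpd/access_log

# Without it, rank URLs by request count straight from the log.
top_urls() {
    awk '{print $7}' | sort | uniq -c | sort -rn
}

# top_urls < /var/log/httpd/access_log | head
```

If one URL dominates the list right before httpd locks up, that's your candidate page.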
Is there a way to find out which webpages are being accessed by clients on a UNIX webserver?
1,424,409,229,000
On top (of the default layout) I have the CPU graph box. The title line of this box says on the left : "cpu", "menu", "preset 0", and on the right : BAT○ 98%, and 2000ms. Everything is self-explanatory, except BAT○. Clicking it does nothing. I noticed BAT and ○ don't have the same font. An extensive search of the web and of their github returned nothing. There is no man page. The included help is not searchable. After extensive searching, I suspect this may be related to the bat command (alternative to cat), or this may be the name of the font in use, maybe specific to my system. btop aka btop++ is an alternative to htop
It isn’t a display glitch, nor is it “BAT0”. “BAT” stands for “battery”, and the circle (○) following that indicates that the battery charge status is unknown (this is independent of the battery charge level, which is indicated by the meter following “BAT”). Other possible values are an arrow-head pointing up (▲) if the battery is charging, down (▼) if it’s discharging, and a filled square (■) if it’s full. See the source code for details; as you say this doesn’t seem to be documented.
What is the meaning of BAT○ in btop++?
1,424,409,229,000
I do not need some kind of realtime visual status of my ethernet - I want to run my script when last five minutes I uploaded less than X. So I need to get only one number from some command. What can you recommend? I use Ubuntu 14.04.
ifconfig <interface> gives you throughput of a specific interface. For example, root@trinity:~# ifconfig eth0 eth0 Link encap:Ethernet HWaddr 28:92:4a:32:0c:43 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::2a92:4aff:fe32:c43/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1554100056 errors:0 dropped:3528 overruns:0 frame:15941 TX packets:570492690 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2186365577866 (1.9 TiB) TX bytes:180850207310 (168.4 GiB) Interrupt:18 Just read the TX bytes bit and do the maths. You'll need to track it in a file somewhere so you can work out the differential. The ifconfig command is being deprecated, and people will suggest using ip. The relevant command with ip is, root@trinity:~# ip -s link ls eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000 link/ether 28:92:4a:32:0c:43 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 2186366161514 1554101939 0 3197 15941 9994871 TX: bytes packets errors dropped carrier collsns 180850392034 570493984 0 0 0 0
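To make the "work out the differential" part concrete, here's a sketch that samples the kernel's tx_bytes counter directly (the same figure ifconfig and ip report) and runs a script when less than a threshold was uploaded in five minutes. The interface name, threshold, and script path are placeholders.

```shell
#!/bin/sh
# Sample tx_bytes twice, five minutes apart; act if the delta is small.
IFACE=${IFACE:-eth0}
THRESHOLD=${THRESHOLD:-1048576}          # 1 MiB per 5 minutes

delta() {                                # delta BEFORE AFTER -> bytes sent
    echo $(( $2 - $1 ))
}

check_upload() {
    before=$(cat "/sys/class/net/$IFACE/statistics/tx_bytes")
    sleep 300
    after=$(cat "/sys/class/net/$IFACE/statistics/tx_bytes")
    if [ "$(delta "$before" "$after")" -lt "$THRESHOLD" ]; then
        /path/to/your-script.sh          # placeholder for your action
    fi
}

# check_upload    # run from cron or a loop
```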
How much was uploaded in the last five minutes?
1,424,409,229,000
How would you monitor a directory on a Linux machine to check if there was a user (or someone from the network) who attempted to access it?
inotify, like so: inotifywait -m -e modify,create,delete -r /var/www >> /var/log/i-see-www 2>&1 Assuming you meant "worked in" when you said "access" — catching someone simply listing or reading files would be harder to do.
Monitor accesses to directory on a Linux machine [duplicate]
1,424,409,229,000
I just deployed my web server using Apache on CentOS, and I was wondering if anybody has any good ideas on how to check, every specified amount of time, whether the server has gone down, so I can then use Postfix to email me when this happens. That way I can get back to my server right away, fix the problem, and see what caused it. I'm guessing that many websites use some software/script to let them know when their service goes down before clients start complaining about the problem.
You could have a script on your server that runs all kinds of tests, among them whether the server is alive. Here's one: #!/bin/bash date; echo "uptime:" uptime echo "Currently connected:" w echo "--------------------" echo "Last logins:" last -a | head -3 echo "--------------------" echo "Disk and memory usage:" df -h | xargs | awk '{print "Free/total disk: " $11 " / " $9}' free -m | xargs | awk '{print "Free/total memory: " $17 " / " $8 " MB"}' echo "--------------------" start_log=`head -1 /var/log/messages | cut -c 1-12` oom=`grep -ci kill /var/log/messages` echo -n "OOM errors since $start_log :" $oom echo "" echo "--------------------" echo "Utilization and most expensive processes:" top -b | head -3 echo top -b | head -10 | tail -4 echo "--------------------" echo "Open TCP ports:" nmap -p- -T4 127.0.0.1 echo "--------------------" echo "Current connections:" ss -s echo "--------------------" echo "processes:" ps auxf --width=200 echo "--------------------" echo "vmstat:" vmstat 1 5 This would give you results like the following (truncated): ./Server-Health.sh Tue Jul 16 22:01:06 IST 2013 uptime: 22:01:06 up 174 days, 4:42, 1 user, load average: 0.36, 0.25, 0.18 Currently connected: 22:01:06 up 174 days, 4:42, 1 user, load average: 0.36, 0.25, 0.18 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT tecmint pts/0 116.72.134.162 21:48 0.00s 0.03s 0.03s sshd: tecmint [priv] -------------------- Last logins: tecmint pts/0 Tue Jul 16 21:48 still logged in 116.72.134.162 tecmint pts/0 Tue Jul 16 21:24 - 21:43 (00:19) 116.72.134.162 -------------------- Disk and memory usage: Free/total disk: 292G / 457G Free/total memory: 3510 / 3838 MB -------------------- OOM errors since Jul 14 03:37 : 0 -------------------- Utilization and most expensive processes: top - 22:01:07 up 174 days, 4:42, 1 user, load average: 0.36, 0.25, 0.18 Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.0%sy, 0.0%ni, 99.3%id, 0.6%wa, 0.0%hi, 0.0%si, 0.0%st PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 20 0 3788 1128 932 S 0.0 0.0 0:32.94 init 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0.0 0.0 0:14.07 migration/0
Test script/program/app to check if my website is live
1,424,409,229,000
We have a hacker occasionally attempting to hack a site, and so far unsuccessful, but I would like a way to determine if the site has been compromised (i.e., files edited). The type of attacks are LFI and attempting to inject iFrames in some source files. What can I install on my Debian server to alert me via an e-mail if certain files or directories are modified?
You can install Tripwire and register a set of directories with it. When a watched file changes, Tripwire notifies the sysadmin.
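If Tripwire is heavier than you need for the "e-mail me" part, a sketch with inotifywait (from Debian's inotify-tools package) piped to mail can cover it. The watch path, address, and the idea of filtering out noisy paths are all assumptions to adapt, and you'll need a working MTA for mail to deliver.

```shell
#!/bin/sh
# Mail every change under a directory tree.

watch_and_mail() {                   # watch_and_mail DIR ADDRESS
    dir=$1 addr=$2
    inotifywait -m -r -e modify,create,delete,attrib \
        --timefmt '%F %T' --format '%T %w%f %e' "$dir" |
    while IFS= read -r event; do
        printf '%s\n' "$event" | mail -s "File change on $(hostname)" "$addr"
    done
}

# Drop events from paths you expect to change anyway (logs, caches, ...).
filter_paths() {                     # filter_paths REGEX  (reads stdin)
    grep -v -E "$1"
}

# watch_and_mail /var/www [email protected]
```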
File monitoring to pre-empt hacker
1,424,409,229,000
I need to monitor whether, for example, the file /tmp/somefile123 was created after some event. I tried to use inotifywait, but here is the problem: # inotifywait -q -e create /tmp/somefile?* Couldn't watch /tmp/somefile?*: No such file or directory because there is exactly no such file yet — I want to know when it appears! How can I resolve this issue? UPD: Maybe if I explain what I want to achieve it will be clearer. I need to write a shell script (sh) with minimal CPU consumption, something like this: if [ $(inotifywait -e create $SPECIFIC_FILE) ]; then blah-blah-blah some actions fi # And then similarly monitor if this file was deleted, and then do other actions I expect that the script will stop execution at inotifywait -e create $SPECIFIC_FILE until $SPECIFIC_FILE is created, which would be better than while [ ! -f $SPECIFIC_FILE ]; do blah-blah-blah some actions sleep 1 done
By having inotifywait check on the parent directory: /tmp$ inotifywait -e create -d -o /home/me/zz /tmp /tmp$ touch z1 /tmp$ cat ~/zz /tmp/ CREATE z1 You can also specify the time format for the event with the --timefmt option. Also, if you want to act immediately by executing some script, you may use tail -f in that script to monitor the log file (here /home/me/zz) continuously, or you can create a named pipe and have inotifywait write to it while your script reads from it.
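The named-pipe variant mentioned at the end might look like the sketch below; the FIFO path is a placeholder, and the small helper just splits inotifywait's default "DIR EVENTS FILE" output so you can match on the file name you're waiting for.

```shell
#!/bin/sh
# FIFO variant: inotifywait feeds events into a named pipe, and a
# consumer loop reacts to each one as it arrives.
PIPE=${PIPE:-/tmp/inotify.pipe}

# mkfifo "$PIPE"
# inotifywait -m -e create /tmp > "$PIPE" &
# while IFS= read -r event; do
#     echo "saw: $event"
# done < "$PIPE"

# Default inotifywait output is "DIR EVENTS FILE"; grab the file name.
event_file() {
    awk '{print $3}'
}
```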
How to monitor whether a file was created?
1,424,409,229,000
How can I monitor Memory Usage: 33/512MB (6%) Disk usage: 4.2/20GB (23%) CPU Load: 0.01 on a Solaris 11 System? I want to make a script to monitor my desktop resources.
If you have one system then SAR is a good alternative out of the box. If you have multiple systems you might want to evaluate other choices besides SAR. Xymon and dimSTAT are two of them that I use and recommend. dimSTAT is especially good for Solaris as it was developed with Solaris in mind by a Sun engineer. Xymon is multipurpose and highly customizable. Now, if you want to use your own scripting there are several possibilities; use the one that suits you best. Examples inline:

echo "::memstat" | mdb -k

root@solsrv01:~# echo "::memstat" | mdb -k
Page Summary                Pages             Bytes  %Tot
-----------------  ---------------  ----------------  ----
Kernel                      114567            447.5M   11%
ZFS Metadata                  7312             28.5M    1%
ZFS File Data                72180            281.9M    7%
Anon                         36257            141.6M    3%
Exec and libs                 1559              6.0M    0%
Page cache                    6286             24.5M    1%
Free (cachelist)              8973             35.0M    1%
Free (freelist)             784053              2.9G   75%
Total                      1048463              3.9G

You will need to look at the right line and get the desired values. For CPU load you can use uptime, prstat or even kstat:

root@solsrv01:~# uptime
11:35pm up 12 min(s), 1 user, load average: 0.02, 0.29, 0.30

root@solsrv01:~# prstat -c 1 1
Please wait...
  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
    5 root        0K    0K sleep   99  -20   0:00:01 0.1% zpool-rpool/147
  996 root       11M 3064K cpu0    49    0   0:00:00 0.1% prstat/1
  957 root       21M 7064K sleep   59    0   0:00:01 0.1% sshd/1
  958 root       11M 3188K sleep   49    0   0:00:00 0.0% bash/1
  489 root     3964K 2116K sleep   59    0   0:00:00 0.0% hald-addon-acpi/1
  480 root     8204K 6312K sleep   59    0   0:00:00 0.0% hald/4
   68 netadm   5320K 3360K sleep   59    0   0:00:00 0.0% ipmgmtd/6
   86 root     4044K 2284K sleep   59    0   0:00:00 0.0% svc.periodicd/4
  547 root       15M 3040K sleep   59    0   0:01:03 0.0% ldap_cachemgr/8
  360 root       10M 2464K sleep   59    0   0:00:00 0.0% picld/4
   45 netadm     11M 2288K sleep   59    0   0:00:00 0.0% ibmgmtd/4
   42 netcfg   3748K 2588K sleep   59    0   0:00:00 0.0% netcfgd/4
   15 root       20M   19M sleep   59    0   0:00:46 0.0% svc.configd/31
   13 root       53M   33M sleep   59    0   0:00:13 0.0% svc.startd/15
  185 root       18M 3740K sleep   59    0   0:00:00 0.0% rad/4
Total: 62 processes, 397 lwps, load averages: 0.02, 0.25, 0.29

root@solsrv01:~# kstat -p 'unix:0:system_misc:avenrun*' | awk '{print $1"\t"$2/256}'
unix:0:system_misc:avenrun_15min    0.269531
unix:0:system_misc:avenrun_1min     0.0195312
unix:0:system_misc:avenrun_5min     0.203125

For disk usage:

root@solsrv01:~# df -h
Filesystem                       Size   Used  Available  Capacity  Mounted on
rpool/ROOT/solaris                19G   2.8G        13G       18%  /
/devices                           0K     0K         0K        0%  /devices
/dev                               0K     0K         0K        0%  /dev
ctfs                               0K     0K         0K        0%  /system/contract
proc                               0K     0K         0K        0%  /proc
mnttab                             0K     0K         0K        0%  /etc/mnttab
swap                             3.9G   1.6M       3.9G        1%  /system/volatile
objfs                              0K     0K         0K        0%  /system/object
sharefs                            0K     0K         0K        0%  /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1    16G   2.8G        13G       18%  /lib/libc.so.1
fd                                 0K     0K         0K        0%  /dev/fd
rpool/ROOT/solaris/var            19G   221M        13G        2%  /var
swap                             3.9G     4K       3.9G        1%  /tmp
rpool/VARSHARE                    19G   2.4M        13G        1%  /var/share
rpool/export                      19G    32K        13G        1%  /export
rpool/export/home                 19G    38K        13G        1%  /export/home
rpool                             19G   4.5M        13G        1%  /rpool
rpool/VARSHARE/zones              19G    31K        13G        1%  /system/zones
rpool/VARSHARE/pkg                19G    32K        13G        1%  /var/share/pkg
rpool/VARSHARE/pkg/repositories   19G    31K        13G        1%  /var/share/pkg/repositories

root@solsrv01:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  19.6G  6.08G  13.5G  30%  1.00x  ONLINE  -
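If the goal is a one-line summary like "Disk usage: 4.2/20GB (23%)", the commands above can be glued together with a little arithmetic. A sketch — the commented lines assume the Solaris 11 kstat/df output shown above, so treat them as a starting point rather than a finished monitor:

```shell
#!/bin/sh
# Integer percentage helper for "used/total (NN%)"-style status lines.
pct() {                              # pct USED TOTAL -> whole-number percent
    echo $(( 100 * $1 / $2 ))
}

# Solaris 11 examples (adjust field positions to your output):
# load=$(kstat -p 'unix:0:system_misc:avenrun_1min' | awk '{printf "%.2f", $2/256}')
# df -k / | awk 'NR==2 {print $3, $2}' | while read used total; do
#     echo "Disk usage: ${used}/${total}K ($(pct "$used" "$total")%)"
# done
```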
Memory usage and disk space on Solaris 11
1,424,409,229,000
I am running several processes at startup using "crontab -e": @reboot tranmission-daemon && python defualt.py && python MonitorService.py But for some reason those services can terminate by themselves, e.g. due to insufficient disk space, or because the web host killed them. How do I check whether those services have been killed, and rerun them if so? I am using CentOS 6.
I recommend supervisord (supervisord.org), which happens to be written in Python. Here is an article for installing it using the Python package manager: Herd Unix Processes with Supervisor. If you would rather use RPM, then use this guide: Running supervisor 3 on CentOS 5. Hit back if you have any issues; it's a great tool once you get it working.
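If supervisord turns out to be more than you want, the restart half of the job can be done with a minimal cron watchdog: check each process and relaunch it if it's gone. A sketch — the patterns and commands are placeholders to replace with the entries from your own crontab:

```shell
#!/bin/sh
# Relaunch a command if nothing matching PATTERN is currently running.
# Run from cron, e.g.:  * * * * * /usr/local/bin/watchdog.sh

ensure_running() {                   # ensure_running PATTERN COMMAND [ARGS...]
    pattern=$1
    shift
    if ! pgrep -f "$pattern" > /dev/null; then
        "$@" &                       # relaunch in the background
    fi
}

# ensure_running transmission-daemon transmission-daemon
# ensure_running MonitorService.py   python /path/to/MonitorService.py
```

Note pgrep -f matches the full command line, so pick patterns that won't match the watchdog script itself.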
Detect process if not found then run it
1,424,409,229,000
A few days ago I asked Is there a way to make tail -F beep? Now I want to know if there is any way to use *nix utilities to beep when a tail -F stops returning new lines for a while! I know I can write a simple application in any language to do this, but I was curious whether there is a way to do it just with standard (or semi-standard) utils. The goal is to beep when a file (like a log file) no longer grows.
tail -F asdf.log | while true; do if read -t 1 LINE; then echo "$LINE"; else printf '\a'; fi; done (Change the number after -t to the number of seconds of inactivity you want; printf '\a' rings the terminal bell, or substitute your own notification command.)
Is there a way to beep when tail -F stops to fetch new results?
1,424,409,229,000
I want to know if the Swap is used at all. free shows the usage of the memory: # free total used free shared buff/cache available Mem: 1362084 169864 38288 724 1153932 1163816 Swap: 1048572 0 1048572 My understanding is that this is just a snapshot of the memory usage. The numbers change if I repeat the free command. Is there a possibility to see if the Swap was even used?
If you want to see swap activity even if the space was released between checks, you can use a counter for exactly that. $ cat /proc/vmstat | grep pswp pswpin 0 pswpout 0 This has been answered here.
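To watch for swap activity over an interval, you can sample those counters twice and compare: they only ever increase, so any growth means swap was really used in between, even if free shows 0 right now. A sketch:

```shell
#!/bin/sh
# pswpin/pswpout in /proc/vmstat are cumulative page counts since boot.

swap_delta() {       # swap_delta OLD NEW -> pages swapped in the interval
    echo $(( $2 - $1 ))
}

snapshot() {         # print the current pswpout counter
    awk '/^pswpout/ {print $2}' "${1:-/proc/vmstat}"
}

# before=$(snapshot); sleep 60; after=$(snapshot)
# echo "pages swapped out in the last minute: $(swap_delta "$before" "$after")"
```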
How would one monitor Swap usage?