date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,424,409,229,000 |
In our company there are around 30 to 40 virtual Linux machines. Every Linux VM has maybe 3 partitions.
And every now and then, somehow, a partition gets full and brings one or more applications to a standstill.
I know we can write cron job scripts that run every 30 minutes and send an email when
a threshold is passed.
But - is there no "monitoring or alerting" infrastructure built into normal Linux?
|
There are plenty of open source (and proprietary) monitoring tools designed to solve this problem. They rely on tools within Linux, and they in turn rely on system calls within the kernel.
Some tools focus on data gathering and monitoring, while others focus on alerting; which you pick depends on your primary need.
The most well known example of an alerting and monitoring tool would be Nagios. Other tools, more focused on data gathering and graphing, with some alerting built in, would be Cacti and Munin. If you have large clusters with lots of machines, then Ganglia might be your best bet.
These tools are often called Network Monitoring Systems, and Wikipedia has an extensive list.
I recommend you don't re-invent the wheel: look for and use a tool like this.
Depending on which Linux distribution you're using, one or more of these tools may already be available in the distribution repository, with default configurations that support the environment you have.
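That said, if all you need right now is the 30-minute cron check mentioned in the question, it can be very small. This is just a sketch; the 90% threshold is a placeholder, and you would pipe the output to mail(1) or similar in the cron job:

```shell
# List any mounted filesystem at or above 90% usage; empty output means
# everything is below the threshold
df -P | awk 'NR > 1 && 0+$5 >= 90 {print $6 " is at " $5}'
```

The `0+$5` trick coerces the "12%" capacity string to a number so the comparison works.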
| Has Linux got some kind of monitoring or alerting infrastructure built into itself |
1,424,409,229,000 |
I need to get a full list of files modified (and, if possible, files accessed) by a complex script, as well as all files accessed by any other process while the script is running.
So I want to START logging all file I/O access before the application starts, and then STOP logging when it ends (or inspect the full log between 2 timestamps?).
How can I do that?
|
You could use a marker file: touch it before you perform the operation of your main concern, and then use the find command with the -newer or -anewer options to find files that were modified or accessed after you touched the marker file.
touch /tmp/marker
perform-some-operation
find /path/to/dir -newer /tmp/marker
If the directory you want to monitor is not too large,
an interesting alternative could be to convert it to a Git repository,
and then use Git commands to see what has changed.
cd /path/to/dir
git init .
git add .
git commit -m init
perform-some-operation
git status
git diff
After you are done, you can simply delete the .git directory.
| How to get list of touched files between 2 points of time? [duplicate] |
1,424,409,229,000 |
Running vmstat will give you the average virtual memory usage since last reboot. The si and so values give the average virtual memory I/O. For example:
root@mymachine# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 304 300236 244940 967828 0 0 0 1 2 1 0 0 100 0 0
As Ijaz Khan answered, I can specify how many times I want vmstat to run, as well as the interval in between. This is useful in some cases (+1), but I do not want to have to leave vmstat running.
I want to be able to collect the data, then reset the counters so I can leave it for a while and come back to get an average from when I reset the counters to when I next check -- instead of since the last boot. Is that possible?
|
The memory information isn't averaged; vmstat shows the instantaneous memory information as provided in /proc/meminfo. So you can use the memory information from vmstat without worrying about changes since the last boot.
The values that are accumulated since boot concern the CPU usage, interrupts and context switches, and swap in/out and pages in/out; these are never reset. You can read the raw values from /proc/stat and /proc/vmstat if you want to be able to calculate your own deltas. For example, si is pswpin from /proc/vmstat, bi is pgpgin from /proc/vmstat.
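For example, a sketch of computing your own delta for swap-ins over an interval (the 2-second window is arbitrary; the same pattern works for pgpgin, pgpgout, etc.):

```shell
# Read the pswpin counter from /proc/vmstat twice and print the difference
a=$(awk '/^pswpin / {print $2}' /proc/vmstat)
sleep 2
b=$(awk '/^pswpin / {print $2}' /proc/vmstat)
echo "swap-ins in the last 2 seconds: $((b - a))"
```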
| Reset vmstat statistics without rebooting |
1,424,409,229,000 |
We use Nagios here for monitoring our servers.
On the test network I upgraded to Debian 9/Stretch, the memory monitoring box/object in our Nagios monitoring platform says CRITICAL and next to it, CHECK_MEMORY CRITICAL - Unable to interpret /usr/bin/free output.
The problem appears on several servers; the check is done via a remote plug-in installed with the agent. What can I do?
|
I have followed the problem, as in:
$ /usr/lib/nagios/plugins/check_memory
MEMORY CRITICAL - Unable to interpret /usr/bin/free output
What I found is that the output format of the free command from procps has changed.
$ free -m
old format:
total used free shared buffers cached
Mem: 3011 1415 1596 4 24 162
-/+ buffers/cache: 1228 1783
Swap: 1023 0 1023
new format:
total used free shared buff/cache available
Mem: 3012 1132 140 0 1739 1703
Swap: 1063 0 1063
The plug-in in question is installed in nagios-plugins-contrib.
$ dpkg -S /usr/lib/nagios/plugins/check_memory
nagios-plugins-contrib: /usr/lib/nagios/plugins/check_memory
There also has been a bug report about it here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806598
However, /usr/lib/nagios/plugins/check_memory in the nagios-plugins-contrib package has not been adjusted to the new free output in Debian Stretch.
Apparently there is a patch here in the meanwhile: https://bugs.debian.org/cgi-bin/bugreport.cgi?att=1;bug=806598;filename=check_memory_new_free_output.patch;msg=5
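Until the patched plug-in lands, a more robust check could read MemAvailable straight from /proc/meminfo, which is what the new "available" column of free is based on and does not depend on free's formatting. This is just a sketch, not the patched plug-in:

```shell
# Available memory in MiB from /proc/meminfo (kernel 3.14+),
# independent of the free output format
awk '/^MemAvailable:/ {printf "%d\n", $2 / 1024}' /proc/meminfo
```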
| Nagios memory free plug-in misbehaving after upgrade to Debian Stretch |
1,424,409,229,000 |
Sometimes my wifi is connected to router but my router is not connected to internet. How can I execute a command when my internet is back (from command line)? I want to execute:
mpg123 /home/user/file.mp3
|
Try this:
#!/bin/bash
while :; do
    if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
        break
    else
        echo 'No internet'
    fi
    sleep 1
done
mpg123 /home/user/file.mp3
It will print a 'No internet' message every second while there is no ping response. As soon as it gets a response, it runs your command and exits.
| How to execute a command when internet is back |
1,424,409,229,000 |
I'm running into a problem when using sar to collect live system statistics. When I run a sar command such as the following, I get the right output:
$ sar -r 1 -o /tmp/memory_usage
Linux 4.15.0-70-generic () 29/12/20 _x86_64_ (60 CPU)
18:26:55 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
18:26:56 30855140 78554416 51599624 62.58 321400 48906356 3491612 4.18 25558204 23859156 36
18:26:57 30855124 78554456 51599640 62.58 321400 48906392 3491612 4.18 25558204 23859212 72
18:26:58 30855204 78554536 51599560 62.58 321400 48906424 3491612 4.18 25558204 23859212 104
18:26:59 30855188 78554576 51599576 62.58 321400 48906456 3491612 4.18 25558204 23859268 136
18:27:00 30855204 78554648 51599560 62.58 321400 48906492 3491612 4.18 25558204 23859324 172
18:27:01 30855048 78554492 51599716 62.58 321400 48906524 3491612 4.18 25558228 23859324 0
^C
Average: 30855151 78554521 51599613 62.58 321400 48906441 3491612 4.18 25558208 23859249 87
However, when I load the output file, it seems to have recorded only the CPU usage:
$ sar -f /tmp/memory_usage
Linux 4.15.0-70-generic () 29/12/20 _x86_64_ (60 CPU)
18:26:55 CPU %user %nice %system %iowait %steal %idle
18:26:56 all 0.00 0.00 0.02 0.00 0.02 99.97
18:26:57 all 0.00 0.00 0.02 0.00 0.02 99.97
18:26:58 all 0.00 0.00 0.02 0.00 0.02 99.97
18:26:59 all 0.00 0.00 0.02 0.00 0.02 99.97
18:27:00 all 0.00 0.00 0.00 0.00 0.00 100.00
18:27:01 all 0.02 0.00 0.02 0.00 0.02 99.95
Average: all 0.00 0.00 0.01 0.00 0.01 99.97
This is my system's info:
$ uname -a
Linux 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
I'm running sar 11.6.1, which was installed through apt, and I did not configure any cron data collection (if that matters), although I did enable the sysstat service:
$ systemctl status sysstat
● sysstat.service - Resets System Activity Data Collector
Loaded: loaded (/lib/systemd/system/sysstat.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2020-12-29 16:56:29 GMT; 1h 34min ago
Docs: man:sa1(8)
man:sadc(8)
man:sar(1)
Process: 52376 ExecStart=/usr/lib/sysstat/debian-sa1 --boot (code=exited, status=0/SUCCESS)
Main PID: 52376 (code=exited, status=0/SUCCESS)
Dec 29 16:56:29 systemd[1]: Starting Resets System Activity Data Collector...
Dec 29 16:56:29 systemd[1]: Started Resets System Activity Data Collector.
Any idea what I'm doing wrong? Why is the memory usage not being recorded in the file? Did I misconfigure something, or is this not possible to achieve with sar? Any and all help would be greatly appreciated.
|
I'm an idiot, found the answer. Apparently when you tell sar to collect system statistics into a file, it outputs everything into it, not just the options you passed it.
So, what the command sar -r 1 -o /tmp/memory_usage is really saying is: "capture all options at a sample rate of one per second, and record them in the given file. Also, output the memory statistics to the terminal at the same rate".
Since all the stats are recorded in the output file, it can be queried with the same options as if it was live. The command sar -r -f /tmp/memory_usage outputs the memory usage collected from the file, as I expected.
| sysstat sar only collects cpu usage |
1,424,409,229,000 |
I was wondering if there is a program similar to time, but instead of just printing out the time it took to execute the command, it also prints out the average CPU and memory usage.
Alternatively a program which records the CPU and memory usage every couple of seconds and then writes it to a file would also work.
Any help is appreciated!
Thanks
|
The sysstat package is useful. You can customize how often and how many times to harvest
the information. It contains tools for CPU usage, memory, and processes, and can store the information in different formats:
iostat: Reports CPU statistics and I/O statistics for I/O devices.
mpstat: Details about CPUs (individual or combined).
pidstat: Statistics about running processes/tasks: CPU, memory, etc.
sar: Saves and reports details about different resources (CPU, memory, I/O, network, kernel, etc.).
sadc: System activity data collector, used for collecting data in the background for sar.
sa1: Fetches and stores binary data in the sadc data file. This is used with sadc.
sa2: Summarizes the daily report, to be used with sar.
sadf: Used for displaying data generated by sar in different formats (CSV or XML).
nfsiostat-sysstat: I/O statistics for NFS.
cifsiostat: Statistics for CIFS.
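If sysstat isn't an option, a rough stand-in for your second idea (record CPU and memory every couple of seconds to a file) can be improvised with ps and /proc/meminfo. The file name, interval, and sample count here are arbitrary:

```shell
# Append a timestamp, available memory, and the summed %CPU of all
# processes to usage.log, every 2 seconds, 5 times
for i in 1 2 3 4 5; do
    printf '%s ' "$(date +%T)"
    awk '/^MemAvailable:/ {printf "MemAvailable=%skB ", $2}' /proc/meminfo
    ps -eo pcpu= | awk '{s += $1} END {printf "cpu=%.1f%%\n", s}'
    sleep 2
done >> usage.log
```

With sysstat installed, pidstat -u -r 2 5 gives per-process CPU and memory snapshots directly.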
| Program to monitor CPU and Memory usage |
1,424,409,229,000 |
I have a fresh install of Xubuntu 18.04 LTS. I'd like to display in text, not in graphical form, the CPU (% used), RAM (% used), and Battery usage (% avail.) in the top panel near the clock.
I'm having trouble learning how to do this or whether it's possible because of outdated information from previous versions and the available solutions focus on showing a graph in the top panel. I'd rather see text and numbers like CPU: 05% RAM: 10% BAT: 50%
Is this something that Xubuntu can be configured to do? As an Xubuntu user, I'd prefer the most lightweight solution I can find, in terms of resource usage and dependencies.
|
If you can't find a prepackaged solution, you can roll your own.
As part of Xfce, the Generic Monitor panel item should be available in your panel's "Add New Items" list. If not, it should be available as xfce4-genmon-plugin in your repo. From its About dialog: "Cyclically spawns a script/program, captures its output and displays the resulting string in the panel".
Using the Generic Monitor, you can run a script that returns the info you need, like the one I pieced together:
#!/usr/bin/perl
# stats.pl - returns CPU and RAM usage
# CPU stuff
my $cpuusage = `top -bn 2 -d 0.2 | grep '^%Cpu' | tail -n 1 | gawk '{print \$2+\$4+\$6}'`;
chomp $cpuusage;
$cpuusage =~ s/^([0-9][0-9]*)(\.[0-9][0-9]*)$/$1/;
printf "CPU: %02d%% ","$cpuusage";
# RAM stuff
my $total = `grep -e "^MemTotal" -m 1 /proc/meminfo`;
$total =~ s/([^0-9]*)([0-9]*)(.*)$/$2/;
my $available = `grep -e "^MemAvailable" -m 1 /proc/meminfo`;
$available =~ s/([^0-9]*)([0-9]*)(.*)$/$2/;
my $memusage = 100 - ($available / $total * 100);
printf "RAM: %02d%%\n","$memusage";
The CPU stuff is based on What are the methods available to get the CPU usage in Linux Command line?, and the RAM stuff is based on How can I get processor/RAM/disk specs from the Linux command Line?
Generic Monitor displays the output of stats.pl in a panel as expected (for comparison, the graphic CPU and RAM info is my conky display):
My machine's a desktop, so I have no battery. From poking around a bit, though, upower looks promising for the battery info. For example, see 5 Ways To Check Laptop Battery Status And Level From Linux Terminal.
| Show CPU, RAM, and Battery values in text in Xubuntu's top panel |
1,424,409,229,000 |
When running the following command
tcpdump -i deviceName 'host 1.2.3.4' -q -w /mypath/dump.pcap
the dump file contains a huge amount of data because there's a lot of traffic. However, I only need to save the header details of each packet, not the entire contents. I tried using the -q switch (for "quiet") but that's not helping.
I need Time, Source, Destination, Protocol and Length. I do not need any of the other information, and especially not the full contents of each packet.
Is there a way to ignore the contents and only write the header details to disk, so as to save space? I'm getting to over a GB in a matter of minutes :(
I've seen many questions about how to increase the amount of data saved, but nothing for reducing it. Am I barking up the wrong tree?
|
I was in the same situation and I solved it by adding -s 96 to the command:
tcpdump -i deviceName 'host 1.2.3.4' -q -s 96 -w /mypath/dump.pcap
The -s option sets the snapshot length: only the first 96 bytes of each packet are written to the file, which is enough for the link-layer, IP, and TCP/UDP headers, while the rest of the payload is truncated.
| How to record only the header info when using `tcpdump` |
1,424,409,229,000 |
I have already wasted days looking for a very minimalist (CPU/RAM/ping/SSH/disks) monitoring tool that discovers hosts by itself, instead of requiring a client application to be installed on each host. (I can accept giving it a temporary account with an SSH key, but not any local binary installation.)
(This is because the solution has to be movable from one LAN to another for only a few days at a time.)
I did not find any. Do you know of such a product?
Currently I use a boring MySQL/Nagios pair of Docker containers, but this solution is very bad because it doesn't do discovery (so I can only monitor hosts whose existence the client knows/remembers), and I lose half a day to a day setting it up specifically for the LAN concerned...
Do you know a solution corresponding to my needs?
|
It seems like you are not using Nagios properly. In Nagios you define hosts, host groups, and services per host group. So you only need to set/change host names and (if needed) reallocate them across host groups.
Another possible solution is to use SNMP. On each host you define an SNMP agent with probes for the services you need, and a simple script can "browse" the network for available SNMP agents and add them to the monitoring solution.
| Monitoring solution wihtout client-install |
1,424,409,229,000 |
I use zabbix to monitor a log file, and I want zabbix to send a mail every time a new line coming in the log file. I define the trigger:
{xxx:log[/tmp/log,"error"].str(error)}=1
I find that zabbix does the 'action' (send a mail) only when the trigger's status changes.
So, when the first line comes into the log file, the trigger becomes PROBLEM. The trigger's status then stays PROBLEM, so the following lines will not trigger a mail.
There is a way to change trigger's status into OK, if there is no more log in 60 seconds:
{xxx:log[/tmp/log,"error"].str(error)}=1 && {xxx:log[/tmp/log,"error"].nodata(60)}=0
But I want zabbix to send mail for every line in log file.
I thought this was a basic requirement of log monitoring.
Any way to do this?
Thanks in advance
|
Using your original expression {xxx:log[/tmp/log,"error"].str(error)}=1, mark the "Multiple PROBLEM event generation" checkbox in the trigger properties.
| zabbix action on log file |
1,424,409,229,000 |
I'm looking to migrate a monitoring script from Windows (Powershell) to Linux (Shell script).
One of the things I check in Windows is whether an application is 'Not Responding' (e.g. open Task Manager and it says either "Running" or "Not Responding").
Is there an equivalent in Linux, and if so, how do I find it? I've been scouring the web, but can't find anything that says how to find such applications, only what to do when an application is not responding.
|
In Linux, processes can be in different states:
Running (R): the process is either running or ready to run.
Interruptible sleep (S): a blocked state in which the process waits for an event or a signal from another process.
Uninterruptible sleep (D): also a blocked state; the process is forced to halt while waiting on hardware, and signals cannot be handled.
Stopped (T): the process has been stopped (e.g. by a signal) and can be restarted.
Zombie (Z): the process has terminated, but its information is still available in the process table.
You can run the "ps" command and filter on the state column. For example, to list processes in uninterruptible sleep:
ps aux | awk '$8 ~ /^D/'
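The same check can be written with ps's selectable output format, which avoids counting on the column positions of ps aux. Note there is no exact Linux equivalent of Windows' "Not Responding" flag; a process stuck in D state is the usual sign of one blocked on I/O:

```shell
# Print PID, state, and command for processes in uninterruptible sleep;
# empty output means nothing is currently blocked in D state
ps -eo pid=,stat=,comm= | awk '$2 ~ /^D/'
```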
| How to Find Non-Responding Applications in Linux |
1,424,409,229,000 |
Environment
Debian Linux 11.5 "bullseye"
Conky 1.11.6 (compiled 2020-08-17, package 1.11.6-2)
Xorg 1.20.11 (package 2:1.20.11-1+deb11u2)
FVWM version 3 release 1.0.5 (built from git 23854ad7)
Problem
I am trying to reduce the Conky window to show just a single graph (chart, plot) with absolutely no other elements. However, it seems Conky keeps adding a gap/border/margin/padding/spacing, above and below the plot area. The gaps appear as horizontal bars in the background color. I've tried every Conky option I can find, but the gap won't go away.
Investigation
I've got gaps, margins, and border width set to zero. I've disabled all borders, ranges, scales, outlines, and shades. I've got the window and graph both set to 64 by 64. If I reduce the graph height, the entire window gets shorter, but the gaps remain in proportion. Likewise for increasing the graph height. If I resize the Conky window smaller with window manager controls, it clips off the graph. I can crop the bottom border this way, but not the top.
Screenshot
In the combined screenshots below, the magenta arrows point to the gaps. The bright green is the plot area. The dark gray surrounds are window manager decorations, and serve to show where the black Conky window background ends. These are both ${cpugraph} charts, with the CPU made artificially busy for test purposes.
Config
The Conky config that produced the above is:
conky.config = {
own_window = true,
own_window_type = 'normal',
own_window_transparent = false,
own_window_hints = '',
alignment = 'top_middle',
own_window_title = 'conky_gaptest',
double_buffer = true,
disable_auto_reload = true,
top_cpu_separate = false,
update_interval = 0.5,
show_graph_range = false,
show_graph_scale = false,
draw_outline = false,
draw_shades = false,
draw_borders = false,
draw_graph_borders = false,
gap_x = 0,
gap_y = 0,
border_inner_margin = 0,
border_outer_margin = 0,
border_width = 0,
extra_newline = false,
default_color = 'white',
maximum_width = 64,
default_graph_width = 64,
default_graph_height = 64,
}
conky.text = [[${cpugraph cpu0 64,64 00ff00 00ff00}]]
Anyone have any suggestions?
Background
(I am doing this because I want to make the Conky window suitable for swallowing into FvwmButtons. I have a vaguely NeXTstep-esque dock/wharf/panel/sidebar, made of 64x64 pixel buttons. I'd like some of the buttons to be Conky graphs. But as long as those gaps are there, it wastes part of the tiny 64x64 space. wmload doesn't have this problem, but sucks in other ways.)
|
The conky object voffset changes the vertical offset position of the object following it by a given positive or negative number of pixels. A construction like the following may do what you need, where a negative value determined by trial-and-error should be substituted for each of y1 and y2:
conky.text = [[${voffset y1}${cpugraph cpu0 64,64 00ff00 00ff00}${voffset y2}]]
In determining y1 and y2, I'd recommend first using the following construction and determining y1 alone by trial-and-error:
conky.text = [[${voffset y1}${cpugraph cpu0 64,64 00ff00 00ff00}]]
Then, add in the second voffset term and determine y2 by trial-and-error.
| Conky 1.11.6 - Want to eliminate gaps above/below graphs |
1,624,839,071,000 |
I open pavucontrol when playing a music on speaker.
Clicking on Input Devices shows that line in, front microphone, and rear microphone are all unplugged, so what does the input device monitor (Monitor of Family 17h (Models 10h-1fh) HD Audio Controller Analog Stereo) watch? Is it watching the output device (line out)?
Card #1
Name: alsa_card.pci-0000_09_00.6
Driver: module-alsa-card.c
Owner Module: 7
Properties:
alsa.card = "1"
alsa.card_name = "HD-Audio Generic"
alsa.long_card_name = "HD-Audio Generic at 0xfccc0000 irq 60"
alsa.driver_name = "snd_hda_intel"
device.bus_path = "pci-0000:09:00.6"
sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:09:00.6/sound/card1"
device.bus = "pci"
device.vendor.id = "1022"
device.vendor.name = "Advanced Micro Devices, Inc. [AMD]"
device.product.id = "15e3"
device.product.name = "Family 17h (Models 10h-1fh) HD Audio Controller"
device.string = "1"
device.description = "Family 17h (Models 10h-1fh) HD Audio Controller"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-pci"
Profiles:
input:analog-stereo: Analog Stereo Input (sinks: 0, sources: 1, priority: 65, available: no)
output:analog-stereo: Analog Stereo Output (sinks: 1, sources: 0, priority: 6500, available: yes)
output:analog-stereo+input:analog-stereo: Analog Stereo Duplex (sinks: 1, sources: 1, priority: 6565, available: no)
output:iec958-stereo: Digital Stereo (IEC958) Output (sinks: 1, sources: 0, priority: 5500, available: yes)
output:iec958-stereo+input:analog-stereo: Digital Stereo (IEC958) Output + Analog Stereo Input (sinks: 1, sources: 1, priority: 5565, available: no)
output:iec958-ac3-surround-51: Digital Surround 5.1 (IEC958/AC3) Output (sinks: 1, sources: 0, priority: 300, available: yes)
output:iec958-ac3-surround-51+input:analog-stereo: Digital Surround 5.1 (IEC958/AC3) Output + Analog Stereo Input (sinks: 1, sources: 1, priority: 365, available: no)
off: Off (sinks: 0, sources: 0, priority: 0, available: yes)
Active Profile: output:analog-stereo+input:analog-stereo
Ports:
analog-input-front-mic: Front Microphone (priority: 8500, latency offset: 0 usec, not available)
Properties:
device.icon_name = "audio-input-microphone"
Part of profile(s): input:analog-stereo, output:analog-stereo+input:analog-stereo, output:iec958-stereo+input:analog-stereo, output:iec958-ac3-surround-51+input:analog-stereo
analog-input-rear-mic: Rear Microphone (priority: 8200, latency offset: 0 usec, not available)
Properties:
device.icon_name = "audio-input-microphone"
Part of profile(s): input:analog-stereo, output:analog-stereo+input:analog-stereo, output:iec958-stereo+input:analog-stereo, output:iec958-ac3-surround-51+input:analog-stereo
analog-input-linein: Line In (priority: 8100, latency offset: 0 usec, not available)
Part of profile(s): input:analog-stereo, output:analog-stereo+input:analog-stereo, output:iec958-stereo+input:analog-stereo, output:iec958-ac3-surround-51+input:analog-stereo
analog-output-lineout: Line Out (priority: 9000, latency offset: 0 usec, available)
Part of profile(s): output:analog-stereo, output:analog-stereo+input:analog-stereo
analog-output-headphones: Headphones (priority: 9900, latency offset: 0 usec, not available)
Properties:
device.icon_name = "audio-headphones"
Part of profile(s): output:analog-stereo, output:analog-stereo+input:analog-stereo
iec958-stereo-output: Digital Output (S/PDIF) (priority: 0, latency offset: 0 usec)
Part of profile(s): output:iec958-stereo, output:iec958-stereo+input:analog-stereo
|
In Pulseaudio, every sink (audio destination, output) has an associated source (audio source, input) that is called monitor.
For some reason, your audio hardware provides a sink that is called "Family 17h (Models 10h-1fh)". That's an unusual name, and probably only shows up because your hardware manufacturer put some generic names into the BIOS.
You didn't tell us what sinks you have, so I would assume that this is just the generic "built-in" sink of your hardware, which can output to different ports, including the "Line out" that you show, and probably others.
So the associated monitor source will mirror whatever audio you output into this sink, no matter if the sink has "Line out" or some other port activated.
It has nothing to do with the generic input source (which you say has as ports Line In, Front Mic, and Rear Mic).
| What do the input device monitor watch? |
1,624,839,071,000 |
Is there a tool that can be used to monitor the traffic a web server is processing in real-time from the command line?
I'm looking for a cli ncurses tool like nload, but one that can show the requests per second going to a web server like nginx or apache (or a cache like varnish) via mod_status or stub_status.
|
It doesn't look like nload, but you can get a ton of useful information from your web server's access logs (NCSA, W3C, squid, or any user-defined custom log format) in an ncurses-based tool called goaccess.
In Debian, run:
sudo apt-get install goaccess
goaccess /path/to/access.log -c
It will look something like this
| cli real-time monitoring of web server traffic per second over time (ncurses) |
1,624,839,071,000 |
My /proc/vmstat contains the following rows:
pgalloc_dma 0
pgalloc_dma32 288126724
pgalloc_normal 33952724486
pgalloc_movable 0
I'm wondering what they are measurements of. Are they counters of the total number of page allocations done for as long as the machine has been alive or are they gauges of the current number of allocated pages of each type?
The man page for proc only tells us in which version of the kernel each metric was added, referring the reader to the kernel source code for further information.
Grepping for pgalloc_normal in the Linux kernel source yields nothing directly. The file mm/vmstat.c seems to define the list of fields present in /proc/vmstat under the name vmstat_text. I have tried to trace back the source of the metrics, which seem to be written in the function vmstat_refresh, but from there I'm lost in the indirection.
|
The pgalloc rows reflect PGALLOC events, which count page allocations per CPU and per zone since the system was booted (and /proc/vmstat folds all the per-CPU values into a single system-wide value). There’s a corresponding pgfree which counts page freeing events (not per zone).
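You can see for yourself that these are monotonically increasing counters rather than gauges: read the same row twice and the second value will be greater than or equal to the first.

```shell
# pgalloc_* only ever grows while the system is up
a=$(awk '/^pgalloc_normal / {print $2}' /proc/vmstat)
sleep 1
b=$(awk '/^pgalloc_normal / {print $2}' /proc/vmstat)
echo "first: $a  second: $b"   # b >= a, never smaller
```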
| What does pgalloc_(dma|dma32|normal|movable) in /proc/vmstat measure? |
1,624,839,071,000 |
I need a tool that monitors the output of another command and, when it prints a specified string, e.g. "error", stops the monitored command. Then I modify some environment and files and continue the job. Is it possible?
Edit:
Example:
Have a following job.sh
for i in $(ls)
do
echo file $i
sleep 0.1
cat $i
done
When run in a folder containing files a.txt and b.txt, I want to pause job.sh after it prints "file b.txt" so I can edit the file, then continue job.sh and see the new b.txt content.
I can't touch job.sh as it is actually a compiled C program.
The sleep symbolizes that pausing does not have to be immediate, but it should still be fast.
|
% ./mystery
a
b
c
%
One could issue the STOP signal to a program and then CONT to continue it; the following TCL code waits for b to appear in the output and at that point stops the process, which should remain stopped until the user types a line for expect_user to act on (at minimum a newline).
#!/usr/bin/env expect
spawn -noecho ./mystery
set spid [exp_pid]
expect -ex b { exec kill -STOP $spid; send_user "STOP\n" }
expect_user -re . { exec kill -CONT $spid }
expect eof
This of course has all sorts of problems, such as if mystery runs too quickly, or if the output is buffered, etc. I had to slow the C down and turn off buffering for it to work out:
% cat mystery.c
#include <stdio.h>
#include <unistd.h>
int main(void)
{
setvbuf(stdout, (char *) NULL, _IONBF, (size_t) 0);
printf("a\n");
sleep(1);
printf("b\n");
sleep(1);
printf("c\n");
return 0;
}
A C program may be better controlled by running it under a debugger such as gdb; breakpoints would be a far more accurate way to halt the execution at an exact point in the code than reacting to I/O. Debugging symbols will help but are not necessary:
% gdb mystery
Reading symbols from mystery...(no debugging symbols found)...done.
(gdb) quit
% otool -dtv mystery | grep callq
0000000100000f26 callq 0x100000f70
0000000100000f32 callq 0x100000f6a
0000000100000f3c callq 0x100000f76
0000000100000f48 callq 0x100000f6a
0000000100000f52 callq 0x100000f76
0000000100000f5e callq 0x100000f6a
So this is actually on a Mac (disassembly will vary by platform). The above are the setvbuf, printf, and sleep calls so with the start address
% otool -dtv mystery | sed 3q
mystery:
_main:
0000000100000f06 pushq %rbp
% perl -E 'say 0x0000000100000f52 - 0x0000000100000f06'
76
% gdb mystery
Reading symbols from mystery...(no debugging symbols found)...done.
(gdb) b *main + 76
Breakpoint 1 at 0x100000f52
(gdb) r
Starting program: /Users/jhqdoe/tmp/mystery
a
b
Breakpoint 1, 0x0000000100000f52 in main ()
(gdb)
And then you can do whatever is necessary and continue the program as desired.
Another idea would be to use LD_PRELOAD to adjust how the program behaves, assuming of course that the most sensible option—recompiling the program from source—is not possible. Yet another option would be to patch the C binary to behave as desired.
| How to pause command when it output some string? |
1,624,839,071,000 |
I'm running Linux 4.16.8-1-MANJARO #1 SMP PREEMPT x86_64 GNU/Linux.
I've installed the sysdig package, and trying to run it, I see:
$ sudo sysdig
Unable to load the driver
error opening device /dev/sysdig0. Make sure you have root credentials and that the sysdig-probe module is loaded.
How do I load the required module on Arch-based Manjaro Linux?
|
I needed to install the kernel headers:
sudo pacman -S linux416-headers
As part of this process, the sysdig module was installed:
:: Running post-transaction hooks...
(1/3) Updating linux416 module dependencies...
(2/3) Install DKMS modules
==> dkms install sysdig/0.21.0 -k 4.16.8-1-MANJARO
| Error opening device /dev/sysdig0: make sure sysdig-probe module is loaded [closed] |
1,624,839,071,000 |
An application is already running without command output, only a GUI.
How can I grab the application's command output from a new terminal window?
Another application is already running as a CLI; how can I monitor that specific application from a different window, or even remotely, without affecting the application itself?
|
This is very complicated for a running application. You need to attach a debugger, close file descriptors 0, 1, and 2, open a new controlling terminal, and reopen the file descriptors accordingly. Even this probably won't work if the app notices that it does not have a controlling terminal and thus does not use stdin, stdout, and stderr in the usual way; it may even have closed them and reused them for different purposes.
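Before going that far, you can at least see where a running process's output currently goes by inspecting its file descriptors under /proc. The shell's own PID ($$) is used here purely as an illustration; substitute the PID of the application you care about:

```shell
# Show what stdout (fd 1) and stderr (fd 2) of a process point to:
# a terminal, a file, a pipe, etc.
pid=$$
ls -l /proc/"$pid"/fd/1 /proc/"$pid"/fd/2
```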
| How to monitor an already running application in a new Bash Terminal window? [duplicate] |
1,624,839,071,000 |
I have a somewhat painful net service provider. In order to mitigate the issue, I usually use something like
$ ping debian.org
which basically goes on ad infinitum unless I break it via CTRL+C.
I find pinging outside the host country is much more effective than pinging a domestic/same country server as at times international connections die down but the state internet services are still good.
I have two queries -
a. Is there some sort of net etiquette I should follow when pinging? Is the above fine, or should I use some other technique?
This is the kind of output I get when it's functionally normally -
64 bytes from mirror-isc3.debian.org (149.20.4.15): icmp_seq=38700 ttl=51 time=274 ms
64 bytes from mirror-isc3.debian.org (149.20.4.15): icmp_seq=38701 ttl=51 time=275 ms
64 bytes from mirror-isc3.debian.org (149.20.4.15): icmp_seq=38702 ttl=51 time=273 ms
64 bytes from mirror-isc3.debian.org (149.20.4.15): icmp_seq=38703 ttl=51 time=274 ms
when it doesn't work it spews errors like these -
From _gateway (192.168.1.1) icmp_seq=38683 Destination Net Unreachable
From _gateway (192.168.1.1) icmp_seq=38684 Destination Net Unreachable
From _gateway (192.168.1.1) icmp_seq=38686 Destination Net Unreachable
From _gateway (192.168.1.1) icmp_seq=38687 Destination Net Unreachable
From _gateway (192.168.1.1) icmp_seq=38689 Destination Net Unreachable
b. I do know that there is in existence ping flood of death
Could the simple ping be misused in sending ping of death via various computers and my ping/computer would be an accomplice without meaning to?
|
Could the simple ping be misused in sending ping of death via various computers and my ping/computer would be an accomplice without meaning to?
No.
The Ping of Death is a kind of attack carried out by sending a specially crafted ICMP packet to a (unpatched) machine in order to crash it.
A ping flood is a DoS attack where one sends an overwhelming amount of ICMP packets. On many UNIX systems only the root user can send a ping flood with zero interval.
None of these apply to you. A normal ping barely takes any resource. You can increase the ping interval or change periodically the target host if it makes you feel better, but pinging remote machines is a reasonable network activity.
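If you still want continuous monitoring with less traffic and less noise, here is a sketch that pings at a long interval and logs only connectivity state changes; the host, interval, and timeout are arbitrary choices (the `-W` timeout in seconds assumes Linux `ping`):

```shell
#!/bin/sh
# Log only the moments connectivity changes instead of streaming every reply.
HOST=debian.org
INTERVAL=10
state=unknown

log_state() {
    # $1 = up|down
    printf '%s connection %s\n' "$(date '+%F %T')" "$1"
}

monitor() {
    while sleep "$INTERVAL"; do
        if ping -c 1 -W 5 "$HOST" >/dev/null 2>&1; then
            new=up
        else
            new=down
        fi
        if [ "$new" != "$state" ]; then
            log_state "$new"
            state=$new
        fi
    done
}
# monitor   # runs until interrupted with Ctrl+C
```

This keeps a single probe every 10 seconds, which is far gentler than a default 1-second ping left running for days.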
| network etiquette, pinging, network connectivity and DDOS |
1,624,839,071,000 |
I have built a little NAS-like device on armbian, that uses external harddisks for its file serving purposes. The (hardware) interface only provides a reduced SATA-command set and overrides some APM/AAM/standby functions, but I would like to have a longer interval until standby.
I am successfully able to keep the drives awake by repeatedly issuing some SATA commands, but I have trouble implementing a certain logic.
I would like to mimic, disk-standby after xx minutes of last activity.
Is there any clever way or monitoring utility that would tell me the last time, when either SMBD, ZFS or ideally the harddrive itself performed some read/write activity?
Something like the interval in ifplugd... Should I get to know "dtrace"?
|
Perhaps you can just poll the counters of the number of read/write operations on the block device, and do your action when they no longer change. For a block device like sda, the statistics are in /sys/block/sda/stat, and the columns are described in the kernel Documentation/iostats.txt. In particular columns 1 and 5 added together give the total completed i/o operations.
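A sketch of that polling loop follows; the device name, polling interval, and idle window are assumptions, and `hdparm -y` stands in for whatever standby command your hardware interface accepts:

```shell
#!/bin/sh
# Watch the completed-I/O counters of a disk and issue standby once
# they stop changing for a configurable idle window.
DEV=sda
INTERVAL=60                 # seconds between polls
IDLE_LIMIT=$((15 * 60))     # seconds of inactivity before standby

read_ios() {
    # columns 1 and 5 of a block-device stat file: completed reads + writes
    awk '{print $1 + $5}' "$1"
}

watch_idle() {
    stat="/sys/block/$DEV/stat"
    last=$(read_ios "$stat")
    idle=0
    while sleep "$INTERVAL"; do
        cur=$(read_ios "$stat")
        if [ "$cur" -eq "$last" ]; then
            idle=$((idle + INTERVAL))
        else
            idle=0
            last=$cur
        fi
        if [ "$idle" -ge "$IDLE_LIMIT" ]; then
            hdparm -y "/dev/$DEV"    # standby immediate (assumed to work on your interface)
            idle=0
        fi
    done
}
# watch_idle
```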
| How can I detect time of last harddisk/samba or zfs activity for a ifplugd like action? |
1,624,839,071,000 |
I tested mdadm software RAID on a Debian 9 virtual machine. I migrated to RAID myself (i.e. didn't rely on the installer).
It works nicely, and dpkg-reconfigure mdadm even offered to set up monthly scrubs and email alerts. I can see this runs /sbin/mdadm --monitor --scan. However mail -u root shows no mail after booting with only one device.
What is the simplest way to ensure a notification is generated when booting in degraded mode?
|
Although dpkg-reconfigure mdadm defaults to sending mail to root, Debian's Mail Delivery Agent no longer supports sending mail to root. If you left everything as default, you need to use mail -u mail instead. The best approach is to make sure all root mail is directed somewhere it will be read - you can use your normal user for this. dpkg-reconfigure exim4-config will prompt for such a user (it recommends against just leaving the default forward to mail).
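Once mail delivery is sorted out, you can confirm the alert path end-to-end with mdadm's monitor test flag. A sketch (needs root; assumes MAILADDR is set in /etc/mdadm/mdadm.conf):

```shell
#!/bin/sh
# Verify that mdadm alerts actually reach a mailbox, without
# degrading any array.
test_mdadm_alerts() {
    # --test sends a TestMessage alert for every array found
    mdadm --monitor --scan --oneshot --test
}
# test_mdadm_alerts
```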
| mdadm RAID monitoring on Debian |
1,624,839,071,000 |
I installed glances as a python package in the home directory:
pip install --user glances
How can I launch glances?
[me@server]$ glances
-bash: glances: command not found
|
I just checked the glances github.io and it presented user install as:
export PYTHONUSERBASE=~/mylocalpath
pip install --user glances
And ensure the local path is part of your $PATH:
export PATH=$PATH:~/mylocalpath/bin
and you should be set.
| How can I launch glances when installed as a python package in the home directory? |
1,624,839,071,000 |
I am running some experiments that use cpu, disk, and network resources.
(by the way, I use CentOS 7)
I want to measure its cpu, disk, and network resource usage.
Some tools I know (dstat, iostat) only provide a second as the minimum interval between two measurements.
How can I take several measurements even within a second?
I googled a lot but couldn't find one.
Thanks
|
Hopefully someone can point you to some tools to do what you want, but if not and you're committed, you could get the data straight from the source. iostat mostly just parses special files like /proc/diskstats, and those files are updated whenever you read them. I just did a quick test where I read diskstats many times a second, and the values were changing with each read.
The iostat man page lists the relevant files at the end:
/proc/stat contains system statistics.
/proc/uptime contains system uptime.
/proc/partitions contains disk statistics (for pre 2.5 kernels that have been patched).
/proc/diskstats contains disks statistics (for post 2.5 kernels).
/sys contains statistics for block devices (post 2.5 kernels).
/proc/self/mountstats contains statistics for network filesystems.
/dev/disk contains persistent device names.
It's not too hard to find information about what the fields in these files represent. For example:
The /proc/diskstats file displays the I/O statistics
of block devices. Each line contains the following 14
fields:
1 - major number
2 - minor mumber
3 - device name
4 - reads completed successfully
5 - reads merged
6 - sectors read
7 - time spent reading (ms)
8 - writes completed
9 - writes merged
10 - sectors written
11 - time spent writing (ms)
12 - I/Os currently in progress
13 - time spent doing I/Os (ms)
14 - weighted time spent doing I/Os (ms)
For more details refer to Documentation/iostats.txt
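A sketch of sub-second sampling built on this; the device name and 100 ms interval are arbitrary, and fields 4 and 8 are reads/writes completed per the field list above:

```shell
#!/bin/sh
# Sample completed reads/writes for one disk at sub-second intervals
# by reading /proc/diskstats directly.
sample() {
    # $1 = device name, $2 = optional stats file (defaults to /proc/diskstats)
    awk -v dev="$1" '$3 == dev {print $4, $8}' "${2:-/proc/diskstats}"
}
# while :; do
#     printf '%s ' "$(date +%s.%N)"
#     sample sda
#     sleep 0.1
# done
```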
| How to measure disk and network IO more frequently than a second? |
1,624,839,071,000 |
I need to monitor a Lenovo System x3650 M5 (8871) server. Unfortunately lm_sensors just shows the CPU temperature. Does anyone have advice on how I could monitor the fan speed with a command-line tool?
Output sensors:
sensors
power_meter-acpi-0
Adapter: ACPI interface
power1: 141.00 W (interval = 1.00 s)
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0: +30.0°C (high = +92.0°C, crit = +102.0°C)
Core 0: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 2: +23.0°C (high = +92.0°C, crit = +102.0°C)
Core 3: +22.0°C (high = +92.0°C, crit = +102.0°C)
Core 4: +22.0°C (high = +92.0°C, crit = +102.0°C)
Core 8: +23.0°C (high = +92.0°C, crit = +102.0°C)
Core 10: +20.0°C (high = +92.0°C, crit = +102.0°C)
Core 11: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 12: +20.0°C (high = +92.0°C, crit = +102.0°C)
coretemp-isa-0001
Adapter: ISA adapter
Physical id 1: +28.0°C (high = +92.0°C, crit = +102.0°C)
Core 0: +22.0°C (high = +92.0°C, crit = +102.0°C)
Core 2: +22.0°C (high = +92.0°C, crit = +102.0°C)
Core 3: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 4: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 8: +22.0°C (high = +92.0°C, crit = +102.0°C)
Core 10: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 11: +21.0°C (high = +92.0°C, crit = +102.0°C)
Core 12: +22.0°C (high = +92.0°C, crit = +102.0°C)
Output sudo sensors-detect
sensors-detect
# sensors-detect revision 3.4.0-4 (2016-06-01)
# System: LENOVO System x3650 M5: -[8871AC1]- [13]
# Board: LENOVO 01KN179
# Kernel: 3.10.0-514.6.1.el7.x86_64 x86_64
# Processor: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz (6/79/1)
This program will help you determine which kernel modules you need
to load to use lm_sensors most effectively. It is generally safe
and recommended to accept the default answers to all questions,
unless you know what you're doing.
Some south bridges, CPUs or memory controllers contain embedded sensors.
Do you want to scan for them? This is totally safe. (YES/no):
Silicon Integrated Systems SIS5595... No
VIA VT82C686 Integrated Sensors... No
VIA VT8231 Integrated Sensors... No
AMD K8 thermal sensors... No
AMD Family 10h thermal sensors... No
AMD Family 11h thermal sensors... No
AMD Family 12h and 14h thermal sensors... No
AMD Family 15h thermal sensors... No
AMD Family 16h thermal sensors... No
AMD Family 15h power sensors... No
AMD Family 16h power sensors... No
Intel digital thermal sensor... Success!
(driver `coretemp')
Intel AMB FB-DIMM thermal sensor... No
Intel 5500/5520/X58 thermal sensor... No
VIA C7 thermal sensor... No
VIA Nano thermal sensor... No
Some Super I/O chips contain embedded sensors. We have to write to
standard I/O ports to probe them. This is usually safe.
Do you want to scan for Super I/O sensors? (YES/no):
Probing for Super-I/O at 0x2e/0x2f
Trying family `National Semiconductor/ITE'... Yes
Found unknown chip with ID 0x3711
Probing for Super-I/O at 0x4e/0x4f
Trying family `National Semiconductor/ITE'... Yes
Found unknown chip with ID 0x7f00
Some systems (mainly servers) implement IPMI, a set of common interfaces
through which system health data may be retrieved, amongst other things.
We first try to get the information from SMBIOS. If we don't find it
there, we have to read from arbitrary I/O ports to probe for such
interfaces. This is normally safe. Do you want to scan for IPMI
interfaces? (YES/no):
Found `IPMI BMC KCS' at 0xcc0... Success!
(confidence 8, driver `to-be-written')
Some hardware monitoring chips are accessible through the ISA I/O ports.
We have to write to arbitrary I/O ports to probe them. This is usually
safe though. Yes, you do have ISA I/O ports even if you do not have any
ISA slots! Do you want to scan the ISA I/O ports? (YES/no):
Probing for `National Semiconductor LM78' at 0x290... No
Probing for `National Semiconductor LM79' at 0x290... No
Probing for `Winbond W83781D' at 0x290... No
Probing for `Winbond W83782D' at 0x290... No
Lastly, we can probe the I2C/SMBus adapters for connected hardware
monitoring devices. This is the most risky part, and while it works
reasonably well on most systems, it has been reported to cause trouble
on some systems.
Do you want to probe the I2C/SMBus adapters now? (YES/no):
Found unknown SMBus adapter 8086:8d22 at 0000:00:1f.3.
Sorry, no supported PCI bus adapters found.
Module i2c-dev loaded successfully.
Next adapter: mga i2c (i2c-0)
Do you want to scan it? (yes/NO/selectively):
Next adapter: SMBus I801 adapter at 1fe0 (i2c-1)
Do you want to scan it? (YES/no/selectively):
Client found at address 0x48
Probing for `National Semiconductor LM75'... No
Probing for `National Semiconductor LM75A'... No
Probing for `Dallas Semiconductor DS75'... No
Probing for `National Semiconductor LM77'... No
Probing for `Analog Devices ADT7410/ADT7420'... No
Probing for `Analog Devices ADT7411'... No
Probing for `Maxim MAX6642'... No
Probing for `Texas Instruments TMP435'... No
Probing for `National Semiconductor LM73'... No
Probing for `National Semiconductor LM92'... No
Probing for `National Semiconductor LM76'... No
Probing for `Maxim MAX6633/MAX6634/MAX6635'... No
Probing for `NXP/Philips SA56004'... No
Probing for `SMSC EMC1023'... No
Probing for `SMSC EMC1043'... No
Probing for `SMSC EMC1053'... No
Probing for `SMSC EMC1063'... No
Now follows a summary of the probes I have just done.
Just press ENTER to continue:
Driver `to-be-written':
* ISA bus, address 0xcc0
Chip `IPMI BMC KCS' (confidence: 8)
Driver `coretemp':
* Chip `Intel digital thermal sensor' (confidence: 9)
Note: there is no driver for IPMI BMC KCS yet.
Check http://www.lm-sensors.org/wiki/Devices for updates.
Do you want to overwrite /etc/sysconfig/lm_sensors? (YES/no):
Unloading i2c-dev... OK
Output find /sys/ -iname '*fan*'
# find /sys/ -iname '*fan*'
/sys/bus/platform/drivers/acpi-fan
/sys/kernel/slab/fanotify_event_info
/sys/kernel/slab/fanotify_perm_event_info
/sys/kernel/debug/tracing/events/syscalls/sys_enter_fanotify_init
/sys/kernel/debug/tracing/events/syscalls/sys_exit_fanotify_init
/sys/kernel/debug/tracing/events/syscalls/sys_enter_fanotify_mark
/sys/kernel/debug/tracing/events/syscalls/sys_exit_fanotify_mark
/sys/module/rcutree/parameters/rcu_fanout_leaf
English is not my native language, so please don't judge my spelling errors. I am also not sure if this is the correct site for this question.
|
Your system has a correctly configured BMC with IPMI support, so you should be able to use ipmitool locally to extract all the monitoring information supported by your BMC:
yum install ipmitool
ipmitool sensor
(assuming the ipmi_si module is loaded, which should be the case on RHEL 7 on your setup). The interesting values are in the first two columns (sensor and value), and the fourth (health indicator):
CPU Temp | 45.000 | degrees C | ok | 0.000 | 0.000 | 0.000 | 91.000 | 96.000 | 96.000
System Temp | 37.000 | degrees C | ok | -10.000 | -5.000 | 0.000 | 80.000 | 85.000 | 90.000
Peripheral Temp | 43.000 | degrees C | ok | -10.000 | -5.000 | 0.000 | 80.000 | 85.000 | 90.000
MB_10G Temp | 50.000 | degrees C | ok | -5.000 | 0.000 | 5.000 | 95.000 | 100.000 | 105.000
DIMMA1 Temp | 42.000 | degrees C | ok | -5.000 | 0.000 | 5.000 | 80.000 | 85.000 | 90.000
DIMMA2 Temp | na | | na | na | na | na | na | na | na
DIMMB1 Temp | 42.000 | degrees C | ok | -5.000 | 0.000 | 5.000 | 80.000 | 85.000 | 90.000
DIMMB2 Temp | na | | na | na | na | na | na | na | na
FAN1 | na | | na | na | na | na | na | na | na
FAN2 | 3300.000 | RPM | ok | 300.000 | 500.000 | 700.000 | 25300.000 | 25400.000 | 25500.000
FAN3 | 900.000 | RPM | ok | 300.000 | 500.000 | 700.000 | 25300.000 | 25400.000 | 25500.000
FAN4 | na | | na | na | na | na | na | na | na
VCCP | 1.830 | Volts | ok | 1.420 | 1.460 | 1.570 | 2.020 | 2.130 | 2.170
VDIMM | 1.182 | Volts | ok | 0.948 | 0.975 | 1.047 | 1.344 | 1.425 | 1.443
12V | 12.000 | Volts | ok | 10.144 | 10.272 | 10.784 | 12.960 | 13.280 | 13.408
5VCC | 4.974 | Volts | ok | 4.246 | 4.298 | 4.480 | 5.390 | 5.546 | 5.598
3.3VCC | 3.333 | Volts | ok | 2.789 | 2.823 | 2.959 | 3.554 | 3.656 | 3.690
VBAT | 3.168 | Volts | ok | 2.385 | 2.472 | 2.588 | 3.487 | 3.574 | 3.690
5V Dual | 4.946 | Volts | ok | 4.244 | 4.298 | 4.487 | 5.378 | 5.540 | 5.594
3.3V AUX | 3.265 | Volts | ok | 2.789 | 2.823 | 2.959 | 3.554 | 3.656 | 3.690
Chassis Intru | 0x0 | discrete | 0x0000| na | na | na | na | na | na
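If you only care about the fans, here is a sketch that filters ipmitool's output; matching sensor names containing "FAN" is an assumption based on the sample output above:

```shell
#!/bin/sh
# Extract just the fan rows (name and RPM) from `ipmitool sensor`.
show_fans() {
    ipmitool sensor | awk -F'|' 'tolower($1) ~ /fan/ {
        gsub(/^ +| +$/, "", $1); gsub(/^ +| +$/, "", $2)
        printf "%s: %s RPM\n", $1, $2
    }'
}
# poll every 10 seconds:
# while sleep 10; do show_fans; done
```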
| Monitoring CPU fan speed on Lenovo system x3650 m5 (8871) on RHEL7 |
1,624,839,071,000 |
I would like to set up some monitoring for a custom application. I want to be able to monitor when a service goes down or stops working and if this happens, find a way to receive these alerts via email. I have been researching and it looks like I can do the first part of this task using rsyslog. I wanted to confirm that this is possible.
So my question is, can you set up monitoring for an application using rsyslog? Say for instance using one of the local accounts local2?
If so, how does it determine the severity level of the logs from that application? i.e what constitutes a crit, alert or emerg? These all seem like the same severity to me.
Apologies if some of the terminology is off, I'm fairly new to Linux but any guidance would be appreciated.
Thanks
|
The "or stops working" part is not something rsyslog covers, and because "working" is so abstract, most monitoring software lets you run a script to evaluate whether the service has stopped working. So you might as well just make your own service or crontab script, the core of which might look like this:
#!/bin/sh
URL="https://example.com/health"   # whatever endpoint your application answers on
if ! curl -s -m 5 "$URL" >/dev/null; then
    printf 'Subject: Panic\n\nHealth check failed\n' | sendmail -r me@domain you@domain
fi
| Monitoring using rsyslog |
1,624,839,071,000 |
I am trying to install Zabbix on redhat 7 64bit. The zabbix server installed successfully with command yum install zabbix-server-mysql
But I am getting conflicts between php70u and php56u when I try to install the Zabbix web console. I executed the following command: yum install zabbix-web-mysql
The following is the output of the command:
Resolving Dependencies
--> Running transaction check
---> Package zabbix-web-mysql.noarch 0:3.0.2-1.el7 will be installed
--> Processing Dependency: zabbix-web = 3.0.2-1.el7 for package: zabbix-web-mysql-3.0.2-1.el7.noarch
--> Processing Dependency: php-mysql for package: zabbix-web-mysql-3.0.2-1.el7.noarch
--> Running transaction check
---> Package php56u-mysqlnd.x86_64 0:5.6.21-1.ius.el7 will be installed
--> Processing Dependency: php56u-pdo(x86-64) = 5.6.21-1.ius.el7 for package: php56u-mysqlnd-5.6.21-1.ius.el7.x86_64
---> Package zabbix-web.noarch 0:3.0.2-1.el7 will be installed
--> Processing Dependency: php >= 5.4 for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: php-gd for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: php-mbstring for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: dejavu-sans-fonts for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: php-bcmath for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: php-ldap for package: zabbix-web-3.0.2-1.el7.noarch
--> Processing Dependency: php-xml for package: zabbix-web-3.0.2-1.el7.noarch
--> Running transaction check
---> Package dejavu-sans-fonts.noarch 0:2.33-6.el7 will be installed
--> Processing Dependency: dejavu-fonts-common = 2.33-6.el7 for package: dejavu-sans-fonts-2.33-6.el7.noarch
---> Package mod_php70u.x86_64 0:7.0.6-1.ius.el7 will be installed
--> Processing Dependency: php-common(x86-64) = 7.0.6-1.ius.el7 for package: mod_php70u-7.0.6-1.ius.el7.x86_64
---> Package php56u-pdo.x86_64 0:5.6.21-1.ius.el7 will be installed
--> Processing Dependency: php56u-common(x86-64) = 5.6.21-1.ius.el7 for package: php56u-pdo-5.6.21-1.ius.el7.x86_64
---> Package php70u-bcmath.x86_64 0:7.0.6-1.ius.el7 will be installed
---> Package php70u-gd.x86_64 0:7.0.6-1.ius.el7 will be installed
--> Processing Dependency: libwebp.so.4()(64bit) for package: php70u-gd-7.0.6-1.ius.el7.x86_64
--> Processing Dependency: libXpm.so.4()(64bit) for package: php70u-gd-7.0.6-1.ius.el7.x86_64
---> Package php70u-ldap.x86_64 0:7.0.6-1.ius.el7 will be installed
---> Package php70u-mbstring.x86_64 0:7.0.6-1.ius.el7 will be installed
---> Package php70u-xml.x86_64 0:7.0.6-1.ius.el7 will be installed
--> Running transaction check
---> Package dejavu-fonts-common.noarch 0:2.33-6.el7 will be installed
---> Package libXpm.x86_64 0:3.5.11-3.el7 will be installed
---> Package libwebp.x86_64 0:0.3.0-3.el7 will be installed
---> Package php56u-common.x86_64 0:5.6.21-1.ius.el7 will be installed
--> Processing Dependency: php56u-pecl-jsonc(x86-64) for package: php56u-common-5.6.21-1.ius.el7.x86_64
---> Package php70u-common.x86_64 0:7.0.6-1.ius.el7 will be installed
--> Running transaction check
---> Package php56u-pecl-jsonc.x86_64 0:1.3.9-2.ius.el7 will be installed
--> Processing Dependency: php56u-pear for package: php56u-pecl-jsonc-1.3.9-2.ius.el7.x86_64
--> Processing Dependency: php56u-pear for package: php56u-pecl-jsonc-1.3.9-2.ius.el7.x86_64
--> Running transaction check
---> Package php56u-pear.noarch 1:1.10.1-4.ius.el7 will be installed
--> Processing Dependency: php56u-xml for package: 1:php56u-pear-1.10.1-4.ius.el7.noarch
--> Processing Dependency: php56u-posix for package: 1:php56u-pear-1.10.1-4.ius.el7.noarch
--> Processing Dependency: php56u-cli for package: 1:php56u-pear-1.10.1-4.ius.el7.noarch
--> Running transaction check
---> Package php56u-cli.x86_64 0:5.6.21-1.ius.el7 will be installed
---> Package php56u-process.x86_64 0:5.6.21-1.ius.el7 will be installed
---> Package php56u-xml.x86_64 0:5.6.21-1.ius.el7 will be installed
--> Processing Conflict: php70u-xml-7.0.6-1.ius.el7.x86_64 conflicts php-xml < 7.0
--> Processing Conflict: php70u-common-7.0.6-1.ius.el7.x86_64 conflicts php56u-common
--> Processing Conflict: php70u-common-7.0.6-1.ius.el7.x86_64 conflicts php-common < 7.0
--> Finished Dependency Resolution
Error: php70u-xml conflicts with php56u-xml-5.6.21-1.ius.el7.x86_64
Error: php70u-common conflicts with php56u-common-5.6.21-1.ius.el7.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Here is the repolist that I have :
base-local CELGAE Infra Management REPO - RedHat EL7 DVD 4,620
*epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 10,050
ius/x86_64 IUS Community Packages for Enterprise Linux 7 - x86_64 302
ius-debuginfo/x86_64 IUS Community Packages for Enterprise Linux 7 - x86_64 - Debug 55
ius-source IUS Community Packages for Enterprise Linux 7 - x86_64 - Source 0
optional-local CELGAE Infra Management REPO - RedHat EL7 OPTIONAL 8,602
updates-local CELGAE Infra Management REPO - RedHat EL7 UPDATES 10,706
zabbix/x86_64 Zabbix Official Repository - x86_64 40
zabbix-non-supported/x86_64 Zabbix Official Repository non-supported - x86_64 4
I followed the documentation : https://www.zabbix.com/documentation/3.0/manual/installation/install_from_packages
I disabled the repo ius but it does not help.
|
Finally, after many hours of searching and no feedback, I was able to fix my issue. I am sharing it here in case it helps others facing the same issue in the future.
First, you need to install the yum replace plugin:
yum install yum-plugin-replace
Then replace php-common with the package from your conflicting PHP version. In my case that is php70u-common, so I ran the following command:
yum replace php-common --replace-with php70u-common
Or, if the conflict is only on php itself, you can also do:
yum replace php --replace-with php70u
After that, you can install the package you want. In my case, it was Zabbix.
| Zabbix web install Redhat conflicts php |
1,624,839,071,000 |
During weekend my Debian based proxy crashed due to insufficient free space. After reboot it was fine again, so on monday I went hunting for logs and/or explanations for saturday hang but I couldn't find anything.
I grepped for anything relevant in /var/log/*, checked crontab, the mail queue... The only thing I found was the monitoring system's echoes in syslog, with free space getting critical minute after minute (about 80 GB filled up in 30 minutes). No daemon errors or anything similar.
It would be easy to understand what is going on in real time, but I have no idea how to debug this kind of problem after the fact. Any advice?
This is the first time this has happened in a year or so. Uptime was low, problem did not reoccur in next days.
Thank you
|
You can use atop to debug stuff like this. atop can run in realtime, but more critically for your case, it can show and analyse snapshots from the past captured in logs. It logs a lot of different metrics, so you likely won't be left thinking "damn, I wish I logged that" after the fact. :-)
On Debian, you can install it with apt-get install atop. You can then start and enable it on boot using your init manager. On systemd for example, it would be systemctl enable atop && systemctl start atop. Now atop will start logging -- typically this is to /var/log/atop/<date>.
You can view historic logs by using atop -r <log file>, going forward in time with t, and backwards with T. You can find more commands by pressing the ? key.
You should look for an app writing a lot to the disk. You can see this in the WRDISK column. You can also sort by disk usage by pressing D.
Obviously this can't go back to before atop even started logging, but you can have it running and logging in the background for next time, when you can investigate properly.
| How to detect which process filled up disk space |
1,624,839,071,000 |
On a linux server with dhcpd that acts as the internet gateway for all clients of the LAN: how can I monitor the internet usage based on IP/MAC address, and deny internet access if a certain bandwidth consumption has been exceeded?
|
On Linux, you could get this done with some scripting:
Create firewall rules with iptables so that all bandwidth for each client passes through a separate rule. The firewall subsystem in the kernel will count network packets and bytes that a particular rule matched. You can see the counters if you run iptables -vL. You might want to use the -n option too, for performance: iptables -vnL
Write a script that runs from cron and which checks how much data has been used by every client. Then if it's over a particular amount, have the script modify the firewall so that the client can not access the Internet anymore
Note that iptables' counters get reset when the firewall is cleared (i.e., after a reboot, or when you run iptables -F). As such, you might want to have the script persist its conclusions to some database or similar.
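A minimal sketch of that scheme follows; the client addresses and the 1 GiB cap are assumptions, and the commands need root on the gateway:

```shell
#!/bin/sh
# One counting rule per client in FORWARD, plus a cron check that
# blocks clients whose byte counter exceeds a cap.
CLIENTS="192.168.1.10 192.168.1.11"
LIMIT=$((1024 * 1024 * 1024))   # bytes per client

setup() {                        # run once, e.g. at boot
    for ip in $CLIENTS; do
        iptables -A FORWARD -s "$ip" -j ACCEPT
        iptables -A FORWARD -d "$ip" -j ACCEPT
    done
}

check() {                        # run from cron every few minutes
    for ip in $CLIENTS; do
        # column 2 of `iptables -vxnL` is the byte counter; column 8 is the source
        bytes=$(iptables -vxnL FORWARD | awk -v ip="$ip" '$8 == ip {print $2; exit}')
        if [ "${bytes:-0}" -gt "$LIMIT" ]; then
            iptables -I FORWARD -s "$ip" -j DROP
        fi
    done
}
# setup
# check
```

This only counts upload traffic (the source-matching rule); a real script would add the download counter and, as noted above, persist totals across counter resets.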
| Monitor and limit internet bandwidth per network client |
1,624,839,071,000 |
I know I can use incron or inotify to monitor file creation
1- how can I monitor only the creation of .txt files using incron?
2- Is there any other ways or scripts I can use to monitor .txt creation without using incron or inotify?
|
Incron itself doesn't offer filtering on file names, you can only monitor a directory and all of its files and its subdirectories' files recursively. If you're only interested in some of the files, test the file name in the action.
/some/where IN_CREATE /home/user78050/bin/monitor-file-creation $#
where monitor-file-creation is something like
#!/bin/sh
case "$1" in
*.txt)
# do something
;;
esac
| How to monitor create a txt file without using incron? |
1,624,839,071,000 |
I would like my server to send a mail to me when someone connects remotely over ssh to my server.
who only gives me back the username, terminal ID and date. I cannot use only that; I need to check the IP someone uses to connect to me.
So the triggering part would be an external IP.
How can I achieve that?
EDIT: who -h gives back the IP addresses of the ssh sessions. Thanks to Archemar
|
You can add some shell scripting to /etc/bashrc or /etc/bash.bashrc, depending on your Linux distribution. Those are executed when a user logs in remotely via SSH. Just test whether the $SSH_CLIENT variable is set to distinguish an SSH login.
There will be other useful variables for your needs, like:
SSH_ASKPASS=/usr/lib/ssh/x11-ssh-askpass
SSH_CLIENT='127.0.0.1 57353 2217' ← ip address
SSH_CONNECTION='127.0.0.1 57353 127.0.0.1 2217'
USER=username
EDIT:
Of course, this only applies if the user is using GNU bash. Other shells use other files; check their manuals.
HTH, Cheers
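A sketch of such a snippet; the recipient address is an assumption, and the mail command needs a working local MTA:

```shell
#!/bin/sh
# Mail a notification with the client IP taken from $SSH_CLIENT.
notify_ssh_login() {
    # $SSH_CLIENT looks like "203.0.113.5 57353 22"; first field = client IP
    ip=${SSH_CLIENT%% *}
    printf 'SSH login: user %s from %s\n' "$USER" "$ip" \
        | mail -s "SSH login on $(hostname)" admin@example.com
}
# in /etc/bash.bashrc:
# [ -n "$SSH_CLIENT" ] && notify_ssh_login
```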
| How to send a mail when someone remote connects to my server |
1,624,839,071,000 |
What's the most effective way to monitor SSH access in Gentoo Linux?
My Gentoo box is operating locally behind my broadband router. I have SSH port forwarding on the router and a DNS entry pointing to my router on the internet. Is there a way to silently record what external domain/IP the incoming connection to my Gentoo box comes from?
Similarly what's the best method of recording all network traffic to and from this box, again without being noisy about it?
|
User authorization events are typically logged by the system logging daemon in /var/log. The default locations vary between distros, but it is often /var/log/auth, /var/log/auth.log, /var/log/secure. I don't have a Gentoo system handy, but the default install used to feature syslog-ng and log these events to /var/log/auth.log.
There are a variety of ways to audit network traffic, the best one depends on the level of detail you need to retain and what sort of additional equipment you can use to accomplish the monitoring.
If you are concerned about the risk of compromise on a system, you should consider forwarding whatever auditing solution you choose to another system that is inaccessible (except for logging) from the one you are monitoring. Successful attackers would likely remove evidence of their breach from the local logging systems.
| How to transparently monitor SSH access/network traffic in Gentoo/general linux? |
1,624,839,071,000 |
Possible Duplicate:
Access history of a file
I know if a file is "being accessed" I can use lsof to see who (which process) is accessing it, but lsof is slow and heavy and I don't think I would be able to run it fast enough to see if a file is accessed or not.
So is there a way to watch a file and see if it ever gets accessed, and if so, by whom?
|
Assuming you're running Linux:
You can use the audit subsystem to monitor access to a particular file.
You can use the inotify subsystem to watch for activities on files. There is a nice API for inotify, which makes it more useful for somethings than the audit subsystem, but inotify does not provide you with any information about who made the change that triggered a notification.
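A sketch of the audit approach; the watched path and key name are arbitrary examples, and both commands need root plus a running auditd:

```shell
#!/bin/sh
# Watch a file with the audit subsystem and later query who touched it.
watch_file() {
    # $1 = path, $2 = search key; -p rwa logs read/write/attribute access
    auditctl -w "$1" -p rwa -k "$2"
}
who_accessed() {
    # -i interprets numeric fields; events include pid, uid and exe
    ausearch -k "$1" -i
}
# watch_file /etc/passwd passwd-watch
# who_accessed passwd-watch
```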
| How can I monitor if anybody (any process) access is certain file [duplicate] |
1,624,839,071,000 |
I'm currently investigating why I didn't get notified about a high memory utilization on one RHEL server from dynatrace. When checking the graphs of memory usage, both sar and dynatrace show different results.
On SAR it is showing that the server is using 90% for about 11 hours, here's the screenshot of that:
And on the same day this is what dynatrace shows:
And as you can see, they are both from May 3, 2024. The metric they are using is memory usage %. I'm very confused by these two graphs. I'm not an expert on sar, so maybe I'm missing something; if somebody could help me figure out what, it would be very appreciated!
|
The reason for the different graphs is that the two tools simply do not perform the same calculation when estimating memory usage.
With regards to labels & values as taken from /proc/meminfo,
dynatrace is reported using the following formula :
memory_used = MemTotal - ( MemFree + Active + Inactive + KReclaimable )
sar considers that:
memory_used = MemTotal - ( Memfree + Buffers + Cached + Slab )
Since MemFree + Buffers + Cached + Slab appears to be reported as roughly constant (which is not surprising, since Linux tends to cache as much as possible), we could suspect high variations in MemFree (compensated by Cached) and/or in the Active, Inactive, and KReclaimable values.
Going further would require much more information about the configuration of your system and the applications running around 8, 10 and 12.
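To compare both tools against the raw numbers, you can compute sar's formula by hand from /proc/meminfo. A sketch (values in /proc/meminfo are in kB):

```shell
#!/bin/sh
# Compute used-memory % with sar's formula:
# MemTotal - (MemFree + Buffers + Cached + Slab)
meminfo_used_pct() {
    awk '/^(MemTotal|MemFree|Buffers|Cached|Slab):/ {v[$1] = $2}
         END {
             used = v["MemTotal:"] - (v["MemFree:"] + v["Buffers:"] + v["Cached:"] + v["Slab:"])
             printf "%.1f\n", 100 * used / v["MemTotal:"]
         }' "${1:-/proc/meminfo}"
}
# meminfo_used_pct
```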
| Is there a reason why sar would show different monitoring statistics on memory than other monitoring tools like dynatrace? |
1,624,839,071,000 |
I am looking for a counter provided by the Linux kernel counting the number of memory allocations performed by tasks on the system. I want to watch for high velocity changes in this counter, using Prometheus, in order to detect when some task on the machine does something stupid like allocating memory in loops.
I have found a bunch of different metrics that seem to be gauges, that is, numbers representing the current state of the machine. Examples include nr_free_pages and kbhugused. These measure the current amount of something being available or used, but since one task allocating, for example, 1 page and then deallocating it again results in an unchanged gauge, these are of little use to me.
One thought I came across on IRC was if there was a counter for the number of times brk(2) was called, but I soon found that it was not the only system call used to allocate memory.
Right now I'm looking at pgalloc_normal in /proc/vmstat, but have yet to figure out exactly what it is a measure of.
Why do we want to look for huge rates of memory allocation you ask? Because memory allocation is costly. Not only do you have to switch into kernel space and back, the kernel also has a number of locks that can bring a system from having 2 CPUs with 80 execution threads processing data in parallel to just 1 thread allocating memory. This is a real world scenario we have encountered and want to watch for.
|
There’s no such counter ready-made for you as far as I’m aware, but there are a number of ways to track system calls. I think the most straightforward is to use perf:
sudo perf stat -e syscalls:sys_enter_mmap -e syscalls:sys_enter_brk -I 1000 -a
This will show the number of mmap and brk calls per second, every second.
You can track all system calls with this variant:
sudo perf stat -e 'syscalls:sys_enter_*' -I 1000 -a
You can also monitor specific processes, using -p and the relevant pid, instead of -a.
| Memory allocation counter for Linux |
1,624,839,071,000 |
I've come up with the following command to check changes in memory use:
free -s 1 -h | awk 'NR%4==2'
This shows output like this:
Mem: 125G 32G 82G 404M 10G 91G
Mem: 125G 32G 82G 404M 10G 91G
Mem: 125G 32G 82G 404M 10G 92G
I'm actually only interested in changes of memory use, so I tried piping this through uniq:
free -s 1 -h | awk 'NR%4==2' | uniq
However, when using this, no output is given at all. I assume uniq is waiting for its input to finish but of course it never does.
I'm on RedHat 7.6
|
This example from man stdbuf is what you are looking for. It applies to the general case of piping a stream to uniq, like this one. The reason is that uniq needs to see more lines before it can decide whether the current line should be printed or not.
EXAMPLES
tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq
This will immediately display unique entries from access.log
For your case:
free -s1 -h | stdbuf -oL awk 'NR%4==2' | uniq
Another way to do this is using one awk command, saving the line just printed and testing against this value before printing:
free -s1 -h | awk 'NR%4 == 2 {if ($0 != p) print; p = $0}'
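The buffering issue only arises with a live stream; the dedupe idiom itself can be verified on a fixed input (the sample letters here are arbitrary):

```shell
# same dedupe logic as above, applied to a static stream:
# a line is printed only when it differs from the previous one
printf 'a\na\nb\nb\nb\nc\n' | awk '{if ($0 != p) print; p = $0}'
# prints: a, b, c (one per line)
```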
Also, to monitor the output of a command, watch could be useful.
| Stream changes in available memory |
1,624,839,071,000 |
I am using Debian Buster and would like to find out which process does the most writes on a specific partition, just like iotop but limited to a single block device?
|
iotop cannot do that, because it reads processes' I/O counters (/proc/PID/io), which are aggregated across all block devices, including virtual filesystems like tmpfs.
What you'll need to do is block I/O tracing:
https://tunnelix.com/debugging-disk-issues-with-blktrace-blkparse-btrace-and-btt-in-linux-environment/
https://www.collabora.com/news-and-blog/blog/2017/03/28/linux-block-io-tracing/
https://www.linux.com/topic/networking/linux-block-io-tracing/
As far as I know there are no ready-made solutions for that.
| How can I see which process does the most writes on a specific partition? |
1,624,839,071,000 |
I'm running a Unix server with NTPD version 4.2.7. I have various clients using this as their main NTP server, such as other Unix servers, cameras, IoT devices, etc. I want to get a list of which IPs are using this NTP server, to find out which clients would be affected if this UX server went down. The below is what I get when running ntpdc -c monlist; it's not what I expected, as I expected information on the clients using this as their NTP server.
server# ntpdc -c monlist
***Server reports data not found
Thank you all.
|
There isn't a clean solution to this problem. NTP communication happens over UDP and is therefore stateless, so you can't check established connections.
What you can do without much effort is cook up a tcpdump/tshark filter to keep track of connections over time. What I mean by that is sniffing the network and observing any NTP traffic, specifically client-to-server traffic (e.g. UDP packets with destination port 123 arriving at the server). This will give you an accurate list of NTP clients, but it isn't something you can query at any time; it is a process that needs to be kept running for a while (or indefinitely, depending on the purpose).
Conversely, chronyd (which also implements part of the NTP protocol) does keep track of connecting clients (see the chronyc clients command). If this is a viable alternative for your case, it will help you solve this problem.
| How to see which devices on a network use your unix NTP server? |
1,624,839,071,000 |
We have 763 Red Hat 7.2 Linux machines with systemd, systemctl and the presto service:
systemctl status presto.service
● presto.service - Perforce Server
Loaded: loaded (/etc/systemd/system/presto.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-06-25 18:30:22 UTC; 22min ago
We want a GUI to indicate all presto services, whether up or down. We can guarantee network-level remote access.
Please advise which applications, GUI or HTML GUI, we can download and install to show the status of the presto service on each server.
|
Use presto-admin server status on each node to generate a status report.
Repeatedly query each of your 763 nodes for that status.
Sort the results by node URL and throw away all but the latest status.
Turn that into HTML with a script.
Generate a perpetually refreshing HTML page which draws on the status found in the server status reports: green for Up, yellow for No Report for X minutes, and red for Down.
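A minimal sketch of such a poller; the host list, the SSH access and the exact presto-admin invocation are all assumptions here:

```shell
#!/bin/sh
# Emit one HTML table row per node: green/UP if `presto-admin server status`
# succeeds over ssh, red/DOWN otherwise.
row() {
    printf '<tr><td>%s</td><td style="color:%s">%s</td></tr>\n' "$1" "$2" "$3"
}

for host in node1.example node2.example; do   # replace with your 763 hosts
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" \
           presto-admin server status >/dev/null 2>&1; then
        row "$host" green UP
    else
        row "$host" red DOWN
    fi
done
```

Wrap the rows in a table element and add a refresh meta tag (or a cron job rewriting the file) to get the perpetually refreshing page.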
I also suggest you read up on some of the availability monitoring tools available for monitoring hundreds of nodes.
| Wanted: GUI interface to show status of services on many servers |
1,624,839,071,000 |
What is the ping and traceroute command option to use in order to check the connectivity between 2 machines (machine A and machine B) using my machine (machine C) ? How do I specify the source machine IP? ping -S machineAIP machineBIP or ping -I machineAIP machineBIP don't seem to work from my debian 8 machine.
|
While ping allows you to set intermediate hosts, I think you can only do this if the intermediate hosts are willing to accept traffic intended for the destination (i.e., they behave as gateways). See https://superuser.com/questions/311849/how-can-i-ping-via-an-alternate-gateway
In this situation, I'd just use SSH:
ssh machineA ping machineB
| Is it possible to check the connectivity between 2 linux machines by pinging from a third machine? |
1,533,307,609,000 |
I am looking for a way to get the total disk access of a program, ideally something like the time(1) command that reports back the disk reads/writes of the program.
|
I found the solution:
/usr/bin/time -v program args
will return the number of blocks read and written to disk by the program.
| How to measure IO/disk usage of a program |
1,533,307,609,000 |
I found that vmstat gives:
si: Amount of memory swapped in from disk (/s)
so: Amount of memory swapped to disk (/s)
(and here I understand that swapping and paging are being used interchangeably)
Is it possible to get those statistics for a specific process?
|
The simple answer is: you can't. Writing to and reading from swap is done by kswapd.
There was already an answer describing how it generally operates: https://serverfault.com/a/316636/252390
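One related data point that is available per process, though it is a current amount rather than a rate: on reasonably modern kernels (2.6.34 and later), /proc/PID/status exposes how much of a process's memory currently sits in swap:

```shell
# how much of this shell's memory is currently swapped out
grep '^VmSwap:' /proc/self/status
# e.g.  VmSwap:        0 kB
```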
If you want to reduce the amount of swapping in/out, you may check the vm.swappiness sysctl parameter.
sysctl vm.swappiness
You may set a lower value to decrease swap usage on the system. It is generally set to 60.
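To make a lower value persist across reboots, a sysctl drop-in file can be used (the file name here is arbitrary):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10
```

Apply it without rebooting with sysctl -p /etc/sysctl.d/99-swappiness.conf.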
| How to monitor paging activity per process? |
1,533,307,609,000 |
If you run vmstat -s, it displays statistics about your system. I am wondering what it does to calculate the used memory statistic (highlighted below). This is not a statistic that I can find in /proc/meminfo.
user@machine:# vmstat -s
7483816 K total memory
**4740624 K used memory**
3619096 K active memory
800388 K inactive memory
2743192 K free memory
220624 K buffer memory
1989008 K swap cache
901116 K total swap
0 K used swap
901116 K free swap *snip*
How does vmstat get that data?
|
vmstat gets the virtual memory statistics from /proc/meminfo and /proc/vmstat, and processor-related info from /proc/stat:
% strace -fe open vmstat -s
...
open("/proc/meminfo", O_RDONLY) = 3
open("/proc/stat", O_RDONLY) = 4
open("/proc/vmstat", O_RDONLY) = 5
...
For used memory, from https://gitlab.com/procps-ng/procps/blob/master/proc/sysinfo.c#L772:
if (mem_used < 0)
mem_used = kb_main_total - kb_main_free;
kb_main_used = (unsigned long)mem_used;
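You can reproduce that fallback calculation (total minus free) yourself straight from /proc/meminfo. Note this is only a sketch of the fallback path shown above; newer procps versions subtract buffers and cache as well, so the number may differ from what your vmstat prints:

```shell
# "used memory" computed as MemTotal - MemFree, in kB, from /proc/meminfo
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} END {print t - f " K used memory"}' /proc/meminfo
```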
| Where does vmstat get its "used memory" statistic from?
1,533,307,609,000 |
Just upgraded a cacti server to Stretch/Debian 9. Cacti was still working after the upgrade was finished.
After cleaning up the leftover Debian 8 PHP 5 packages, that left only PHP 7.0 installed, cacti stopped working, giving only a blank page when accessing the URL.
Calling /usr/share/cacti/site/index.php from the command line gives the error:
PHP Fatal error: Uncaught Error: Call to undefined function mysql_pconnect() in /usr/share/php/adodb/drivers/adodb-mysql.inc.php:480
What to do to make it work?
|
mysql_pconnect is obsolete by now, and not supported by PHP 7.0.
I changed the database_type in the Cacti configuration file /etc/cacti/debian.php:
From:
$database_type = "mysql";
to:
$database_type = "mysqli";
Cacti is now working.
From: http://php.net/manual/en/function.mysql-pconnect.php
This extension was deprecated in PHP 5.5.0, and it was removed in PHP
7.0.0. Instead, the MySQLi or PDO_MySQL extension should be used.
| Cacti stopped working after upgrade to Stretch |
1,533,307,609,000 |
Are there any methods or bash tools that can observe certain events (such as creating a folder) and then perform other actions?
|
What you are looking for is inotify; there are the programs inotifywait and inotifywatch, in the package inotify-tools.
You can add event handlers for creates, reads, writes, deletes, etc.
To install: sudo apt-get install inotify-tools
see also package inotify-hookable
| Event dispatcher in bash (Ubuntu Gnu/Linux) [duplicate] |
1,533,307,609,000 |
This might be a shot in the dark, but I am wondering if there is a way that I can connect my system monitor on my computer to my headless box. It would be awesome to somehow use that interface display what is going on on my server, from my Desktop.
I am of course aware of tools like top and other web based monitors that I could use... but I think it would be cool to use Gnome's.
|
What is so awesome about gnome-system-monitor? I don't know of any way to do that, but feel free to hack it. It is open source.
This application is just a "desktop toy". It does not have ambitions to monitor different hosts. If you are interested in monitoring a server, there are different tools that do that, and do it in a better way (and you might even run them on the local machine while monitoring the remote one ... Cockpit, for example).
| Connect gnome-system-monitor to another (headless) machine |
1,533,307,609,000 |
I am using CentOS 6.5 and Xen 4.2.4-30
xentop does not change MEM(%) at all.
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 68 0.0 1048568 25.0 1048576 25.0 1 0 0 0 0 0 0 0 0 0 0
vm1 --b--- 7 0.0 1536000 36.6 1536000 36.6 1 1 49 0 1 0 6518 433 95640 4034 0
vm2 --b--- 8 0.0 1536000 36.6 1536000 36.6 1 1 55 5 1 0 6562 551 97336 5090 0
Is there any way to get how much of the allocated memory is actually used by each VM, like CPU(%) in xentop?
|
I'm not familiar with Xen (ie. I have no practical experience with it) but I did find this thread which would seem to indicate that you can never get the "actual" memory utilizations from the guest VMs from Dom0 via xentop.
Monitoring domU real memory usage
There is this comment at the end of the thread:
This information is not available from domain0 by default but can be sent from each domU via xenstore. Look at the shell scripts in xenballoond for an example of how to do that. Basically, you need a shell script running in each domU to put the information (e.g. /proc/meminfo) into xenstore and a shell script in dom0 to read it and print it.
| xentop gives static info about memory usage |
1,533,307,609,000 |
Trying to setup memory usage monitoring for Nagios using the check_snmp_mem.pl from Nagios SNMP plugin.
I could not even get it working from the command line, I mean I go to /usr/lib/nagios/plugins and run the script, it gets a "No response from remote host" error.
[root@nagios plugins]# ./check_snmp_mem.pl -H rhel01 -C public -N -w 90,20 -c 99,30
Argument "v6.0.1" isn't numeric in numeric lt (<) at ./check_snmp_mem.pl line 319.
ERROR: Description table : No response from remote host "rhel01".
Any SNMP configurations required on the monitored server?
|
With help from another colleague, we worked out why it didn't work.
3 things:
First, we have agentaddress tcp:x.x.x.x:161 in snmpd.conf, just deleted the line
Second, iptables is blocking udp port 161, added rules to allow udp port 161
Third, something was wrong with the script (as you can see from the error message about line 319): changed < to lt
| No response from remote host for Nagios check_snmp_mem.pl plugin |
1,533,307,609,000 |
Here is what I currently have online; as you can see, there is no information about my Debian server.
(While installing, I tried to follow these instructions.)
What I have changed in default gmond.conf:
cluster {
name = "dspproc"
owner = "unspecified"
latlong = "unspecified"
url = "dspproc"
}
udp_send_channel {
mcast_join = 127.0.0.1
port = 8649
ttl = 1
}
udp_recv_channel {
mcast_join = 127.0.0.1
port = 8649
bind = 127.0.0.1
}
And this is what I changed in gmetad.conf:
data_source "dspproc" 10 127.0.0.1
authority "http://195.19.243.13/ganglia/"
trusted_hosts 127.0.0.1 195.19.243.13
case_sensitive_hostnames 0
My question is: what am I doing wrong, and how do I make Ganglia show info about the machine it's installed on?
Update
Following this answer Changed to:
udp_send_channel {
host = 127.0.0.1
port = 8649
ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
host = 127.0.0.1 /* line 41 */
port = 8649
bind = 127.0.0.1
}
got this on restart:
Starting Ganglia Monitor Daemon: /etc/ganglia/gmond.conf:41: no such option 'host'
and still Hosts up: 0 in web ui.
Update 2:
So... when I read the answer again and followed the link, I made the following changes to the configuration and everything worked out! Thank you noffle!
Now that block of gmod.conf looks like
udp_send_channel {
host = 127.0.0.1
port = 8649
ttl = 1
}
udp_recv_channel {
port = 8649
family = inet4
}
udp_recv_channel {
port = 8649
family = inet6
}
and all seems to work...
|
I seem to remember having a similar problem when setting up Ganglia many moons ago. This may not be the same issue, but for me it was that my box/network didn't like Ganglia's multicasting. Once I set it up to use unicasting, all was well.
From the Ganglia docs:
If only a host and port are specified then gmond will send unicast UDP messages to the hosts specified.
Perhaps try replacing the mcast_join = 127.0.0.1 with host = 127.0.0.1.
| How to configurate ganglia-monitor on a single debian machine? |
1,533,307,609,000 |
Is there a way to find out what is trying to mount this file?
Jul 13 14:27:24 myhost automount[13527]: lookup(file): lookup for
tmp_dir failed
Something is looking for "tmp_dir", and I've grepped a bunch of places but cannot find what script, program, etc... is looking for the file/dir and is causing automount to try and mount it up.
I see there are entries in /proc/mounts for tmp_dir, but looks like I cannot remove them since /proc/mounts is read-only (probably for good reason). Thoughts?
For a little background, we recently took down a file share that was called tmp_dir, and I think a programmer still has something pointing to tmp_dir, but he claims he cleaned everything up. I'm thinking maybe we did not umount tmp_dir properly before taking down the share, and autofs is still attempting to load it. The OS is SLES 11 SP1.
|
Here are a couple of ways to monitor accesses to particular files. I'm not completely sure how they'll interact with an automounter, but they probably will work.
Put a LoggedFS filesystem on the automount directory (/amnt or whatever), and configure it to look out for /amnt/tmp_dir. Start from the provided configuration file example and tweak the include/exclude rules according to this guide.
Get the Linux audit subsystem utilities (on any recent distribution, this should just be a matter of installing a package), and make the kernel look out for this file:
auditctl -a exit,always -w /amnt/tmp_dir
See also Determine which process is creating a file; my answer there has more explanations on LoggedFS and auditd.
| Automount lookup failed. How to determine what is trying to access the file? |
1,533,307,609,000 |
runsvdir (the UNIX init scheme with service supervision from runit) is a nice tool to re-run a service if it dies. It monitors a directory for changes, inotify-like, and executes the scripts in its subdirectories forever.
I have a structure like this:
$ tree app
app
├── service
│ ├── run
│ └── supervise
├── replay
│ ├── run
│ └── supervise
└── run
├── run
└── supervise
What I would like to do is something like this, based on this in app/run/run and app/replay/run (I tried the solution in the link, but it fails):
su - user -c screen -S run<<EOF
[...]
# code
EOF
The code has to block/wait to avoid runsvdir running multiple instances in parallel.
The code is run as root in a docker container. No systemd there.
I tested many solutions, and got defunct PIDs and multiple processes running in parallel, which I want to avoid. I'm probably missing something obvious.
Any idea?
debian 11 until next week ;)
runit 2.1.2-41
Note/Edit: it's not mandatory to create run/replay from init. It can be a shell script (bash). The screens have to be run only once.
|
Fixed like this after docker build .:
docker exec -d -u mevatlave cont screen -d -m -S run ./run
docker exec -d -u mevatlave cont screen -d -m -S replay ./replay.sh
docker exec -it -u mevatlave cont screen -x run
| How to spawn a user screen in a docker container? |
1,533,307,609,000 |
I found an interesting article that describes how to simulate network issues (like lost packets) on a linux server.
On an Ubuntu test VM, I checked which interface is used for internet connectivity, and it's called ens33.
Then I added a rule using tc to introduce packet loss:
$ sudo tc qdisc add dev ens33 root netem loss 30% 50%
And then I let ping run for a while, the result is as expected, some packets are lost:
$ ping www.google.com
...
97 packets transmitted, 84 received, 13% packet loss
While ping was running, I thought I could also monitor the ongoing packet loss using ip -s link show ens33, but it shows 0 dropped packets both for RX and TX.
What I'm trying to do is to monitor packet loss in realtime, while ping is running.
|
tc also accepts a -s parameter, with the same meaning as ip's: statistics.
Example as root applied on a veth link toward an LXC container with address 10.0.3.128:
# echo; tc qdisc del dev vethlzYQu1 root 2>/dev/null; \
ip neigh flush all; \
tc qdisc add dev vethlzYQu1 root netem loss 30% 50%; \
tc -s qdisc show dev vethlzYQu1 root; \
ping -q -c 10 10.0.3.128; \
tc -s qdisc show dev vethlzYQu1 root
qdisc netem 8010: root refcnt 5 limit 1000 loss 30% 50%
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
PING 10.0.3.128 (10.0.3.128) 56(84) bytes of data.
--- 10.0.3.128 ping statistics ---
10 packets transmitted, 8 received, 20% packet loss, time 9193ms
rtt min/avg/max/mdev = 0.030/125.218/1001.185/331.084 ms
qdisc netem 8010: root refcnt 5 limit 1000 loss 30% 50%
Sent 826 bytes 9 pkt (dropped 3, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Here 9+3=12 packets should have been sent, 2 of the dropped packets were from the ping, and the other was probably an ARP request which was retried.
If you need to parse tc's output in shell, better use its JSON output along jq. Eg:
# tc -s -json qdisc show dev vethlzYQu1 root | jq '.[].drops'
3
| Monitoring packet loss simulated with tc |
1,533,307,609,000 |
For a while now and for some reason I find myself in the unpleasant situation of Ctrl-w not working anymore in any program, which means I need the mouse any time I want to close a tab.
I tried checking general Debian keyboard shortcuts, input method shortcuts etc. but didn't find any conflict so far. However, it's not like there's zero reaction. When I press Ctrl-w in Firefox for example, the vertical scrollbar on the right gets highlighted until I release the keys. In Kate, the cursor stops blinking for an instant.
So the strategy I came up with is somehow finding a log which tells me which program or process is being triggered by any keypress (or just shortcuts would do fine)...
I found out all keys pressed can be logged using programs like KeyLogger or logkeys; but they don't seem to make a link to the action triggered by those keypresses...
Is there a way to track any activity triggered by the keys I press in Debian 10 AVL-MXE? Like tail but realtime (or non-realtime) monitoring/logging of anything I'm doing?
Or another way to solve my "Ctrl-w not working anymore" problem?
Thanks so much for any clue!
some system info:
Kernel : Linux 5.9.1-rt20avl1 (x86_64)
Version : #1 SMP PREEMPT_RT Sat Oct 31 12:21:58 EDT 2020
C Library : GNU C Library / (Debian GLIBC 2.28-10) 2.28
Distribution : Debian GNU/Linux 10 (buster)
|
I did not find the answer to my question, but I did find a simple solution to my problem. Instead of using Ctrl-w to close tabs, I now simply use Alt-f-c. Admitted, it's one key more to press, but it also allows me to close tabs with one hand from my keyboard. It also helped me realize that preconfigured Menu shortcuts are underrated, or at least that I didn't realize how handy they can be for repeated actions. Hope this is a useful thought :)
| How to track all shortcut keys pressed and the process they interact with? |
1,533,307,609,000 |
Some files on a Debian 9 server are periodically overwritten back to their original state after I modify them. I couldn't find which process/program is doing that job. Nothing is defined in crontab. Possibly it comes from a remote server (i.e. Ansible/Puppet), but I couldn't find evidence.
I tried to use lsof and fuser, but no process is using these files.
My question is how to set up a monitor to watch these files and find out what process changed their contents.
|
Good question! Intrusion Detection Systems are used for things like this, or at least they could be, so maybe one of those (aide, tripwire, ...) already has such a capability, or one could ask about it there.
If you know the files being modified beforehand (or a list of files which are important to remain unmodified and the times and usernames that show that you modified them yourself) you could use something suggested here.
If you'd like to use auditd here you can find a guide to do so.
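As a sketch of the auditd approach (the path and key name below are placeholders), a persistent watch rule would look like this:

```
# /etc/audit/rules.d/watched-file.rules -- load with 'augenrules --load',
# or apply one-shot with: auditctl -w /path/to/file -p wa -k file-change
-w /path/to/file -p wa -k file-change
```

Afterwards, ausearch -k file-change (or aureport -f) shows which executable and PID performed each write or attribute change.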
| How to Monitor what Changes a File |
1,603,985,792,000 |
My home server runs a couple of shell scripts regularly for maintenance tasks - mostly backup, but also other stuff. I would like to be alerted in case anything fails but also keep a log of when it works.
Currently my setup looks like this:
Cron calls one shell script which calls other scripts (just so the one won't get too complex). I decided to use one script with many tasks instead of individual cron items as I don't know how long each will take and I don't want them to interfere with one another.
My cron setup contains a MAILTO line. I never get any errors.
I don't have any logging. I just check from time to time whether the backup actually exists.
I know, I could implement into each script the functionality to log to a file (or syslog). Is there a way to define this from a central point so that I do not have to code this into every script individually?
Not sure how to achieve better monitoring. I think a log analyzer system would be too much for this. Someone suggested running the scripts through Jenkins instead of shell/cron, but that seems to be even more effort.
What is a simple and good option?
|
I have implemented the following:
Enabled output to stdout for various steps or added custom output, e.g.:
echo "Starting backup..."
rsync whatever && echo "Backup successful" || echo "Backup failed"
Checking the return codes of each step of the script, either exiting the sub-script immediately or continuing, returning an error code at the end of the script
wrote a wrapper for my maintenance script which redirects all the outputs to a log file and if there are any errors within the maintenance script, I get a mail.
Example of maintenance script (does not exit if any individual step breaks, but returns an error in the end):
#!/bin/bash
RETURNCODE=0
echo "Execution started $(date)"
/root/do_something.sh || RETURNCODE=1
# (...)
exit $RETURNCODE
Example of wrapper script that calls the other script, this one is now in my crontab:
#!/bin/bash
# exit on any error (there should not be any in this script)
set -e
LOGFILE="/var/log/my.log"
# redirect STDOUT and STDERR to logfile...
if /root/maintenance.sh > $LOGFILE 2>&1; then
# the colon ":" means: do nothing
:
else
# on error, send me an email
mail -s "maintenance script failed" [email protected] < "$LOGFILE"
fi
| How to monitor cron maintenance scripts? |
1,603,985,792,000 |
OK, I've been googling for hours so I obviously have not been able to understand the answers to the various questions that have already been asked about this subject. I am hoping that, by asking the question again in a more specific way, I will be able to get an answer I can understand.
I have some application running in Linux that communicates with an external device attached to a serial port. I want to be able to capture and log the data sent in both directions between the application and the device, with timestamps at the beginning of each line in the file.
As a test case, I am using minicom as the application I want to monitor, connected via a null modem cable to another computer also running minicom. I have already confirmed that, if I type characters on either of the computers, characters appear in the other computer's terminal. So far, so good.
I then found this question:
How can I monitor serial port traffic?
In the answer to this question, the program jpnevulator is suggested. However, when reviewing the man page, I could not figure out the right way to use jpnevulator to get what I want. Here is what I tried to do:
First, I opened a terminal window and typed the following command:
$jpnevulator --tty=/dev/ttyS0 --pty --pass --read --ascii --timing-print --file=serial.log
I saw the output:
jpnevulator: slave pts device is /dev/pts/18
I then opened another terminal window and typed the following command:
minicom -D/dev/pts/18 -b115200
Minicom opened without complaint.
However, when I typed characters in either terminal (local and remote), nothing appeared in either terminal. jpnevulator only logged the data written to /dev/pts/18.
My expectation is that jpnevulator:
reads data from /dev/pts/18 and "passes" this data on to /dev/ttyS0 while also writing this data to the specified file.
reads data from /dev/ttyS0 and "passes" this data on to /dev/pts/18 while also writing this data to the specified file.
I am aware of the remark in the faq that says "Jpnevulator was never built to sit in between the kernel and your application. I'm sorry."
However the very same faq states in the second paragraph down: "Now with a little bit of luck some good news: A little while ago Eric Shattow suggested to use pseudo-terminal devices to sit in between the kernel and your application." That is the approach I am trying to take but I am having no success. What am I missing?
Thanks to all in advance.
Cheers,
Allan
p.s.
I was successfully able to capture the back and forth traffic using the socat method mentioned in the existing question I referenced but this method did not offer any way of timestamping the traffic. I was hoping that jpnevulator would have provided this for me.
|
I have answered my own question: I found another utility that better provides what I want:
https://github.com/geoffmeyers/interceptty
That package includes a perl script that post-processes the output of interceptty to provide a "pretty" output. I found it quite easy to modify the script to add a timestamp to each line.
Thanks to Geoff Meyers for providing this.
Allan
| How do I use jpnevulator to capture and log the serial traffic between an application and hardware serial port? |
1,603,985,792,000 |
Sorry if this is a repeat. I searched, but with no luck.
I'm using SNMPd on an openwrt/wr host with some ppp and tun connections. These connections get IDs in the if table, and will actually get a new ID whenever the tunnels reconnect.
Nagios (check_mk), when that happens, complains that an interface went down; oh, and a different one with the same name came up right afterward. In the meantime, it's iterating over so many interfaces that the reports are of 'interface 4933 down'; and an snmpwalk shows close to 4932 datapoints before it.
How are the helpful folks here handling a monitoring situation like that?
|
I figured it out, so here's an answer so I'm not denvercoder9.
Adding a new config to the snmpd.conf seems to have both trimmed the interface march as well as prevented complaints.
interface_replace_old yes
I think that's the one. It could be that simple.
Try it yourself if you run into the same problem, and let me know if I've got it wrong.
| Nagios/SNMP - devices alerting when ppp/tun connections cycle |
1,603,985,792,000 |
On the server, every day there will be a file named "xxxx_xxx_2016-11-08_0.log" in /usr/logs. The date in the file name changes every day, but the format stays the same; there is one file per day, and content is written to it every 4 hours: at 12 AM, 4 AM, 8 AM, 12 PM, 4 PM and 8 PM. A monitoring job needs to check the file content at around 3:30 AM, 7:30 AM, 11:30 AM, 3:30 PM and 7:30 PM and see if it contains any error such as "maxretry, not synchronized"; if such a line is found, please send an alert and create an email. Also, if you see an error like "FCS Bad receipt", do the same as above. I am new to scripting and Unix; please help me with the above requirement.
|
This is just a sample script that you could use to modify as per your needs.
FILE="xxxx_xxx_`date +"%Y-%m-%d"`_0.log"
grep -E "maxretry|not synchronized|FCS Bad receipt" $FILE > fcs_error.log
if [[ $(wc -l fcs_error.log | awk '{print $1}') -gt 0 ]]; then
mail -s "error found" mail_id <fcs_error.log
fi
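Before scheduling it, you can sanity-check the pattern against fabricated log lines (the sample text here is made up):

```shell
# two of the three sample lines should match the alert pattern
printf 'link ok\nretrans maxretry, not synchronized\nFCS Bad receipt on port 2\n' |
    grep -cE "maxretry|not synchronized|FCS Bad receipt"
# prints: 2
```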
Check cron for how to schedule jobs.
Use the -n option of grep to print line numbers. See grep for more details.
| Fcs synchronization error monitoring |
1,603,985,792,000 |
I have set up a very simple script that tests whether a process is running; if so, it touches a file and everything is fine. However, if the process is not running and the file isn't touched, I want to be able to set up an alert.
pgrep "sleep" >/dev/null && touch monitor.log
This script runs from a crontab every minute. I need a way for it to alert if the file has not been touched. Is this possible?
Thanks
|
This is a straightforward file modification time check; the complications mainly arise from the possibility of up to 86,400 alerts per day (it's usually over a long holiday weekend that these sorts of things break), and from the additional questions of whether the modification time checker (or cron, or the system...) is actually running, and whether the host clock is correct (time skew on virts, BIOS clock coming up four years in the future, broken NTP, etc).
#!/bin/sh
# what we're checking for mtime changes straying from the current system time
MONITOR=foofile
THRESHOLD=60
# use mtime on this file to avoid frequent alert spam should the above stop
# being updated
LAST_ALERT=barfile
LAST_ALERT_THRESHOLD=60
NOW_MTIME=`date +%s`
absmtimedelta() {
delta=`expr $NOW_MTIME - $1`
# absolute delta, in the event the mtime is wrong on the other side of
# the current time
echo $delta | tr -d -
}
alertwithlesscronspam() {
msg=$1
if [ ! -f "$LAST_ALERT" ]; then
# party like it's
touch -t 199912312359 -- "$LAST_ALERT"
fi
# KLUGE this stat call is unportable, but that's shell for you
last_mtime=`stat -c '%Y' -- "$LAST_ALERT"`
last_abs_delta=`absmtimedelta $last_mtime`
if [ $last_abs_delta -gt $LAST_ALERT_THRESHOLD ]; then
# or here instead send smoke signals, carrier pigeon, whatever
echo $msg
touch -- "$LAST_ALERT"
exit 1
fi
}
if [ ! -r "$MONITOR" ]; then
alertwithlesscronspam "no file alert for '$MONITOR'"
fi
MONITOR_MTIME=`stat -c '%Y' -- "$MONITOR"`
ABS_DELTA=`absmtimedelta $MONITOR_MTIME`
if [ $ABS_DELTA -gt $THRESHOLD ]; then
alertwithlesscronspam "mtime alert for '$MONITOR': $ABS_DELTA > $THRESHOLD"
fi
Perhaps instead consider a standard monitoring framework, which may have support for file modification time checks or a plugin to do so, customizable alerting, metrics, better code than the above, etc.
| Creating alert notification if process stops touching file |
1,603,985,792,000 |
I have just attempted to get RICHPse version 3.4 running on a Solaris version 10 Sun M5000.
It seems to run on the global zone but not on the local zones.
On the local zones RICHPse just fails without issuing any errors.
I have run orca + RICHPse without root access and it seems to run fine on other servers without root. I just set the appropriate environment paths in start_orcallator and it works. I know RICHPse should also normally be placed in /opt (from pkgadd setup) but it seems to work even when located elsewhere (e.g. somewhere in /home and working with non-root ID)--i.e. just running from a copied RICHPse folder. So, I am thinking that is not the issue but something else with the Solaris local zone.
|
Commenting or removing the line:
#define USE_RAWDISK 1
in lib/orcallator.se seems to have fixed the problem for me.
| Does orca + RICHPse work on Solaris Containers or local zones? |
1,603,985,792,000 |
I need to monitor HP printer state (ink, paper, etc). HPLIP has an hp-info tool with a debug mode and very verbose output, but I can't find any documentation explaining its data. For example:
hp-info[31896]: debug: printer_status=1
hp-info[31896]: debug: device_status=2
hp-info[31896]: debug: cover_status=4
hp-info[31896]: debug: detected_error_state=64 (0x40)
hp-info[31896]: debug: Printer status=1000
|
There is no complete documentation for the hp-info command. All that you
can find is in hp-info --help (or) the man hp-info command. HPLIP is an
open source project and you can find the complete source code of hplip
at http://hplipopensource.com/hplip-web/gethplip.html. You can explore
the source code to get more info on the data listed by the hp-info
command. Most of it is contained in the codes.py and models.dat files of
the source.
Source
| Where can I find hplip debug documentation? |
1,359,274,529,000 |
I am about to develop a piece of software and I want to ascertain the impact it has on my system. The main things I am look for are load times, memory and CPU usage and shutdown time, although I would like to get as much information as possible. I know I can use my distro's system monitor to get some of this stuff, but I need precise data as I am going to be doing some before and after tests during my project. Is there anything out there (preferably open source) that will suffice?
|
It looks like you need software profilers, e.g.:
for memory I use Valgrind's massif with Linux Tools on Eclipse.
If you are not restricted by system/GPL licensing, try DTrace on Solaris/SunOS; otherwise try SystemTap (Eclipse + Linux Tools), or gprof if you have the source code.
| How can I test the system footprint of applications? |
1,359,274,529,000 |
I'm looking for a process monitor that produces easy-to-parse output on stdout. Is there any tool like that in Unix? Something like htop or top, but meant to be consumed by another program.
To be more specific, let's say I want to create a GUI program for process monitoring. So, I need to get real-time process information (maybe every second). Do I need to call ps every second, or is there a better alternative?
|
Sounds like ps... It can be configured to output specific information on specific processes (or all processes).
If you don’t mind making your program OS-specific, you could also parse whatever ps parses on your system, e.g. /proc on a Linux system.
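If you do go the /proc route on Linux, a minimal sketch of the sampler's inner loop might look like this (the helper name is arbitrary; each output line is "pid comm", which is trivial to split in your GUI program):

```shell
# emit "pid comm" for every running process, read straight from /proc (Linux)
list_procs() {
    for d in /proc/[0-9]*/; do
        pid=${d#/proc/}; pid=${pid%/}
        # a process may exit between the glob and the read; skip those
        read -r comm < "${d}comm" 2>/dev/null || continue
        printf '%s %s\n' "$pid" "$comm"
    done
}
list_procs
```

Calling this once per second avoids forking ps on every sample, and you can read further per-process files such as /proc/[pid]/stat for CPU and memory figures.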
| Is there any parseable process monitor? |
1,359,274,529,000 |
Munin has chart 'interrupts and context switches on the system'.
Since this is a monitoring tool, I assume these values are important for the server's performance.
So, the question is: for each particular server, how can I know whether the values are OK or too high?
Assume any Linux, at least.
|
Monitor the trends over time and look for anomalies. The "normal" values differ depending on the type of application load and in turn, what the application does on a regular basis.
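To collect that trend data on Linux you can sample the same cumulative counters the graph is built from; a minimal sketch (Linux-specific, reads the intr and ctxt lines of /proc/stat and prints per-second rates over a one-second window):

```shell
# print interrupts/s and context switches/s over a 1-second window
read_counters() {
    awk '/^intr /{i=$2} /^ctxt /{c=$2} END{print i, c}' /proc/stat
}
set -- $(read_counters); i1=$1 c1=$2
sleep 1
set -- $(read_counters)
echo "interrupts/s=$(( $1 - i1 )) ctxt/s=$(( $2 - c1 ))"
```

Logging that output periodically gives you the baseline against which anomalies stand out.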
| How many interrupts and context switches good for the server? |
1,359,274,529,000 |
I know how to use inotify to monitor filesystem events under linux. I'm wondering if there is any utility that is similar to inotify which can be used to monitor non-filesystem events.
For example, I'd like to register event handlers which can be triggered by things like the startup or shutdown of certain executables, the receipt of connections or disconnections from other hosts, mounts or unmounts of filesystems, the login or logout of certain users, etc.
The syslog facility is not sufficient for this purpose, because (for example) the starting and stopping of arbitrary executables are not logged anywhere. The same is true for arbitrary mounts and unmounts.
I know that I can write programs to read information from the /proc filesystem and execute code based upon conditions that it finds. I also know that I can write programs to monitor wtmp and other such resources and to similarly execute code based upon what is found. However, I'm wondering if there is some sort of facility like inotify which could be used to encapsulate these kinds of non-filesystem monitoring tasks underneath a standard interface.
Thank you for any suggestions.
|
I believe that you can do at least some of what you're looking for With Sysdig Chisels. Sysdig is an open-source tool that enables you to monitor Linux system calls. The chisels enable you to write scripts to perform actions based on the observed system calls.
Take a look at the user guide
| Monitoring non-filesystem events similarly to inotify? |
1,359,274,529,000 |
I have installed XYMon server successfully, but I can't get the client information to show. It took me a while to figure out, but if you add a host to the server that doesn't have the client, it still shows some things like conn/ssh/info, but nothing like cpu/disk/mem.
I have found it to be REALLY hard to find any good documentation, troubleshooting steps or anything. The installation of the client seems extremely simple.
On Ubuntu 14.04 >>
apt-get install xymon-client
Only one question during installation
IP of XYMon-Server
But I can't get anything to show in the server!
The closest I have come to debugging is
On Server :
@xymon:/var/log/xymon$ cat alert.log
2016-07-20 21:31:52 -> Could not connect to Xymon [email protected]:1984 (Connection refused)
@xymon:/var/log/xymon$ cat xymonlaunch.log
2016-07-20 22:00:27 Cannot open env file /usr/local/xymon/server/etc/hobbitserver.cfg - No such file or directory
2016-07-20 22:00:27 Loading hostnames
2016-07-20 22:00:27 Loading saved state
2016-07-20 22:00:27 Setting up network listener on 0.0.0.0:1984
2016-07-20 22:00:27 Setting up signal handlers
2016-07-20 22:00:27 Setting up xymond channels
2016-07-20 22:00:27 Setting up logfiles
2016-07-20 22:10:27 Cannot open env file /usr/local/xymon/server/etc/hobbitserver.cfg - No such file or directory
On Both Client & Server I found this Error :
@xymon:/var/log/xymon$ cat xymonclient.log
No LSB modules are available. - Repeated for ever...
There was nothing more useful in client logs.
Install XYMON Config :
sudo apt-get install -y xymon
sudo cp /etc/apache2/conf.d/xymon /etc/apache2/conf-available/xymon.conf
sudo ln -s /etc/apache2/conf-available/xymon.conf /etc/apache2/conf-enabled/
sudo ln -s /etc/apache2/mods-available/authz_groupfile.load /etc/apache2/mods-enabled/
sudo ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/
sudo ln -s /etc/apache2/mods-available/cgi.load /etc/apache2/mods-enabled/
sudo ln -s /var/lib/xymon /var/www/html/xymon
sudo nano /etc/apache2/conf-available/xymon.conf
Replace the below 2 lines with the bottom ONE line. (All instances)
#Order allow,deny
#Allow from localhost ::1/128
Require all granted
|
Based on This page from XYMon regarding Clients not reporting I figured it out. While it gives the problem, I couldn't get the solutions there to work.
On Client :
cat /etc/default/xymon-client | grep CLIENTHOSTNAME
Must Match on Server :
/etc/xymon/hosts.cfg
1.2.3.4 CLIENTHOSTNAME
If it doesn't match EXACTLY then >>
"Xymon only cares about the hosts that are in the hosts.cfg file, and discards status-reports from unknown hosts"
| XYMon-Client status not showing in XYMon Server |
1,359,274,529,000 |
I have a Nagios client which suddenly gave an error after an upgrade. I reinstalled the nagios-plugins package and the NRPE agent again but was unable to solve the error. This version runs NRPE under xinetd.
# /usr/local/nagios/libexec/check_nrpe -H localhost
CHECK_NRPE: Error - Could not complete SSL handshake.
# netstat -plan | grep :5666
tcp 0 0 :::5666 :::* LISTEN 20265/xinetd
Nagios-server-IP 10.10.3.30
# cat /etc/xinetd.d/nrpe | grep -i only_from
only_from = 127.0.0.1 10.10.3.30
# cat /etc/xinetd.d/nrpe
# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
flags = REUSE
socket_type = stream
port = 5666
wait = no
user = nagios
group = nagios
server = /usr/local/nagios/bin/nrpe
server_args = -c /usr/local/nagios/etc/nrpe.cfg --inetd
log_on_failure += USERID
disable = no
only_from = 127.0.0.1 10.10.3.30
}
Unable to telnet from client to server
# telnet 10.10.3.30 5666
Trying 10.10.3.30...
Connected to 10.10.3.30.
Escape character is '^]'.
Connection closed by foreign host.
|
NRPE hasn't been updated in a few years (since September 2013); this is what it does on the server side:
SSL_library_init();
SSLeay_add_ssl_algorithms();
meth=SSLv23_server_method();
...
SSL_CTX_set_options(ctx,SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
...
SSL_CTX_set_cipher_list(ctx,"ADH");
dh=get_dh512();
and the client (check_nrpe on Nagios server)
SSL_library_init();
SSLeay_add_ssl_algorithms();
meth=SSLv23_client_method();
...
SSL_CTX_set_options(ctx,SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
...
SSL_CTX_set_cipher_list(ctx,"ADH");
The SSLv23_xxx functions used to be the most compatible way to connect to any SSLv2 or SSLv3.x system. Both of the above are deprecated. While the code disables protocol versions 2.0 and 3.0 (leaving only TLS), the client connection will start with an SSLv2 ClientHello.
DH primes smaller than 1024 are now deemed insecure (though it might be 768 in some versions of OpenSSL).
This explains why you cannot connect to yourself: the OpenSSL client (check_nrpe) will reject a short DH key. (ADH is used since it does not need a certificate and is hence "anonymous"; not a good plan on an untrusted network but acceptable for this purpose.)
I suspect you may also be running into a second problem connecting to the new system. What has probably happened is that the client (NSCA server) has updated to using a recent OpenSSL, SSLv2 has recently been disabled by default, though some distros have been doing that for years. In most cases there should still be an overlap in protocol/ciphersuite but it's possible and common for the TLS server (nrpe daemon in this case) with SSLv2 disabled to reject an SSLv2 ClientHello handshake packet, even though the client indicates SSLv3 or higher within the handshake (technical details here: https://security.stackexchange.com/questions/59367/what-exactly-determines-what-version-of-ssl-tls-is-used-when-accessing-a-site)
To fix this you may need to downgrade the OpenSSL on the updated system, or install a parallel older OpenSSL version which does not have these (sensible!) precautions. Neither sound like good options...
If you built nrpe yourself, it should be sufficient to replace the call to get_dh512() with get_dh1024() and recompile it with a new static 1024-bit key. To do this you will need to create a new 1024-bit prime, by either modifying the hard-coded 512 in configure (line 6748) and rerunning it, or using
openssl dhparam -C 1024 | awk '/^-----/{exit} {print}' > include/dh.h
and then make to rebuild.
You will probably also need to replace the call to SSLv23_client_method() with TLSv1_client_method() also so that an SSLv2 "compatible" ClientHello is not attempted and recompile check_nrpe on the Nagios server. Since you run the risk of breaking connectivity to other clients then you might need a second check_nrpe_new binary for "upgraded" servers, and use this in command/check_command of your templates (I don't think this should happen though).
(As a last resort you may be able to do something unpretty with socat or stunnel to bridge any mismatching, here's one way to do that: https://security.stackexchange.com/a/108850/18555 .)
A more precise answer will require the OpenSSL version (openssl version -a, or the relevant package manager output) and your distribution versions.
| nagios SSL handshake |
1,359,274,529,000 |
I remember that a long time ago I had an antivirus on Windows which had a file access monitor and could tell you if any process accessed a file. I need to monitor every file in my home folder (or any other) and see which application performs the actions.
|
If you are using Linux, this sounds like a job for fatrace (which uses the fanotify API). Here is some sample output:
sh(28980): C /bin/bash
cron(28974): CW /tmp/tmpf807Y78 (deleted)
cron(28974): C /lib/x86_64-linux-gnu/security/pam_unix.so
cron(28974): C /lib/x86_64-linux-gnu/libcrypt-2.13.so
cron(28974): C /lib/x86_64-linux-gnu/security/pam_deny.so
This tells us, for example, that cron did a close-write on /tmp/tmpf807Y78.
If fatrace turns out not to be adequate for your use case, you could
look up alternative clients of the fanotify API.
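Since fatrace reports on a whole mounted filesystem, for the "every file in my home folder" case you would typically post-filter its output. A sketch, assuming the three-field line format shown above (run as, e.g., sudo fatrace | home_only "$HOME"; the function name is arbitrary):

```shell
# keep only fatrace events whose path (third field) is under the given directory
home_only() {
    awk -v dir="$1" 'index($3, dir) == 1'
}
```

This is a plain prefix match on the path column, so subdirectories are included automatically.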
| file creation and acces monitoring app [duplicate] |
1,359,274,529,000 |
Ok, so I want to monitor running programs on Debian. For example, I have several programs running on my instance, and I can get the output of netstat -plnt and see each program and its ports. Example:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 65/sshd
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 656/mysqld
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 631/redis-server
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1023/nginx
And I want to receive a notification by email/Slack when a new program starts running. Maybe somebody knows some utilities or programs that can do this?
|
#! /bin/bash
while :; do
running=$(netstat -plnt)
if [ "$running" != "$newrunning" -a -n "$firstrun" ]; then
diff -u <(echo "$newrunning") <(echo "$running") | mail -s "New listeners!" [email protected]
fi
newrunning=$(netstat -plnt)
firstrun=1
sleep 1
done
This script (must be run under root obviously) will notify you of any new/removed applications which open listening ports.
| How to monitor running programs with open ports? |
1,359,274,529,000 |
So I'm pretty new to Linux and I really can't figure out how to do this.
So I wanted to make a useful output for our monitoring Tool with speedtest-cli. We have to monitor download and upload speeds for multiple locations.
I made following script that breaks the output with awk and gives me the desired number (in this case only the number itself without text in front and behind the number)
SP=$(speedtest-cli 2>&1)
if [ $? -eq 0 ]
then
Down=$(echo $SP | gawk '{split($0,a,":"); print a[3]}' | \
gawk '{split($0,a," "); print a[1]}')
fi
echo "$Down"
This script works as I want it to. But, I really would like a solution to return only the digits. So is it possible to search for the line "Download: 90.00 Mbit/s"
and take the 90.00 and give that to output?
EDIT:
For anyone interested, here is the script I wrote below. It outputs <WAN_IP>,<Download>,<Upload>, and if there is no connection it outputs 0.0.0.0,0,0
#!/bin/sh
SP=$(speedtest-cli 2>&1)
if [ $? -eq 0 ]
then
From=$(echo "$SP" | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")
Down=$(echo "$SP" | gawk '{if (match($0,/Download: ([[:digit:]]+\.[[:digit:]]+) Mbit\/s/,a)>0) print a[1]}')
Up=$(echo "$SP" | gawk '{if (match ($0,/Upload: ([[:digit:]]+\.[[:digit:]]+) Mbit\/s/,a)>0) print a[1]}')
else
From="0.0.0.0"
Down="0"
Up="0"
fi
echo "$From,$Down,$Up"
|
Since you explicitly use gawk, you can use the match() function to look for the pattern "Download: number Mbit/s" and extract the actual value of that pattern found in your string as follows:
gawk '{if (match($0,/Download: ([[:digit:]]+\.[[:digit:]]+) Mbit\/s/,a)>0) print a[1]}'
This will
determine whether the pattern was found in the first place, and
put all ( ... )-enclosed sub-groups of the RegExp into the elements of the array a,
from where you can then simply use entry 1 (since there is only one such sub-group in the RegExp).
| Speedtest-CLI output for monitoring |
1,359,274,529,000 |
What is the meaning of this error?
[root@db2 zabbix]# zabbix_sender -z zabbix -s zabbix -k mysql[Threads_running] -o 100 -vv
zabbix_sender [55944]: DEBUG: answer [{"response":"success","info":"processed: 0; failed: 1; total: 1; seconds spent: 0.000015"}]
info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000015"
sent: 1; skipped: 0; total: 1
[root@db2 zabbix]#
Also this link i explin completly my config and issues.
|
This error means that server did not accept the value. Common reasons:
incorrect hostname
incorrect item key
item not in the server configuration cache yet
Note that hostnames and item keys are case sensitive.
| zabbix_sender error |
1,359,274,529,000 |
I am looking for some solution to remove or list all movies that are downloaded to our network storage. We are running short on free space and I noticed that a lot of people started downloading movies and music and then storing them on the network drive. Is there any better solution than writing a script that will run in cron and mail me all files larger than 500 MB, for example?
|
The best solutions are to either:
Upgrade your storage so you have more space.
or:
Talk to people and get them to quit putting things on the disk and not deleting them.
or, if 1 is not an option and 2 fails:
Set up usage quotas for everyone, and enforce them. This will require everyone to authenticate as themselves, but they should become more space conscious because they will run out of space before the NAS system does.
Barring those two options, there are all kinds of tricks you can pull with find to get to only list stuff you actually care about. In particular, you can match on particular file owners (or all owners except a set you list), file size, and last modification time (technically you can match on access time too, but you should avoid that as access time is unreliable).
Expanding on the command in the comments, you might try:
find /mnt/storage -regex ".*\.\(mp4\|mov\|avi\|mpg\|mp3\|wav\)" -type f -mtime +28 -exec rm {} \;
That will additionally only touch things that are actually files (the -type f clause excludes directories, symlinks, sockets, etc.) that are more than 28 days old (the -mtime 28 clause does this). In place of the -exec clause, you could just have it print the results and handle things yourself (though of course you can fine tune it to ignore 'work' files instead, and then just not have to deal with it until you update what constitutes a 'work' file).
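For the 500 MB criterion mentioned in the question, a listing-only variant could look like this (GNU find; -printf prints size in bytes, owner and path, so you can see who the offenders are before deleting anything; the function name is arbitrary):

```shell
# list large, old files with size in bytes, owner and path, biggest first
# usage: list_big_old /mnt/storage
list_big_old() {
    find "$1" -type f -size +500M -mtime +28 -printf '%s\t%u\t%p\n' | sort -rn
}
```

Piping that into mail from a cron job gives you the weekly report without deleting anything automatically.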
| How to monitor/remove dowloaded movies and music from network storage? [closed] |
1,359,274,529,000 |
As part of the smooth functioning of my application, I want to check the count of a particular process and send an email to multiple people if it exceeds a certain limit. I have written the script for counting processes, but I don't know how to do the email part.
Code for processes count
#!/bin/sh
NOP=`ps -ef | grep -I nagios.cfg | grep -v grep |wc -l`
if [ $NOP -gt 2 ]
then
(
echo "More parent processes are running on the server"
)
fi
|
mail command is pretty simple:
echo "More parent processes are running on the server" | mail -s "subject" [email protected] [email protected]
And your script can be optimized to one line:
[ "$(pgrep -c nagios.cfg)" -gt 2 ] && echo "More parent processes are running on the server" | mail -s "subject" [email protected] [email protected]
| Count the number of processes and send a email to multiple people if it exceeds certain limit |
1,359,274,529,000 |
I am new to SNMP and I cannot find any clear article on whether we can add a data node in SNMP, or on how SNMP collects data.
I want to monitor the following resources, which can be obtained from the SAR report. So, please tell me how to add these under SNMP, or at least how SNMP collects data, so that I can try to figure out if I can add them:
RAM and SWAP Used (without buffer/cache) & Total
Load (1min, 5min, 15min)
Iowait
cpu idle
pagein/outs
Swap in/outs
I/O read blocks/bytes/sectors per second
I/O write blocks/bytes/sectors per second
I/O requests per second
Network Interface Speed
I can find a few in the SNMP tree, but not all. Is it possible to add a data node under SNMP?
We need this as a part of monitoring few 100s of servers in cacti.
|
The Net-SNMP package supplied with RedHat is actually a very flexible monitoring agent, which will get values for all of the metrics you listed by default out of the box. However, it's old: the SNMP protocol itself has been around for over two decades, with significant improvements made over that span. The learning curve for it is fairly steep, as well. Which is directly related to the 26+ years of development made on the protocol base. The Net-SNMP project was pretty much there for all of that (first as a Carnegie-Mellon implementation and then as "ucd-snmp" from the University of California at Davis, which led to the current "Net-SNMP" code fork), so there is a fair bit of information to get a handle on, but they have great documentation.
http://net-snmp.sourceforge.net/docs/man/
is the basic manual pages for the distribution. The Wiki has good "quick setup" guides and can be found at
http://net-snmp.sourceforge.net/wiki/
So I'd start there to get up and running quickly. But read on...
Net-SNMP collects its data from the Linux kernel, using various sources (the /proc filesystem and the lm-sensors packages, to name a few). It can also be extended to report on just about anything you want, but that's going to take some significant investment of time and knowledge.
In order to understand what is happening behind the scenes, the first concept you need to take a look at is the structure and availability of SNMP MIBs (Management Information Bases), which control what information you can query. I wouldn't spend a lot of time on it, but knowing which MIBs are available on your system and the structure of some of the most common MIB objects like TABLEs, STRINGs, INTs and INFORMs will allow you to select appropriate objects for your monitoring needs.
The second piece you need is an understanding of the Net-SNMP configuration file: snmpd.conf. This is a complex piece of of configuration, so read the man pages thoroughly to understand why things are set up the way they are in the defaults.
Also, from a default implementation, you will need to select the protocol version you'll be supporting/querying with. Please DON'T use version 1. Your choice, really, is between versions 2c and 3 for support of rudimentary security and 64-bit counter support.
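As a pointer for the "add my own data node" part of your question: Net-SNMP lets you attach the output of an arbitrary script under NET-SNMP-EXTEND-MIB using the extend directive in snmpd.conf. A sketch (the script name and path here are hypothetical):

```
# in /etc/snmp/snmpd.conf: expose a custom script's output via SNMP
extend pageio /usr/local/bin/pageio-stats.sh
# then query it with, e.g.:
#   snmpwalk -v2c -c <community> <host> NET-SNMP-EXTEND-MIB::nsExtendOutput1Table
```

Anything your script prints (e.g. parsed sar output) becomes pollable by Cacti this way, without writing a custom MIB module.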
Good Luck! Your adventure awaits!
| How to use SNMP to get any information that we need in Redhat? |
1,359,274,529,000 |
I want to monitor PHP-FPM in Zabbix. Tell me, please, how to do it?
Are there any templates to try it ?
|
There are some, like https://github.com/jizhang/zabbix-templates/tree/master/php-fpm .
Looking at https://github.com/jizhang/zabbix-templates/blob/master/php-fpm/php-fpm-check.sh , it parses the status page. In case of errors, arbitrary negative values are used.
| How to monitor PHP-FPM in zabbix? |
1,359,274,529,000 |
So, I found that nload and iftop display the amount of data that has passed and how fast. Now, is there anything like those that can save/load statistics?
I want to measure the amount of data I download over a month/year. Suggestions will really help.
Thanks!
|
File "/proc/net/dev" has information about trannsmited and recieved packages for every interface your computer has. I guess you could make a script to time the packages and get the bandwith since you have data in its raw form.
| Historical bandwidth data in a console? (save/load statistics) |
1,359,274,529,000 |
I have a couple of machines, shown below, which are running Ubuntu 12.04, and I need to find the name and PID of any process whose CPU usage is greater than 70%.
Below are the machines as an example -
machineA
machineB
machineC
machineD
I need to have my shell script which can run periodically every 15 minutes and check whether any of the above machines has CPU usage greater than 70%. If there are any machines which are having CPU usage as greater than 70%, then send out an email with the machine name and the process name along with it's id.
I will be running my shell script from machineX and I have passwordless ssh key setup for user david from machineX to all the above machines.
What is the best way to do all these kind of monitoring?
I have below command which can get me PID, %CPU and COMMAND name of the process whose CPU usage is greater than 70%.
ps aux --sort=-%cpu | awk 'NR==1{print $2,$3,$11}NR>1{if($3>=70) print $2,$3,$11}'
Not sure how to fully automate this process?
|
You should probably try to use an already existing monitoring solution for this. This is pretty much exactly what they're designed to do, monitor for conditions and send out alerts (SMS or email). You might want to check out nagios or zabbix for a free monitoring solution.
I haven't used it but it looks like Cacti supports alerting on thresholds
collectd can also alert, but I would mainly only use collectd for a historical collection of performance statistics where I didn't need to alert on anything.
Bottom line is that doing this yourself is going to waste your time, effort, and introduce the possibility of error in your monitoring mechanism. It's a common problem with a variety of pre-made solutions.
| How to monitor bunch of machines for CPU usage from another machine? |
1,359,274,529,000 |
I have written a script to check if our three URLs are up. If they are down, I need to send a message stating that the URLs are down and not active.
The problem is, I did something wrong and now in every scenario my output always shows "URLs are up".
FYI: we use nginx, hence why I grep the output for "302 Found".
if curl -k --head $URL1 | grep "302 Found" && curl -k --head $URL1 | grep "302 Found" && curl -k --head $URL1 | grep "302 Found"
then
echo "All The URLs are up!"
else
echo " all url is down "
fi
|
Give this a try.
#!/bin/bash
for URL in <url1> <url2> <url3>
do
STATUS=$(curl -s -o /dev/null -w "%{http_code}\n" $URL)
if [ "$STATUS" = "302" ] ; then
echo "$URL is up, returned $STATUS"
else
echo "$URL is not up, returned $STATUS"
fi
done
| script to check if the URL are up and running [closed] |
1,359,274,529,000 |
I wonder, is it possible to run Ganglia's gmetad and ganglia-monitor not daemonized, under my own user (me, with sudo) on Debian? Because while gmond.conf contains something like a daemonize option, I see no such option in gmetad.conf...
|
Looking at man gmetad, you'll probably find
-d, --debug=INT
Debug level. If greater than zero, daemon will stay in foreground. (default='0')
so using commandline argument, e.g. gmetad -d 1, should do the trick.
| Is it possible to run ganglias gmetad and ganglia-monitor not demonized on debian? |
1,359,274,529,000 |
The psad monitoring tool keeps sending lots of mail to my localhost admin account. I use my Ubuntu server as a NAT router, and psad warned me to enable logging in iptables. After I did so, it started filling my mailbox with loads of messages. Within a few days the size of the mailbox grew to 3.4 GB.
How can I completely turn off mailing on psad?
|
Please see the documentation: http://cipherdyne.org/psad/docs/config.html
You could set 'EMAIL_ADDRESSES' to a blackhole address (eg a receive address that just discards what it gets), or consider tuning the following:
'EMAIL_ALERT_DANGER_LEVEL'
'PSAD_EMAIL_LIMIT'
'EMAIL_LIMIT_STATUS_MSG'
Those are described in the URL I provided above. There are further email alerts for DShield as well, if you have that enabled - if so, they are also described in that URL.
| Turning off mailing in psad |
1,359,274,529,000 |
I'm facing an issue with my MT7601U network adapter on Linux. Although I've successfully entered monitor mode, I'm encountering difficulties capturing handshakes and discovering devices in the network/s. Strangely, only WPS attacks seem to work reliably.
Here are the key details:
Adapter: Ralink MT7601U
Distribution: parrotOS 6.0 (lorikeet) 64-bit Kernel 6.8.0-xxxxx
Problem Summary:
Monitor mode is enabled.
Unable to capture handshakes or detect devices in the network, except for WPS attacks, which are a very different kind of Wi-Fi attack.
Attempts to Resolve:
Successfully entered monitor mode using the iw command (sudo iw dev <interface> set monitor control).
Tried various tools like Aircrack-ng, Wireshark, and tcpdump for packet capture, but no success in capturing handshakes.
Even after ensuring monitor mode is active, no devices are being discovered in the network, except during WPS attacks.
Observations:
Checked system logs (dmesg, /var/log/syslog) but found no relevant error messages.
Updated drivers through the package manager and also tried downloading from the manufacturer's website, but the issue persists.
I'm reaching out to the community for guidance on troubleshooting steps or potential solutions to this problem. Any advice or insights from users familiar with the MT7601U adapter or similar issues with other adapters would be greatly appreciated.
Thank you for your assistance!
|
I solved it by running the following commands sudo airmon-ng check kill
Before using sudo airmon-ng start <NETWORK INTERFACE>
everything works great and I finally can enjoy my new wifi adapter.
Thanks to anyone who was concerned ♥️
If anybody has an explanation for it, that would be great!
| MT7601U Adapter in Linux - Monitor Mode Enabled, but Unable to Capture Handshake or Discover Devices |
1,359,274,529,000 |
I once had this package called specto, that would monitor a website for changes. Now, it seems to be missing in Debian 11:
▶sudo apt install specto
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package specto
I'm curious why it is no longer available, and if there's an AppImage for it (or some other way I can still install it).
|
It’s no longer available because it depends on the obsolete gnome-python library (and more generally, the obsolete Python 2 ecosystem); it was removed from Debian in January 2018 and isn’t available in either Debian 10 or Debian 11. This is actually a long time after Specto’s own author declared it obsolete, in March 2013!
I doubt there’s an AppImage for it. You could always run it in a Debian 9 container, but even if you managed to get the notifications hooked up to your main desktop environment, I’m not sure the old notification system used by Specto would work any more.
When Specto’s author declared it obsolete, the idea — at least for monitoring web sites — was to use Liferea (or another similar syndication-feed-based tool), but even that idea is largely obsolete nowadays (unfortunately).
| Specto Missing from Debian 11 |
1,655,405,075,000 |
OS: Ubuntu 22.04 LTS.
Many posts deal with file monitoring. One in particular is of interest and based on inotifywait, but I don't know how to modify it for my purpose.
Objective: to monitor $HOME/{Documents/,Downloads/,Archive/} for link files *.lnk as they are created. Those files are created every time I use Word in Wine to create, save, open or do anything with a document. Dozens of *.lnk files can be created in mere minutes. This issue is killing me.
I am willing to learn but can't translate generic examples into what I need for lack of knowledge. I know how to run a script in a plain file, but if there's anything special I should know in this regard, please let me know. Tx in advance.
|
You need to write this small script in a file using your terminal. I assume you are using the bash shell on Ubuntu, since you are a beginner; let us know if it is otherwise.
$ touch notify_links
$ chmod u+x notify_links
$ cat notify_links
#!/usr/bin/env bash
inotifywait -mr -e moved_to,create "$HOME"/{Documents,Downloads,Archive} |
while read directory action file; do
  if [[ "$file" =~ \.lnk$ ]]; then
    echo rm -f "$directory$file"
fi
done
Run this script by issuing ./notify_links in a terminal.
Once satisfied with what appears on the terminal, remove the echo from the rm line so that the files are actually deleted.
EDIT 1 per @ilkkachu's comment in order to specialize monitoring to three directories/folders instead of the complete $HOME subtree.
EDIT 2 per @Paul_Pedant's comment: to run this automatically every 10 seconds as soon as your boot process is finished, add the following line to /etc/crontab as root (its sixth field is a user name; replace $USER with your actual user name):
* * * * * $USER for i in $(seq 6); do /usr/bin/find "$HOME" -name "*.lnk" -delete; sleep 10; done
EDIT 3 for faster results and lower resource usage, you'll want to search only the directories that you mentioned in the OP. The following will search their subtrees (spelled out explicitly, since cron runs commands with /bin/sh, which may not do brace expansion):
* * * * * $USER for i in $(seq 6); do /usr/bin/find "$HOME/Documents" "$HOME/Downloads" "$HOME/Archive" -name "*.lnk" -delete; sleep 10; done
In order to prevent find from recursing down the subtrees, add the following option -maxdepth 1 before -name "*.lnk" in the find command.
| Monitor several directories for specific files' creation, every 10s |
1,655,405,075,000 |
I have a process that gets sizes for all disks on server, writes it into a file like this
# cat disksize
DISK# ACTUAL WARNING CRITICAL
disk1 12 20 30
disk2 45 60 75
The first row of the file is a reference header showing what each column is for. Below is the monitoring script, but I'm not sure it will work for Nagios, as some of these sizes may result in OK while others are in warning. Can anyone give any insight on this, please?
# cat check-disk_size
#!/usr/bin/env bash
LOGFILE='disksize'
cat ${LOGFILE} | while read disk_name actual warning critical
do
if [ $actual -ge $warning ]; then
echo "WARNING: $disk_name has reached standard warning limit, Current actual: ${actual}"
exit 1
elif [ $actual -ge $critical ]; then
echo "CRITICAL: $disk_name has reached standard critical limit, Current actual: ${actual}"
exit 2
else
echo "OK: $disk_name is under optimal limit, Current actual: ${actual}"
exit 0
fi
done
|
Your script (as written) will not do what you expect. The biggest problem is that you could exit 0 prematurely from the loop, missing possible Critical disk entries that follow. Less dangerously, the script could exit 1 with a Warning when Critical problems exist. Nagios will base the status of this check on the exit code, so your script could give confusing results simply based on the ordering of the entries in the file.
I would recommend restructuring the script so that it returns exactly what you expect, given the data in the file. Should it roll up the worst alert? Should it count how many alerts are in the file? The safest idea would be to roll up the worst alert, so that every disk has to be below the Warning threshold in order for the Nagios alert to be "OK", but your environment may dictate other requirements.
Here's one possibility that rolls up the worst alert:
awk '
BEGIN {
warn=0
crit=0
}
NR == 1 { next }   # skip the header row
{
    if ($2 >= $3) ++warn
    if ($2 >= $4) ++crit
}
END {
if (crit) {
print "CRITICAL: one or more disks have reached the standard critical limit"
exit 2
} else if (warn) {
print "WARNING: one or more disks have reached the standard warning limit"
exit 1
} else {
print "OK: all disks are under their limits"
exit 0
}
}
' < file
It's just an example to demonstrate the idea.
| Is it alright to add a disk size monitoring script for NAGIOS, using loop? |
1,655,405,075,000 |
I would like to modify iptables after VPN connection. What is the best approach to achieve that ?
I already tried through systemd with a custom unit (placed in the NetworkManager dispatcher.d directory), but it seems to only work at boot, not when the VPN is stopped and restarted, assuming I wrote it right:
[Unit]
Description=iptables setup
Requires=nordvpnd.service
After=nordvpn.service
[Service]
Type=simple
ExecStart=/usr/local/bin/setup_iptables.sh
[Install]
WantedBy=multi-user.target
I could monitor tun0, checking whether it is up and configured by parsing the result of:
ip address ls dev tun0
but that would mean a delay between updates; e.g. testing every 5 seconds could mean up to 5 seconds during which iptables is not configured as it should be.
My last idea would be to run setup_iptables.sh every time /usr/bin/nordvpn is called, but how to achieve that ?
Thanks
|
Does /usr/bin/nordvpn return immediately after it is called or does it return after completing the VPN connection?
In the latter case, just rename /usr/bin/nordvpn to /usr/bin/nordvpn.original and create a script /usr/bin/nordvpn that first calls /usr/bin/nordvpn.original and then /usr/local/bin/setup_iptables.sh (this is called a "wrapper script").
If /usr/bin/nordvpn returns immediately, it's harder, because you need in your wrapper script to somehow check if VPN connection has been established (maybe by pinging something particular?) and if yes, then run /usr/local/bin/setup_iptables.sh.
| run a script after a binary was called |
1,655,405,075,000 |
I am getting SSL Handshake errors with NRPE after enabling SSL. It worked perfectly fine without SSL doing check_nrpe. The allowed host is correct and when run without SSL enabled it shows the proper version. Both are running 4.3 on CentOS Linux release 7.9.2009 (Core) I did not compile NRPE or nagios from source I installed via Yum.
Here are the configs I feel are important to this issue.
here is the error I'm getting logged... It says wrong version but both are running same version of NRPE.
I am using a real purchased wildcard cert... Same cert on both sides. Cert matches the domain name of the server.
nrpe --version
NRPE - Nagios Remote Plugin Executor
Version: 4.0.3
Same version on both for openssl
openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
When I run ./check_nrpe -H hostname.domain.com I get
CHECK_NRPE: (ssl_err != 5) Error - Could not complete SSL handshake with 10.1.1.125: 1
On the other server it logs:
Jan 5 12:48:54 nagiostest2 nrpe[3575]: Error: (ERR_get_error_line_data = 336130315), Could not complete SSL handshake with 10.1.1.64: wrong version number
Jan 5 12:51:11 nagiostest2 nrpe[3692]: CONN_CHECK_PEER: checking if host is allowed: 10.1.1.64 port 16075
Jan 5 12:51:11 nagiostest2 nrpe[3692]: is_an_allowed_host (AF_INET): is host >10.1.1.64< an allowed host >10.1.1.64<
Jan 5 12:51:11 nagiostest2 nrpe[3692]: is_an_allowed_host (AF_INET): is host >10.1.1.64< an allowed host >10.1.1.64<
Jan 5 12:51:11 nagiostest2 nrpe[3692]: is_an_allowed_host (AF_INET): host is in allowed host list!
Jan 5 12:51:11 nagiostest2 nrpe[3692]: Error: (ERR_get_error_line_data = 336105671), Could not complete SSL handshake with 10.1.1.64: peer did not return a certificate
Here is the important portions of my nrpe.cfg
debug=1
ssl_cipher_list=ALL:!aNULL:!eNULL:!SSLv2:!LOW:!EXP:!RC4:!MD5:@STRENGTH
ssl_version=TLSv1.1+
#ssl_cipher_list=ALL:!MD5:@STRENGTH
#ssl_cipher_list=ALL:!MD5:@STRENGTH:@SECLEVEL=0
ssl_cipher_list=ALL:!aNULL:!eNULL:!SSLv2:!LOW:!EXP:!RC4:!MD5:@STRENGTH
# SSL Certificate and Private Key Files
ssl_cacert_file=/etc/nagios/ssl/ca.crt
ssl_cert_file=/etc/nagios/ssl/star.mydomain.com.crt
ssl_privatekey_file=/etc/nagios/ssl/star.mydomain.com.key
# SSL USE CLIENT CERTS
# This options determines client certificate usage.
# Values: 0 = Don't ask for or require client certificates (default)
# 1 = Ask for client certificates
# 2 = Require client certificates
ssl_client_certs=2
# Enables all SSL Logging
ssl_logging=0xff
Thank you for any help ahead of time!
|
I'm assuming the nrpe.cfg is from the node being called up (10.1.1.125) and if that's the case, as Steffen said above, you have configured it to require a certificate from anyone calling it. Presumably this should be included when you run check_nrpe, and looking at the help text for 4.0.3 (which is the one I have) there's a -C flag for this. So presumably, you either need to include it with your calls, or re-configure the NRPE node being called up.
| NRPE Could not complete SSL handshake - Peer did not return a certificate |
1,655,405,075,000 |
I am using Linux Mint 20.
% inxi -S
System: Host: ismail-i5 Kernel: 5.4.0-56-generic x86_64 bits: 64 Desktop: Cinnamon 4.6.7 Distro: Linux Mint 20 Ulyana
I have 3 HDD and 1 SSD.
% lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 232.4G 0 part /
sdb 8:16 0 3.7T 0 disk
└─sdb1 8:17 0 3.7T 0 part /media/ismail/WDPurple
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part /media/ismail/WDRed
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part /media/ismail/Toshiba
I want to have an early warning if any of my storage devices fail.
I know I can use the following to check my HDD health.
$ sudo smartctl -a /dev/sdX
And if I see that Reallocated_Sector_Ct or Current_Pending_Sector has a value other than 0, then I should be concerned and take the necessary steps.
However, I do not want to run this command and check the output all the time. Is there any solution I can use to get a warning when my storage devices start having problems?
|
Is there any solution which I can use to have a warning when my storage devices start having problems.
There are two types of people in the world: those who make backups, and those who haven't started making backups.
Jokes aside: HDDs often die abruptly from mechanical failure, with no advance warning, but in the majority of cases the information can still be retrieved. SSDs, on the other hand, most often die in such a way that retrieving the information is impossible.
Speaking of Reallocated Sectors Count: positive values do not necessarily indicate your storage is dying. SSDs often have them and continue to function for years. A more important metric is whether and how fast this parameter is growing. You may have a few reallocated sectors and enjoy a very long drive life span.
Current Pending Sector holds a single value and it's pretty much useless to make any estimates.
I personally run smartctl -t long /dev/sda every few months but this test is not without its major pitfalls. Imagine you have an impending mechanical failure and you continue recklessly performing this test. Now the failing recording heads may start physically scratching the surface of your platters and actually destroying the remaining information which could have been salvaged if you hadn't started the test.
To get notifications about your drives statuses make sure you've got smartd.service enabled.
I have it running with these options:
/usr/sbin/smartd -n -q never -s /var/lib/smartmontools
It uses mail to notify you about changes in your drives statuses.
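For reference, a minimal /etc/smartd.conf sketch (the schedule and mail address are assumptions; see man smartd.conf for the -s regex format):

```
# Monitor all detected drives (-a), enable automatic offline testing (-o)
# and attribute autosave (-S), run a short self-test daily at 02:00 and a
# long one on Sundays at 03:00, and mail root on health changes:
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../7/03) -m root
```

After editing, restart smartd so it picks up the schedule.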
| early warning for HDD issues |
1,655,405,075,000 |
In my scenario, my host has a network interface with multiple IP addresses. I want to get the traffic of every IP address. I want to get received and sent packets and bytes, and error packets. My interface is:
qg-6108c4a2-94@if209: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:9e:58:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 173.20.12.8/24 brd 173.20.12.255 scope global qg-6108c4a2-94
valid_lft forever preferred_lft forever
inet 173.20.12.7/32 brd 173.20.12.7 scope global qg-6108c4a2-94
valid_lft forever preferred_lft forever
inet 173.20.12.11/32 brd 173.20.12.11 scope global qg-6108c4a2-94
valid_lft forever preferred_lft forever
inet 173.20.12.13/32 brd 173.20.12.13 scope global qg-6108c4a2-94
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe9e:58d2/64 scope link
valid_lft forever preferred_lft forever
|
You won't be able to get the information you want. The IPs are part of the same single interface, and as such, they are only counted at the physical level. I should note that "ifconfig" is insufficient and obsolete. The ip command will show you the summary at the physical level.
% ip -s link show br1000
10: br1000: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:10:18:aa:a8:20 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
15661372003 108647365 0 0 0 0
TX: bytes packets errors dropped carrier collsns
278081828962 149463091 0 0 0 0
You would need to use iptables or nftables to track what you're wanting to track.
# iptables example - sketch, I don't have a dev machine to test this.
# Note OUTPUT matches the egress interface with -o, not -i.
iptables -A INPUT -i br1000 -d 10.100.0.1/24 -j LOG
iptables -A OUTPUT -o br1000 -s 10.100.0.1/24 -j LOG
# nftables example - sketch, I don't have a dev machine to test this.
nft add rule ip filter INPUT iifname "br1000" ip daddr 10.100.0.1 counter log
nft add rule ip filter OUTPUT oifname "br1000" ip saddr 10.100.0.1 counter log
There is other supplementary software that can help you as well; the first one that comes to mind is vnstat. But your mileage may vary.
| How in Linux can I get traffic of every ip address on a single interface, with multiple IP addresses? |
1,655,405,075,000 |
In a presentation, I saw a command line tool mentioned that is able to record the running services (and the listening ports opened by them) as a baseline and later check against that baseline and report deviations.
Unfortunately, I don't remember the name of the tool, just, that it was written in Go (IIRC). Somehow I seem to use the wrong search terms and thus can't find it via a google search.
What is the name of this tool?
|
The name of the tool is goss!
From its README:
What is Goss?
Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It eases the process of writing tests by allowing the user to generate tests from the current system state. Once the test suite is written they can be executed, waited-on, or served as a health endpoint.
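A sketch of the baseline-then-check workflow (the resource names here are examples, and this assumes the goss binary is installed):

```shell
goss add service sshd     # record that sshd should be running and enabled
goss add port tcp:22      # record the expected listening port
# ...later, or from cron/monitoring:
goss validate             # compare the current state against the recorded goss.yaml
```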
| Record and later check running services |
1,655,405,075,000 |
If I'm using SSH to remote into a machine and I'm running a command like: sudo iptables -L -v -n how do I get a packet hit count that excludes the packet traffic I'm creating by using SSH?
I mean, I could avoid that by logging in to the machine directly, but that won't always be an option... or do I just have to add a rule to catch the specific type of traffic I want to monitor?
|
Your best bet is to add a separate rule for your SSH connections, so that they do not mix with the other flows. Something like this:
-A INPUT -s <your client IP address> -p tcp --dport 22 -j ACCEPT
You can also remove the IP address match if you want to separate all SSH traffic, add a match for a particular interface, etc.
| Hide / ignore the packets you are creating in iptables firewall when connected with ssh into a Linux machine? |
1,655,405,075,000 |
I'm looking for a monitoring service which look for suspicious writing on disk.
Let's say that I have a PHP/Ruby/Python website running on my server, and there is a vulnerability on my website. An attacker can upload/modify any file owned by the apache/nginx user.
Is there any service that can say : "Hey a modification has been made on index.php. It's weird because this file was never modified since its creation."
Or "Hey, there is a new file at /img/reverse.php; what is a .php file doing in /img?"
|
Set up a regular backup scheme to another machine over the network via ssh using rsync to replicate the file tree(s) of interest. Run it with a listing of changes. Keep that listing in a dated filename on the target machine. The first time would list all the files. Scan the listings after that for what has changed. Then you can review these listings regularly or write an AI program to automate it.
| Monitoring suspicions writing on disk |
1,655,405,075,000 |
Is it possible to have an automatic screenshot capture of the Zabbix CPU & memory graphs?
Is there any 3rd-party tool that can be integrated with Zabbix for capturing the screenshot?
It should be delivered as a status mail twice daily, at 8 o'clock in the morning and evening.
My exact aim is to collect daily status from Zabbix instead of creating any script.
|
Images (which are plain PNG) can be grabbed with scripts - and there are scripts that do so. You finish your question with "instead of creating any script", but I assume that a script that collects data from Zabbix (instead of doing its own data collection) would be acceptable. A couple of community efforts to generate reports:
Zabbix PDF report generation
Zabbix Extras
Note that if you really want to achieve this without any scripts, that is not possible.
| Automatically screenshot capture of Zabbix |
1,655,405,075,000 |
I have 2 servers behind NAT, both have same public IP and NRPE is listening on non-standard ports.
I would like to monitor them both using my central icinga server, but I can't find where can I specify alternative nrpe port, icinga is trying default port which isn't open on target public IP. How can I do that?
|
Note that I assume your NAT device is already configured to forward traffic to your NRPE servers, and that you are running Icinga 1.
On your Icinga server, you probably have some /etc/nagios-plugins/config/check_nrpe.cfg file existing, that gets loaded by your Icinga daemon. When you define a check_nrpe check, this is where Icinga finds your command definition.
The default check_nrpe command definition does not allow for dynamic ports. If you want to set a custom port running NRPE commands, you'll want to add a new command somewhere, or change the existing one (and potentially, all references to it) so it allows for this port to be defined.
On paper, you may have something like this:
define command {
command_name check_nrpe
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' -t 30
}
define command {
command_name check_nrpe5667
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -p 5667 -c '$ARG1$' -t 30
}
define command {
command_name check_nrpe_dynport
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -p '$ARG1$' -c '$ARG2$' -t 30
}
Now instead of defining your check command as check_nrpe!my_remote_check, you would use check_nrpe5667!my_remote_check, or check_nrpe_dynport!5667!my_remote_check.
| How to specify nrpe port in icinga / nagios |
1,655,405,075,000 |
My goal was to add a box between two routers I have so that I can monitor and analyze network traffic, use it as a syslog server for both routers, and mail alerts when appropriate. Although an old repeater hub would most likely accomplish what I want more easily, I've not been able to find one for purchase.
Based on tips from the Wireshark wiki, I set up a Linux box as a bridge by adding the br interface, setting eth0/1 to 0.0.0.0 ip addresses, and bringing up the interfaces anew. But I quickly realized in the process that the configuration does not give me any network interfaces I can use for the logging service, and I am not sure I can run snort or other monitoring tools against a br0 interface. I can test the latter, but before I spend time doing that:
Am I missing something in my networking understanding about setting up a bridge that would in fact allow me to also assign addresses to the eth0/1 interfaces? (If I'm interpreting this stack exchange post correctly, I believe the answer is no.)
If in fact I cannot configure this box to accomplish my goal while configured as a bridge, are there ways to accomplish this other than setting the box as a router?
Or, is setting it up as a router overall the best approach if I cannot find a repeater hub (and I don't have a switch capable of port mirroring)?
|
First, you don't set eth0 and eth1 to 0.0.0.0; instead you do not assign an IP address at all (though maybe 0.0.0.0 is treated as no IP; I'm not sure, never tried). You then assign an IP address to the bridge.
# ip addr ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
⋮
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan-br state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
3: lan-br: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
inet 192.168.XX.XX/24 brd 192.168.XX.XX scope global lan-br
valid_lft forever preferred_lft forever
inet6 fe80::XXXX:XXXX:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
# brctl show
bridge name bridge id STP enabled interfaces
lan-br 8000.XXXXXXXXXXXX no eth0
My bridge currently only has one device on it (eth0), it exists to bridge virtual machines (and none are currently running). Yours would of course have both eth0 and eth1.
You should be able to have snort monitor br, eth0, or eth1. The traffic is flowing across all three. A quick test with tcpdump -n -i br should confirm that for you.
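For completeness, a sketch of creating such a bridge with iproute2 (the interface names and address are assumptions; requires root):

```shell
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip addr add 192.168.1.10/24 dev br0   # address for logging/management
ip link set eth0 up
ip link set eth1 up
ip link set br0 up
```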
| Using a Linux box in a bridge or router configuration for network/securing monitoring and analysis |
1,655,405,075,000 |
I am trying to create a daemon to monitor a user's bash_history file for manual modifications. In other words, if a user opens the file and modifies it, the daemon will report this action for safety reasons; but when the history updates itself, nothing happens.
The solution I tried is using inotifywait:
while true; do
inotifywait -e close_write,move,delete ~/.bash_history && notify
done
where notify is a script that will do a specific notification procedure.
I believe this would work fine for most files, but in this case it doesn't, since notify is executed every time the history updates.
Is it possible this way, or should I use another application?
|
That is not possible with inotify. There is no event that can tell whether a file was altered by a user or by a process, nor one that monitors only appends to a file.
And EVEN if an "append to file" inotify event existed, a user could inject bash_history data with echo >>, creating bogus entries and defeating the purpose of your monitoring.
You could harden your history files by following this advice, and I think this is the best you can do:
Secured bash history usage
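The gist of that hardening is something like the following sketch (chattr needs root and a filesystem that supports the append-only attribute; the path is an example):

```shell
# Make the history file append-only at the filesystem level:
chattr +a /home/user/.bash_history
# And in /etc/profile, stop users from redirecting or truncating it:
shopt -s histappend
readonly HISTFILE HISTSIZE HISTFILESIZE
```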
| How do I monitor bash_history file for manual modifications? |
1,655,405,075,000 |
This is a bash script that captures the string "ERROR: your TCP- connection is dead."
from a live, dynamic log file. The log file is named "TcpRcpt.log"
and contains the sample data below.
SAT Mar 26 19:55:37 2016 TCPRcpt-0297--ERROR: your TCP- connection is dead.
SAT Mar 26 19:55:37 2016 TCPRcpt-0297--RUNNING
SAT Mar 26 19:55:37 2016 TCPRcpt-0297--RUNNING
SAT Mar 26 19:55:37 2016 TCPRcpt-0298--ERROR: your TCP- connection is dead.
Now, the problematic step:
logtail TcpRcpt.log | grep -m 1 "ERROR: your TCP- connection is dead." | sed 's/.*TCPRcpt-/ PID /;s/ -//' >> LOGFILE.LOG
It always gives a 0 exit value, even when it does not capture the string, which makes the if condition send a notification every time the loop executes.
Now my question: is there any alternative that notifies via email each time it captures the string?
|
'It always gives 0 value' because that's the exit code of the last command in the pipeline - sed, which succeeds (returns 0) even when no replacement happens.
Setting the pipefail option makes the pipeline's exit status the status of the last command in the pipeline that failed, if any. Modify your script by setting that option before calling logtail:
...
set -o pipefail
logtail TcpRcpt.log | grep -m 1 "ERROR: your TCP- connection is dead." | sed 's/.*TCPRcpt-/ PID /;s/ -//' >> LOGFILE.LOG
...
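To see the effect, here is a self-contained demonstration using printf in place of logtail; in your real script you would wrap the pipeline in an if and call your mail command in the matching branch (the address is a placeholder):

```shell
#!/usr/bin/env bash
set -o pipefail
# No matching line: grep exits 1, sed still exits 0, but with pipefail
# the pipeline as a whole now reports failure.
if printf 'TCPRcpt-0297--RUNNING\n' \
     | grep -m 1 "ERROR: your TCP- connection is dead." \
     | sed 's/.*TCPRcpt-/ PID /;s/ -//'
then
    echo "match: would send mail here"   # e.g. mail -s 'TCP dead' admin@example.com
else
    echo "no match: no notification"
fi
```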
| Capture specific string and send notification every time |
1,655,405,075,000 |
One of my site was infested with malware once and since then I am seeing every alternate day the header.php files are getting reverted back to older versions (possibly by a script) and a malicious script is inserted somewhere in the file:
<script>var a=''; setTimeout(10); var default_keyword = encodeURIComponent(document.title); var se_referrer = encodeURIComponent(document.referrer); var host = encodeURIComponent(window.location.host); var base = "(>>>> KEEPS CHANGING >>>>>)http://www.theorchardnursinghome.co.uk/js/jquery.min.php"; var n_url = base + "?default_keyword=" + default_keyword + "&se_referrer=" + se_referrer + "&source=" + host; var f_url = base + "?c_utt=snt2014&c_utm=" + encodeURIComponent(n_url); if (default_keyword !== null && default_keyword !== '' && se_referrer !== null && se_referrer !== ''){document.write('<script type="text/javascript" src="' + f_url + '">' + '<' + '/script>');}</script>
I have cleaned the server many times, but in vain.
I am fully aware that the best option is to reinstall the entire site but I am afraid it's a risky affair as too much customizations have been done over the past and it will take a whole lot of effort to recreate the entire setup.
How do I counter this issue? How can I identify the script(s) responsible, maybe by running another bash script? Running inotify isn't a possibility, as HostGator doesn't allow us to install it.
|
To start with, make sure your .php and .html files are NOT writable by the uid that the web server runs as. The web server needs read and execute permissions on the files and directories but (with possibly a few exceptions like upload directories) it does not need write access to the data it is supposed to be serving.
Then grep all the files in your web site (e.g. grep -ir pattern /var/www/) for something specific to this malware. That URL it reinserts is a good choice: grep -ir 'http://www.theorchardnursinghome.co.uk' /var/www/
Unfortunately, it's possible that most of the payload of the attack is encoded with base64 or similar so the grep may not find anything.
Failing that, grepping for the files it is modifying may work - e.g. grep for header.php. If the malware writes to header.php, there's a reasonable chance that filename will be (hopefully unencoded) somewhere in the attack script.
Grep your web log files for the same things.
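Alongside the greps, it is also worth listing recently modified files, since re-infections show up as fresh timestamps (the web-root path here is an example):

```shell
# Show .php files under the web root changed in the last 2 days:
find /var/www -name '*.php' -mtime -2 -ls
```

Run it from cron and mail yourself the output for a cheap change alert.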
If your attackers managed to gain root on your server then there are endless possibilities for them to hide what they're doing. But check, at least, system crontabs including root's crontab.
BTW, this probably belongs on Server Fault rather than here. Or maybe on Webmasters
| Files getting reverted to older version |
1,655,405,075,000 |
I'm writing a program that tracks user activities, and basically tries to automate things that can be.
I'm currently trying to monitor programs that user uses often from command line. But just knowing is not enough, I need data along the lines of when such command were executed, working directory etc.
My current solution is a set of Python scripts. On start-up, they go through PATH and make a dummy Python script for every program found in PATH. When the user tries to use those commands, the dummy scripts are called instead (by prepending their directory to PATH); each such script passes its name and arguments to another script that logs the information and then calls whatever was originally typed on the terminal (by restoring the PATH). It's a rather messy solution, and I'm sure there's a more straightforward way of doing this.
Also, it would be great if I could get the data on commands being executed in real (or almost real) time.
|
Did you look at auditd? If not, see this slideshow about the Linux audit system. It provides most of the facilities you are describing, in a much more standard and foolproof way.
http://people.redhat.com/sgrubb/audit/audit_ids_2011.pdf
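As a sketch, this is the kind of audit rule involved (the file path and key name are assumptions; syntax per auditctl/audit.rules):

```
# /etc/audit/rules.d/exec.rules
# Record every execve() by real (login) users, tagged "usercmds":
-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=unset -k usercmds
```

You can then query with ausearch -k usercmds -i, which shows the command, its arguments and (via the CWD record) the working directory, close to real time.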
| How can I track commands executed in a terminal ( without bash_history ) |
1,655,405,075,000 |
I need a network analysis tool that can find if there are some issues regarding:
packet transmission
sessions
any other stuff is welcome.
I'm connected to the appliance I need to check with a crossover cable, so I have to check the traffic client-side. It would be great if the tool had a GUI and/or were multiplatform.
|
Wireshark might be what you're looking for. To analyse packet loss you should isolate the session/stream and append "and tcp.analysis.lost_segment" to the automatically generated filter. If you see packets there then it's likely there's packet loss.
| What tool to use to monitor network issues? [closed] |
1,655,405,075,000 |
Does IPTraf include its own activity in the report? If it does, how do I exclude it?
|
I don't believe that tools such as IPTraf introduce any of their own traffic in a way that would warrant needing to exclude it.
In general most of these types of tools make use of a mode that network devices provide called Promiscuous Mode. This mode allows applications to essentially see every packet as it flows through the NIC so that they can be inspected.
If you look at the Wikipedia page I provided IPTraf is listed as an application that makes use of this feature of the NIC.
excerpt from Wikipedia page
In IEEE 802 networks such as Ethernet, token ring, and IEEE 802.11,
and in FDDI, each frame includes a destination Media Access Control
address (MAC address). In non-promiscuous mode, when a NIC receives a
frame, it normally drops it unless the frame is addressed to that
NIC's MAC address or is a broadcast or multicast frame. In promiscuous
mode, however, the card allows all frames through, thus allowing the
computer to read frames intended for other machines or network
devices.
| Does IPTraf include its own activity in the report? |