| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,537,537,682,000 |
In an init.d file I wrote the line below. Now I need to make the binary run in a specific directory. How do I tell it which directory to use?
mono --debug /path/bin &
|
cd /my/directory
mono --debug /path/bin &
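If only the daemon should run from that directory, a subshell keeps the rest of the init script's working directory untouched. A minimal sketch, assuming the paths from the question are placeholders:

```shell
#!/bin/sh
# Run the daemon from its own working directory inside a subshell,
# so the init script itself keeps its original cwd.
# (/my/directory and the mono command line are from the question.)
(
    cd /my/directory || exit 1
    mono --debug /path/bin &
)
# Alternatively, some helpers support this directly, e.g.
# start-stop-daemon --chdir /my/directory (where available).
```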
| How do I run a process in a specific directory? |
1,537,537,682,000 |
I'm building a custom initrd so that I can boot diskless nodes with a tmpfs root rather than an NFS root (a script in the initrd copies the contents of the root device to a tmpfs filesystem, then changes the value of NEWROOT). All seemed well and good in my test environment: I generated a custom initrd, booted it, / mounted from none as tmpfs but had all the files the NFS root provided. Great stuff.
Then I started moving towards the real environment, and when the initrd boots it chokes trying to mount the NFS root (which is still a normal NFS boot at that point) complaining mount.nfs4: No such device.
I generate my initrd using:
dracut -v -m "nfs network base" --include rd.live.overlay/ / initrd-tmpfs.img
I confirmed that the initrd-tmpfs.img is being loaded (based on the early part of the PXE boot where it lists the initrd it's loading).
When the initrd fails it drops into the emergency shell, and has a few interesting things:
ip addr show lists my IP from DHCP, and I can ping the NFS server
echo $netroot lists the NFS boot parameters nfs4:[Server IP]:[root location]:[nfs options]
mount -t nfs4 [Server IP]:[root location] /sysroot results: mount.nfs4: No such device (very familiar)
modprobe nfs results: modprobe: FATAL: Module nfs not found, which is obviously a problem
grep nfs /usr/lib/dracut/modules.txt does match
grep nfs /usr/lib/modules/[kernel version]/modules.order matches a few times, mentioning: kernel/fs/nfs.ko | kernel/fs/nfsv3.ko | kernel/fs/nfsv4.ko however none of these exist
Environment: RHEL 7
Looks like my question answered itself just as I finished writing it, I'll post the answer
|
The last debugging step I did clued me in, so I figured I'd post the answer for the sake of others. grep nfs /usr/lib/modules/[kernel version]/modules.order matched kernel/fs/nfs.ko | kernel/fs/nfsv3.ko | kernel/fs/nfsv4.ko, but they didn't exist.
Well, *.ko is a kernel module, and dracut has an --add-drivers option, so
dracut -v -m "nfs network base" --include rd.live.overlay/ / initrd-tmpfs.img
became:
dracut -v -m "nfs network base" --add-drivers "nfs nfsv4" \
--include rd.live.overlay/ / initrd-tmpfs.img
Then lsinitrd | grep nfs listed nfs.ko and nfsv4.ko, the root device got itself mounted and copied, and happy days: there's a diskless server booted over NFS with a tmpfs /, great stuff for an HA diskless cluster.
The difference in drivers between my test environment and target would be a result of /etc/dracut.conf or /etc/dracut.conf.d/, which can specify drivers to be included, but I didn't look into them too much (I would rather specify the drivers when running the command for the sake of my sanity).
| initrd built with NFS module cannot mount NFS root |
1,537,537,682,000 |
On my CentOS 5 workstation I get a few seconds wait after dm-raid45 has been loaded ("Initializing Driver" or something like that).
This seems to be part of the initrd. After that the system boots up.
What is going on during those few seconds and what can I do to avoid this wait-time? I have currently no raid installed.
Update 2011-12-14: Problem is still there - deleted previously described "dead" ends from my question.
I hunted down the source of the message to these lines in the init script located in the initrd:
echo Waiting for driver initialization.
stabilized --hash --interval 1000 /proc/scsi/scsi
mkblkdevs
echo Scanning and configuring dmraid supported devices
So stabilized seems to be the line causing the delay. What the heck is that? I did not find any man-page for this and no binary with that name.
|
stabilized might take too long due to the --interval 1000 parameter (1000 means 10 checks are performed at 1000 ms (1 s) intervals, which adds up to 9 s). From what I've read here, it is a builtin command of nash. This long interval value looks like a workaround for the hardware initialization issues described in the bug mentioned above. Try changing the value to 250 and see if your system still boots properly.
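For illustration, the relevant lines of the init script after the suggested change (this is a nash script fragment, not standalone shell; 250 is the value proposed above):

```
echo Waiting for driver initialization.
stabilized --hash --interval 250 /proc/scsi/scsi
mkblkdevs
```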
| Avoid pause during dm-raid initialization |
1,537,537,682,000 |
I'm looking for help in getting a basic initrd environment up and running. My goal is to enhance my knowledge on how to create a basic Linux environment. Ideally, I would like to move into Embedded Linux systems later on and this seems the best starting point.
I've yet to find a good basic how-to on this subject; as such, I've mostly ended up following a number of half-finished or incomplete tutorials.
Below are links to the how-to's for your reference on what I've done so far.
http://web.archive.org/web/20120601223451/http://blog.nasirabed.com/2012/01/minimal-linux-filesystem.html
http://revcode.wordpress.com/2012/02/25/booting-a-minimal-busybox-based-linux-distro/
At the moment, when I boot the environment I get a GRUB prompt, I've tried adding a grub.cfg file to it, but it just gets ignored when the system boots and goes straight to the grub prompt.
To boot the initrd environment presently, I have to provide it with the following commands:
set root=(hd0,msdos1)
linux /boot/bzImage
initrd /boot/rootfs.cpio.gz
boot
This boots the mini OS, but gives an error about not being able to locate an init file (which is part of my rootfs.cpio.gz, at the root of its structure).
How can I go about fixing the problems with this initrd environment?
|
When the Linux kernel boots into the initramfs filesystem, it doesn't run /sbin/init, but /init. The solution is to make /init a symlink to /sbin/init.
UPDATE:
I tried to recreate your problems and discovered that you probably compiled a 64-bit busybox and a 32-bit Linux kernel. Therefore the kernel doesn't know how to execute the /init program, because it's 64-bit. Recompile the kernel with the 64-bit option enabled and replace the old version with it. You'll also need /init symlinked to /sbin/init, as I said before.
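A sketch of the symlink fix, assuming the unpacked rootfs tree lives in ./rootfs (the directory name and the repack command are guesses based on the question's rootfs.cpio.gz):

```shell
# Inside the tree that becomes rootfs.cpio.gz: the kernel executes
# /init, so point it at busybox's /sbin/init, then repack the archive.
cd rootfs
ln -sf sbin/init init
find . | cpio -o -H newc | gzip > ../rootfs.cpio.gz
```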
| Linux initrd environment setup - Failed to execute /init |
1,537,537,682,000 |
I am trying to move a Debian Squeeze installation to a VMware vSphere 5.5 environment. However, when booting the new machine after replication, the initrd/busybox gives an error that it cannot mount the root partition (invalid argument). The driver for sda was loaded successfully, though, and all partitions were correctly detected beforehand (see screenshot below).
Following things have been done:
A new/blank VMware machine was booted with GRML, partitions created and data rsynced from the remote host
Disk IDs replaced with /dev/sda in udev/fstab/grub, initramfs updated
Grub bootloader installed
Upon reboot grub loads correctly, linux-image and initrd are correctly loaded and executed.
The kernel indicates, that it has found sda and partitions (sda1,sda2,...)
Init error message: mount failed, invalid argument
In busybox mount /dev/sda1 /mnt also fails with "invalid argument"
cat /dev/sda1 gives data, so hdd partition can be accessed
dmesg does not indicate any error when trying to mount
I also tried following things:
manually loading xfs and ext2 drivers before mount
using the VMware converter (same result)
Screenshot after failed boot:
Does anyone have some clues or ideas?
|
The BusyBox version number has a “+deb6u11” suffix. That suggests Debian version 6, or “squeeze”. That’s rather old.
Perhaps GRML and VMware Converter are creating an XFS filesystem (or another filesystem type) with some newer features that cannot be handled by the Squeeze kernel?
| Debian: Boot fails when mounting sda with "invalid argument" |
1,537,537,682,000 |
I read from arch wiki:
In case your root filesystem is on LVM, you will need to enable the
appropriate mkinitcpio hooks, otherwise your system might not boot.
However, both my initrd and initramfs are on my root filesystem. How does the kernel load these files if it does not have the modules to read from LVM? Isn't it a chicken-and-egg problem?
Also, does the kernel use both the initrd and initramfs schemes, or only one? If both, how do these work together?
|
Isn't it a chicken-and-egg problem?
In a way, sure.
How does the kernel load these files?
It doesn't. A (fully modular) kernel is indeed incapable of doing so; in fact, it is unable to access any disk at all until you load the appropriate modules (ahci, scsi, etc.)
You could also ask how the kernel loads the kernel... this is not possible, so there has to be something else.
Like the bootloader, which loads both the kernel and the initrd/initramfs for you (if applicable; it's also possible to put both into one file.)
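A typical grub.cfg entry makes this explicit: it is GRUB, using its own lvm module, that reads both files off the disk before the kernel ever runs. Volume and file names below are made up for illustration:

```
menuentry 'Linux, LVM root' {
    insmod lvm
    set root=(lvm/vg0-root)
    linux  /boot/vmlinuz root=/dev/mapper/vg0-root
    initrd /boot/initramfs.img
}
```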
| initrd and initramfs confusions |
1,537,537,682,000 |
So, I'm trying to split my initrd into two initrds. There are some pretty significant scripts that run in the initrd, and we wanted to split it so we could rev just the logic or just the kernel portion.
As a single initrd, it boots fine. But, when I split it into two, I get an error:
RAMDISK: incomplete write (-28 != 8388608)
The grub menu entry's initrd looks like:
initrd /initramfs-scripts.img /initramfs-kernel.img
I can't find any documentation on using two initrds. All I have found so far are this stackexchange question and this grub bug, but neither gives me an idea of what I'm doing wrong.
|
I'm not sure if this qualifies as a complete answer, but there's some weird behavior with pygrub and initramfs images: it seems to append a few bytes to the end. The bytes are zeroed, so cpio wouldn't care about them; however, we encrypt the initramfs, so the decryption algorithm does.
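One workaround to try, assuming both images are cpio archives the kernel can unpack back-to-back: concatenate them yourself and hand GRUB a single file, since multiple initrds are treated as one concatenated archive anyway (file names are from the question):

```shell
# Produce one image that GRUB loads as a single initrd; the kernel
# unpacks concatenated cpio archives in order.
cat initramfs-scripts.img initramfs-kernel.img > initramfs-combined.img
# grub.cfg then carries only:
#   initrd /initramfs-combined.img
```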
| Booting grub-2.00 with 2 initrds, crashes with RAMDISK: incomplete write |
1,537,537,682,000 |
First, the issue that I'm having is being unable to run VirtualBox on Kali 2.0.
I set up a USB live with persistence running Kali 2.0, which at the time had the 4.6.0-kali1-amd64 kernel. I have since updated/upgraded/dist-upgraded etc. with all of the recommended sources. As part of this, the new headers/kernels that were installed are 4.9.0-kali4-amd64. However, even after a reboot, the running kernel is 4.6.0, as confirmed by uname -r and the error thrown by VirtualBox. I know grub normally needs to be reconfigured, though there is no grub bootloader in the USB live boot.
The error thrown by VirtualBox says that no suitable driver was found for the 4.6.0 kernel, and also that the system is not set up to dynamically create drivers (though I believe this is because it is building the driver for 4.9.0, which is not the running kernel).
|
Due to a bug in either the way my live system was installed or the way live-tools handles the mounted partition, live-update-initramfs does not work in this particular case: it treats /lib/live/mount/medium/ as the root of the USB live device, though this was not the mount point (and there are 3 partitions needed from the USB device).
Instead of messing with mounting/unmounting etc., I was able to simply create an initrd.img file (it was missing) using update-initramfs, and move it to the live folder manually from my non-live Linux distribution:
/usr/sbin/update-initramfs.orig.initramfs-tools -c -k 4.9.0-kali4-amd64
This creates the image. The vmlinuz-4.9.0-kali4-amd64 was already available. From within my non-live dist, with my usb inserted:
I first moved the initrd.img and vmlinuz from the /live folder on my usb to my desktop (for backup).
I then copied the initrd.img-4.9.0-kali4-amd64 and vmlinuz from my usb's persistence rw root folder to the /live folder.
I renamed these to initrd.img and vmlinuz and rebooted. Voilà
Big thank you to Jeff S. for your contribution.
| How to change the boot kernel of a usb live w/ persistent running Kali |
1,537,537,682,000 |
I need to know what config and data files are pulled in to make the initrd.img-xxx when update-initramfs (mkinitramfs) is executed.
I am having a video driver problem that I have narrowed down to the generation of the initrd.img-xxx after kernel updates. I only get low-resolution single-screen VESA; I should have two screens at 1080p.
Debian 12 Bookworm, but it's an old install that has been upgraded from earlier versions of Debian. I still have a working fallback kernel from 2 months ago, so I set it as manually installed and held back from upgrades for now.
I created a fresh installation of Debian on a spare drive with its own EFI boot sector and grub and it has no issues. I have, as best as I can query, the same graphics drivers and firmware installed in both installs, and I purged all of them from the old install and reinstalled with apt to get fresh configs if any. I also purged and reinstalled the kernel metapackage and initram tools.
I have two identical kernel builds installed in both the old and new installs. I copied the initrd.img-123 from the new install to the old install. The old install boots correctly with correct graphics using the initrd.img-123 from the new install.
The initrd.img of the new and old installs are of different file types when listed by file initrd.img-XXX, and they don't unpack the same when attempting to decompress. The new install is producing zstd files while the old system's appear as cpio. (The older fallback kernel also has a cpio initrd.img but doesn't have problems.)
I have mounted both root partitions and done diff -r on /boot and /etc, and cleaned up the most obvious differences with apt-get, purging old packages, plus some manual housekeeping. But there is still a lot of noise due to heirloom configurations and settings, much of which I would like to keep if this doesn't drag on too long.
|
If you run update-initramfs with a "verbose" option, e.g. update-initramfs -u -v, it will display the name of every file it adds to the initramfs, and every hook script it executes.
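For example, you can capture the verbose run and then search it for a suspect driver or firmware file. The module names below are only illustrations; lsinitramfs ships with initramfs-tools and lists an image's contents without unpacking it:

```shell
# Log everything update-initramfs pulls in, then search the log:
update-initramfs -u -v 2>&1 | tee /tmp/initramfs-build.log
grep -iE 'drm|i915|amdgpu|firmware' /tmp/initramfs-build.log

# Or inspect a finished image directly:
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep -E 'drm|i915|amdgpu'
```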
| What files are pulled in by update-initramfs? |
1,537,537,682,000 |
I have a problem with my Linux system. I always had small problems with my HDD, but my PC could always work correctly; yesterday it got stuck and I decided to reboot it. After rebooting I get this error:
[FAILED] Failed to start File System 6-53ec-49bb-8b46-0913583825fb.
[DEPEND] Dependency failed for /sysroot.
[DEPEND] Dependency failed for Initrd Root File System.
However, I can boot my Windows, which is on the same HDD as Linux.
Please help me fix my computer.
I would really appreciate your help.
|
I am very grateful to all of you for your help, but nevertheless, I solved this issue in a different way.
In my case, the solution to this problem was simply booting from LiveUSB and then running the command fsck -f /dev/sd##. This completely solved my problem.
I also want to add a solution for another problem that I encountered along the way: for some reason the written USB stick was not recognized by my computer, and I had to write the Linux image to another USB stick, after which the problem was solved.
| How to repair linux after failed to start file system |
1,537,537,682,000 |
I have an old kernel (2.4.37.9) and I want to override or substitute the root=XXXXX parameter sent to the kernel from inside the initrd script.
I have already made some attempts, but it seems that at the end of the initrd stage grub always passes to the kernel the root parameter defined in the menu.lst file, while I'm trying to set a dynamic value (e.g. hda1 or hdc1) depending on the layout of the motherboard.
title Linux-2.4.37.9_CCL_20130122 with INITRD
root (hd0,0)
kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=XXXXXX console=ttyS0,9600 console=tty0 apm=off
initrd /boot/initrd-CCL.img.gz
Any suggestions ?
|
This is not the most elegant solution to this problem, but it works, and maybe it will be helpful to someone else, so I'll briefly describe how I solved my problem of having a DOM able to change its boot device automatically.
Inside linuxrc, the script of the initrd, I detect which device is available and, based on that result, I set the default startup option used by grub.
My linuxrc is something like this:
#!/bin/ash
restart=0
mntdev=""
target="hda"
echo "--> check fdisk ${target} "
mount -t ext2 /dev/${target}1 /mnt/tmp
if [ -f /mnt/tmp/etc/slackware-release ]; then
echo "Found $target "
mntdev="/dev/${target}1"
olddef=$( cat /mnt/tmp/boot/grub/default )
if [ $olddef -ne 0 ]; then
echo "0" > /mnt/tmp/boot/grub/default
restart=1
fi
fi
umount /mnt/tmp
# ================================
if [ -z "$mntdev" ]; then   # nothing found on hda, try hdc
target="hdc"
echo "--> check fdisk ${target} "
mount -t ext2 /dev/${target}1 /mnt/tmp
if [ -f /mnt/tmp/etc/slackware-release ]; then
echo "Found $target "
mntdev="/dev/${target}1"
olddef=$( cat /mnt/tmp/boot/grub/default )
if [ $olddef -ne 1 ]; then
echo "1" > /mnt/tmp/boot/grub/default
restart=1
fi
fi
umount /mnt/tmp
fi
# ================================
if [ $restart -eq 1 ]; then
echo "Changed grub default : Rebooting PC "
echo "===================================="
sleep 2
mount -t ext2 $mntdev /mnt/tmp
chroot /mnt/tmp <<EOF
/sbin/reboot -f
EOF
fi
And inside the grub menu I reserved the first two entries: 0 for device hda and 1 for device hdc.
default saved
title Linux-2.4.37.9_CCL_20130122 with INITRD hda pos 0
root (hd0,0)
kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=/dev/hda1 console=ttyS0,9600 console=tty0 apm=off
initrd /boot/initrd-CCL.img.gz
title Linux-2.4.37.9_CCL_20130122 with INITRD hdc pos 1
root (hd0,0)
kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=/dev/hdc1 console=ttyS0,9600 console=tty0 apm=off
initrd /boot/initrd-CCL.img.gz
| Kernel/grub : how override root parameter inside initrd script |
1,537,537,682,000 |
I booted a live USB stick based on Linux Mint 20.2. After the initrd line was output (the one after vmlinuz) I got an empty screen with "decoding failed, system halted" (tried twice; the error reproduced). That happened in only 1 of 4 cases: legacy boot with the large (>4 TB) SATA drive attached. UEFI boot with the drives (one 4 TB and one >4 TB) and legacy boot without that drive (only the 4 TB attached) both worked; the 4th case, UEFI without the disk, I have not tried. A web search for support of large drives found: https://superuser.com/questions/1005475/trying-to-understand-linux-support-for-4tb-hard-disk-drive-on-legacy-bios where:
All that said, since the new disk is a non-boot disk, you needn't
really be concerned with these issues.
For "decoding failed, system halted" I've read a number of found links: https://askubuntu.com/questions/1269855/usb-installer-initramfs-unpacking-failed-decoding-failed, https://forums.linuxmint.com/viewtopic.php?t=328925, https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/1870260, https://askubuntu.com/questions/1355231/decoding-failed-system-halted, https://www.quora.com/Now-I-am-booting-Ubuntu-20-10-with-flash-card-When-booting-it-is-saying-Decoding-failed-system-halted-What-should-I-do, https://www.reddit.com/r/linux4noobs/comments/q7ahdx/decoding_failed_system_halted_problem/.
I do not see how these apply to my issue; they talk about bugs in initrd compression, drives failing at the hardware level, or randomly occurring issues. My guess is that somehow in BIOS/legacy mode the initrd cannot identify the large SATA disk. How can I check that?
Added 1:
Another issue today:
64 bit relocation outside of kernel!
--- system halted
again after the loading initrd.lz... output. This supports the failing-PSU hypothesis of the answer: the system booted with no power to the hard drives, and when I connected one, boom, this new error.
|
"Decoding failed, system halted" seems to imply that the initramfs decompression routine detected an error. If that is true, then the error happened very early in the boot process, before the kernel even attempts to detect any SATA controllers.
If initramfs was successfully unpacked, the system would drop into initramfs-based emergency mode on SATA access error instead of halting.
Instead, you should check for causes like this:
a poorly plugged-in SATA connector or a bad cable might cause data errors that come and go as you move cables around when disconnecting/re-connecting disks. (But that should not affect booting from a live USB...)
a power supply that's old and starting to fail might no longer be up to the task of spinning up all the disks simultaneously (so disconnecting any disk may help, as it reduces the load). The RAM or the USB stick might get a slight undervoltage at boot time, just enough to cause data corruption when reading the initramfs file but unfortunately not enough to trigger undervoltage detection.
a fault in the "disliked" HDD might cause it to draw an abnormal amount of current at start-up, causing an undervoltage event to the rest of the system, resulting in data corruption reading the USB stick.
| decoding failed, system halted during legacy boot (possibly due to large SATA drive attached) |
1,537,537,682,000 |
I have initrd image compressed with xz. This is how I created it from image file initrd:
e2image -ar initrd - | xz -9 --check=crc32 > initrd.xz
now I need the same image compressed using the zstd algorithm. What command/parameters do I have to use for the kernel to be able to boot from this initrd image?
I have CONFIG_RD_ZSTD=y enabled in my kernel.
|
The equivalent with zstd would be:
e2image -ar initrd - | zstd -19 --check > initrd.zst
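To sanity-check the result before rebooting, you can test the frame and look at the magic bytes the kernel's decompressor probes for (a zstd frame begins with 28 b5 2f fd):

```shell
zstd -t initrd.zst                 # verify frame and checksum
head -c4 initrd.zst | od -An -tx1  # should print: 28 b5 2f fd
```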
| create initrd image compressed with zstd |
1,537,537,682,000 |
I'm trying to mount the rootfs / of a Debian Buster system as overlayfs because I'm interested in using tmpfs for the /upper directory. My idea is to use this to preserve the root filesystem's integrity by making it fake-writable. I know there are a few packages intended to do this, like fsprotect and bilibop-lockfs; however, I think the former is maybe a little outdated and the latter seems more promising, but both use aufs, and I'd like to learn about initrd, the early user space and the Linux boot process. Maybe in the future I'll consider trying bilibop-lockfs.
Anyway... my script is based on the current raspi-config script; as you can see, I'm basically adding the very same script as an initramfs module and rebuilding, and this module is then triggered when boot=overlay is passed as a kernel command-line parameter. The script apparently does the work of mounting the rootfs as an overlayfs; however, I'm having problems with the following. As you can see in the df -h output, it shows the size is just 3.9G:
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 781M 17M 764M 3% /run
overlay 3.9G 1.2G 2.7G 30% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mmcblk1p2 236M 96M 123M 44% /boot
/dev/mmcblk1p1 511M 5.2M 506M 2% /boot/efi
/dev/mmcblk0p1 58G 811M 54G 2% /data
tmpfs 781M 0 781M 0% /run/user/1001
And some programs are having problems with this size: after they have been running a while, they start to print "no space left on device" in the journal logs. My question is: what's specifying this size? I cannot see anything about the size in the overlay script. Could I set a bigger size to give those programs a wider margin?
Thank you all.
|
Well, I didn't realize that you can choose a specific size in the mount options when you are mounting a tmpfs; from the tmpfs manpage:
Mount options
The tmpfs filesystem supports the following mount options:
size=bytes
Specify an upper limit on the size of the filesystem. The
size is given in bytes, and rounded up to entire pages.
The size may have a k, m, or g suffix for Ki, Mi, Gi
(binary kilo (kibi), binary mega (mebi), and binary giga
(gibi)).
The size may also have a % suffix to limit this instance
to a percentage of physical RAM.
The default, when neither size nor nr_blocks is specified,
is size=50%.
So, replacing line 86 of my script with this:
mount -t tmpfs -o size=100% tmpfs /upper
The system doesn't report problems with the free space anymore.
| How to control the OverlayFS size |
1,537,537,682,000 |
My PC boots to the grub command line.
$ ls
(hd0) (hd1) (hd1,gpt6) (hd1,gpt5) (hd1,gpt4) (hd1,gpt3) (hd1,gpt2) (hd1,gpt1)
(lvm/fedora-swap) (lvm/fedora-home) (lvm/fedora-root)
$ ls (hd1,gpt5)/
./ ../ lost+found/ efi/ extlinux/ grub2/ vmlinuz-4.10.12-200.fc25.x86_64
vmlinuz-4.10.10-200.fc25.x86_64 System.map-4.10.10-200.fc25.x86_64
config-4.10.10-200.fc25.x86_64 elf-memtest86+-5.0
System.map-4.10.12-200.fc25.x86_64 memtest86+-5.01
config-4.10.10-200.fc25.x86_64 .vmlinuz-4.10.12-200.fc25.x86_64.hmac
initramfs-4.10.12-200.fc25.x86_64.img vmlinuz-4.10.13-200.fc25.x86_64
System.map-4.10.13-200.fc25.x86_64 config-4.10.13-200.fc25.x86_64
.vmlinuz-4.10.13-200.fc25.x86_64.hmac
initramfs-4.13.12-200.fc25.x86_64.img .vmlinuz-4.10.10-200.fc25.x86_64.hmac
initramfs-4.10.10-200.fc25.x86_64.img
I've tried:
$ set root=(lvm/fedora-root)
$ linuxefi (hd1,gpt5)/vmlinuz-4.10.13-200.fc25.x86_64 root=/dev/sda5
$ initrd initramfs-4.13.12-200.fc25.x86_64.img
$ boot
After this, I get:
[FAILED] Failed to start Switch Root.
See 'systemctl status initrd-switch-root.service' for details.
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.
|
I was also able to reproduce "Failed to start Switch Root" with kernel vmlinuz-4.2.3-300.fc23.x86_64 using the following commands.
grub> linux /vmlinuz-4.2.3-300.fc23.x86_64 root=/dev/sda1
grub> initrd /initramfs-4.2.3-300.fc23.x86_64.img
grub> boot
At the GRUB splash screen, when I press e to edit, the following is displayed on my system.
linux16 /vmlinuz-4.2.3-300.fc23.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap rhgb LANG=en_US.UTF-8
initrd16 /initramfs-4.2.3-300.fc23.x86_64.img
I made note of these parameters, and then pressed c again to return to the GRUB command line. I adjusted the commands to be similar to what was listed on the edit screen.
grub> linux16 /vmlinuz-4.2.3-300.fc23.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap rhgb LANG=en_US.UTF-8
grub> initrd16 /initramfs-4.2.3-300.fc23.x86_64.img
grub> boot
During boot, "Failed to start Switch Root" was no longer displayed. Does this work for you as well?
| Can't find initrd file on GRUB console |
1,537,537,682,000 |
I am currently working on a custom initrd based on the CentOS 6.7 (2.6) kernel with the following modules loaded.
The initrd is designed to back up files off an old RHEL system into memory, unmount the disk, wipe the disk and then finally dd a prebuilt CentOS image onto the disk.
The CentOS system was built on VMware then the vmdk was exported and converted into a raw format with qemu-img.
From testing the whole process works awesomely and once the dd is complete the system can be rebooted and start up fine.
The current blocker is that once the dd operation has completed, I can't mount the LVM disk to copy files back.
As you can see in the modules list, the LVM drivers are there and loaded. If I run fdisk -l it shows sda1 as the boot partition (non-LVM) and sda2 as an LVM partition. When running pvscan -vvv it sees /dev/sda2 but says No label detected.
|
Since you wiped and rewrote the disk, the running kernel does not know about the partitions now available.
You can run partprobe (which comes with the parted partitioning utility) to reload the correct partitioning info in your running kernel.
If you don't have partprobe (small disks not requiring parted?), you can use hdparm -z /dev/yourdrive, as mentioned by @ko-dos.
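A sketch of the rescan sequence, dry-run by default so it can be inspected safely (set RUN=1 to execute; the device name is from the question):

```shell
#!/bin/sh
# Re-read the partition table and re-activate LVM after the dd.
dev=${1:-/dev/sda}
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run partprobe "$dev"   # or: run hdparm -z "$dev"
run pvscan             # LVM should now see the PV label on the second partition
run vgchange -ay       # activate volume groups so /dev/mapper/* appears
```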
| LVM devices not showing in initrd but working on boot [duplicate] |
1,537,537,682,000 |
I have a Java program running as a daemon (thanks to YAJSW, a wrapper for java).
The thing is that this Java application writes several lines of console text (simply imagine a Hello World app). If I run it from the console, of course, I can read those lines.
But when it runs as a daemon, where do those lines of text go?
|
I've found how to do it in the YAJSW help; thanks @Gilles for the guidance.
It's enough just to specify in wrapper.conf
wrapper.logfile=<path and filename>
Thanks anyway!
| How to log the output of a daemon application? |
1,635,820,837,000 |
I am following https://github.com/openzfs/zfs/wiki/Ubuntu-18.04-Root-on-ZFS but doing it for Ubuntu 20.04.
When I get to:
update-initramfs -u -k all
nothing happens: it returns after 0.1 seconds, and normally update-initramfs takes several seconds on my machine.
update-grub also complains about a missing initrd:
# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: vmlinuz-5.4.0-29-generic in rpool/ROOT/ubuntu
Warning: Couldn't find any valid initrd for dataset rpool/ROOT/ubuntu.
Warning: didn't find any valid initrd or kernel.
Found Ubuntu 20.04 LTS (20.04) on /dev/sda5
done
And when booting I get a grub prompt (no menu).
It seems there is some crucial step missing. Something that tells update-initramfs which initrd to build.
I have tested that the machine can boot on UEFI (the normal, unencrypted ext4 Ubuntu can install just fine with UEFI).
|
There is no initial initramfs, so updating a nonexistent one does nothing.
The solution was to create a new one:
update-initramfs -c -k all
-c being the magic change.
| Ubuntu 20.04 on zfs on root on LUKS on UEFI |
1,635,820,837,000 |
I am trying to create my own PID 1 init script, to be called from the boot cmdline with init=/myscript. How can I make it work on a real filesystem, with any kernel?
When it runs in an initrd, it works fine and can mount things, etc. - but when I use it on my filesystem without an initrd, it fails to mount things, because:
mount: only root can do that (effective UID is 1000)
When I strace any command that fails, it inevitably issues geteuid32() and that returns 1000. Why? How can I run as euid 0?
|
There's no special treatment for init on initrd, so there must be some other issue.
If the setuid bit is set on the binary, the euid of the process will match the owner of the binary, even when run as root.
Check the ownership on /bin/mount.
| Why does a shebang script run as init= have an euid of 0 when run from an initrd, but not otherwise? |
1,635,820,837,000 |
When trying to mount ext2 I get this error:
Creating 4 MTD partitions on "MPC8313RDB Flash Map Info":
0x000000000000-0x000000100000 : "U-Boot"
0x000000100000-0x000000300000 : "Kernel"
0x000000300000-0x000000700000 : "JFFS2"
0x000000700000-0x000000800000 : "dtb"
List of all partitions:
1f00 1024 mtdblock0 (driver?)
1f01 2048 mtdblock1 (driver?)
1f02 4096 mtdblock2 (driver?)
1f03 1024 mtdblock3 (driver?)
No filesystem could mount root, tried: ext2
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
For some reason U-Boot is not able to pass boot parameters to the kernel, so I specified them directly by modifying boot_command_line in init/main.c.
These are my arguments:
root=/dev/ram0 rw rootfstype=ext2 ramdisk_size=30000 ramdisk_blocksize=1024 console=ttyS0,115200
I thought (and still think) the problem is that the kernel does not have enough information about the initrd, so I went inside powerpc/boot/of.c and manually set loader_info:
if (a1 && a2 && a2 != 0xdeadbeef) {
//loader_info.initrd_addr = a1;
//loader_info.initrd_size = a2;
loader_info.initrd_addr= 0x07c15000;
loader_info.initrd_size= 0x00386815;
}
I chose those values because that is the size and location u-boot reports
Loading Ramdisk to 07c15000, end 07f9b815 ... OK
If I do not specify rootfstype, it defaults to yaffs2, and this is the output:
yaffs: dev is 1048576 name is "ram0" rw
yaffs: passed flags ""
yaffs: dev is 1048576 name is "ram0" rw
yaffs: passed flags ""
yaffs: mtd is read only, setting superblock read only
------------[ cut here ]------------
WARNING: at mm/page_alloc.c:2544
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Not tainted 3.16.62 #116
task: c782c000 ti: c781a000 task.ti: c781a000
NIP: c006dcc0 LR: c006d754 CTR: 00000000
REGS: c781b890 TRAP: 0700 Not tainted (3.16.62)
MSR: 00029032 <EE,ME,IR,DR,RI> CR: 22002244 XER: 00000000
GPR00: c006d754 c781b940 c782c000 00000000 00000001 00000000 c781b8a8 00000041
GPR08: c02a46ab 00000000 00000001 00000000 22002242 00000000 c00041a0 00000000
GPR16: 00000000 00000000 00000000 00000041 00024050 c02a45ec c02bbb40 c02bbb3c
GPR24: 00000000 00000014 00000000 00000000 c02a45e8 c02b10e0 00004050 00000001
NIP [c006dcc0] __alloc_pages_nodemask+0x660/0x86c
LR [c006d754] __alloc_pages_nodemask+0xf4/0x86c
Call Trace:
[c781b940] [c006d754] __alloc_pages_nodemask+0xf4/0x86c (unreliable)
[c781ba10] [c007f51c] kmalloc_order+0x18/0x4c
[c781ba20] [c01128bc] yaffs_tags_marshall_read+0x22c/0x264
[c781bae0] [c0110650] yaffs2_checkpt_find_block+0x90/0x1a8
[c781bb50] [c011125c] yaffs2_checkpt_rd+0x200/0x228
[c781bbe0] [c0114dcc] yaffs2_rd_checkpt_validity_marker+0x24/0xa4
[c781bc10] [c0115b68] yaffs2_checkpt_restore+0x68/0x714
[c781bc80] [c010fe90] yaffs_guts_initialise+0x46c/0x868
[c781bcb0] [c0108810] yaffs_internal_read_super.isra.16+0x420/0x83c
[c781bd50] [c0108c48] yaffs2_internal_read_super_mtd+0x1c/0x3c
[c781bd60] [c00a6224] mount_bdev+0x194/0x1c0
[c781bdb0] [c00a6c60] mount_fs+0x20/0xb8
[c781bdd0] [c00be984] vfs_kern_mount+0x54/0x120
[c781bdf0] [c00c1a30] do_mount+0x1f0/0xb60
[c781be50] [c00c2770] SyS_mount+0xac/0x120
[c781be90] [c0275e94] mount_block_root+0x130/0x2a0
[c781bee0] [c027635c] prepare_namespace+0x1b8/0x200
[c781bf00] [c0275b48] kernel_init_freeable+0x1a8/0x1bc
[c781bf30] [c00041b8] kernel_init+0x18/0x120
[c781bf40] [c000e310] ret_from_kernel_thread+0x5c/0x64
Instruction dump:
2f890000 40beff90 89210030 2f890000 419efe3c 4bffff80 73ca0200 4082fab4
3d00c02a 390846ab 89480001 694a0001 <0f0a0000> 2f8a0000 419efa98 39400001
---[ end trace fbbfd1e0d42ac49d ]---
VFS: Mounted root (yaffs2 filesystem) readonly on device 1:0.
devtmpfs: error mounting -2
Freeing unused kernel memory: 112K (c0275000 - c0291000)
What is the source of this problem?
|
For some reason my U-Boot is not letting my kernel know where the initrd is being loaded into RAM, so I manually set initrd_start and initrd_end in setup-common.c. I mapped the memory location in RAM that the ramdisk was loaded to into the kernel's virtual address space. I had to remap because PAGE_OFFSET was larger than the address of the ramdisk.
void __init check_for_initrd(void)
{
#ifdef CONFIG_BLK_DEV_INITRD
initrd_start= (int)ioremap(0x07c15000 ,(0x07f9b815-0x07c15000) );
initrd_end= initrd_start + (0x07f9b815 - 0x07c15000);
printk("PAGE OFFSET: %lx\n", PAGE_OFFSET);
DBG(" -> check_for_initrd() initrd_start=0x%lx initrd_end=0x%lx\n",
initrd_start, initrd_end);
#endif
}
| Kernel can't find initrd? |
1,635,820,837,000 |
I have a problem with a Linux system that won't boot. The bootloader happily loads the kernel and initrd, but then the initrd script whines and complains and moans that it can't find the root device.
How do I force the initrd script to give me a shell prompt so I can actually investigate what's going on?
I tried unpacking the initrd and making the /init shell script launch bash -i. But that didn't work at all; I see the Bash prompt appear, but the keyboard doesn't work. (Bash complains something like "cannot set terminal process group" and "inappropriate ioctl for device".)
In case it matters: OpenSUSE 13.1, which uses the old mkinitrd system. (Apparently newer versions use Dracut.) From what I can tell, /init is a small script that executes everything in /boot (a series of numbered Bash scripts).
There's a script named /boot/91-shell.sh, which contains a comment which suggests that passing shell=1 on the kernel command line will give me a shell prompt; it does not.
There's also a comment in /boot-02-start.sh which claims that passing linuxrc=trace will give me debug output. It does, but it's useless; all I see is the endless device polling loop at the end of the script scrolling past, obliterating all previous output.
I really, really need to get in there and see what's actually happening with my own eyes to know where the problem is. (To be fair, I am trying to make the system boot in a slightly strange way, so problems are not unexpected here.)
|
This is an Apple-specific issue. If I boot just about any Linux system on the MacBook Air I have to play with, the keyboard refuses to function. On any PC-based system, this works perfectly. So nothing to do with Linux not starting the right init binary; it's some kind of hardware driver issue.
| Shell prompt from initrd |
1,635,820,837,000 |
I'm stuck with this kernel panic.
What I want is to embed an initramfs into the kernel XIP image, but Linux panics and tells me to pass a valid "root=" rootfs value. But WHY does Linux look for this input??
(The only reason I don't give any real .cpio is that I can't build one, because of errors like "can't find #include "). But the default initramfs should do the job, no?
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_BLOCK=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=4096
Linux shouldn't care about any "root=" args??? No??
full linux .config: http://pastebin.com/gWGCEeCw
full UART output: http://pastebin.com/Mk3c9su8
Thanks for reading this.
EDIT: This is what happen when I specifies "root=/dev/ram0" :
[ 0.580000] brd: module loaded
[ 0.630000] loop: module loaded
[ 0.650000] F2FS-fs (ram0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[ 0.650000] F2FS-fs (ram0): Can't find valid F2FS filesystem in 1th superblock
[ 0.650000] F2FS-fs (ram0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[ 0.670000] F2FS-fs (ram0): Can't find valid F2FS filesystem in 2th superblock
[ 0.680000] F2FS-fs (ram0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[ 0.680000] F2FS-fs (ram0): Can't find valid F2FS filesystem in 1th superblock
[ 0.690000] F2FS-fs (ram0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[ 0.690000] F2FS-fs (ram0): Can't find valid F2FS filesystem in 2th superblock
[ 0.700000] List of all partitions:
[ 0.700000] 0100 4096 ram0 [ 0.710000] (driver?)
[ 0.710000] No filesystem could mount root, tried: [ 0.720000] f2fs
[ 0.720000]
[ 0.720000] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ 0.720000] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ 1.520000] random: fast init done
I'm not sure I understand the implication of "No filesystem could mount root", of course YOU (Linux) need to create it!!! What's happening?
|
The problem was twofold:
I don't know why, but the default .cpio wasn't working.
The "stm32 minimal blablabla" cpio I found on the net wasn't working either.
I ran an entire system build with buildroot, which works out-of-the-box, used its rootfs.cpio, and it works!!
If I find the reason, I'll post it here.
For now I'm investigating, because for me the external RAM of this board starts at 0xD0000000, but buildroot makes a system which starts at 0x90000000.
Both systems are working... don't know how lol.
| uCLinux (linux 4.9 nommu) VFS: Cannot open root device "(null)" |
1,635,820,837,000 |
When I run apt-get dist-upgrade, I get
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.0.0-6-amd64
zstd: error 25 : Write error : No space left on device (cannot write compressed block)
E: mkinitramfs failure zstd -q -9 -T0 25
update-initramfs: failed for /boot/initrd.img-6.0.0-6-amd64 with 1.
dpkg: error processing package initramfs-tools (--configure):
installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
The file that it says it failed to generate is here, /boot/initrd.img-6.0.0-6-amd64; you can see it's 73M.
$ exa -l /boot/initrd.img-6.0.0-6-amd64
.rw-r--r-- 73M root 22 Dec 10:51 /boot/initrd.img-6.0.0-6-amd64
It says it failed to generate this file, but the file is there. Moreover, if I look inside /boot I can see that there is still space for 69 MB,
# dd if=/dev/zero of=zero bs=1MB
dd: error writing 'zero': No space left on device
70+0 records in
69+0 records out
69255168 bytes (69 MB, 66 MiB) copied, 0.0888701 s, 779 MB/s
Why am I getting an error that there is no space on disk, and that /boot/initrd.img-6.0.0-6-amd64 failed to generate when,
it's there
there is 69 MB remaining on disk.
I can reproduce this error with this
update-initramfs -u -k 6.0.0-6-amd64
which is actually calling this under the hood to generate the error,
mkinitramfs -o /boot/initrd.img-6.0.0-6-amd64.new 6.0.0-6-amd64
|
.rw-r--r-- 73M root 22 Dec 10:51 /boot/initrd.img-6.0.0-6-amd64
[...]
It says it failed to generate this file, but the file is there. Moreover, if I look inside /boot I can see that there is still space for 69 MB
which is actually calling this under the hood to generate the error,
mkinitramfs -o /boot/initrd.img-6.0.0-6-amd64.new 6.0.0-6-amd64
mkinitramfs writes the new image to a temporary file with the .new extension while the old 73 MB image is still in place. If the new image also needs about 73 MB and you only have 69 MB free, it makes sense that the write fails with "No space left on device".
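To see why the existing 73 MB file doesn't help, here is the write-then-rename pattern that the `-o ...new` invocation implies, simulated in a temp directory (file names are illustrative): the old and new images coexist until the rename, so the filesystem must have room for both at once.

```shell
# Simulate mkinitramfs' write-then-rename update: while the .new file is
# being written, the old image still occupies its full size on disk.
tmpdir=$(mktemp -d)
printf 'old image' > "$tmpdir/initrd.img"
printf 'new image' > "$tmpdir/initrd.img.new"     # this is the write that fails with ENOSPC
mv "$tmpdir/initrd.img.new" "$tmpdir/initrd.img"  # swap in only after a full successful write
cat "$tmpdir/initrd.img"; echo
rm -rf "$tmpdir"
# -> new image
```

The rename-into-place design is deliberate: a half-written initrd never replaces a bootable one, at the cost of temporarily needing double the space.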
| Why is my boot partition full? |
1,635,820,837,000 |
I saw this question and did pretty much the same thing, except that I enabled initrd in the configuration to use a temporary root file system; with no other modification I still get this error on:
qemu -kernel linux-3.16.1/arch/x86/boot/bzImage
Any suggestion about what causes this error or how I can fix it?
|
The boot fails for the same reason as in the mentioned question - just booting a kernel without anything else doesn't do much good. You must provide a disk. Or an initrd image. But just enabling initrd doesn't give you an initrd image magically. You need to prepare one and provide it to qemu like so:
qemu-system-i386 -kernel <your kernel> -initrd <your initrd image>
It's quite likely that you need to provide a disk as well.
There are a dozen or so ways to create and use disks for qemu, so here I explain just a very simple approach (see here for more).
First create a file, e.g.
qemu-img create -f raw mydisk.img 1G
which will create a 1 GiB disk image.
You can use this like so:
qemu <other options> -hda mydisk.img
If your initrd expects something (like a usable system) on your disk, you need to fill it first by mounting it to the local host, e.g.:
losetup /dev/loop0 mydisk.img
you can treat /dev/loop0 like any other block device, i.e. you can run fdisk on it etc. Once you have created partitions and filesystems you can mount them and put there what you need.
An alternative approach is to use an installation ISO image and attach it as a CD-ROM, e.g.
qemu <other options> -hda mydisk.img -cdrom myiso.img -boot d
This will boot you into the system on the virtual CD-ROM, from there you can modify your disk as you like.
| qemu can't run linux kernel |
1,635,820,837,000 |
I am using an Ubuntu 4.xx kernel with the corresponding Ubuntu initrd.img, and it works. But I want to use a custom initramfs inspired by the LFS (Linux From Scratch) initramfs. The kernel extracts and runs my init script successfully, including mounting sysfs. But /sys doesn't expose any trace of the available storage (two disks exist), and therefore it's not possible to initialize the kernel root.
What is the problem?
Does ubuntu add-on to the kernel (/ubuntu directory) dictate any special policy for initrd?
|
On the working system, look at the device(s) in sysfs, and their device symlink. This points to the parent device - which may in turn have its own parent device, and so on. Write yourself a list of the device and all its parent devices. Then you can check all of them in the initramfs. You might be missing more requirements than just the two disk devices.
Secondly, when you make your list of devices, look at the driver/module for each one and write down what it is. This tells you which kernel module is recognizing the device.
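The parent walk described above can be sketched from a shell; this assumes a typical sysfs layout, and the device name sda is only an example:

```shell
# Walk a sysfs device node up through its parents, printing the driver
# bound at each level. Stops at /sys/devices (the root of the device tree).
walk_parents() {
    d=$(readlink -f "$1") || return 1
    while [ -n "$d" ] && [ "$d" != "/sys/devices" ] && [ "$d" != "/" ]; do
        if [ -e "$d/driver" ]; then
            drv=$(readlink -f "$d/driver"); drv=${drv##*/}
        else
            drv="(none)"
        fi
        printf '%s -> driver: %s\n' "$d" "$drv"
        d=${d%/*}   # move to the parent directory
    done
}
walk_parents /sys/class/block/sda/device 2>/dev/null ||
    echo "sda not present on this machine"
```

Any level that prints "(none)" where you expected a driver is a candidate for a module missing from the initramfs.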
udev is supposed to be loading the kernel modules for you.
Unfortunately, the LFS initramfs takes systemd-udev and tries to run it without systemd. This is unfortunate because using systemd would let systemd-udev log any errors it encountered to the systemd journal. You could then check the journal for errors. I do not know whether udev error logging works in the LFS initramfs.
does ubuntu add-on to the kernel (/ubuntu directory) dictates any special policy for initrd?
No.
| kernel sysfs doesn't recognize storage kobjects [closed] |
1,635,820,837,000 |
I have the latest Kubuntu. I have installed MySQL.
I was looking into /etc/init.
I see the following:
In /etc/init/mysql.conf
description "MySQL Server"
author "Mario Limonciello <[email protected]>"
start on runlevel [2345]
stop on starting rc RUNLEVEL=[016]
If I understand this correctly, mysql should start at runlevel 2 and stay up in all runlevels 2 through 5.
Then I did the following:
Linux:/etc$ ls rc0.d/
K10unattended-upgrades K20kerneloops README S20sendsigs S30urandom S31umountnfs.sh S40umountfs S48cryptdisks S59cryptdisks-early S60umountroot S90halt
Linux:/etc$ ls rc1.d/
K20kerneloops K20saned README S30killprocs S70dns-clean S70pppd-dns S90single
Linux:/etc$ ls rc2.d/
README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local
Linux:/etc$ ls rc3.d/
README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local
Linux:/etc$ ls rc4.d/
README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local
Linux:/etc$ ls rc5.d/
README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local
I was expecting that mysqld would be listed in one of those directories.
I mean, the services have .conf files in /etc/init, and for each runlevel there is a link to the service executable to start/stop it.
But why is there nothing for mysql?
Please note that mysql is up and running:
Linux:/etc$ ps -ef|grep mysql
mysql 994 1 0 21:24 ? 00:00:08 /usr/sbin/mysqld
jim 4396 4223 0 23:44 pts/8 00:00:00 grep --color=auto mysql
|
Ubuntu uses Upstart for its Init, which doesn't use /etc/rcX.d the way SysVInit does. More information: http://upstart.ubuntu.com/
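A rough sketch of how this looks in practice; the initctl/status/restart commands (shown as comments) need an actual Upstart init to be running:

```shell
# Upstart job definitions live in /etc/init/*.conf; they are controlled
# with initctl and its shortcuts rather than via /etc/rcX.d symlinks, e.g.:
#   initctl list | grep mysql   # show the job's current state
#   status mysql                # same, for a single job
#   sudo restart mysql          # restart the job
# Listing the job files works on any system that has the directory:
if ls /etc/init/*.conf >/dev/null 2>&1; then
    ls /etc/init/*.conf
else
    echo "no Upstart job files found"
fi
```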
| How are the services exactly starting in (K)Ubuntu? |
1,635,820,837,000 |
Linux xd 5.10.0-16-amd64 #1 SMP Debian 5.10.127-1 (2022-06-30) x86_64 GNU/Linux
Edit 2022-07-20
This issue comes and goes; at the moment, I am having a lot of trouble when the Xen boot lands in initramfs, please help.
The wiki suggests mounting a thumb drive or remounting root ( mount -o remount,rw /root ).
Apparently no disks are visible. The complete config files, dmesg output, etc. can be found on the Xen list:
https://lists.xenproject.org/archives/html/xen-users/2022-07/threads.html#00041
https://lists.xenproject.org/archives/html/xen-users/2022-07/msg00057.html
On my debian11, I have installed xen-hypervisor-4.14-amd64, xen-hypervisor-common, xen-system-amd64, and xen-utils-4.14
But I cannot boot into Xen, please help.
Theoretically, after installing Xen, according to https://wiki.debian.org/Xen , I should only run:
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub
Unfortunately, boot breaks and grub mailing list had a very deep debate on it last March-April - apparently multiboot2 will NOT do the trick:
https://www.mail-archive.com/[email protected]/msg32020.html
Apparently all the problems lie with the initrd. Do I need to rebuild the initrd once I have installed Xen? And how do I do it?
I need some help, please. Is this inconclusive? Is there a work-around I could use (some partial boot on Xen and manually loading on the prompt the rest of what is needed for Xen to work)? How should I proceed? I will try to register this on grub, debian xen packages and Xen as requests/bugs - any further ideas on actions, please?
The discussion on grub mailing list points to this:
https://wiki.debian.org/DebianInstaller/NetbootFirmware#The_Solution:_Add_Firmware_to_Initramfs
This seems very complicated and risky… is this the way forward?
Did anyone patch grub2 to be able to support what’s needed for Xen, please?
Note: I’m using pure Xen, NOT Eve.
/etc/default/grub file contents:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true
# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"
# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
/boot/grub/grub.cfg file contents:
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
set have_grubenv=true
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="Debian GNU/Linux, with Xen hypervisor"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
font="/usr/share/grub/unicode.pf2"
fi
if loadfont $font ; then
set gfxmode=auto
load_video
insmod gfxterm
set locale_dir=$prefix/locale
set lang=en_GB
insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
set timeout=30
else
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/05_debian_theme ###
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
insmod png
if background_image /usr/share/desktop-base/homeworld-theme/grub/grub-4x3.png; then
set color_normal=white/black
set color_highlight=black/white
else
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
fi
### END /etc/grub.d/05_debian_theme ###
### BEGIN /etc/grub.d/08_linux_xen ###
menuentry 'Debian GNU/Linux, with Xen hypervisor' --class debian --class gnu-linux --class gnu --class os --class xen $menuentry_id_option 'xen-gnulinux-simple-0123abcd' {
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Xen 4.14-amd64 ...'
if [ "$grub_platform" = "pc" -o "$grub_platform" = "" ]; then
xen_rm_opts=
else
xen_rm_opts="no-real-mode edd=off"
fi
multiboot2 /xen-4.14-amd64.gz placeholder ${xen_rm_opts}
echo 'Loading Linux 5.10.0-16-amd64 ...'
module2 /vmlinuz-5.10.0-16-amd64 placeholder root=UUID=0123abcd ro quiet
echo 'Loading initial ramdisk ...'
module2 --nounzip /initrd.img-5.10.0-16-amd64
}
submenu 'Advanced options for Debian GNU/Linux (with Xen hypervisor)' $menuentry_id_option 'gnulinux-advanced-0123abcd' {
submenu 'Xen hypervisor, version 4.14-amd64' $menuentry_id_option 'xen-hypervisor-4.14-amd64-0123abcd' {
menuentry 'Debian GNU/Linux, with Xen 4.14-amd64 and Linux 5.10.0-16-amd64' --class debian --class gnu-linux --class gnu --class os --class xen $menuentry_id_option 'xen-gnulinux-5.10.0-16-amd64-advanced-0123abcd' {
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Xen 4.14-amd64 ...'
if [ "$grub_platform" = "pc" -o "$grub_platform" = "" ]; then
xen_rm_opts=
else
xen_rm_opts="no-real-mode edd=off"
fi
multiboot2 /xen-4.14-amd64.gz placeholder ${xen_rm_opts}
echo 'Loading Linux 5.10.0-16-amd64 ...'
module2 /vmlinuz-5.10.0-16-amd64 placeholder root=UUID=0123abcd ro quiet
echo 'Loading initial ramdisk ...'
module2 --nounzip /initrd.img-5.10.0-16-amd64
}
menuentry 'Debian GNU/Linux, with Xen 4.14-amd64 and Linux 5.10.0-16-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os --class xen $menuentry_id_option 'xen-gnulinux-5.10.0-16-amd64-recovery-0123abcd' {
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Xen 4.14-amd64 ...'
if [ "$grub_platform" = "pc" -o "$grub_platform" = "" ]; then
xen_rm_opts=
else
xen_rm_opts="no-real-mode edd=off"
fi
multiboot2 /xen-4.14-amd64.gz placeholder ${xen_rm_opts}
echo 'Loading Linux 5.10.0-16-amd64 ...'
module2 /vmlinuz-5.10.0-16-amd64 placeholder root=UUID=0123abcd ro single
echo 'Loading initial ramdisk ...'
module2 --nounzip /initrd.img-5.10.0-16-amd64
}
}
submenu 'Xen hypervisor, version 4.14-amd64.efi' $menuentry_id_option 'xen-hypervisor-4.14-amd64.efi-0123abcd' {
menuentry 'Debian GNU/Linux, with Xen 4.14-amd64.efi and Linux 5.10.0-16-amd64' --class debian --class gnu-linux --class gnu --class os --class xen $menuentry_id_option 'xen-gnulinux-5.10.0-16-amd64-advanced-0123abcd' {
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Xen 4.14-amd64.efi ...'
if [ "$grub_platform" = "pc" -o "$grub_platform" = "" ]; then
xen_rm_opts=
else
xen_rm_opts="no-real-mode edd=off"
fi
multiboot2 /xen-4.14-amd64.efi placeholder ${xen_rm_opts}
echo 'Loading Linux 5.10.0-16-amd64 ...'
module2 /vmlinuz-5.10.0-16-amd64 placeholder root=UUID=0123abcd ro quiet
echo 'Loading initial ramdisk ...'
module2 --nounzip /initrd.img-5.10.0-16-amd64
}
menuentry 'Debian GNU/Linux, with Xen 4.14-amd64.efi and Linux 5.10.0-16-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os --class xen $menuentry_id_option 'xen-gnulinux-5.10.0-16-amd64-recovery-0123abcd' {
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Xen 4.14-amd64.efi ...'
if [ "$grub_platform" = "pc" -o "$grub_platform" = "" ]; then
xen_rm_opts=
else
xen_rm_opts="no-real-mode edd=off"
fi
multiboot2 /xen-4.14-amd64.efi placeholder ${xen_rm_opts}
echo 'Loading Linux 5.10.0-16-amd64 ...'
module2 /vmlinuz-5.10.0-16-amd64 placeholder root=UUID=0123abcd ro single
echo 'Loading initial ramdisk ...'
module2 --nounzip /initrd.img-5.10.0-16-amd64
}
}
}
### END /etc/grub.d/08_linux_xen ###
### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-0123abcd' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Linux 5.10.0-16-amd64 ...'
linux /vmlinuz-5.10.0-16-amd64 root=UUID=0123abcd ro quiet
echo 'Loading initial ramdisk ...'
initrd /initrd.img-5.10.0-16-amd64
}
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-0123abcd' {
menuentry 'Debian GNU/Linux, with Linux 5.10.0-16-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-16-amd64-advanced-0123abcd' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Linux 5.10.0-16-amd64 ...'
linux /vmlinuz-5.10.0-16-amd64 root=UUID=0123abcd ro quiet
echo 'Loading initial ramdisk ...'
initrd /initrd.img-5.10.0-16-amd64
}
menuentry 'Debian GNU/Linux, with Linux 5.10.0-16-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-16-amd64-recovery-0123abcd' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 0123abcd
else
search --no-floppy --fs-uuid --set=root 0123abcd
fi
echo 'Loading Linux 5.10.0-16-amd64 ...'
linux /vmlinuz-5.10.0-16-amd64 root=UUID=0123abcd ro single
echo 'Loading initial ramdisk ...'
initrd /initrd.img-5.10.0-16-amd64
}
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'System setup' $menuentry_id_option 'uefi-firmware' {
fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
|
Unfortunately Xen is not compatible with this hardware.
For traceability, this was discussed on the xen-devel here:
https://xen.markmail.org/message/7jgv47pk5rsea4ef?q=+list:com%2Exensource%2Elists%2Exen-devel&page=6
This firmware uses PCI segment numbers wider than 16 bits, and NVMe calls into it during boot; this is the root problem.
Xen is not compatible with this kind of hardware yet.
As was reported on xen-devel, it is always possible that a good soul implements this new functionality (maybe a new hypercall sub-function). Or the hardware will evolve enough for this to be implemented.
I will follow people's advice and migrate to KVM with QEMU, as unfortunately Xen is not compatible with that hardware.
I have also asked Acer for a firmware fix for the wider-than-16-bit PCI segment size:
https://community.acer.com/en/discussion/669340/acer-aspire-5-a517-52g-firmware-w-16-bit-pci-segment-size/p1?new=1
| Boot into Xen on debian11 (initrd trouble) |
1,306,327,420,000 |
I'm currently facing a problem on a Linux box where, as root, I have commands returning errors because the inotify watch limit has been reached.
# tail -f /var/log/messages
[...]
tail: cannot watch '/var/log/messages': No space left on device
# inotifywatch -v /var/log/messages
Establishing watches...
Failed to watch /var/log/messages; upper limit on inotify watches reached!
Please increase the amount of inotify watches allowed per user via '/proc/sys/fs/inotify/max_user_watches'.`
I googled a bit and every solution I found is to increase the limit with:
sudo sysctl fs.inotify.max_user_watches=<some random high number>
But I was unable to find any information on the consequences of raising that value. I guess the default kernel value was set for a reason, but it seems to be inadequate for particular usages (e.g., when using Dropbox with a large number of folders, or software that monitors a lot of files).
So here are my questions:
Is it safe to raise that value and what would be the consequences of a too high value?
Is there a way to find out what are the currently set watches and which process set them to be able to determine if the reached limit is not caused by a faulty software?
|
Is it safe to raise that value and what would be the consequences of a too high value?
Yes, it's safe to raise that value and below are the possible costs [source]:
Each used inotify watch takes up 540 bytes (32-bit system), or 1 kB (double - on 64-bit) [sources: 1, 2]
This comes out of kernel memory, which is unswappable.
Assuming you set the max at 524288 and all were used (improbable), you'd be using approximately 256MB/512MB of 32-bit/64-bit kernel memory.
Note that your application will also use additional memory to keep track of the inotify handles, file/directory paths, etc. -- how much depends on its design.
To check the max number of inotify watches:
cat /proc/sys/fs/inotify/max_user_watches
To set max number of inotify watches
Temporarily:
Run sudo sysctl fs.inotify.max_user_watches= with your preferred value at the end.
Permanently (more detailed info):
put fs.inotify.max_user_watches=524288 into your sysctl settings. Depending on your system they might be in one of the following places:
Debian/RedHat: /etc/sysctl.conf
Arch: put a new file into /etc/sysctl.d/, e.g. /etc/sysctl.d/40-max-user-watches.conf
you may wish to reload the sysctl settings to avoid a reboot: sysctl -p (Debian/RedHat) or sysctl --system (Arch)
Check to see if the max number of inotify watches have been reached:
Use tail with the -f (follow) option on any old file, e.g. tail -f /var/log/dmesg:
- If all is well, it will show the last 10 lines and pause; abort with Ctrl-C
- If you are out of watches, it will fail with this somewhat cryptic error:
tail: cannot watch '/var/log/dmesg': No space left on device
To see what's using up inotify watches
find /proc/*/fd -lname anon_inode:inotify |
cut -d/ -f3 |
xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' |
uniq -c |
sort -nr
The first column indicates the number of inotify fds (not the number of watches though) and the second shows the PID of that process [sources: 1, 2].
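If you want the actual watch counts rather than just the number of inotify fds, the per-fd fdinfo files can be summed; this sketch assumes a kernel new enough (roughly 3.8+) to expose inotify details in fdinfo, and run as root to see all processes:

```shell
# Each "inotify wd:" line in an fd's fdinfo is one watch, so summing them
# gives the real per-process watch count (not just the instance count).
total=0
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = "anon_inode:inotify" ] || continue
    pid=${fd#/proc/}; pid=${pid%%/*}
    n=$(grep -c '^inotify' "/proc/$pid/fdinfo/${fd##*/}" 2>/dev/null)
    n=${n:-0}
    echo "pid=$pid fd=${fd##*/} watches=$n"
    total=$((total + n))
done
echo "total watches visible: $total"
```

Unprivileged, the loop only sees your own processes, so the total is a lower bound on the system-wide count checked against max_user_watches.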
| Kernel inotify watch limit reached |
1,306,327,420,000 |
After a recent upgrade to Fedora 15, I'm finding that a number of tools are failing with errors along the lines of:
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling
It's not just tail that's reporting problems with inotify, either. Is there any way to interrogate the kernel to find out what process or processes are consuming the inotify resources? The current inotify-related sysctl settings look like this:
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.inotify.max_queued_events = 16384
|
It seems that if the process creates inotify instance via inotify_init(), the resulting file that represents filedescriptor in the /proc filesystem is a symlink to (non-existing) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you a list of processes (their representation in /proc), sorted by the number of inotify instances they use.
$ for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
Finding the culprits
Via the comments below @markkcowan mentioned this:
$ find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname {})/../cmdline; echo ""' \; 2>/dev/null
| Who's consuming my inotify resources? |
1,306,327,420,000 |
I have seen this answer:
You should consider using inotifywait, as an example:
inotifywait -m /path -e create -e moved_to |
while read path action file; do
echo "The file '$file' appeared in directory '$path' via '$action'"
# do something with the file
done
The above script watches a directory for creation of files of any type. My question is how to modify the inotifywait command to report only when a file of a certain type/extension is created (or moved into the directory). For example, it should report when any .xml file is created.
What I tried:
I have run the inotifywait --help command, and have read the command line options. It has --exclude <pattern> and --excludei <pattern> options to EXCLUDE files of certain types (by using regular expressions), but I need a way to INCLUDE just the files of a certain type/extension.
|
how do I modify the inotifywait command to report only when a file of
certain type/extension is created
Please note that this is untested code since I don't have access to inotify right now. But something akin to this ought to work with bash:
inotifywait -m /path -e create -e moved_to |
while read -r directory action file; do
if [[ "$file" =~ \.xml$ ]]; then # Does the file end with .xml?
echo "xml file" # If so, do your thing here!
fi
done
Alternatively, without bash,
inotifywait -m /path -e create -e moved_to |
while read -r directory action file; do
case "$file" in
(*.xml)
echo "xml file" # Do your thing here!
;;
esac
done
With newer versions of inotifywait you can directly create a pattern match for files:
inotifywait -m /path -e create -e moved_to --include '.*\.xml$' |
while read -r directory action file; do
echo "xml file" # Do your thing here!
done
| How to use inotifywait to watch a directory for creation of files of a specific extension |
1,306,327,420,000 |
According to Wikipedia,
inotify is a Linux kernel subsystem which notices changes to the file system. It replaced the previous dnotify.
Programs that sync files (such as crashplan, dropbox, git) recomend in user guides that the user increase max_user_watches (1, 2, 3).
From what I understand about inotify, the OS is "told" that a file has been changed, instead of requiring the OS to "go looking" for changes.
I assume that there is an "inotify" file created in every directory. Is this correct? Is there a way to interact with inotify from the command line?
Resources
Why are inotify events different on an NFS mount?
Inotifywait for large number of files in a directory
|
Inotify is an internal kernel facility. There is no “inotify file”. There are dedicated system calls inotify_init, inotify_add_watch and inotify_rm_watch that allow processes to register themselves to be notified when certain filesystem events happen. When the event happens, the process receives a description of the event through the file descriptor returned by inotify_init.
The OS isn't “told” that a file has been changed: it knows, because it's doing the changing. It's the application that's told that a file has been changed instead of having to go looking.
The program inotifywait provides a simple way to use inotify from the command line.
| How does inotify work? |
1,306,327,420,000 |
I have written a shell script to monitor a directory using the inotifywait utility of inotifyt-tools. I want that script to run continuously in the background, but I also want to be able to stop it when desired.
To make it run continuously, I used while true; like this:
while true;
do #a set of commands that use the inotifywait utility
done
I have saved it in a file in /bin and made it executable. To make it run in the background, I used nohup <script-name> & and closed the terminal.
I don't know how do I stop this script. I have looked at the answers here and a very closely related question here.
UPDATE 1:
On the basis of the answer of @InfectedRoot below, I have been able to solve my problem using the following strategy.
First use
ps -aux | grep script_name
and use sudo kill -9 <pid> to kill the processes.
I then had to pgrep inotifywait and use sudo kill -9 <pid> again for the id returned.
This works, but I think this is a messy approach; I am looking for a better answer.
UPDATE 2:
The answer consists of killing two processes. This is important because running the script on the command line initiates two processes: the script itself, and the inotifywait process.
|
To improve, use killall, and also combine the commands:
ps -aux | grep script_name
killall script_name inotifywait
Or do everything in one line:
kill `ps -aux | grep script_name | grep -v grep | awk '{ print $2 }'` && killall inotifywait
| Killing a shell script running in background |
1,306,327,420,000 |
I have written a small 'daemon' in bash that will switch to the headphones if they are detected, and if not, switch to an external USB speaker with PulseAudio.
What I'm looking for is some way to get notification of changes on the file /proc/asound/card0/codec#0, just like inotifywait does on real files (considering files under /proc to be as "pseudo-files").
I find my code a bit insane, because it runs sleep 1 with awk for the whole day, that is 86400 times a day :)
while sleep 1; do
_1=${_2:-}
_2=$(awk '/Pin-ctls/{n++;if(n==4)print}' '/proc/asound/card0/codec#0')
[[ ${_1:-} = $_2 ]] ||
if [[ $_2 =~ OUT ]]; then
use_speakers
else
use_internal
fi
done
What I'm looking for is something like (this example doesn't work):
codec=/proc/asound/card0/codec#0
while inotifywait $codec; do
if [[ $(awk '/Pin-ctls/{n++;if(n==4)print}' $codec) =~ OUT ]]; then
use_speakers
else
use_internal
fi
done
This way the commands inside the loop would be run only when there are real changes on the $codec file.
|
What I'm looking for is some way to get notification of changes on the file [in proc]
You can't, because they aren't files. This is not quite a duplicate question, but the answer here explains why.
/proc is a kernel interface. There are no real files there, hence they can't change. Reading from the handles is a request and the data in the file when you read it is a reply to that.
The only way you could simulate something like this would be to read the file at intervals and compare the content to see if the reply from the kernel has changed -- looks like you've already done that.
If you stat procfs files, the atime and the mtime will be the same: for some files it is whenever the stat call was, for others a time from during system boot. In the first case, it will always seem to have changed, in the second, it will never seem to have changed.
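In other words, the only portable approach for /proc is a compare-on-interval loop like the asker's. A generic sketch (the path and interval in the usage line are placeholders):

```shell
#!/bin/bash
# Poll a file and print "changed" whenever its content differs from the
# previous read; procfs files never fire inotify, so polling is the
# only option for them.
poll_changes() {
    local path=$1 interval=$2 prev cur
    prev=$(cat "$path")
    while sleep "$interval"; do
        cur=$(cat "$path")
        if [ "$cur" != "$prev" ]; then
            echo "changed"
            prev=$cur
        fi
    done
}

# Hypothetical usage:
#   poll_changes '/proc/asound/card0/codec#0' 1
```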
| Notify of changes on a file under /proc |
1,306,327,420,000 |
Thanks sshfs magic, I can mount my home dir from a remote server with
sshfs user@server:/home/user ~/remote
Optimistically, I thought I'd set a local inotify-hook on ~/remote/logFile (in the sshfs mount) so a local program can react to remote log changes.
cd ~/remote
touch logFile # create remote file
inotifywait logFile & # set up local inotify-hook
ssh user@server -x touch /home/user/logFile # touch file from remote
Nothing happens. inotifywait is silent unless I touch the file locally. Writing to a named pipe fails similarly.
Why is this? How can I bridge this gap?
I could run inotifywait on the remote, hack up a file system change serialisation strategy and maintain a connection to the local, but then I'm basically reimplementing SSHFS. And it completely kills the abstraction.
|
The SSHFS filesystem is built on top of the SFTP protocol. SFTP provides only facilities to manipulate files in “classical” ways; the client makes a request to the server (list a directory, upload a file, etc.), and the server responds. There is no facility in this protocol for the server to spontaneously notify the client that something has happened.
This makes it impossible to provide a facility such as inotify inside SSHFS. It would be possible to extend SSHFS with proprietary extensions, or to supplement it with a full-fledged SSH connection; but I don't know of any such extension to SSHFS.
Named pipes can't be implemented on top of SSHFS for the same reason. NFS, the classical networked filesystem, doesn't have any facility to support cross-machines named pipes either. On a networked filesystem, a named pipe creates an independent point of communication on each of the machines where it is mounted (in addition to the server).
FAM (the inotify analogue in SGI IRIX, which has been ported to Linux) provides a daemon which allows notifications to be sent over the network. Linux has rather deprecated FAM since inotify came onto the scene, so I don't know if getting FAM to run would be easier than rolling your own application-specific notification system. You'd need to set up some port forwarding over SSH or establish a VPN in order to secure the network link for FAM and NFS.
If you elect to roll your own, assuming that you're ok with giving the clients shell access, it's fairly easy to run an inotify monitor on behalf of a client: have the client open an SSH connection, and run the inotifywait command on the server, parsing its output on the client. You can set up a master connection to make it faster to open many connections from the same client to the same server.
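A rough sketch of that roll-your-own route (the host, path, and handler below are placeholders, and it assumes inotify-tools is installed on the server): run inotifywait remotely and parse each output line locally.

```shell
#!/bin/bash
# Handle one line of inotifywait output ("<path> <EVENTS> <file>")
# as it arrives over the SSH connection.
on_remote_event() {
    local path events file
    read -r path events file <<< "$1"
    echo "remote $events on $path$file"
}

# Hypothetical usage:
#   ssh user@server inotifywait -m -e modify,close_write /home/user/logFile |
#       while read -r line; do on_remote_event "$line"; done
```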
| How do I use inotify or named pipes over SSHFS? |
1,306,327,420,000 |
I'm trying to monitor my /tmp folder for changes using inotifywatch:
sudo inotifywatch -v -r /tmp
After creating a couple of files (touch /tmp/test-1 /tmp/test-2), I'm terminating inotifywatch (by Ctrl-C), which shows me the following statistics:
Establishing watches...
Setting up watch(es) on /tmp
OK, /tmp is now being watched.
Total of 39 watches.
Finished establishing watches, now collecting statistics.
total attrib close_write open create filename
8 2 2 2 2 /tmp/
The output only prints the statistics, but not the files I expected (as in here or here). I tried different types of access (via cat, mktemp, etc.), but it's the same thing.
Did I miss something?
It's because I'm on VPS and something has been restricted?
OS: Debian 7.3 (inotify-tools) on VPS
|
This is due to the way you're using inotifywatch, and the way the tool itself works. When you run inotifywatch -r /tmp, you start watching /tmp and all the files that are already in it. When you create a file inside /tmp, the directory metadata is updated to contain the new file's inode number, which means that the change happens on /tmp, not /tmp/test-1. Additionally, since /tmp/test-1 wasn't there when inotifywatch started, there is no inotify watch placed on it. It means that any event which occurs on a file created after the watches have been placed will not be detected. You might understand it better if you see it yourself:
$ inotifywatch -rv /tmp &
Total of n watches.
$ cat /sys/kernel/debug/tracing/trace | grep inotifywatch | wc -l
n
If you have enabled the tracing mechanism on inotify_add_watch(2), the last command will give you the number of watches set up by inotifywatch. This number should the same as the one given by inotifywatch itself. Now, create a file inside /tmp and check again:
$ inotifywatch -rv /tmp &
Total of n watches.
$ touch /tmp/test1.txt
$ cat /sys/kernel/debug/tracing/trace | grep inotifywatch | wc -l
n
The number won't have increased, which means the new file isn't watched. Note that the behaviour is different if you create a directory instead :
$ inotifywatch -rv /tmp &
Total of n watches.
$ mkdir /tmp/test1
$ cat /sys/kernel/debug/tracing/trace | grep inotifywatch | wc -l
n + 1
This is due to the way the -r switch behaves:
-r, --recursive: [...] If new directories are created within watched directories they will automatically be watched.
Edit: I got a little confused between your two examples, but in the first case, the watches are correctly placed because the user calls inotifywatch on ~/* (which is expanded, see don_crissti's comment here). The home directory is also watched because ~/.* contains ~/.. Theoretically, it should also contain ~/.., which, combined with the -r switch, should result in watching the whole system.
However, it is possible to get the name of the file triggering a create event in a watched directory, yet I'm guessing inotifywatch does not retrieve this information (it is saved a little deeper than the directory name). inotify-tools provides another tool, called inotifywait, which can behave pretty much like inotify-watch, and provides more output options (including %f, which is what you're looking for here) :
inotifywait -m --format "%e %f" /tmp
From the man page:
--format <fmt> Output in a user-specified format, using printf-like syntax. [...] The following conversions are supported:
%f: when an event occurs within a directory, this will be replaced with the name of the file which caused the event to occur.
%e: replaced with the Event(s) which occurred, comma-separated.
Besides, the -m option (monitor) will keep inotifywait running after the first event, which will reproduce a behaviour quite similar to inotifywatch's.
| Why doesn't inotifywatch detect changes on added files? |
1,306,327,420,000 |
When I have mutt opened, I don't see new emails until I press a key, for example arrow down. Then new emails appear.
Is there a way for mutt do recognize that new email has arrived, and display the email automatically, without me having to press a key every few minutes?
I am using maildir format (locally stored emails). What would be the best way? Should mutt check every n seconds, or should it be notified by the OS, perhaps using inotify?
|
I believe I found a solution to this on the Mutt wiki.
How to make mutt check for new mail more often? What's the difference between $timeout
and $mail_check?
After every keyboard input mutt updates the status of all folders. To receive "New mail
in ..." notifications even without needing to press a key, set $timeout == time to wait
for idle mutt (no key pressed) before the status is updated again as if a key were
pressed. To avoid too frequent folder access (bad connections via NFS or IMAP), set
$mail_check == minium time between 2 scans for new mail (external changes to folders)
in case of high keyboard activity.
$mail_check < $timeout : scan on next update $timeout < $mail_check : update before scan
This means $mail_check < $timeout is more useful, because by the time mutt will update,
it will also scan for external changes to incorporate them in the update.
How to get informed about new mail?
When new mail arrives, an automatic (no key pressed) "New mail in ..." notification is
shown at the screen bottom. This happens only in the index menu. For manual checking,
you can use the buffy-list function which works in the pager, index and folder browser.
It prints a list of folders with new mail. However, it will display an up-to-date list
only when the index menu is focused. Additionally, you can invoke check-new in the
folder browser which updates the display ('N' flag for folders with new mail) and also
buffy-lists folder list.
I find this confusing and badly explained, but I tried it by adding set timeout=30 to my ~/.muttrc and it seems to work! The inbox view updates not long after my IMAP daemon reports having downloaded new mail. I hope this works for you too!
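For reference, a ~/.muttrc fragment along those lines (the values are illustrative; $mail_check is kept below $timeout as the wiki advises):

```
# Update folder status every 15 seconds even with no key pressed...
set timeout=15
# ...and allow a rescan for new mail as often as every 5 seconds.
set mail_check=5
```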
| mutt: automatically show new mesages |
1,306,327,420,000 |
I want to be notified when a specific filename is created. I'm looking at inotify. The IN_CREATE flag is available for monitoring a directory for any changes within it, but I'd prefer not to monitor the entire directory since there may be a good deal of activity in that directory besides the file I'm interested in. Can this be done?
|
You cannot have the kernel only inform you of a change to a certain path. The reasons are a bit subtle:
In Linux, a file object exists independently of any name(s) it may have. Files' names are actually attributes of their containing directory, and a single file may be called by multiple names (see, hardlinking).
The kernel has to have something to attach inotify objects to; it cannot attach an object to a pathname since a pathname isn't a real filesystem object; you have to attach to the parent directory or the file that path describes. But you can't attach to the file, because you're watching to see if a file with a given name is created, not changes to a given file.
Theoretically, the kernel could implement an API that allows you to select events for a given pathname when adding a watch to a directory, much in the same way it allows you to select types of events. This would bloat the API, and the kernel would in the end be processing the same data and doing the same string comparison you would be doing in userspace.
Is there a noticeable performance hit to placing a watch on a very active directory? I'm not sure how active you mean; tens of files a second, hundreds, millions?
In any case, I would avoid access: it's always going to be racey. A file could be created and removed between calls to access, and calling access in a very tight loop is going to be slow, and is the kind of problem inotify was designed to solve.
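Since that string comparison has to happen in userspace anyway, it amounts to a filter on inotifywait's output. A sketch (the directory and target filename are hypothetical; the filter is shown as a plain shell function):

```shell
#!/bin/bash
# React only when the event's filename matches the one we care about.
TARGET="result.dat"   # hypothetical file we are waiting for

handle_line() {
    # inotifywait's default output is "<watched_dir> <EVENTS> <filename>"
    local dir events file
    read -r dir events file <<< "$1"
    if [ "$file" = "$TARGET" ]; then
        echo "got $events for $dir$file"
    fi
}

# Hypothetical usage:
#   inotifywait -m -e create /some/dir |
#       while read -r line; do handle_line "$line"; done
```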
| Can inotify be used to watch for a specific file to be created without monitoring the entire directory? |
1,306,327,420,000 |
I'm trying to find a way to immediately move a file to another folder as soon as it appears in my dropbox on CentOS.
I have scoured the internet for some clues but I can't get any further than the fact that I need to use inotify to invoke a script which will process the file as it appears.
My BASH knowledge is very limited and I doubt I can write this in PHP.
In other words, how can I move a file to another folder as soon as it appears using inotify?
|
This is a simple approach:
#!/usr/bin/env bash
dir=/home/ortix/Dropbox/new/
target=/home/ortix/movedfiles/
inotifywait -m "$dir" --format '%w%f' -e create |
while read file; do
mv "$file" "$target"
done
With more details about the types of files you wanted to move, you could add some checking, logging etc...
| Use inotifywait to move file when it loads in dropbox folder |
1,306,327,420,000 |
I have written a bash script to monitor a particular directory /root/secondfolder/:
#!/bin/sh
while inotifywait -mr -e close_write "/root/secondfolder/"
do
echo "close_write"
done
When I create a file called fourth.txt in /root/secondfolder/ and write stuff to it, save and close it, it outputs the following:
/root/secondfolder/ CLOSE_WRITE,CLOSE fourth.txt
However, it does not echo "close_write". Why is that?
|
inotifywait -m is "monitor" mode: it never exits. The shell runs it and waits for the exit code to know whether to run the body of the loop, but that never comes.
If you remove -m, it will work:
while inotifywait -r -e close_write "/root/secondfolder/"
do
echo "close_write"
done
produces
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
/root/secondfolder/ CLOSE_WRITE,CLOSE bar
close_write
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
...
By default, inotifywait will "exit after the first event occurs", which is what you want in a loop condition.
Instead, you might prefer to read the standard output of inotifywait:
#!/bin/bash
while read line
do
echo "close_write: $line"
done < <(inotifywait -mr -e close_write "/tmp/test/")
This (bash) script will read each output line of the inotifywait command into the $line variable inside the loop, using process substitution. It avoids setting up the recursive watches every time around the loop, which might be expensive. If you can't use bash, you can pipe the command into the loop instead: inotifywait ... | while read line .... inotifywait produces one line of output for each event in this mode, so the loop runs once for each.
| Using inotify to monitor a directory but not working 100% |
1,306,327,420,000 |
Imagine two processes, a reader and a writer, communicating via a regular file on an ext3 fs. Reader has an inotify IN_MODIFY watch on the file. Writer writes 1000 bytes to the file, in a single write() call. Reader gets the inotify event, and calls fstat on the file. What does Reader see?
Is there any guarantee that Reader will get back at least 1000 for st_size on the file? From my experiments, it seems not.
Is there any guarantee that Reader can actually read() 1000 bytes?
This is happening on a seriously I/O bound box. For example, sar shows an await times of about 1 second. In my case the Reader is actually waiting 10 seconds AFTER getting the inotify event before calling stat, and getting too-small results.
What I had hoped was that the inotify event would not be delivered until the file was ready. What I suspect is actually happening is that the inotify event fires DURING the write() call in the Writer, and the data is actually available to other processes on the system whenever it happens to be ready. In this case, 10s is not enough time.
I guess I am just looking for confirmation that the kernel actually implements inotify the way I am guessing. Also, if there are any options, possibly, to alter this behavior?
Finally, what is the point of inotify, given this behavior? You're reduced to polling the file/directory anyway, after you get the event, until the data is actually available. Might as well be doing that all along, and forget about inotify.
*** EDIT ****
Okay, as often happens, the behavior I am seeing actually makes sense, now that I understand what I am really doing. ^_^
I am actually responding to an IN_CREATE event on the directory the file lives in. So I am actually stat()'ing the file in response to the creation of the file, not necessarily the IN_MODIFY event, which may be arriving later.
I am going to change my code so that, once I get the IN_CREATE event, I will subscribe to IN_MODIFY on the file itself, and I won't actually attempt to read the file until I get the IN_MODIFY event. I realize that there is a small window there in which I may miss a write to the file, but this is acceptable for my application, because in the worst case, the file will be closed after a maximum number of seconds.
|
From what I see in the kernel source, inotify only fires after a write is completed (i.e. your guess is wrong). After the notification is triggered, only two more things happen in sys_write, the function that implements the write syscall: setting some scheduler parameters, and updating the position on the file descriptor. This code has been similar as far back as 2.6.14. By the time the notification fires, the file already has its new size.
Check for things that may go wrong:
Maybe the reader is getting old notifications, from the previous write.
If the reader calls stat and then calls read or vice versa, something might happen in between. If you keep appending to the file, calling stat first guarantees that you'll be able to read that far, but it's possible that more data has been written by the time the reader calls read, even if it hasn't yet received the inotify notification.
Just because the writer calls write doesn't mean that the kernel will write the requested number of characters. There are very few circumstances where atomic writes are guaranteed up to any size. Each write call is guaranteed atomic, however: at some point the data isn't written yet, and then suddenly n bytes have been written, where n is the return value of the write call. If you observe a partially-written file, it means that write returned less than its size argument.
Useful tools to investigate what's going on include:
strace -tt
the auditd subsystem
| Does inotify fire a notification when a write is started or when it is completed? |
1,306,327,420,000 |
I am using Fedora 17 and over the last few days I am having an issue with my system. Whenever I try to start httpd it shows me:
Error: No space left on device
When I execute systemctl status httpd.service, I receive the following output:
httpd.service - The Apache HTTP Server (prefork MPM)
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
Active: inactive (dead) since Tue, 19 Feb 2013 11:18:57 +0530; 2s ago
Process: 4563 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/httpd.service
I tried to Google this error and all links point to clearing the semaphores. I don't think this is the issue as I tried to clear the semaphores but that didn't work.
Edit 1
Here is the output of df -h:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 50G 16G 32G 34% /
devtmpfs 910M 0 910M 0% /dev
tmpfs 920M 136K 920M 1% /dev/shm
tmpfs 920M 1.2M 919M 1% /run
/dev/mapper/vg-lv_root 50G 16G 32G 34% /
tmpfs 920M 0 920M 0% /sys/fs/cgroup
tmpfs 920M 0 920M 0% /media
/dev/sda1 497M 59M 424M 13% /boot
/dev/mapper/vg-lv_home 412G 6.3G 385G 2% /home
Here is the detail of the httpd error log:
[root@localhost ~]# tail -f /var/log/httpd/error_log
[Tue Feb 19 11:45:53 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Feb 19 11:45:53 2013] [notice] Digest: generating secret for digest authentication ...
[Tue Feb 19 11:45:53 2013] [notice] Digest: done
[Tue Feb 19 11:45:54 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations
[Tue Feb 19 11:47:23 2013] [notice] caught SIGTERM, shutting down
[Tue Feb 19 11:48:00 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Tue Feb 19 11:48:00 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Feb 19 11:48:00 2013] [notice] Digest: generating secret for digest authentication ...
[Tue Feb 19 11:48:00 2013] [notice] Digest: done
[Tue Feb 19 11:48:00 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling
Edit 2
Here is the output of df -i:
[root@localhost ~]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
rootfs 3276800 337174 2939626 11% /
devtmpfs 232864 406 232458 1% /dev
tmpfs 235306 3 235303 1% /dev/shm
tmpfs 235306 438 234868 1% /run
/dev/mapper/vg-lv_root 3276800 337174 2939626 11% /
tmpfs 235306 12 235294 1% /sys/fs/cgroup
tmpfs 235306 1 235305 1% /media
/dev/sda1 128016 339 127677 1% /boot
/dev/mapper/vg-lv_home 26984448 216 26984232 1% /home
Thanks
|
Here we see evidence of a problem:
tail: inotify resources exhausted
By default, Linux only allocates 8192 watches for inotify, which is ridiculously low. And when it runs out, the error is also No space left on device, which may be confusing if you aren't explicitly looking for this issue.
Raise this value with the appropriate sysctl:
fs.inotify.max_user_watches = 262144
(Add this to /etc/sysctl.conf and then run sysctl -p.)
| Httpd : no space left on device |
1,306,327,420,000 |
I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files?
I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.
|
You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder. For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak:
#!/bin/sh
ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak
mkdir -p $CLONE_DIR
inotifywait -mr --format='%w%f' -e create $ORIG_DIR | while read file; do
echo $file
DIR=`dirname "$file"`
mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done
Since they are hard links, they should be updated when the program modifies them but not deleted when the program removes them. You can delete the hard linked clones normally.
Note that this approach is nowhere near atomic so you rely on this script to create the hard links before the program can delete the newly created file.
If you want to clone all changes to /tmp, you can use a more distributed version of the script:
#!/bin/sh
TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p $CLONE_DIR
wait_dir() {
inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while read file; do
echo $file
DIR=`dirname "$file"`
mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
done
}
trap "trap - TERM && kill -- -$$" INT TERM EXIT
inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while read file; do
if ! [ -d "$file" ]; then
continue
fi
echo "setting up wait for $file"
wait_dir "$file" &
done
| Watch /tmp for file creation and prevent deletion of files? [duplicate] |
1,306,327,420,000 |
After reading some articles on the internet I am a little lost in understanding the difference between INotify max_user_instances and max_user_watches.
From official Linux man:
/proc/sys/fs/inotify/max_user_instances
This specifies an upper limit on the number of INotify instances that can be created per real user ID.
and
/proc/sys/fs/inotify/max_user_watches
This specifies an upper limit on the number of watches that can be created per real user ID.
Does it mean that max_user_instances limits the number of inotify instances, each of which can monitor multiple files, and that the number of watches is limited by max_user_watches?
If the former is true, how does it work in practice? Does each process that has to monitor some files create its own inotify instance (I think not, really, because the limit is related to the user ID)?
Currently, after deployment on Amazon Ec2 instance, I have an error like this:
System.IO.IOException: The configured user limit (128) on the number of INotify instances has been reached.
If I understand correctly, there are too many instances created, which are monitoring for filesystem changes? What can be the cause of that?
|
An "instance" is single file descriptor, returned by inotify_init(). A single inotify file descriptor can be used by one process or shared by multiple processes, so they are rationed per-user instead of per-process.
A "watch" is a single file, observed by inotify instance. Each watch is unique, so they are also rationed per-user.
If an application creates too many instances, it either starts too many processes (and does not share inotify file descriptors between processes), or it is just plain buggy — for example, it may leak open inotify descriptors (open and then forget about them without closing).
There is also a possibility, that application is just poorly written, and uses multiple descriptors where one could suffice (you almost never need more than 1 inotify descriptor).
Open file descriptors can by listed via procfs:
ls -al /proc/<application process number>/fd/
A bit of extra information about a descriptor can be seen in /proc/<PID>/fdinfo/<descriptor number>.
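For example, on kernels that list one "inotify wd:" line per watch in fdinfo (Linux 3.8 and later), you can count the watches behind each of a process's inotify descriptors; a sketch (the PID is a placeholder):

```shell
#!/bin/bash
pid=$$   # placeholder: substitute the application's PID
for fd in /proc/"$pid"/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "anon_inode:inotify" ]; then
        # one "inotify wd:..." fdinfo line per watch
        n=$(grep -c '^inotify' "/proc/$pid/fdinfo/${fd##*/}" || true)
        echo "fd ${fd##*/}: ${n:-0} watches"
    fi
done
```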
| What is exactly difference between INotify max_user_instances and max_user_watches? |
1,306,327,420,000 |
I have a service which is sporadically publishing content in a certain server-side directory via rsync. When this happens I would like to trigger the execution of a server-side procedure.
Thanks to the inotifywait command it is fairly easy to monitor a file or directory for changes. I would like however to be notified only once for every burst of modifications, since the post-upload procedure is heavy, and don't want to execute it for each modified file.
It should not be a huge effort to come up with some hack based on the event timestamp… I believe however this is a quite common problem. I was not able to find anything useful though.
Is there some clever command which can figure out a burst? I was thinking of something I can use in this way:
inotifywait -m "$dir" $opts | detect_burst --execute "$post_upload"
|
Drawing on your own answer, if you want to use the shell read you could take advantage of the -t timeout option, which sets the return code to >128 if there is a timeout. E.g. your burst script can become, loosely:
interval=$1; shift
while :
do if read -t $interval
then echo "$REPLY" # not timeout
else [ $? -lt 128 ] && exit # eof
"$@"
read || exit # blocking read infinite timeout
echo "$REPLY"
fi
done
You may want to start with an initial blocking read to avoid detecting an end of burst at the start.
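The behaviour this relies on — read -t distinguishing a read line, EOF, and a timeout purely by exit status — can be checked in isolation (bash):

```shell
# bash: exit status of `read -t`:
#   0             a line was read before the timeout
#   nonzero <128  EOF (the input was closed)
#   >128          the timeout expired with the input still open
echo data | { read -r -t 1 x; echo "line:    $?"; }
:         | { read -r -t 1 x; echo "eof:     $?"; }
sleep 2   | { read -r -t 1 x; echo "timeout: $?"; }
```

The last case feeds read from a pipe that stays open but silent for two seconds, which is exactly the "gap inside the event stream" the burst script treats as end-of-burst.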
| Monitor a burst of events with inotifywait |
1,306,327,420,000 |
I'm looking for a reliable way to detect renaming of files and get both old and new file names. This is what I have so far:
COUNTER=0;
inotifywait -m --format '%f' -e moved_from,moved_to ./ | while read FILE
do
if [ $COUNTER -eq 0 ]; then
FROM=$FILE;
COUNTER=1;
else
TO=$FILE;
COUNTER=0;
echo "sed -i 's/\/$FROM)/\/$TO)/g' /home/a/b/c/post/*.md"
sed -i 's/\/'$FROM')/\/'$TO')/g' /home/a/b/c/post/*.md
fi
done
It works, but it assumes you will never move files into or out of the watched folder. It also assumes that events come in pairs, first moved_from, then moved_to. I don't know if this is always true (works so far).
I read inotify uses a cookie to link events. Is the cookie accessible somehow?
Lacking the cookie, I thought about using timestamps to link events together. Any tips on getting FROM and TO in a more reliable way?
Full script gist.
|
I think your approach is correct, and tracking the cookie is a robust way of doing this.
However, the only place in the source of inotify-tools (3.14) that cookie is referenced is in the header defining the struct to match the kernel API.
If you like living on the edge, this patch (issue #72) applies cleanly to 3.14 and adds a %c format specifier for the event cookie in hex:
--- libinotifytools/src/inotifytools.c.orig 2014-10-23 18:05:24.000000000 +0100
+++ libinotifytools/src/inotifytools.c 2014-10-23 18:15:47.000000000 +0100
@@ -1881,6 +1881,12 @@
continue;
}
+ if ( ch1 == 'c' ) {
+ ind += snprintf( &out[ind], size-ind, "%x", event->cookie);
+ ++i;
+ continue;
+ }
+
if ( ch1 == 'e' ) {
eventstr = inotifytools_event_to_str( event->mask );
strncpy( &out[ind], eventstr, size - ind );
This change modifies libinotifytools.so, not the inotifywait binary. To test before installation:
LD_PRELOAD=./libinotifytools/src/.libs/libinotifytools.so.0.4.1 \
inotifywait --format="%c %e %f" -m -e move /tmp/test
Setting up watches.
Watches established.
40ff8 MOVED_FROM b
40ff8 MOVED_TO a
Assuming that MOVED_FROM always occurs before MOVED_TO (it does, see fsnotify_move(), and it's an ordered queue, though independent moves might get interleaved) in your script you cache the details when you see a MOVED_FROM line (perhaps in an associative array indexed by ID), and run your processing when you see a MOVED_TO with the matching half of the information.
declare -A cache
inotifywait --format="%c %e %f" -m -e move /tmp/test |
while read id event file; do
if [ "$event" = "MOVED_FROM" ]; then
cache[$id]=$file
fi
if [ "$event" = "MOVED_TO" ]; then
if [ "${cache[$id]}" ]; then
echo "processing ..."
unset cache[$id]
else
echo "mismatch for $id"
fi
fi
done
(With three threads running to shuffle a pair of files each 10,000 times, I never saw a single out of order event, or event interleaving. It may depend on filesystem and other conditions of course.)
| inotifywait - get old and new file name when renaming |
1,306,327,420,000 |
What I want to do is, to monitor a directory (not recursive, just one) for new files created and append those files to one single big file as they are being written.
The number of files that are being written is huge, could reach as much as 50,000.
By using inotifywait, I am monitoring the directory like:
inotifywait -m -e create ~/folder | awk '($2=="CREATE"){print $3}' > ~/output.file
So I am storing names of new files created in ~/output.file and then using a for loop
for FILE in `cat ~/output.file`
do
cat $FILE >> ~/test.out
done
It works fine, if the rate at which a file is being written (created) in ~/folder is like 1 file per second.
But the requirement is large, and the rate at which the files are being created is very high, like 500 files per minute (or even more).
I checked the number of files in ~/folder after the process completed, but it does not match the inotifywait output; the difference varies, around 10–15 files.
Also, the loop
for FILE in `cat ~/output.file`
do
done
doesn't process all the files in ~/output.file as they are being written.
Can anyone please suggest me an elegant solution to this problem?
|
No need to post-process the output... use inotifywait options --format and --outfile
If I run:
inotifywait -m --format '%f' -e create /home/don/folder/ --outfile /home/don/output.file
then open another tab, cd to ~/folder and run:
time seq -w 00001 50000 | parallel touch {}
real 1m44.841s
user 3m22.042s
sys 1m34.001s
(so I get much more than 500 files per minute) everything works fine and output.file contains all the 50000 file names that I just created.
Once the process has finished writing the files to disk you can append their content to your test.out (assuming you are always in ~/folder):
xargs < /home/don/output.file cat >> test.out
Or use read if you want to process files as they are created. So, while in ~/folder you could run:
inotifywait -m --format '%f' -e create ~/folder | while read file; do cat -- "$file" >> ~/test.out; done
| Inotifywait for large number of files in a directory |
1,306,327,420,000 |
I want to trigger an action, when in one of my specified directories, a new file is created. I want to use inotifywait for this purpose. But the problem is that I don't know how to use inotifywait to watch several directories simultaneously. Watching a single directory even recursively is not a problem, but several? Is it possible, or must I run several processes with "inotifywait" in parallel?
|
You can just list the directories you want to observe:
$ inotifywait testdir1 testdir2/ -m
Inside an application, after an inotify instance is created using the inotify_init() function, inotify_add_watch() can be called several times for the selected paths. You can find the system-wide per-user limit on watches in /proc/sys/fs/inotify/max_user_watches (8192 by default).
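The related per-user instance limit lives next to it; both are plain sysctl values. Shown read-only here — raising them would be e.g. sysctl fs.inotify.max_user_watches=524288 as root (the value 524288 is only an illustrative choice):

```shell
# Current per-user inotify limits; kernel defaults are often 8192
# watches and 128 instances, but distributions frequently raise them.
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```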
| "inotifywait" to watch several directories simultaneously |
1,306,327,420,000 |
The reason why I am asking is because I'm using iwatch (not to confuse with a gadget device) to watch for filesystem events (in my case - file creation/renaming).
What I cannot explain is this log:
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_CLOSE_WRITE
/path/to/file.ext 0 IN_CREATE
/path/to/file.ext.filepart 0 IN_DELETE
/path/to/file.ext 0 IN_ATTRIB
To get it I've copied a file.ext from a remote machine using WinSCP with the temporary-file-creation option turned on (so that either there is no file.ext at all, if the transfer was interrupted, or the complete file is at the destination).
And what confuses me is that the /path/to/file.ext is only created IN_CREATE and its attributes modified IN_ATTRIB (not sure which ones though, but I think that's where all the magic happens).
The strangest thing here is that:
The file.ext is not a result of moving file.ext.filepart - there would be a different move event
The file.ext is not a result of copying file.ext.filepart - there would be a bunch of write events following by IN_CLOSE_WRITE
So my question is - what is happening here under the hood: how the file.ext was created with the contents without an explicit rename or data copy?
|
$ inotifywait -m /tmp
Setting up watches.
Watches established.
/tmp/ CREATE file.ext.filepart
/tmp/ OPEN file.ext.filepart
/tmp/ MODIFY file.ext.filepart
/tmp/ CLOSE_WRITE,CLOSE file.ext.filepart
/tmp/ CREATE file.ext
/tmp/ DELETE file.ext.filepart
Transcript from running
$ echo hello >/tmp/file.ext.filepart
$ ln /tmp/file.ext.filepart /tmp/file.ext
$ rm /tmp/file.ext.filepart
Moving a file generates a move event, but creating a hard link generates the same create event as creating a new, empty file (as do mkfifo and other ways to create files).
Why does the SCP or SFTP server create a hard link and then remove the temporary file, rather than moving the temporary file into place? In the source code of OpenSSH (portable 6.0), in sftp-server.c, in the function process_rename, I see the following code (reformatted and simplified to illustrate the part I want to show):
if (S_ISREG(sb.st_mode)) {
    /* Race-free rename of regular files */
    if (link(oldpath, newpath) == -1) {
        if (errno == EOPNOTSUPP || errno == ENOSYS) {
            /* fs doesn't support links, so fall back to
               stat+rename.  This is racy. */
            if (stat(newpath, &st) == -1)
                rename(oldpath, newpath);
        }
    } else {
        unlink(oldpath);
    }
}
That is: try to create a hard link from the temporary file name to the desired file name, then remove the temporary file. If creating the hard link doesn't work because the OS or the filesystem doesn't support that, fall back to a different method: test if the desired file exists, and if doesn't, rename the temporary file. So the point is to rename the temporary file to its final location without risking overwriting a file that may have been created while the copy was in progress. Renaming wouldn't do because rename overwrites the target file if it exists.
| Is it possible to create a non-empty file without write_close and rename event? |
1,306,327,420,000 |
I am using inotify to watch a directory and sync files between servers using rsync. Syncing works perfectly, and memory usage is mostly not an issue. However, recently a large number of files were added (350k) and this has impacted performance, specifically CPU. Now when rsync runs, CPU usage spikes to 90%/100% and rsync takes a long time to complete; there are 650k files being watched/synced.
Is there any way to speed up rsync and only rsync the directory that has been changed? Or alternatively to set up multiple inotifywaits on separate directories. Script being used is below.
UPDATE: I have added the --update flag and usage seems mostly unchanged
#! /bin/bash
EVENTS="CREATE,DELETE,MODIFY,MOVED_FROM,MOVED_TO"
inotifywait -e "$EVENTS" -m -r --format '%:e %f' /var/www/ --exclude '/var/www/.*cache.*' | (
WAITING="";
while true; do
LINE="";
read -t 1 LINE;
if test -z "$LINE"; then
if test ! -z "$WAITING"; then
echo "CHANGE";
WAITING="";
rsync --update -alvzr --exclude '*cache*' --exclude '*.git*' /var/www/* root@secondwebserver:/var/www/
fi;
else
WAITING=1;
fi;
done)
|
If the server has a slow processor, avoid checksums and compression with rsync.
I would remove the "-z" option from the rsync command.
rsync --update -alvr --exclude '*cache*' --exclude '*.git*' /var/www/* root@secondwebserver:/var/www/
Note that this will not stop rsync from comparing the 650k files.
You could rsync subdirectories of /var/www one by one to reduce the number of files checked at one time.
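A sketch of that idea, driven by the event path reported by inotifywait: only the top-level subdirectory in which a change happened gets synced. This assumes the layout from the question and is untested scaffolding, not a drop-in script — it does one rsync per event, so for real use combine it with the question's read-timeout debouncing:

```shell
#!/bin/sh
# Sketch: sync only the top-level directory under /var/www in which a
# change happened, instead of the whole tree.
sync_changed() {
    inotifywait -m -r -e create,delete,modify,moved_from,moved_to \
        --format '%w' /var/www/ --exclude '/var/www/.*cache.*' |
    while read -r path; do
        # reduce /var/www/some/deep/dir/ to /var/www/some
        top=$(printf '%s\n' "$path" | cut -d/ -f1-4)
        rsync -al --exclude '*cache*' --exclude '*.git*' \
            "$top/" "root@secondwebserver:$top/"
    done
}
```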
| inotify and rsync on large number of files |
1,306,327,420,000 |
Is it possible to hook a script execution on each process creation?
Essentially the equivalent of inotifywait to monitor disk activity but applied to the process table.
The goal is to perform an action when a process spawns — for example logging it, running cgset on it, and so on. I can see the challenge that it would apply recursively to the new processes. But rather than polling the process table as fast as possible to catch changes, which would be vulnerable to race conditions, is there a better way?
Thanks
|
First, process creation is rarely a useful event to log and it's irrelevant for security (except for resource limiting). I think you mean to hook the execution of programs, which is done by execve, not fork.
Second, the use cases you cite are usually best served by using existing mechanism made for that purpose, rather than rolling your own.
For logging, BSD process accounting provides a small amount of information, and is available on most Unix variants; on Linux, install the GNU accounting utilities (install the package from your distribution). For more sophisticated logging on Linux, you can use the audit subsystem (the auditctl man page has examples; as I explained above the system call you'll want to log is execve).
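For example, an audit rule logging every execve on a 64-bit system might look like this (the key name exec_log is an arbitrary label you would later search for with ausearch -k exec_log):

```
-a always,exit -F arch=b64 -S execve -k exec_log
```

Placed in a file under /etc/audit/rules.d/ (or loaded live with auditctl), this records every program execution with its arguments and the invoking user in the audit log.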
If you want to apply security restrictions to certain programs, use a security framework such as SELinux or AppArmor.
If you want to run a specific program in a container, or with certain settings, move the executable and put a wrapper script in its place that sets the settings you want and calls the original executable.
If you want to modify the way one specific program calls other programs, without affecting how other programs behave, there are two cases: either the program is potentially hostile or not.
If the program is potentially hostile, run it in a dedicated virtual machine.
If the program is cooperative, the most obvious angle of attack is to run it with a different PATH. If the program uses absolute paths that aren't easy to configure, on a non-antique Linux system, you can run it in a separate mount namespace (see also kernel: Namespaces support). If you really need fine control, you can load a library that overrides some library calls by invoking the program with LD_PRELOAD=my_override_library.so theprogram. See Redirect a file descriptor before execution for an example. Note that in addition to execve, you'll need to override all the C library functions that call execve internally, because LD_PRELOAD doesn't affect internal C library calls. You can get more precise control by running the program under ptrace; this allows you to override a system call even if it's made by a C library function, but it's harder to set up (I don't know of any easy way to do it).
| Hook action on process creation |
1,306,327,420,000 |
Inside a shell script (test.sh) I have inotifywait recursively monitoring some directory — "somedir":
#!/bin/sh
inotifywait -r -m -e close_write "somedir" | while read f; do echo "$f hi"; done
When I execute this in terminal I will get following message:
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
What I need is to touch all files under "somedir" AFTER the watches were established. For that I use:
find "somedir" -type f -exec touch {} +
The reason is that when restarting inotifywait after a crash, files that arrived in the meantime would never be picked up. So the problem and question is: how or when should I execute the find + touch?
So far I have tried to sleep a few seconds after calling test.sh, but that won't work in the long run as the number of subdirs in "somedir" grows.
I have tried to check if the process is running and sleep until it appears, but it seems that process appears before all the watches are established.
I tried to change test.sh:
#!/bin/sh
inotifywait -r -m -e close_write "somedir" && find "somedir" -type f -exec touch {} + |
while read f; do echo "$f hi"; done
But no files are touched at all. So I would really need a help...
Additional info is that test.sh is running in background: nohup test.sh &
Any ideas? Thanks
FYI: Based on advice from @xae, I use it like this:
nohup test.sh > /my.log 2>&1 &
while :; do grep -q "Watches established" /my.log && break; sleep 0.1; done
find "somedir" -type f -exec touch {} \+
|
When inotifywait outputs the string "Watches established." it is safe to make changes to the watched inodes, so you should wait for that string to appear on standard error before touching the files.
As an example, this code should do that:
inotifywait -r -m -e close_write "somedir" \
2> >(while :;do read f; [ "$f" == "Watches established." ] && break;done;\
find "somedir" -type f -exec touch {} ";")\
| while read f; do echo "$f hi";done
| Execute command after inotifywait established watches |
1,306,327,420,000 |
I would like to get a list of pid's which hold shared lock on /tmp/file. Is this possible using simple command line tools?
|
From man lsof:
FD is the File Descriptor number of the file. It is followed by a character describing the mode under which the file is open (u means read and write access), and the mode character is followed by one of these lock characters, describing the type of lock applied to the file:
R for a read lock on the entire file;
W for a write lock on the entire file;
space if there is no lock.
So the R in 3uR means that a read/shared lock is held by PID 613.
#lsof /tmp/file
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
perl 613 turkish 3uR REG 8,2 0 1306357 /tmp/file
Reading directly from /proc/locks is faster than lsof:
perl -F'[:\s]+' -wlanE'
BEGIN { $inode = (stat(pop))[1]; @ARGV = "/proc/locks" }
say "pid:$F[4] [$_]" if $F[7] == $inode
' /tmp/file
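The same lookup without perl — stat(1) for the inode, then one awk pass over /proc/locks (GNU stat assumed for -c %i; the function name is just for illustration):

```shell
#!/bin/sh
# Print PIDs holding a lock on the given file, straight from /proc/locks.
# /proc/locks lines look like:
#   1: FLOCK  ADVISORY  WRITE 613 08:02:1306357 0 EOF
# Split on ':' and runs of spaces: field 5 is the PID, field 8 the inode.
# (Lines for blocked waiters carry an extra "->" field and are not
# handled by this sketch.)
locked_pids() {
    ino=$(stat -c %i "$1") || return
    awk -v ino="$ino" -F'[: ]+' '$8 == ino { print "pid:" $5 }' /proc/locks
}
```

Usage would be e.g. `locked_pids /tmp/file`.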
| Monitoring file locks, locked using flock |
1,306,327,420,000 |
I am trying to use inotifywait to watch a folder (/shares/Photos) and, when it detects a jpg added to the folder, resize it into a subdirectory ($small_dir). Under the Photos directory there will be many subfolders for the jpgs.
The tree looks like this
shares
-Photos
-Folder 1
-Folder 2
.
.
.
Basically whenever someone copies pictures into folder 1 I need to create a new subfolder, and then resize the images and put the smaller versions into that folder.
So the tree would become:
shares
-Photos
-Folder 1
-Resized
-Folder 2
.
.
.
My code so far:
inotifywait -mr --timefmt '%m/%d/%y %H:%M' --format '%T %w %f' -e close_write /shares/Photos --includei "\.jpg|\.jpeg" |
while read -r date time dir file; do
changed_abs=${dir}${file}
small_dir=${dir}${target}/
printf "\nFile $changed_abs was changed or created\n dir=$dir \n file=$file \n small_dir=$small_dir \n"
# Check if the $small_directory exists, if not create it.
if [ -d "$small_dir" -a ! -h "$small_dir" ]
then
echo "$small_dir found, nothing to do."
else
echo "Creating $small_dir"
mkdir $small_dir
chmod 777 $small_dir
fi
# Check to see if the file is in $small_dir, if it is, do nothing.
if [ "$dir" = "$small_dir" ]; then
printf "\nFile is in the $small_dir folder, nothing to do\n"
else
printf "\nResizing file into the $small_dir folder\n"
# Code to resize the image goes here.
fi
done
It mostly works, but what I am banging my head against the wall about is that if I create a new subfolder under Photos while the script is running, inotifywait simply ignores it and does nothing.
I tried replacing close_write with create but it made no difference, and I am really not sure where to go from here.
Any advice/help would be greatly appreciated.
|
OP is using:
inotifywait -mr --timefmt '%m/%d/%y %H:%M' --format '%T %w %f' -e close_write /shares/Photos --includei "\.jpg|\.jpeg" |
The documentation about --includei tells (bold emphasis mine):
--includei <pattern>
Process events only for the subset of files whose filenames match the specified POSIX regular expression, case insensitive.
That's not: "display events" but "process events". Indeed that means that only events about directories with a name including .jpg or .jpeg will be processed.
A directory-creation event occurring but not matching the filter won't be processed, so inotifywait will not call inotify_add_watch(2) for the new directory. As a result, nothing that later happens inside that subdirectory will ever generate a watched event.
I couldn't find with --includei or other similar options a way to express "process events only for these regex, or also with any directory".
UPDATE: suggest a workaround
So the way to get it working appears to be filtering outside of the command. GNU grep will buffer its output when it is not writing to a tty, so add --line-buffered.
This will be affected by user input (like spaces in filenames). To mitigate this, a / separator (which is invalid inside a filename) is needed between the directory and filename. As the directory part conveniently includes a trailing /, just removing the space in the format string is enough (along with further variable processing and reuse, like changed_abs). At the same time, I'm correcting the intent to filter filenames ending with the strings jpg or jpeg, not merely including these strings, and probably improving the initial handling of directories with spaces inside (the assignments of changed_abs and small_dir, though there are more later to fix). OP should really protect all relevant variables with quotes in the script ({ } doesn't replace quotes).
Replace:
inotifywait -mr --timefmt '%m/%d/%y %H:%M' --format '%T %w %f' -e close_write /shares/Photos --includei "\.jpg|\.jpeg" |
while read -r date time dir file; do
changed_abs=${dir}${file}
small_dir=${dir}${target}/
with:
inotifywait -mr --timefmt '%m/%d/%y %H:%M' --format '%T %w%f' -e close_write /shares/Photos |
grep --line-buffered -Ei '/[^/]*\.(jpg|jpeg)$' |
while read -r date time changed_abs; do
[ -d "$changed_abs" ] && continue # a directory looking like a picture filename was written to: skip this event
dir="${changed_abs%/*}/"
file="${changed_abs##*/}"
small_dir="${dir}${target}/"
Not completely tested, but that's the idea. I'm not even sure the directory test is needed (it appears there is never a close_write event on it), but it won't hurt.
Notes
if this isn't clear: for each directory-creation event detected, inotifywait must itself perform an action — call inotify_add_watch(2) on the new directory — and it must do so as soon as possible, because it might lose events occurring inside that directory in the meantime (a race condition). It's even documented in the BUGS section:
There are race conditions in the recursive directory watching code [...] probably not fixable.
newer versions of inotifywait, when running as root (or privileged enough) on kernel >= 5.9 at least, are supposed to be able to use the fanotify(7) facility, but I couldn't manage to get my version of inotifywait to use this despite being supposed to be compiled with support and having a recent enough kernel. On a system where fanotify(7) would be used, combined with the --filesystem option, hypothetically this could remove the need to have to do an action for each newer directory and make OP's method based on filtering with --includei work.
| inotifywait ignoring new folders in watch directory |
1,306,327,420,000 |
Large files are transferred to a server for processing. The server monitors a specific directory using incrond and when a new file is received the processing script is executed for that file.
Because the files are large it takes some time to transfer them. How do I make sure that the file has finished transferring before process it?
|
Your problem has nothing to do with scp. It's related to inotify, the kernel interface that's used to trigger an action on file system events. And you're apparently triggering on the wrong event. Read the man page of incrontab to understand how the system works.
If your processing script already triggers before the file is complete, I assume you trigger on the IN_CREATE event. You can change the corresponding entry in the incrontab to trigger on IN_CLOSE_WRITE, which fires only once the writer has closed the file.
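With incron that means an incrontab entry along these lines (the watched path and script name are placeholders; $@ expands to the watched directory and $# to the name of the file the event is about):

```
/data/incoming IN_CLOSE_WRITE /usr/local/bin/process.sh $@/$#
```

The processing script then only ever sees files whose writer has already closed them, i.e. completed transfers.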
| Prevent processing file until SCP transfer is finished |
1,306,327,420,000 |
For some test I start Ubuntu Live from USB.
I'm trying to use tail command to show debug log, but it doesn't work.
I also test opening two terminals (t1, t2) with this code:
t1:
touch a
t2:
tail -f a
t1:
for i in `seq 1 10`; do echo $i >> a; sleep 1; done
Nothing in t2! What can be the cause?
|
If it's a case of tail not working at all, then it could be because your liveCD is using the overlayfs filesystem, which has a bug regarding notifications of modified files. You could try to move the log to another filesystem, such as /tmp if the application creating the log has an option to do so.
You could also carry out your test in /tmp instead of your homedir.
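If moving the file isn't possible, GNU tail can also be told to fall back to polling with its (intentionally odd-looking, triple-dash) ---disable-inotify option, which sidesteps the missing overlayfs notifications. The timeout below is only so the demo terminates on its own:

```shell
# Follow the file by polling instead of waiting for inotify events
echo hello > a
timeout 2 tail ---disable-inotify -f a || true
```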
| tail -f produces no output in Ubuntu live CD |
1,306,327,420,000 |
This works perfectly:
$ inotifywait --event create ~/foo
Setting up watches.
Watches established.
/home/ron/foo/ CREATE bar
However, this just sits there when directory tun0 is created under /sys/devices/virtual/net.
$ inotifywait --event create /sys/devices/virtual/net
Setting up watches.
Watches established.
Since those folders are world readable, I'd expect inotifywait to work.
So, what am I doing wrong?
Thanks
|
Although the inotify FAQ implies partial support:
Q: Can I watch sysfs (procfs, nfs...)?
Simply spoken: yes, but with some limitations. These limitations vary between kernel versions and tend to get smaller. Please read information about particular filesystems.
it does not actually say what might be supported (or in which kernel version, since that's mostly down to the inotify support in the filesystem itself rather than the library/utilities).
A simple explanation is that it doesn't really make sense to support inotify for everything in /sys (or /proc) since they don't get modified in the conventional sense. Most of these files/directories represent a snapshot of kernel state at the time you view them.
Think of /proc/uptime as a simple example, it contains the uptime accurate to the centisecond. Should inotify notify you 100 times a second that it was "written" to? Apart from not being very useful, it would be both a performance issue and a tricky problem to solve since nothing is generating inotify events on behalf of these fictional "writes". Within the kernel inotify works at the filesystem API level.
The situation then is that some things in sysfs and procfs do generate inotify events, /proc/uptime for example will tell you when it has been accessed (access, open, close), but on my kernel /proc/mounts shows no events at all when file systems are mounted and unmounted.
Here's Greg Kroah-Hartman's take on it:
http://linux-fsdevel.vger.kernel.narkive.com/u0qmXPFK/inotify-sysfs
and Linus:
http://www.spinics.net/lists/linux-fsdevel/msg73955.html
(both threads from 2014 however)
To solve your immediate problem you may be able to use dbus, e.g. dbus-monitor --monitor --system (no need to be root) will trigger on tun devices being created and removed (though mine doesn't show the tun device name, only the HAL string with the PtP IP); or udevadm monitor (no need to be root); or fall back to polling the directory (try: script to monitor for new files in a shared folder (windows host, linux guest)).
(With udev you could also use inotifywait -m -r /dev/.udev and watch out for files starting with "n", but that's quite an ugly hack.)
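For instance, filtering udev kernel events to the net subsystem catches tun interfaces coming and going (run something like ip tuntap add dev tun0 mode tun in another shell to see it fire); the timeout is just so the example terminates:

```shell
# Print kernel uevents for network interfaces as they happen
timeout 5 udevadm monitor --kernel --subsystem-match=net || true
```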
| inotifywait not alterting when device created |
1,306,327,420,000 |
I am developing a small daemon program which needs to run some instructions when a user logs onto the system (all kinds of logins included). In order to do so, I want my program to be woken up whenever this login event occurs. However, I don't want it to check periodically whether a new user arrived, which means it must not:
Read log files such as /var/log/auth.log periodically. Besides the fact that I would have to actually parse the file, I would also probably do it far too often (since there are very few logins on my system).
Check the output of another command such as ps, who or w and keep track of users internally. Using this method, the program could miss some logins, in case someone logs in and out before my program runs its checks on the output.
Since I don't want my program to waste time, I thought about using I/O events, however... I don't quite see where to hook. I have tried watching over /var/run/utmp (using inotify) but it doesn't seem to react correctly: my program receives a lot of events when terminals are opened/closed, but very few when someone actually logs in (if any at all). Additionally, these events are hardly recognisable, and change from a login attempt to another. For the record, here is a little set of what I was able to catch when running su user:
When a terminal opens: IN_OPEN (file was opened), IN_CLOSE_NOWRITE (unwrittable file closed), sometimes IN_ACCESS (file was accessed, when using su -l).
When su is started (password prompt): a few events with no identifier (event.mask = 0).
After a successful login attempt (shell started as another user) : nothing.
When closing the terminal: another set of unnamed events.
Is there another way to hook a program onto "user logins"? Is there a file reflecting user logins on which I could use an inotify watch (just like I could use one on /proc to detect process creations) ? I had another look at /proc contents but nothing seems to be quite what I need.
Side note : I thought about posting this on Stack Overflow since it is programming-related, but beyond implementation, I am more interested by the "visible" reactions a Linux system has when a user logs in (by "visible", I mean reactions we could observe/detect/watch out for, programmatically, without wasting time).
|
Does your system use Pluggable Authentication Modules (PAM)? Most modern Linux or BSD use PAM.
PAM allows you to hook into logins. There are a variety of PAM modules available which might meet your needs, or you can write your own in C. There is even a pam-python* binding which allows you to hook in Python code.
Given that you want the daemon to be running continuously, I would opt for a simple PAM module which logs to a file and signals the daemon.
*The package is named libpam-python under Debian and Ubuntu.
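A lighter-weight route than writing a module in C: the stock pam_exec module runs an arbitrary program on session events. A sketch of an entry for the relevant service file(s) under /etc/pam.d/ — the script path is a placeholder:

```
session optional pam_exec.so /usr/local/bin/on-login.sh
```

Inside the script, PAM exposes context through environment variables such as PAM_USER, PAM_SERVICE and PAM_TYPE (the latter distinguishes open_session from close_session), so it can log the login and signal the daemon.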
| How can I detect a user login programmatically? [duplicate] |
1,306,327,420,000 |
I got a little script that lists the number of inotify watches per process. That usually gets me what I want, but now I would like to know which files are being watched. I assume this is possible and that a inotify watch corresponds to a file being monitored by an inotify instance?
I also assume that I can build upon what I currently have in that script. For instance,
sudo find /proc/*/fd -lname anon_inode:inotify | cut -d "/" -f 3
gets me a list of processes with inotify file descriptors. If I look at the info for one of the file descriptors, I get what I assume is a list of file handles/watches:
$ sudo cat /proc/50679/fdinfo/19
pos: 0
flags: 00
mnt_id: 15
inotify wd:8 ino:640001 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01006400feaad211
inotify wd:7 ino:a08da sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:da080a0094019e8f
inotify wd:6 ino:840003 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:030084005ae9e3df
inotify wd:5 ino:840002 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:020084000d506c1f
inotify wd:4 ino:840001 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01008400e47bab26
inotify wd:3 ino:32004e sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:4e003200488122df
inotify wd:2 ino:320001 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01003200545a9f32
inotify wd:1 ino:2 sdev:800001 mask:3cc ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:0200000000000000
I was hoping I could find out which file f_handle:01003200545a9f32 corresponds to, basically translate a f_handle in /proc/../fdinfo/ to a file name.
|
I don't know about standard tools that deal with the f_handle field. That would be convenient what with the open_by_handle_at(2) syscall, but anyways that field may not always be valid. The kernel's nfsd for instance does not provide it.
However, the full coordinates of any file in Linux are still the good-old device number and inode number, which are reported in the sdev and ino fields. It's only a matter of decoding them.
Those two values are (currently) expressed in hexadecimal notation. You can take the ino as is, just convert it into decimal notation.
The sdev value on the other hand needs some decoding because you need to split it into the traditional "major and minor" device numbers. Note that even files residing in filesystems not backed by actual block-devices still carry a unique pseudo-device number, which is reported in that sdev field.
Assuming that the sdev field is encoded according to Linux's so-called "huge encoding", which uses 20 bits (instead of 8) for minor numbers, in bitwise parlance the major number is sdev >> 20 while the minor is sdev & 0xfffff. Or using a layman text-manipulation approach, the minor number is the rightmost (up to) 5 hex-digits while the major number is everything before the final 5 hex-digits. If there are fewer than 5 hex-digits, the major number is simply 0.
Once major & minor are obtained, you go looking for them into the target process's mountinfo file. In your example that would be /proc/50679/mountinfo. Specifically you look for a line carrying such major:minor pair as third field. The found line's fifth field is the path you need for a final find that goes hunting for the wanted file/dir.
Note: major & minor obtained from an inotify line in /proc/*/fdinfo/* are expressed in hex-notation, but in mountinfo are expressed in decimal, so you need to convert them before searching them in mountinfo.
Note: the fifth field in mountinfo may contain \-escaped octal sequences in case the \ itself or <space>, <newline>, <tab> characters are part of the path. Meaning that a whitespace is encoded as \040, a \ as \134, and so on. You can unescape those by e.g. feeding that path to printf(1)'s own %b specifier.
Note: in order to account for namespaces (i.e. containers) you need to run the final find command within the mount-namespace that the target process lives into, hence for instance something like (for your example):
nsenter -mt 50679 find "$unescaped_path" -inum "$decimal_ino" -print -quit
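The decoding and lookup described above can be scripted; the following is a rough sketch (the function names are invented for illustration, and it assumes the "huge" dev_t encoding discussed earlier):

```shell
# sdev_to_majmin: decode a hex sdev field into decimal "major minor",
# assuming Linux's "huge" encoding (top 12 bits major, low 20 bits minor).
sdev_to_majmin() {
    local dev=$((0x$1))
    echo "$((dev >> 20)) $((dev & 0xfffff))"
}

# mountpoint_for: look up "major:minor" in a process's mountinfo and
# print the (unescaped) mount point from the fifth field.
mountpoint_for() {
    awk -v d="$2:$3" '$3 == d { print $5; exit }' "/proc/$1/mountinfo" |
        { IFS= read -r p && printf '%b\n' "$p"; }   # unescape \040 etc.
}

# Usage sketch for the "wd:2" line above (sdev 800001, ino 320001):
#   set -- $(sdev_to_majmin 800001)          # -> 8 1
#   mnt=$(mountpoint_for 50679 "$1" "$2")
#   nsenter -mt 50679 find "$mnt" -inum "$((0x320001))" -print -quit
```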
| Listing the files that are being watched by `inotify` instances |
1,306,327,420,000 |
I want to know when my battery charge changes, and I don't want to simply run a daemon that checks on it every five seconds.
I've tried running inotifywait -m /sys/class/power_supply/BAT1/capacity, but it doesn't register any modifications even though cat-ing it every once in a while gives different results! In fact, it only reports something when I used cat on it, or run acpi. I've also tried running inotifywait on other files in the BAT1 directory, and found out that none have been seen made modifications to - despite giving new results with cat.
So why doesn't inotifywait report modifications? And how can I get instant updates on changes in battery level if I can't use inotify?
|
As @rudib said in the comments, everything in /sys is virtual: the content of each file is created fresh from the corresponding kernel data structure whenever it's actually being read. So, there are no modifications in the sense of something writing into the file to change it.
The same goes for /proc.
Battery status notifications are available as generic Netlink messages, with family name = acpi_event and multicast group name acpi_mc_group. Unfortunately I don't know of a tool that would give easy access to Netlink messages for shell scripts, but apparently the pyroute2 tool can also decode netlink messages that contain ACPI events, so it might be useful as a Python code example.
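As a stop-gap for shell scripts: if the acpid package is installed, its acpi_listen utility subscribes to those same ACPI netlink events and prints one line per event. A sketch (the exact event text is hardware-dependent, and the handler name is made up):

```shell
# handle_acpi: react to one acpi_listen line, which looks roughly like
# "battery PNP0C0A:00 00000080 00000001" (exact text varies by hardware).
handle_acpi() {
    case $1 in
        battery*)    echo "battery event: $1" ;;
        ac_adapter*) echo "ac adapter event: $1" ;;
    esac
}

# acpi_listen | while IFS= read -r line; do handle_acpi "$line"; done
```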
| Why doesn't inotifywait report modifications made to battery capacity file? |
1,306,327,420,000 |
I'm using inotifywait in a script and wondering if there is a way to exclude hidden files from being watched?
I can't seem to determine the regex pattern to exclude hidden files.
|
I assume you mean filenames that start with a dot (.); those you can ignore. The thing with inotifywait --exclude is that the pattern appears to be matched against the full path of the file, so you'll need to take that into account.
So, if you give inotifywait the directories foo and bar to watch, then the patterns match against filenames like foo/something, bar/somethingelse. As usual in regexes, you need to escape the dot.
This should watch for all creates in the current directory except for dotfiles (it's a regex, so we need to escape the dots):
inotifywait -ecreate -m --exclude '^\./\.' .
Or, less specifically, exclude dotfiles in any directories, by looking for the combination of a slash and dot:
inotifywait -ecreate -m --exclude '/\.' foo bar
That, of course, will not work if you're watching a directory with a leading dot in some part of the path; it'll match everything in that path.
| inotifywait exclude file types |
1,306,327,420,000 |
I'm currently writing a bash script that uses inotifywait to perform certain actions on a user-supplied list of files and directories.
It has come to my attention that unlike a lot of shell tools, inotifywait is unable to separate output records with \0. This leaves the possibility of injection attacks with specifically crafted, but legal filenames (containing newlines).
I would like to work around this to ensure my script does not introduce any unnecessary vulnerabilities. My approach is as folllows:
Ensure all files/paths passed for inotifywait to watch have trailing backslashes removed
Format inotifywait output with --format "%e %w%f//" to produce output as follows:
<EVENT LIST> <FILE PATH>//
Pipe inotifywait output to sed, replacing any // found at the end of a line with \0
Use bash while read loop to read \0-separated records
This means after the first record, all following records will have an extra leading newline. This is stripped off
Each record may then be split at the first space - before the space is the event list (comma separated as per inotifywait) - and after the space the full pathname associated with the event
#!/bin/bash
shopt -s extglob
watchlist=("${@}")
# Remove trailing slashes from any watchlist elements
watchlist=("${watchlist[@]%%+(/)}")
# Reduce multiple consecutive slashes to singles as per @meuh
watchlist=("${watchlist[@]//+(\/)/\/}")
printf -vnewline "\n"
inotifywait -qrm "${watchlist[@]}" --format "%e %w%f//" | \
sed -u 's%//$%\x00%' | \
while IFS= read -r -d '' line; do
line="${line#${newline}}"
events="${line%% *}"
filepath="${line#* }"
printf "events=%s\nfilepath=%q\n" "$events" "$filepath"
done
As far as I can tell, this handles file/path names containing funny characters - spaces, newlines, quotes, etc. But it seems like a rather inelegant kludge.
For the purposes of this question, the ${watchlist[]} array is just copied from command-line parameters, but this array may be build otherwise and may contain "funny" characters.
Are there any malicious paths that could break this? i.e. make the contents of the $events and $filepath variables be incorrect for any given event?
If this is water-tight, is there any cleaner way to do this?
Note I know I could easily write a c program to call inotify_add_watch() and friends to get around this. But for now due to other dependencies I am working in bash.
I've been conflicted on whether to post this here or codereview.SE or even the main so.SE.
|
You would need to sanitize watchlist to replace any // with /. Consider a directory named \nabc (where \n is a newline):
$ mkdir t
$ mkdir t/$'\nabc'
$ touch t/$'\nabc'/x
If passed the directory t//$'\nabc' you will see output with bogus // at the end of lines:
$ inotifywait -m -r t//$'\nabc' --format "%e %w%f//"
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
OPEN t//
abc/x//
ATTRIB t//
abc/x//
CLOSE_WRITE,CLOSE t//
abc/x//
Note, you could also use -c instead of --format to get csv style output, which double-quotes filenames with newlines, but it is harder to parse, and in my case core dumps on the above example.
Example output for -c and touch t/$'new\nfile':
t/,CREATE,"new
file"
| Potential workaround to inotifywait can't produce NUL-delimited output |
1,306,327,420,000 |
When I used INotify with /etc/mtab or /proc/mounts, it doesn't detect changes when things mount or unmount, even though /etc/mtab and /proc/mounts both have changed when I check manually. Why is this, and how can I track mounting and unmounting things?
|
From the inotify man page:
various pseudo-filesystems such as /proc, /sys, and /dev/pts are not monitorable with
inotify.
and /etc/mtab is often just a link to /proc/mounts these days.
You can use udisksctl monitor to see mounts happen, or set your own /etc/udev/rules.d/ rule file to run a program when a new device is added (before any mount), or run dbus-monitor to see mount events pass on that bus. All a bit complicated.
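Another option worth mentioning: util-linux's findmnt has a --poll mode that watches the kernel mount table for changes (using poll(2) support that inotify lacks). A sketch, hedged because the column layout may vary between findmnt versions:

```shell
# handle_mount: react to one `findmnt --poll` line, whose first two
# columns are typically the action (mount/umount/remount/move) and
# the target mount point.
handle_mount() {
    case $1 in
        mount)  echo "mounted: $2" ;;
        umount) echo "unmounted: $2" ;;
    esac
}

# findmnt --poll | while read -r action target rest; do
#     handle_mount "$action" "$target"
# done
```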
| Why doesn't INotify work with `/etc/mtab` or `/proc/mounts`? [duplicate] |
1,312,853,834,000 |
I am developing software that will utilize inotify to track changes on a large amount of files (tens to hundreds of thousands of files). I have come up with these ideas:
one watch per file
one watch per parent directory
avoid inotify and periodically scan the fs for changes (not preferred)
I will have a database of all of the files I am watching and some basic stat information (like mtime and size), however, I would have to stat each file in that directory until I found the one that changed.
Which would be faster, tons (100,000+) of inotify watches or tons of stat calls?
I'm thinking that reducing the number of stat calls would be better, but I don't know enough about inotify.
Note:
This will be running on a workstation, not a server. It's main purpose is to synchronize changes (potentially to an entire filesystem) between a client and a remote server.
|
When you read() an inotify fd, the name field of the returned struct tells you which file was modified relative to the directory being watched, so you shouldn't have to stat every file in a directory after the event.
See http://linux.die.net/man/7/inotify
Specifically:
struct inotify_event {
int wd; /* Watch descriptor */
uint32_t mask; /* Mask of events */
uint32_t cookie; /* Unique cookie associating related
events (for rename(2)) */
uint32_t len; /* Size of 'name' field */
char name[]; /* Optional null-terminated name */
};
The name field is only present when an event is returned for a file
inside a watched directory; it identifies the file pathname relative
to the watched directory. This pathname is null-terminated, and may
include further null bytes to align subsequent reads to a suitable
address boundary.
| Efficiency of lots of inotify watches or stat calls |
1,312,853,834,000 |
I have inotifywait (version 3.14) on Linux to monitor a folder that is shared with Samba version 4.3.9-Ubuntu.
It works if I copy a file from the Linux machine to the Samba share (which is on a different machine, also under Linux).
But if I copy a file from a Windows machine, inotify won't detect anything.
Spaces or no spaces, recursive or not result is the same.
printDir="/media/smb_share/temp/monitor"
inotifywait -m -r -e modify -e create "$printDir" | while read line
do
echo "$line"
done
Does anyone have any ideas of how to solve it?
|
OK, it's an ugly polling workaround, but for my case it should work in ~90% of cases. (As far as I can tell, inotify only reports changes that go through the local kernel, so writes arriving over the network never generate events on the watching machine; hence the polling.)
temPrint=/dev/shm/print
fcheck_1=$temPrint/fcheck_1
fcheck_new=$temPrint/fcheck_new
fcheck_old=$temPrint/fcheck_old
fcheck_preprint=$temPrint/fcheck_preprint
fcheck_print=$temPrint/fcheck_print
printDir="/media/smb_share/temp/monitor"
test -d $temPrint || mkdir $temPrint
while true; do
test -e $fcheck_new && rm $fcheck_new
test -e $fcheck_old || touch $fcheck_old
test -e $fcheck_print && rm $fcheck_print
ls -l "$printDir"/*.pdf > $fcheck_1
while read line
do
echo "${line#*"/"}" | sed "s#^#/#" >> $fcheck_new
done < $fcheck_1
rt=$(diff $fcheck_new $fcheck_old | grep "<")
if [ "$rt" ]; then
echo "$rt" > $fcheck_preprint
while read line
do
echo "${line#*"/"}" | sed "s#^#/#" >> $fcheck_print
done < $fcheck_preprint
while read line
do
echo "$line"
done < $fcheck_print
cp $fcheck_new $fcheck_old
fi
sleep 20
done
| inotifywait doesn't monitor Windows users saving to Samba share on Linux |
1,312,853,834,000 |
How to watch for sysfs file changes (like /sys/class/net/eth0/statistics/operstate) and execute a command on content change?
inotify does not work on sysfs
I don't want to poll. I want to set a listener with a callback routine once
|
I have not read the source code that populates operstate, but generally, reading a file in sysfs executes some code on the kernel side that returns the bytes you're reading. So, without you reading operstate, it has no "state". The value is not stored anywhere.
How to watch for sysfs file change
Since these are not actually files, the concept "change" doesn't exist.
There's probably a better way to achieve what you want! netlink was designed specifically for the task of monitoring networking state, and it's easy to interface with. For example, this minimally modified sample code from man 7 netlink might already solve your problem:
struct sockaddr_nl sa;
memset(&sa, 0, sizeof(sa));
sa.nl_family = AF_NETLINK;
// Link state change notifications:
sa.nl_groups = RTMGRP_LINK;
fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
bind(fd, (struct sockaddr *) &sa, sizeof(sa));
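If you'd rather stay in shell, iproute2's ip monitor link subscribes to the same RTMGRP_LINK notifications; a sketch (the matched substrings depend on your ip version's output format):

```shell
# handle_link: react to one `ip monitor link` line; carrier changes
# typically show up as "... state UP ..." / "... state DOWN ...".
handle_link() {
    case $1 in
        *"state UP"*)   echo "link up" ;;
        *"state DOWN"*) echo "link down" ;;
    esac
}

# ip monitor link | while IFS= read -r line; do handle_link "$line"; done
```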
Generally, if this is not about ethernet-level connectivity but, say, connectivity to some IP network (or, the internet), systemd/NetworkManager is the route you'd go on a modern system instead.
| How to attach a listener to sysfs files? |
1,312,853,834,000 |
I would like to have a trigger and when a particular file is accessed by some process, I would like to be notified (i.e. a script should be run). If I understand correctly, this could be achieved with inotify.
If I have a file /foo/bar.txt how would I set up inotify to monitor that file?
I am using Debian Wheezy with kernel 3.12 (my kernel has inotify support)
|
According to Gilles on Super User:
Simple, using inotifywait (install your distribution's inotify-tools package):
while inotifywait -e close_write myfile.py; do ./myfile.py; done
This has a big limitation: if some program replaces myfile.py with a different file, rather than writing to the existing myfile, inotifywait will die. Most editors work that way.
To overcome this limitation, use inotifywait on the directory:
while true; do
change=$(inotifywait -e close_write,moved_to,create .)
change=${change#./ * }
if [ "$change" = "myfile.py" ]; then ./myfile.py; fi
done
| using inotify to monitor access to a file |
1,312,853,834,000 |
I have a binary file on a Linux server that is being actively appended to by a process (written in C, with a constantly open file handle, flushing a non-ASCII buffer to this file). I would like to replicate this file to another server without blocking the writing C process, preferably without copying the entire file every time (file size ~1+GB, replication frequency < 1 sec).
I've explored the following:
rsync: I believe rsync does a full replication, but not incremental.
filebeat by elasticsearch: it requires ASCII text and newlines (I have neither).
I would preferably like to leverage standard Linux tools, but I am open to any other 3rd party solution or creating a C program myself :).
|
If it's only being appended to (and not modified in the middle), you could just run tail -f on it. It should wait for any newly appended data and print it, and you can tell it what position to start at:
tail -c 0 -f datafile # start at the current file end
tail -c +123 -f datafile # start at byte 123
To actually move the data somewhere, piping through to ssh should work:
So if the remote end already has the first 123456 bytes (note that -c +K starts at byte K, counting from 1, so we continue at 123457):
tail -c +123457 -f datafile | ssh user@somehost 'cat >> datafile.copy'
(Though of course you need to go check the file size on the remote before starting the pipeline.)
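That size check can be folded into the pipeline; a sketch with placeholder host and file names (remember that tail -c +K is 1-based, so the copy resumes at size + 1):

```shell
# resume_offset: turn "bytes the remote already has" into the 1-based
# +K argument that tail -c expects.
resume_offset() {
    echo "+$(( $1 + 1 ))"
}

# Placeholder usage:
#   size=$(ssh user@somehost 'stat -c %s datafile.copy 2>/dev/null || echo 0')
#   tail -c "$(resume_offset "$size")" -f datafile |
#       ssh user@somehost 'cat >> datafile.copy'
```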
If, instead, you have modifications to the middle of the file, you're going to need some sort of a logging layer in the program itself. Filesystem snapshots might do, but the one second interval may be too hard, especially since you'd need to scan the file for the changes anyway.
| Unidirectional syncing/replicate large file incrementally |
1,312,853,834,000 |
I'm trying to make a inotifywait script take different actions for files vs folders if it sees a close_write flag being raised in the watched directory, but cannot seem to get the checks working.
In other scripts I use these kinds of checks all the time and they just_work, but there seems to be something about inotifywait I'm not grasping yet.
This is the script in its current form:
#!/bin/bash
dir1=/sambashares
while true; do
inotifywait -r -e close_write "$dir1" | while read f; do
#debug
echo $f
#Is it a directory?
if [[ -d "${f}" ]]; then
#debug
echo "New directory called ${f} detected"
chown -R jake:jake $f
chmod -R 775 $f
cp -R $f /media/lacie/Film
fi
#Is it a file?
if [[ -f "${f}" ]]; then
#debug
echo "New file called ${f} detected"
chown jake:jake $f
chmod 775 $f
cp $f /media/lacie/Film
fi
done
done
If I run this in a terminal to see what is happening all I get is the confirmation a close_write was detected, followed by a "setting up watches" without any other messages or any of the other code being triggered, not even the debug echo directly below:
while read f; do
:-(
I'm running this on Ubuntu server 12.04 LTS 64 bit.
$ bash --version
GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
$ dpkg -l | grep inotify
ii inotify-tools 3.13-3
ii libinotifytools0 3.13-3
ii python-pyinotify 0.9.2-1
$ python --version
Python 2.7.3
|
The following works for me:
Example
Below shows an example of inotifywait using a method similar to your own.
$ inotifywait -r -e close_write "somedir" | while read f; do echo "$f hi";done
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
I then go into the directory somedir and touch afile, touch afile. Which results in this occurring in the inotifywait window:
somedir/ CLOSE_WRITE,CLOSE afile hi
NOTE: To get all the output from inotifywait you could modify your example slightly:
$ inotifywait -r -e close_write "somedir" 2>&1 | \
while read f; do echo "$f | hi";done
Setting up watches. Beware: since -r was given, this may take a while! | hi
Watches established. | hi
And again touch a file with the command, touch afile, I now see the event:
somedir/ CLOSE_WRITE,CLOSE afile | hi
For another example that shows how to use inotifywait take a look at the example I showed in my answer to this Q&A titled: Automatically detect when a file has reached a size limit.
Your issue
I believe part of your issue is that you're assuming the output returned from inotifywait is just the file's name, while the examples above show it clearly isn't: each line contains the watched directory, the event list, and the file name.
So these if statements will never succeed:
if [[ -d "${f}" ]]; then
and
if [[ -f "${f}" ]]; then
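One way to fix it (a sketch keeping the question's layout; the helper name is invented) is to read the three fields inotifywait prints and rebuild the full path before testing it:

```shell
# handle_close_write: inotifywait lines look like "DIR/ EVENTS NAME",
# so join the first and third fields to get a path -d/-f can test.
# (Filenames containing whitespace would still need extra care.)
handle_close_write() {
    local path="$1$3"
    if [ -d "$path" ]; then
        echo "New directory detected: $path"
    elif [ -f "$path" ]; then
        echo "New file detected: $path"
    fi
}

# inotifywait -m -r -e close_write /sambashares |
#     while read -r dir events name; do
#         handle_close_write "$dir" "$events" "$name"
#     done
```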
When developing Bash scripts such as this it's often helpful to enable debugging messages via this command at the top:
set -x
You can disable it with this:
set +x
You can wrap sections of your code using these messages to turn it on/off.
set -x
...code...
set +x
| inotifywait different action on file or dir |
1,312,853,834,000 |
I have this command this shows me when a file has been modified under a concrete directory (excluding some paths):
inotifywait -m -q -r --format '%T %e %w%f' --excludei '/trash/' --timefmt '%d/%m/%Y-%H:%M:%S%z' /my/monitored/folder
Is there a way to combine this (or a similar) command with tail, so I can retrieve the last line of each modified file? It is important that this combination outputs the file's path and the last line added.
|
In your question you say that you want to scan if a file has been modified, but in your command there's no event specified.
So my answer will use the modify event:
inotifywait -m -q -r \
--format '%T %e %w%f' \
--excludei '/trash/' \
--timefmt '%d/%m/%Y-%H:%M:%S%z' /my/monitored/folder | \
while IFS=' ' read -r time event file; do
echo "file: $file"
echo "modified: $time"
last_line=$(tail -1 "$file")
echo "last line: $last_line"
echo
done
Wich will output something like this:
file: /path/file.txt
modified: 17/02/2021-09:17:02-0300
last line: foo
| How to combine inotify with tail command to print last line of every modified file |
1,312,853,834,000 |
I'm attempting to use part of a one-liner found here: Script to monitor folder for new files?
When I try the following procedure I get no output whatsoever and I cannot figure out why.
In terminal 1:
inotifywait -m ~/somefolder | awk '{ print $3; fflush() }'
Then in terminal 2:
touch ~/somefolder/newfile
When not piping to awk, inotifywait lists all the expected events to stdout and has no problem redirecting to a file either. Awk also appears to work correctly independent of inotifywait on text piped to it structured like the output of inotifywait. Using the two together just doesn't work for me.
EDIT:
awk was an alias for mawk on my machine which didn't work. gawk, however, came through and worked as expected.
|
As you have found out, mawk buffers its input, so you would probably only see output once the accumulated notify messages reach a few kilobytes (the size of its input buffer). The linked article suggests that mawk has a -Winteractive flag to disable it, but I am in no position to check that.
| No output from inotifywait | awk |
1,312,853,834,000 |
I am trying to use inotifywait to monitor a folder:
inotifywait -m -r /home/oshiro/Desktop/work_folder
The command works and if I create files in that folder, all seems to work correctly.
While the folder is being monitored, if I delete it, I get the following output:
/home/oshiro/Desktop/work_folder/ MOVE_SELF
/home/oshiro/Desktop/work_folder/ OPEN,ISDIR
/home/oshiro/Desktop/work_folder/ CLOSE_NOWRITE,CLOSE,ISDIR
/home/oshiro/Desktop/work_folder/ MOVE_SELF
/home/oshiro/Desktop/work_folder/ ATTRIB,ISDIR
/home/oshiro/Desktop/work_folder/ OPEN,ISDIR
/home/oshiro/Desktop/work_folder/ DELETE Untitled Document
/home/oshiro/Desktop/work_folder/ DELETE Untitled Document 2
/home/oshiro/Desktop/work_folder/ CLOSE_NOWRITE,CLOSE,ISDIR
/home/oshiro/Desktop/work_folder/ DELETE_SELF
If I then re-create that folder again, while the monitoring is still taking place, inotifywait doesn't seem to continue monitoring it, unless I run inotifywait -m -r /home/oshiro/Desktop/work_folder again.
How do I get around this issue? I basically want to monitor a USB stick which will be plugged in and removed many times during a day. When it's unplugged and plugged back in, I think inotifywait will stop monitoring it, the same way the folder above was deleted and re-created where inotifywait wasn't able to continue monitoring it, unless I run the above command again, i.e. inotifywait -m -r /home/oshiro/Desktop/work_folder
Should I be using something more appropriate for such tasks and not use inotifywait? cron is not suitable for my needs, as I am not after time based actions, I am after event based actions.
|
First off, if you delete a folder that inotifywait is watching, then, yes, it will stop watching it. The obvious way around that is simply to monitor the directory one level up (you could even create a directory especially for this and put your work_folder in there).
However this won't work if you have a folder underneath which is unmounted/remounted rather than deleted/re-created; the two are very different processes. I have no idea if using something other than inotifywait is the best thing here, since I have no idea what you are trying to achieve by monitoring the directory. However, perhaps the best thing to do is to set up a udev rule to call a script which mounts the USB stick and starts the inotifywait process when it is plugged in, and another to stop it again when it is unplugged.
You would put the udev rules in a .rules file in the /etc/udev/rules.d directory. The rules would look something like:
ENV{ID_SERIAL}=="dev_id_serial", ACTION=="add", \
RUN+="/path/to/script add '%E{DEVNAME}'"
ENV{ID_SERIAL}=="dev_id_serial", ACTION=="remove", \
RUN+="/path/to/script remove '%E{DEVNAME}'"
Where ID_SERIAL for the device can be determined by:
udevadm info --name=/path/to/device --query=property
with the script something like:
#!/bin/sh
pid_file=/var/run/script_name.pid
out_file=/var/log/script_name.log
# try to kill previous process even with add in case something
# went wrong with last remove
if [ "$1" = add ] || [ "$1" = remove ]; then
pid=$(cat "$pid_file")
[ "$(ps -p "$pid" -o comm=)" = inotifywait ] && kill "$pid"
fi
if [ "$1" = add ]; then
/bin/mount "$2" /home/oshiro/Desktop/work_folder
/usr/bin/inotifywait -m -r /home/oshiro/Desktop/work_folder \
</dev/null >"$out_file" 2>&1 &
echo $! >"$pid_file"
fi
Also, make sure that the mounting via the udev rule does not conflict with and other process which may try to automatically mount the disk when it is plugged in.
| inotifywait not working when folder is deleted and re-created |
1,312,853,834,000 |
I've just learnt how to constantly check if file is modified:
while inotifywait -q -e modify filename >/dev/null; do
echo "filename is changed"
# do whatever else you need to do
done
If I use a directoryname instead of a filename I can check all files of the directory:
while inotifywait -q -e modify directoryname >/dev/null; do
echo "filename is changed"
# do whatever else you need to do
done
But how can I echo the filename of the file that has been changed?
|
inotifywait emits a continuous stream of events on the watched directory, so the recommended way would be to move the watch out of the while loop and handle each event inside a loop reading its output. Note that the events must not be suppressed (-q is fine, but -qq is not), because the read command needs to see them on standard input.
inotifywait -m -e modify "directoryname" |
while read -r dir action file; do
echo "The file '$file' appeared in directory '$dir' via '$action'"
done
| inotify: Echo which file has changed in directory |
1,312,853,834,000 |
I have a simple bash script setup that uses the built-in inotify daemon running CentOS 6.6. The script will simply echo the file that is upload to a specific directory. The script works but it echos out the same filename over 100 times. I can't seem to figure out why it would do that.
#!/bin/bash
/usr/bin/inotifywait -e create,delete,modify,move -mrq --format %f /home/imgthe/public_html/run/thumbs --excludei 'sess_.*' |
while read INPUT
do
FILENAME=$INPUT
DATE=$(date)
echo $FILENAME
printf $INPUT >> sku.txt
done
|
The modify attribute to inotifywait will notify you whenever the file is modified (i.e. written to). I suggest that you might prefer to replace create and modify with close_write.
| Bash script to detect uploaded files triggers many times for one file |
1,312,853,834,000 |
This is my incrontab line:
/srv/www IN_MODIFY,IN_ATTRIB,IN_CREATE,IN_DELETE,IN_CLOSE_WRITE,IN_MOVE rsync --quiet
--recursive --links --hard-links --perms --acls --xattrs --owner --group --delete --force /var/www_s3/ /var/www
/var/www_s3/ is an s3fs mount. However, it only gets kicked off when a file is modified manually; nothing happens when a file is changed/added on S3.
Is there a way to get incrontab to detect these changes?
|
It's often the case that FUSE based filesystems only support a subset of the features that the underlying filesystems support. It's generally some aspect of one or more of these features which is limiting the incrontab entry from detecting the change on the remote side.
At any rate I thought it best to inquire about this on the s3fs project, and so posted this question there asking the developers for guidance on any potential limitations.
You can track this issue/question here: Issue 385: incrontab & s3fs support?
References
incrontab man page
FUSE-based file system backed by Amazon S3
| Incrontab doesn't detect modifications on a s3fs mount |
1,312,853,834,000 |
I'd like to ignore a directory that has not been created at the time I start inotifywait. I have an empty directory test:
ubuntu@foo:~$ ls -lah test/
total 8.0K
drwxrwxr-x 2 ubuntu ubuntu 4.0K Sep 5 20:00 .
drwxr-x--- 13 ubuntu ubuntu 4.0K Sep 5 19:56 ..
I start inotifywait like this:
ubuntu@foo:~$ inotifywait -mr -e CREATE -e MODIFY -e DELETE -e MOVE /home/ubuntu/test/ @/home/ubuntu/test/log
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
Then, in another terminal, I create a directory log and put some content there:
ubuntu@foo:~/test$ mkdir log
ubuntu@foo:~/test$ cd log/
ubuntu@foo:~/test/log$ echo "foo" > foo
This is the inotifywait output:
/home/ubuntu/test/ CREATE,ISDIR log
/home/ubuntu/test/log/ CREATE foo
/home/ubuntu/test/log/ MODIFY foo
However, if I close the script and start it again and put some more content into the file, no events are triggered.
Is it possible, to ignore a "soon to be created"-directory in inotifywait?
|
This is easily done by replacing the @ exclusion prefix by the --exclude option with a simple pattern (regular expression). For example,
--exclude /home/ubuntu/test/log'$'
By adding a $ (match end-of-string) we ensure that creating a file such as test/log2 is not also wrongly matched and excluded. Also add a ^ at the start to ensure we don't match the path at some greater depth:
--exclude '^'/home/ubuntu/test/log'$'
The exclusion of the directory is sufficient to exclude new files under it, unless the directory already exists, bizarrely. To cope with that case, replace the $ by an alternate $ or /, i.e. ($|/):
--exclude '^'/home/ubuntu/test/log'($|/)'
However, testing seems to show that (for my version 3.14?) only the last --exclude option is used, which is not documented, and rather unexpected.
So if you have several directories to exclude, you need to combine them into a single regexp. This does not need to be complicated if you are not familiar with the syntax, simply put the list inside ^(...|...|...)($|/), for example:
--exclude '^(/home/ubuntu/test/log|/home/abc/log|/usr/def)($|/)'
If your paths contain special regexp characters like .[]^$()*+?|{}, you need to look up how to escape them.
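That escaping can be done mechanically; a sketch (the helper name is made up):

```shell
# regex_escape: backslash-escape regex metacharacters in a literal
# path so it can be embedded safely in an --exclude pattern.
regex_escape() {
    printf '%s\n' "$1" | sed 's/[][^$()*+?|{}.\\]/\\&/g'
}

# inotifywait -mr --exclude "^$(regex_escape "$dir")(\$|/)" ...
```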
| inotifywait ignore new directory |
1,312,853,834,000 |
awk behaves differently when analyzing the output of tail -f versus inotifywait -m. Specifically I am searching for a matching string and want to exit awk once it appears. This works fine for tail -f, however in the case of inofitywait, awk needs to be triggered twice. Why so?
Reproducible Examples:
Say we are searching for a specific string ("OPEN") in either case and use a special exit code as a marker. Let's also trigger it after a short wait (inotifywait needs a moment) and return the exit code. Pseudo-code:
command | awk-analysis || get-non-0-exit-code & trigger
All good for tail -f
The following a) prints the line, b) returns the exit code and c) terminates. As expected.
tail -f test.file | awk '/OPEN/ { print $0 ; exit 100 }' || echo $? &
{ sleep 2 ; echo OPEN > test.file ; }
However with inotifywait -m
the result is quite different.
inotifywait -m -e OPEN . | awk '/OPEN/ { print $0 ; exit 100 }' || echo $? &
{ sleep 2 ; touch test.file ; }
This will print the line (so inotifywait is triggered and awk sees it) but NOT show the exit code nor terminate. Only another trigger like touch test.file is able to stop awk
Maybe control characters?
I thought maybe I was missing a signal awk uses here, so I tried to analyze with cat -A (the results file goes in the parent folder, since otherwise writing it would trigger a second "OPEN" in inotifywait):
tail -f test.file | tee >(cat -A >../stream) | ....
cat ../stream
OPEN$
and
inotifywait -m -e OPEN . | tee >(cat -A >../stream) | ....
cat ../stream
./ OPEN test.file$
So no unseen control characters missed.
What is the reason for this behaviour?
Am I missing a newline? How comes awk does print the line, but not run the exit command in the same code block? Why does it work with tail, though?
Versions
awk --version : GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2)
inotifywait -h : inotifywait 3.14
tail --verison : tail (GNU coreutils) 8.30
EDIT due to Kusalananda's comments:
tail -f
test.file exists and has the following content:
*case 1
OPEN
spam
*case 2
spam
OPEN
*case 3 (file is empty)
Trigger as above is NOT run, i.e.
tail -f test.file | awk '/OPEN/ {print $0 ; exit 100 }' || echo $?
Case 1 & 2 : immediately returns the matching line, exit code and is terminated.
Case 3: waits, open other terminal and echo OPEN >> 3 or echo OPEN > 3 returns string, exit code and terminates.
|
echo $? runs after the entire pipeline (the code before ||) terminates.
In the case of inotifywait … | awk … || echo … after awk terminates inotifywait still runs. It will get SIGPIPE only if (when) it tries to write more. Try touch test.file again to get to this point and trigger echo.
On the other hand tail in tail -f … | awk … terminates immediately after awk exits because GNU tail takes special steps to detect this situation.
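The timing can be demonstrated without inotify, with a short-lived subshell standing in for the long-lived inotifywait -m producer:

```shell
# awk exits on OPEN at once, but `|| echo $?` fires only after the producer
# also dies (here on SIGPIPE, when its second echo hits the closed pipe)
(echo OPEN; sleep 1; echo more) | awk '/OPEN/ {print; exit 100}' || echo "$?"
# prints: OPEN, then about a second later: 100
```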
In order to reproduce this with inotifywait, one would need to forward the PID and send a SIGPIPE via awk:
{ inotifywait -m -e OPEN ./ & echo $! ; } |
awk 'NR==1 {pid=$0}
/OPEN/ {print $0,pid
system("kill -13 "pid)
exit 122 }' ||
echo $?
| awk behaving different with tail vs inotifywait |
1,312,853,834,000 |
I would like to test the software in development by running a particular script, say script file.ext. But I was told that I am not allowed to run the script if a config file, say file.conf, contains a particular string (not a comment with a leading #) which states the system is in production mode.
So in bash, how can I do something like
If file1.conf contains the string in production and that string is not of the form something ... # something in production something then execute script file1.ext.
Or, can I somehow get an alert every time the designated file contains the string in production, not in a comment?
I guess I need somehow use the command inotifywait. It would be also nice to have a script that not only checks the file file1.conf but alerts every time that some file with extension .conf shows I'm in a production mode.
|
One way to achieve this is with grep, e.g.,
grep "^[^#]* in production " file.conf || script file.ext
The part behind the || is only executed if the part before it does not exit with status 0, and grep has exit status 0 only if it finds its pattern (the first argument).
I chose a simple pattern "^[^#]* in production ". The character ^ stands for the beginning of the line, [^#] stands for any char except # and the quantifier * says there can be any number of them. The rest of the pattern is just the text you would expect to mark production mode.
In order to avoid all the output grep is producing you can send it to /dev/null:
grep "^[^#]* in production " file.conf > /dev/null || script file.ext
If you want to print a message, if you are in production mode and the script is not run you could do
grep "^[^#]* in production " file.conf > /dev/null && \
echo "We are in production mode. The script was not run." || \
script file.ext
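The comment handling of that pattern can be sanity-checked with a throwaway file; grep -q suppresses output and only sets the exit status (the trailing space of the original pattern is dropped here so a match at end-of-line also counts):

```shell
conf=$(mktemp)

printf 'mode = in production\n' > "$conf"
grep -q "^[^#]* in production" "$conf" && echo "in production - do not run"

printf '# note: in production soon\n' > "$conf"
grep -q "^[^#]* in production" "$conf" || echo "comment only - safe to run"

rm -f "$conf"
```

The second grep finds no match because ^[^#]* cannot get past the leading # of the comment line.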
| Alert or execute script, only if config file shows not in production mode |
1,312,853,834,000 |
We have this log in our syslog:
udevd [ PID ]: inotify_add_watch(6, /dev/sda, 10) failed: operation not permitted
Why do we get this error and how do we solve it?
Our environment: Ubuntu 12.04; LXC; we run inside a container; and I am not sure about SELinux (I don't have access) but it's not enabled.
|
Ubuntu12.04: How to disable a daemon process at startup
Debian Bug report logs - #620921
udev: Please detect lxc, and don't try to start there
At the first glance udev events are supported in the container. But for
the sake of optimization, I recommend to not use it as it will trigger
the events in all the containers.
In case the above is not clear, I suggest killing it with fire. Usually it would not be desirable for udev inside a container to be even thinking about touching sda etc. Usually there would not be anything that you would want udev to do.
Reading the following, you might guess my answer is toeing the systemd party line :-). Apparently LXC had some different opinions, at least at one time: https://stgraber.org/2013/12/21/lxc-1-0-your-second-container/#comments
I believe the commenter "wwwwww" is a pseudonym (!) of the systemd lead Lennart Poettering. Either that, or someone did a great imitation matching his writing style and his position on this issue :-).
Perhaps someone more familiar with LXC would know exactly which combinations of udev and LXC setups that LXC expects to do anything useful. And what conditions might generate a warning message like this. The above link offered a date range for Ubuntu, which claims the original Ubuntu 12.04 release should be fine. However it does not say whether or not it emits any spurious warnings. (It wouldn't be the first piece of software to do so:-))
Whatever the merits, if you don't need to access any physical device from inside LXC, disabling udev would seem a simple way to avoid seeing any udev warnings. "While we wait for people to figure out exactly how a device namespace should work". The LXC developer mentions "this is far from ideal" :-). This was in 2013, and there is still no device namespace (as of Linux v4.20).
The next relevant comment seems to be "Our default configuration will let udev create device nodes but only access those that are allowed in the configuration." In that sense your LXC was working as LXC want it to: it allowed you to create a device node /dev/sda, but did not allow you to access it.
I do not know why your udev creates /dev/sda, (presumably) does not complain about being unable to run blkid on it, but does complain about not being to watch it.
The kernel (as of v4.20) does not provide isolation for devices. There is no namespace for devices. Compared e.g. to network namespaces, which allow isolating network interfaces. For the list of namespaces which can be isolated, look in man 7 namespaces or man 2 clone.
If you're curious what a principled container runtime can do, the answer is that it can disable access to all devices (except a few virtual ones like /dev/null, /dev/pts/*, etc). I am more familiar with systemd-nspawn (and its documentation). At least with cgroups v1, nspawn uses the device control group to disable access to devices. cgroups v2 eventually gained an equivalent feature. In the meantime, nspawn prevents you from creating a device node by using seccomp(), and that works pretty well. Of course this means you must trust the container filesystem image not to contain any of the "wrong" device nodes, so the cgroup solution is better.
Current systemd-udevd.service detects that it should not run if /sys has been mounted read-only.
| Source of this error: udevd [ PID ]: inotify_add_watch(6, /dev/sda, 10) failed: operation not permitted |
1,312,853,834,000 |
I recently installed Dropbox on my computer running Debian 9.3. But it will not sync. When I mouse over the icon in the notification area of my toolbar, a message says...
Can't monitor Dropbox folder (Click to fix)
Can't access Dropbox folder
When I click the icon, the menu comes up, and I click "Can't monitor Dropbox folder (Click to fix)."
But when I do, a window pops up that says "Type your Linux password to let Dropbox make changes." It also asks if I'd like to save this password to my keyring. When I type my regular login password, the window says it is the incorrect password.
My designated Dropbox folder is on an HDD while Debian 9 is installed on an SSD.
How do I allow Dropbox program to access Dropbox folder?
|
Based on this AU Q&A titled: How do I fix a “Can't access Dropbox folder” error? it sounds like you could try these to see if they resolve your issue:
$ sudo sysctl fs.inotify.max_user_instances=256
$ sudo sysctl fs.inotify.max_user_watches=1048576
If you find this resolves it you can make these permanent. Add the following to this file:
$ cat /etc/sysctl.d/99-dropbox.conf
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 256
Then run this to pick up the changes (plain sysctl -p only reads /etc/sysctl.conf; --system also reads /etc/sysctl.d/):
$ sudo sysctl --system
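To see which limits are currently in effect, the same tunables can be read back directly from /proc (equivalently, sysctl -n fs.inotify.max_user_watches):

```shell
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```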
| Can't monitor Dropbox folder |
1,312,853,834,000 |
I have added a systemd service to monitor a path, but it is not working. I touched a .txt file under /tmp/test/, but it is not triggering my service. I can't see "/tmp/testlog.txt" being generated. Is there anything wrong in my service?
myservice.path
[Unit]
Description=Path Exists
[Path]
PathExistsGlob=/tmp/test/*.txt
PathChanged=/tmp/test/
[Install]
WantedBy=multi-user.target
myservice.service
[Unit]
Description=Test
[Service]
ExecStartPre=/bin/sh -c 'mkdir /tmp/test && sleep 60'
ExecStart=/bin/sh -c 'echo "Test Success" >> /tmp/testlog.txt & '
[Install]
WantedBy=multi-user.target
tmp dir:
# ls /tmp/test/
ab.txt
#
What could be the reason for the failure?
|
That was a timing issue. I added a dependency and made this service start as the very last one. That solved the issue.
| systemd-path service not working |
1,312,853,834,000 |
I am having an issue with exiting a bash script out of a while loop:
while read -r dir event name; do
case $event in
OPEN)
chown $VHOST:$VHOST $WEBPATH/$name;
echo "The file \"$name\" was created (not necessarily writable)";
;;
WRITE)
echo "The file \"$name\" was written to";
;;
DELETE)
echo "The file \"$name\" was deleted";
exit 0;
;;
esac
done < <(/usr/bin/inotifywait -m $WEBPATH)
The loop correctly listens for file changes in the given Directory, so far so good.
This also shows on the console output:
root #: bash /var/scriptusr/letsencrypt/dir-change
Setting up watches.
Watches established.
The file "tes" was created (not necessarily writable)
The file "tes" was deleted
root #:
Apparently it seems the script exited nicely but when you search for it in the process tree it is still there:
root #: ps aux | grep dir-
root 5549 0.0 0.0 14700 1716 pts/0 S 14:46 0:00 bash /var/scriptusr/letsencrypt/dir-change
root 5558 0.0 0.0 14184 2184 pts/1 S+ 14:46 0:00 grep dir-
root #:
So my question is how to really exit the script?
|
I came up with a solution after searching for a bit.
The problem originates from inotifywait being left running in a subshell, as @mikeserv stated in the comments above.
So I had to write a cleanup method for it. My script:
#!/bin/bash
#
#
# script for immediately changing the owner and group of the Let's Encrypt challenge file in the given webroot
Pidfile="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"/run-file-chowner.pid
echo $$ > $Pidfile
function terminate_process () {
trap - SIGHUP SIGINT SIGTERM SIGQUIT
printf "\nTerminating process...\n"
rm "$Pidfile" > /dev/null 2>&1;
kill -- -$$
exit $1
}
function main () {
trap terminate_process SIGHUP SIGINT SIGTERM SIGQUIT
local OPTIND D opt
while getopts D: opt;
do
case $opt in
D)
Domain=$OPTARG;;
esac
done
shift $((OPTIND-1))
case $Domain in
'domain-b.com')
VHost="doma-www"
;;
'domain-a.com')
VHost="domb-www"
;;
*)
printf "\nScript usage : [ $0 -D \"example.com\" ]\n\n"
exit 1;
;;
esac
WebPath=/var/www/$Domain/$VHost/htdocs/public/.well-known/acme-challenge
inotifywait -m $WebPath | while read -r dir event name; do
case $event in
CREATE)
chown $VHost:$VHost $WebPath/$name
printf "\nOwner and group of \"$name\" were changed to $VHost...\n"
;;
DELETE)
printf "\nThe file \"$name\" was deleted\n"
terminate_process 0
;;
*)
printf "\nEvent $event was triggered.\n"
;;
esac
done
}
main "$@"
In this is the output, when a file in the watched folder is created and deleted:
root #: bash file-chowner -D dom-a.com
Setting up watches.
Watches established.
Owner and group of "test" were changed to doma-www...
Event OPEN was triggered.
Event ATTRIB was triggered.
Event CLOSE_WRITE,CLOSE was triggered.
Event ATTRIB was triggered.
The file "test" was deleted
Terminating process...
Terminated
Terminating process...
Terminated
| Breaking out of while loop with a switch case inside |
1,312,853,834,000 |
In the /etc/incron.allow I added both:
root
USER_1
Then I edited the incrontab so it looks like this:
/var/www/laravel/public/js/main.js IN_MODIFY yui-compressor -o /var/www/laravel/public/js/main.min.js /var/www/laravel/public/js/main.js
I think the issue may have something to do with permissions or perhaps groups but I'm not sure as I'm not getting anything when I check with tail -f /var/log/syslog
The /var/www/laravel/public folder is owned by USER_1 in group www-data. Incrond Inotify is done by the root user. When the main.js file is updated shouldn't Incrond Inotify run regardless of the current user since it's been set by root? Why isn't it working and how can I make it work with either root or USER_1?
|
This is probably an issue with the PATH environment in incron. Try to add the full path to your program like this (I am assuming yui-compressor resides in /usr/bin/)
/var/www/laravel/public/js/main.js IN_MODIFY /usr/bin/yui-compressor -o /var/www/laravel/public/js/main.min.js /var/www/laravel/public/js/main.js
to find out which is the full path to yui-compressor run the following in a terminal:
which yui-compressor
If you have problems with corrupted output files, consider using IN_CLOSE_WRITE instead of IN_MODIFY because that fires only once the file is closed and not on every write operation. Depending on how your editor handles edits, that may or may not be closer to what you want.
Edit: This should work fine under your USER_1 account. Just run incrontab -e in a terminal when logged in as USER_1 and add the line I gave above.
| Using Incrond Inotify but having issues with user groups/permissions |
1,377,610,923,000 |
I need to monitor if, for example, file /tmp/somefile123 was created after some events. I tried to use inotifywait but here is a problem:
# inotifywait -q -e create /tmp/somefile?*
Couldn't watch /tmp/somefile?*: No such file or directory
because no such file exists yet; I want to know when it will be there!
How can I resolve this issue?
UPD: Maybe if I explain what I want to reach it will be more clear.
I need to write a shell script (sh) with minimal CPU consumption, something like this:
if [ $(inotifywait -e create $SPECIFIC_FILE) ]; then
blah-blah-blah some actions
fi
# And then similarly monitor if this file was deleted and then do another actions
I expect that the script will stop execution at inotifywait -e create $SPECIFIC_FILE until $SPECIFIC_FILE is created, which would be better than
while [ ! -f $SPECIFIC_FILE ]; do
blah-blah-blah some actions
sleep 1
done
|
By having inotifywait check on the parent directory:
/tmp$ inotifywait -e create -d -o /home/me/zz /tmp
/tmp$ touch z1
/tmp$ cat ~/zz
/tmp/ CREATE z1
You can also specify the time format for the event with the --timefmt option. Also, if you want to act immediately by executing some script file, for instance, you may use tail -f in the script file to continuously monitor the log file, here /home/me/zz, or you can create a named pipe file and have inotifywait write to it, while your script reads from it.
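For the original goal of blocking until one specific file appears, the directory events can be filtered by name. Below is a sketch of that filtering; a canned printf stream stands in for the real inotifywait output so the snippet runs without inotify (the real command is shown in the comment):

```shell
SPECIFIC_FILE=/tmp/somefile123
base=${SPECIFIC_FILE##*/}

# real thing: inotifywait -m -q -e create --format '%f' "$(dirname "$SPECIFIC_FILE")" | while ...
printf '%s\n' other.txt somefile123 | while read -r created; do
    [ "$created" = "$base" ] || continue
    echo "$SPECIFIC_FILE was created"
    break
done
```

Note that with -m the inotifywait producer may linger after the break until its next write hits the broken pipe.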
| How to monitor whether a file was created? |
1,377,610,923,000 |
I upload files for deployment into a remote directory. That remote server has a script that watches the directory for new files:
inotifywait --monitor --event create --format '%f' --quiet /foo
When a new file is detected, the deployment process starts.
The problem is the upload takes time - and the file is detected as soon as it starts writing. So the deployment fails as it attempts to use a partial file.
Is there a way to debounce the inotifywait so it reports the new file only after it is fully created?
|
As you have experienced, watching for create events isn't very useful: these events trigger when the file is created, but that doesn't tell you if any data has been written to it, nor do you know when something has finished writing data to it.
You will generally want to monitor the close or close_write events. From the man page:
EVENTS
close_write
A watched file or a file within a watched directory was closed,
after being opened in writeable mode. This does not necessarily
imply the file was written to.
close_nowrite
A watched file or a file within a watched directory was closed,
after being opened in read-only mode.
close
A watched file or a file within a watched directory was closed,
regardless of how it was opened. Note that this is actually
implemented simply by listening for both close_write and
close_nowrite, hence all close events received will be output as
one of these, not CLOSE.
| Debounce inotifywait for large files |
1,377,610,923,000 |
I'm trying to get notification of the state of a problematic sshfs mount
I have tried two bash scripts
while inotifywait -e modify /proc/mounts; do
echo "modified"
done
and
inotifywait -m /proc/mounts |
while read event; do
echo $event
done
To test, I'm running the following sequence, but neither of these scripts is responding.
stephen@asus:~/log$ sudo umount /mnt/lexar
stephen@asus:~/log$ sshfs michigan:/home/stephen/ /mnt/lexar
stephen@asus:~/log$ sudo umount /mnt/lexar
stephen@asus:~/log$ sshfs michigan:/home/stephen/ /mnt/lexar
stephen@asus:~/log$ grep lexar /proc/mounts
michigan:/home/stephen/ /mnt/lexar fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
stephen@asus:~/log$ sudo umount /mnt/lexar
stephen@asus:~/log$ grep lexar /proc/mounts
|
inotify doesn't work with the proc filesystem. Though they may look like regular files, the files in the proc filesystem don't contain any static data -- the kernel makes up the data on the fly when you read them. For instance, /proc/mounts => /proc/self/mounts has size 0, but when you read it, it magically happens to contain something.
But /proc/self/mounts and /proc/self/mountinfo are pollable -- you can select(2) or poll(2) on it for an exceptional condition. According to the proc(5) manpage:
/proc/[pid]/mounts (since Linux 2.4.19)
This file lists all the filesystems currently mounted in the process's mount namespace (see mount_namespaces(7)). The format of this file is documented in fstab(5).
Since kernel version 2.6.15, this file is pollable: after
opening the file for reading, a change in this file (i.e., a
filesystem mount or unmount) causes select(2) to mark the file descriptor as having an exceptional condition, and poll(2) and epoll_wait(2) mark the file as having a priority event (POLLPRI).
[the same holds true for /proc/[pid]/mountinfo]
I don't think there's any way to do that from the shell. You can do it from perl, though:
#! /usr/bin/perl
use strict;
my $mf = "/proc/self/mountinfo";
open my $mh, "<$mf" or die "open <$mf: $!";
vec(my $ebits, $mh->fileno, 1) = 1;
while(1){
select(undef, undef, my $e = $ebits, undef) == -1 and die "select: $!";
print "some mount or umount happened\n";
}
A more useful example, which also shows what changed in /proc/self/mountinfo:
#! /usr/bin/perl
use strict;
my $mf = "/proc/self/mountinfo";
open my $mh, "<$mf" or die "open <$mf: $!";
vec(my $ebits, $mh->fileno, 1) = 1;
sub read_mounts {
seek $mh, 0, 0 or die "seek: $!";
my ($h, $i); $$h{$_} = ++$i while <$mh>; return $h;
}
for(my ($old, $new) = read_mounts;; $old = $new) {
select undef, undef, my $e = $ebits, undef or die "select: $!";
$new = read_mounts;
for(keys %$new){
if(exists $$old{$_}){ delete $$old{$_} }
else{ print '+ ', $_ }
}
print '- ', $_ for keys %$old;
}
| inotifywait not responding to change in /proc/mounts |
1,377,610,923,000 |
Here is the shell script I've got so far. I want it to check recursively, hence the following options:
-r for recursive
-m for monitoring
-e for event notification and tracking
For a reason unknown to me, this approach isn't working. I'm creating/modifying/deleting files using rm/nano/touch etc., and in the terminal where I ran the script I get a message saying that a particular operation has been used, where it was used and the file it was used on, e.g. /home/stephen/ CREATE test where test is the file I've created using touch.
#!/bin/sh
while inotifywait -mre create,delete,modify /home;do
echo "test"
done
|
I believe inotifywait -m does not exit, causing the while-loop not to run as expected.
while inotifywait -r /home -e create,delete,modify; do { echo "test"; }; done however should work as you expect it.
| How would I use inotifywait to execute a command if a file in a directory is created, deleted or modified? |
1,377,610,923,000 |
man inotifywait:
delete_self
A watched file or directory was deleted. After this event the file or directory is no longer being watched. Note that this event can
occur even if it is not explicitly being listened for.
unmount
The filesystem on which a watched file or directory resides was unmounted. After this event the file or directory is no longer being
watched. Note that this event can occur even if it is not explicitly
being listened to.
How should I understand "this event can occur even if it is not explicitly being listened for" on that manual page?
https://manpages.debian.org/stretch/inotify-tools/inotifywait.1.en.html
|
It means you can get these events even if you used the -e option and didn't specify them. For instance, if you use
inotifywait -e modify filename
and the file is deleted, you'll get a delete_self event, even though you only asked for modify events.
This means you need to check the event type in the output, even if you only requested a specific event.
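In a loop this means the events field must be inspected rather than assumed. Here is a sketch of such a dispatch; two canned lines stand in for inotifywait -m -e modify --format '%e' filename output so the snippet runs without inotify:

```shell
# stand-in stream for: inotifywait -m -e modify --format '%e' filename
printf '%s\n' MODIFY DELETE_SELF | while read -r events; do
    case "$events" in
        *DELETE_SELF*) echo "file is gone, watch ended"; break ;;
        *MODIFY*)      echo "file modified" ;;
        *)             echo "other event: $events" ;;
    esac
done
```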
| What does "not explicitly being listened for" mean in the inotifywait manual page? |
1,377,610,923,000 |
I have a problem where inotify, no matter what I do, doesn't detect changes in one specific folder. It detects changes in other folders that otherwise are no different. What could be causing this?
inotifywait 3.14
Linux titan 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux
inotify works as expected here:
In one terminal:
ben@titan:~$ mkdir -p notifytest/example
ben@titan:~$ cd notifytest
ben@titan:~/notifytest$ inotifywait -rme attrib,modify,move,create,delete . --exclude '(log|[a-z]+.sqlite)'
In another terminal:
ben@titan:~$ cd notifytest
ben@titan:~/notifytest$ touch test.txt
ben@titan:~/notifytest$ touch example/test.txt
ben@titan:~/notifytest$ rm example/test.txt
ben@titan:~/notifytest$ rm test.txt
Output:
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
./ CREATE test.txt
./ ATTRIB test.txt
./example/ CREATE test.txt
./example/ ATTRIB test.txt
./example/ DELETE test.txt
./ DELETE test.txt
inotify doesn't work as expected here:
I have an existing folder called blog that is ignored :(
I create a new folder called example that is correctly watched
In one terminal:
ben@titan:~$ cd some-path
ben@titan:~/some-path$ ls
drwxr-xr-x 3 ben ben 4096 Aug 16 14:23 blog
-rw-r--r-- 1 ben ben 17408 Aug 15 13:58 blog.sqlite
-rw-r--r-- 1 ben ben 325 Aug 15 13:01 config.py
-rw-r--r-- 1 www-run www-run 91800 Aug 16 14:23 log
drwxr-xr-x 2 ben ben 4096 Aug 15 14:14 public_html
-rw-r--r-- 1 ben ben 1999 Aug 15 16:21 schema.sql
-rwxr-xr-x 1 ben ben 6019 Aug 16 14:01 start.py
ben@titan:~/some-path$ mkdir example
ben@titan:~/some-path$ ls
drwxr-xr-x 3 ben ben 4096 Aug 16 14:23 blog
-rw-r--r-- 1 ben ben 17408 Aug 15 13:58 blog.sqlite
-rw-r--r-- 1 ben ben 325 Aug 15 13:01 config.py
drwxr-xr-x 2 ben ben 4096 Aug 16 14:28 example
-rw-r--r-- 1 www-run www-run 91800 Aug 16 14:23 log
drwxr-xr-x 2 ben ben 4096 Aug 15 14:14 public_html
-rw-r--r-- 1 ben ben 1999 Aug 15 16:21 schema.sql
-rwxr-xr-x 1 ben ben 6019 Aug 16 14:01 start.py
ben@titan:~/some-path$ file example
example: directory
ben@titan:~/some-path$ file blog
blog: directory
ben@titan:~/some-path$ inotifywait -rme attrib,modify,move,create,delete . --exclude '(log|[a-z]+.sqlite)'
In another terminal:
ben@titan:~$ cd some-path
ben@titan:~/some-path$ touch test.txt
ben@titan:~/some-path$ touch blog/test.txt
ben@titan:~/some-path$ touch example/test.txt
ben@titan:~/some-path$ rm test.txt
ben@titan:~/some-path$ rm blog/test.txt
ben@titan:~/some-path$ rm example/test.txt
Output:
inotifywait -rme attrib,modify,move,create,delete . --exclude '(log|[a-z]+.sqlite)'
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
./ CREATE test.txt
./ ATTRIB test.txt
./example/ CREATE test.txt
./example/ ATTRIB test.txt
./ DELETE test.txt
./example/ DELETE test.txt
Expected Output:
inotifywait -rme attrib,modify,move,create,delete . --exclude '(log|[a-z]+.sqlite)'
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
./ CREATE test.txt
./ ATTRIB test.txt
./example/ CREATE test.txt
./example/ ATTRIB test.txt
./blog/ CREATE test.txt
./blog/ ATTRIB test.txt
./ DELETE test.txt
./example/ DELETE test.txt
./blog/ DELETE test.txt
|
Your --exclude pattern (log) also matches the "log" inside "blog".
Use ^log$ instead.
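The substring match is easy to reproduce with grep -E, which uses the same POSIX extended regular expressions as --exclude (this only illustrates the regexp behaviour, not inotifywait's path handling):

```shell
# unanchored: "log" is found inside "./blog/"
echo "./blog/" | grep -qE '(log|[a-z]+.sqlite)' && echo "excluded"
# anchored: "blog" no longer matches
echo "./blog/" | grep -qE '(^log$|[a-z]+\.sqlite)' || echo "watched"
```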
| inotify not working on one specific folder |
1,377,610,923,000 |
On my Debian Buster system I want to run a bash script when a certain file is modified. I have created and enabled a service in /etc/systemd/system which looks like this
[Service]
ExecStart=/usr/local/bin/watch_file.sh
Restart=always
RestartSec=1
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=backup-channels
User=root
Group=root
[Install]
WantedBy=multi-user.target
and the file watch_file.sh looks like this
#!/bin/bash
while inotifywait -e close_write "path_to_watched_file"
do
sh /usr/local/bin/action_to_perform.sh
done
All files are owned by the root user. Things are ok as long as the root user modifies the file. However, I would like to run the script as well when a non-root user modifies the file. Currently, a modification by a user different from root does not trigger the script action_to_perform.sh.
|
inotifywait doesn't care about effective UIDs/GIDs - it works on a different level.
The while loop you have here will terminate whenever inotifywait exits with a non-zero return status - perhaps that's your issue:
EXIT STATUS
1 An error occurred in execution of the program, or an event occurred which was not being listened for. The latter generally occurs if something happens which forcibly removes the inotify watch, such as a watched file being deleted or the filesystem containing a watched file being unmounted.
Please try changing it to
#! /bin/bash
while inotifywait -e close_write "path_to_watched_file"; true; do
sh /usr/local/bin/action_to_perform.sh
done
It will burn CPU cycles if the file is not present; you can add sleep 1 after the action_to_perform line.
| inotify only acts upon modification by root |
1,377,610,923,000 |
I have a folder, inside of which there are a lot of files. I was looking for a way to check whether any file inside that folder has been opened or not. If it is opened, I need to get notified. I know this can be done using inotifywait but I have not been able to do so.
Here is my script
MONITORDIR="/home/aniketshivamtiwari/Downloads/Projects"
inotifywait -m -r -e create --format '%w%f' "${MONITORDIR}" | while read NEWFILE
do
echo "File ${NEWFILE} has been opened"
done
|
As suggested by Rastapopoulos in the comments, here is the solution.
First install sudo apt-get install inotify-tools
MONITORDIR="path/to/the/folder"
inotifywait -m -q -e open --format '%w%f' ${MONITORDIR}/* | while read NEWFILE
do
echo "File ${NEWFILE} has been opened"
done
| Check whether a file is opened or not |
1,377,610,923,000 |
I'd like to monitor a file with inotify, and trigger some code when someone changes the content (IN_MODIFY or IN_CLOSE_WRITE), but I'm running into problems where inotify stops returning events when users edit the file with their favorite tool. The file is meant to be simple (single line, no spaces, max 20 characters). I'd rather not restrict their usage, but I'm not sure how to handle different situations.
I'm using inotify and these are the events that I receive when various applications edit the file:
Action -> inotify events
touch file -> IN_OPEN
echo "data" > file -> IN_MODIFY, IN_OPEN, IN_ACCESS, then IN_CLOSE_NOWRITE
nano file (on open) -> IN_OPEN
nano file (on ^O) -> IN_MODIFY, IN_CLOSE_WRITE, IN_OPEN, IN_ACCESS
vim file (on open) -> IN_OPEN, IN_CLOSE_NOWRITE
vim file (on :w) -> IN_MOVE_SELF, IN_ATTRIB, then events stop coming from this file
gedit file (on open) -> IN_OPEN, IN_CLOSE_NOWRITE, IN_ACCESS
gedit file (on save) -> IN_OPEN, IN_CLOSE_WRITE, IN_ATTRIB, then events stop coming from this file
mv newfile file -> IN_ATTRIB, then events stop coming from this file
At one point I thought I saw gedit also trigger IN_DELETE_SELF before going silent.
In the case where a user uses vim and gedit, I stop getting inotify events after the user has finished the edits. How should I deal with this?
The only thing I see in common is the IN_ATTRIB event. I suspect that when I receive the IN_ATTRIB event, I should inotify_rm_watch() that wd, and then re-create a new inotify_add_watch() based on the same path. But is that the correct approach?
Another option could be to watch the parent directory. The affected file name is included in the inotify_event::name, and so I could filter on the file of interest, and trigger off of any IN_MODIFY or IN_CLOSE_WRITE where the name matches my file of interest.
|
As ikkachu mentions, some editors create a new file, then replace the original, changing the inode. That means any watches on the original watch descriptor will expire.
The answer is to look at the parent directory, and check for changes on any file with the target name. Something like this:
namespace fs = std::filesystem;
fs::path path = "./file1";
assert( !fs::is_directory(path) ); // is_directory is a free function, not a path member
int fd = inotify_init();
int wd = inotify_add_watch(
fd,
path.parent_path().c_str(),
IN_MODIFY | IN_CREATE | IN_CLOSE_WRITE
);
...
// inotify_event ends in a flexible-array name field, so read into a buffer
alignas(inotify_event) char buf[4096];
ssize_t len = read(fd, buf, sizeof buf);
auto *event = reinterpret_cast<inotify_event *>(buf);
if (len > 0 && wd == event->wd && path.filename() == event->name) {
emit_file_changed();
}
These events (IN_MODIFY|IN_CREATE|IN_CLOSE_WRITE) capture the techniques I tried above (touch, echo "" >, vim, nano, gedit). I bet I could also capture changed symbolic links with these.
| How to reliably maintain watches on edited files with inotify? |
1,377,610,923,000 |
I want to capture inotify CLOSE_WRITE events in a directory and write CSV info to a history file.
My attempt :
#!/bin/bash
dir=/home/stephen/temp
host=$(hostname)
inotifywait -e close_write -m --timefmt '%Y-%m-%d,%H:%M' --format '%T,$host,%e,%w,%f' $dir | while read event;
do
echo $event
done
This produces
2022-06-14,16:58, `hostname`, CLOSE_WRITE,CLOSE, /home/stephen/temp/, testfile.txt
I've tried $host and $(hostname) with the same effect. The event's format definition does not accept externally defined variables.
I could wrap it all up in a python script but I'd rather find a shell native solution.
|
You can insert any string (even multi-line) stored in a variable or use command substitution as long as you keep in mind the rules that apply to expansion when quoting. Anything inside single quotes will be printed literally. Break single quotes and use double-quotes for variables or command substitution that you want expanded. See
What is the difference between the "...", '...', $'...', and $"..." quotes in the shell?
e.g. in your particular case using a format like
--format '%T, $host: '"${host}"',%e,%w,%f'
or
--format '%T, $host: '"$(hostnamectl hostname)"',%e,%w,%f'
prints
2022-12-20,06:21, $host: tuxedo,CLOSE_WRITE,CLOSE,./,test
so the first $host is printed literally while the part between double-quotes is expanded to the host name.
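Since this is plain shell quoting, the rule can be checked without running inotifywait at all (uname -n is used here as a portable stand-in for hostname):

```shell
host=$(uname -n)
# single quotes keep $host literal; the double-quoted segment expands
printf '%s\n' '%T, $host: '"${host}"',%e,%w,%f'
```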
| inotifywait : insert arbitrary string to output format |
1,377,610,923,000 |
In one terminal I run the following command, which generates lots of directories in /proc over time:
$ while true; do /bin/echo helloworld | cat -; echo $$; sleep 3s; done
Then after several minutes, I inspect the output of inotifywatch, which contains only a few directories:
$ sudo inotifywatch --recursive /proc -v
Establishing watches...
Setting up watch(es) on /proc
OK, /proc is now being watched.
Total of 5718 watches.
Finished establishing watches, now collecting statistics.
^Ctotal access close_nowrite open filename
1558 540 509 509 /proc/
816 272 272 272 /proc/1529/
168 56 56 56 /proc/437/
105 35 35 35 /proc/3496/
57 19 19 19 /proc/1/
42 14 14 14 /proc/419/
38 22 8 8 /proc/1632/
21 7 7 7 /proc/1120/
12 4 4 4 /proc/sys/kernel/
12 4 4 4 /proc/211/
12 4 4 4 /proc/219/
6 2 2 2 /proc/292/
6 2 2 2 /proc/415/
6 2 2 2 /proc/568/
Why isn't inotifywatch --recursive /proc -v able to see all created directories in /proc?
Is it because that /proc is a pseudo-filesystem and that inotifywatch only works with real filesystems? If so, why is inotifywatch then able to output a few directories (see above)?
I've also tried to execute inotifywatch using sudo, but the results are the same.
OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
|
You cannot watch /proc or any other pseudo-fs with inotify.
From inotify(7):
Inotify reports only events that a user-space program triggers
through the filesystem API. As a result, it does not catch remote
events that occur on network filesystems. (Applications must fall
back to polling the filesystem to catch such events.) Furthermore,
various pseudo-filesystems such as /proc, /sys, and /dev/pts are not
monitorable with inotify.
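As the quoted man page notes, applications must fall back to polling for such filesystems. A minimal sketch of one polling pass, diffing two directory listings with comm (demonstrated on a throwaway temp directory rather than /proc; all names below are placeholders):

```shell
watch=$(mktemp -d)              # stands in for the directory to poll
ls -1A "$watch" | sort > "$watch.before"
touch "$watch/newentry"         # simulate a process creating an entry
ls -1A "$watch" | sort > "$watch.after"
# Lines present only in the second listing are the new entries.
new=$(comm -13 "$watch.before" "$watch.after")
echo "$new"                     # prints: newentry
```

A real watcher would repeat this in a loop with a sleep between passes.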
| Why isn't `inotifywatch --recursive /proc -v` able to see all created directories in `/proc`? |
1,377,610,923,000 |
Some months ago I realized that, while writing a new post on my blog (with Hugo), the feature that reloads the content as files change had stopped working.
I waited in order to see if it was a problem with hugo, but the problem is with my Gentoo. For example, if I rename a file, the file manager does not see it immediately, I have to press F5 to be able to see the renamed file. The same happens if I download a file with the folder in which the file is being downloaded opened in the file manager.
I thought the problem might be that I didn't have inotify-tools installed, but it is installed.
In my kernel configuration I have inotify enabled:
grep inot .config
CONFIG_INOTIFY_USER=y
Any ideas in which package I accidentally could have removed?
|
I found the problem, accidentally!
I am working with ensime, and in its logs I saw this exception:
java.lang.Exception: java.io.IOException: User limit of inotify watches reached
So I just needed to increase the number of files that can be watched, via sysctl:
fs.inotify.max_user_watches=32768
Now everything is working right
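For reference, the current limit can be read from procfs, and a persistent bump is usually done with a sysctl drop-in file (the file name and the value 524288 below are common choices, not requirements):

```shell
# Current per-user inotify watch limit (Linux exposes it in /proc).
cat /proc/sys/fs/inotify/max_user_watches

# To raise it persistently (requires root; run once):
#   echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/99-inotify.conf
#   sudo sysctl --system
```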
| System unable to detect renamed/new files |
1,377,610,923,000 |
I have a directory to monitor. When it is updated, I run a command via incron. When multiple files are copied into this directory, incron executes multiple commands at the same time. Is there a way to ensure that while one incron job is running, a second one does not start? I have followed this tutorial for guidance.
|
No, incron doesn't have a built-in lock feature. If you want to prevent jobs from running at the same time, do it from within the job.
If you want to delay a job until the previous one(s) have finished, make them take a lock. You can use the flock command. There are examples in the man page.
If you want to skip a job if the previous one hasn't finished, you can still use flock, but with a timeout of 0: if you can't obtain the lock, exit. Note that this is prone to a race condition: a new file could be copied just after job #1 has finished enumerating files but before it has released the lock, and job #2 would see that the lock is still held and exit without processing the file either. There's no easy way to solve that race.
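A sketch of the flock approach, as a wrapper the incron table would invoke instead of the job itself (the lock path and the echoed "job" are placeholders):

```shell
#!/bin/sh
lock=/tmp/incron-job.lock       # hypothetical lock file, shared by all jobs

run_locked() {
    # Waits for the lock; replace "flock" with "flock -n" to skip
    # (rather than queue) when another job is already running.
    flock "$lock" "$@"
}

run_locked echo "job ran under lock"
```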
| Allow one entry at a time in incron job? |
1,377,610,923,000 |
First, this is my first time writing a bash script, so I apologize if this is trivial. I am trying to set up a watch so that every time a jpg file is uploaded to a specific folder, it gets converted to webp using cwebp. After googling, it seemed that using inotifywait is the best way (please let me know if that is not correct). Reading the bash manual and the inotifywait page, I managed to write this code:
inotifywait -m /home/ben -e create -e moved_to |
while read path action file; do
# echo "The file '$file' appeared in directory '$path' via '$action'"
if [[ $file = *.jpg ]]
then
cwebp $file -o $file.webp
fi
done
This works when I use, for example, the mv command, but when using the code above I get this error:
Could not read 0 bytes of data from file test.jpg
Error! Could not process file test.jpg
Error! Cannot read input picture file 'test.jpg'
if I run the command cwebp test.jpg -o test.jpg.webp separately, it executes without any errors.
What am I doing wrong? This triggers after the file is created, so why is cwebp getting 0 bytes?
|
The inotifywait script you had was using -e create instead of -e close_write; the difference is that the create event will fire off before data has been written to the file; thus, cwebp had "0 bytes of data from file".
From the inotifywait page you referenced:
create
A file or directory was created within a watched directory.
close_write
A watched file or a file within a watched directory was closed, after
being opened in writeable mode. This does not necessarily imply the
file was written to.
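Applying that to the script in the question, the main change is the event choice. A sketch with the matching logic factored into a function so it can be exercised with canned input (the cwebp call is stubbed with echo here; note it joins $path$file, since the bare $file in the question only works when running from the watched directory):

```shell
convert_if_jpg() {
    # Expects inotifywait's default output: "<path> <EVENT[,EVENT]> <file>"
    while read -r path action file; do
        case $file in
            *.jpg) echo "cwebp $path$file -o $path$file.webp" ;;  # stub
        esac
    done
}

# Real use: inotifywait -m /home/ben -e close_write -e moved_to | convert_if_jpg
# Demonstration with canned input instead of a live watch:
printf '%s\n' "/home/ben/ CLOSE_WRITE,CLOSE pic.jpg" \
              "/home/ben/ CLOSE_WRITE,CLOSE notes.txt" | convert_if_jpg
```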
| convert a jpg file after being uploaded using inotifywait |
1,377,610,923,000 |
There are two RHEL 7.2 Linux servers located in different places. Both have the same directory structure. The requirement is to keep certain directories on both servers in sync, i.e. any modification on server1 should be reflected on server2 and vice versa, but if a file gets deleted locally it should not be deleted from the remote server. If files get modified while the link between the two servers is down, they should be copied as soon as the link is re-established. To implement this, the following script was made:
#!/bin/bash
EVENTS="CREATE,MOVED_TO,MODIFY"
inotifywait -e "$EVENTS" -m -r --format '%w%f' --fromfile list.txt|
while read FILE; do
echo $FILE
returnvalue=1
while [[ $returnvalue -ne 0 ]]
do
rsync -azr $FILE backupserver:/$FILE
returnvalue=$?
echo $returnvalue
if [[ $returnvalue -ne 0 ]]
then
sleep 60
fi
done
done
list.txt contains all the directories to be monitored. This script is running on both servers.
Problem: Whenever a modification is made on server1, it is detected and copied to server2 with return code 0. But server2 detects that as a new modification and tries to copy it back to server1; since that file is present on server1, rsync returns error code 23, so the program gets stuck.
What is the best solution to this problem?
Note: We cannot use --ignore-existing option as same file exists in both places which needs to be updated if content gets changed.
|
The problem was due to temporary file creation, as suggested in other answers: by default rsync writes incoming data to a temporary file inside the destination directory, and that temporary file is what the other server's inotify watch picked up. It got resolved when I added --temp-dir=/tmp as an rsync option, which keeps those temporary files outside the watched tree.
| Sync same directories between two linux servers |
1,377,610,923,000 |
I'm writing a special program for my company.
Using inotifywait from inotify-tools, I'm watching a specific folder for new items; as soon as a new file appears, it is encrypted with gpg and moved to another folder for further treatment.
For a single file it works fine, but I noticed a problem: when a new file arrives while another one is being processed, it is ignored and inotifywait doesn't process it, so it stays stuck in the folder. Is there any way to handle multiple files at the same time?
Here is the code I have so far:
origin=/BRIO/QPC/conclu01/Criptografar
output=/BRIO/QPC/conclu01/GPG
finished=/BRIO/QPC/conclu01/Concluido
while true; do
inotifywait -e create -e moved_to -e close_write -e moved_from $origin --exclude ".*(\.filepart|gpg|sh)" | while read dir event file
do
echo $event
if [ "$event" == 'CLOSE_WRITE,CLOSE' ] || [ "$event" == 'MOVED_TO' ] || [ "$event" == 'CREATE' ]
then
echo "Found the file $origin/$file, starting GPG"
sleep 5
gpg --encrypt --recipient Lucas --output "$output/$file.gpg" "$origin/$file"
echo "The file $file was succesfully encrypted to $output/$file.gpg"
mv -f "$origin/$file" $finished
echo "The file $origin/$file was moved"
fi
done
done
|
Don't run inotifywait repeatedly; run it once in monitor mode and read from its output:
inotifywait -m ... |
while read dir event file ; do
...
done
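A sketch of the restructured loop. The handler is written as a function so it can be shown here fed by printf instead of a live inotifywait (the directory names and the encryption step are stand-ins for the originals):

```shell
handle_events() {
    # Expects one event per line: "<dir> <EVENT[,EVENT]> <file>"
    while read -r dir event file; do
        case $event in
            CLOSE_WRITE*|MOVED_TO|CREATE)
                echo "would encrypt: $dir$file" ;;
        esac
    done
}

# Real use (one inotifywait process handles every file, runs forever):
#   inotifywait -m -e create -e moved_to -e close_write "$origin" | handle_events

# Demonstration with canned input instead of a live watch:
printf '%s\n' "/tmp/ CLOSE_WRITE,CLOSE a.txt" "/tmp/ OPEN b.txt" | handle_events
```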
| Use Inotifywait to handle multiple files at the same time |
1,377,610,923,000 |
I'm trying to use split with inotifywait, to split a file as soon as it is created by the FTP server.
#!/bin/bash
TARGET=/home/test-directory/incoming
SPLITTED=/home/test-directory/incoming/splitted
PROCESSED=/home/test-directory/incoming/processed
LOGFILE=/var/log/inotify-ftp.log
inotifywait -m -e create -e moved_to --format "%f" $TARGET \
| while read FILENAME
do
echo Detected $FILENAME >> $LOGFILE
echo Splitting $FILENAME >> $LOGFILE
split -d -l 1000 "$TARGET/$FILENAME" "$SPLITTED/$FILENAME"
#/usr/bin/split -d -l 1000 /home/test-directory/incoming/test-file.csv /home/test-directory/incoming/splitted/test-file.csv
mv "$TARGET/$FILENAME" "$PROCESSED/$FILENAME"
echo Completed splitting $FILENAME >> $LOGFILE
done
So, the following command works fine when it's executed separately. The script above is supposed to do the same thing, but instead it creates only a first split file with a couple hundred lines.
#/usr/bin/split -d -l 1000 /home/test-directory/incoming/test-file.csv /home/test-directory/incoming/splitted/test-file.csv
Any idea what's going on?
|
That would be because the file is zero-length when it is created. There is a race condition, where split figures out the size of the file, and decides how to split it up, while ftp-server is happily making it bigger all the time.
It would be wise to devise a mechanism where the split waits for the file to arrive completely before starting to read it. Typically, stat the file in a loop until it has not grown in the previous minute.
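A sketch of that stat-in-a-loop idea (the function name and the default 60-second interval are arbitrary choices; `stat -c %s` is the GNU coreutils form):

```shell
# Wait until the file's size has stopped changing between two checks.
wait_until_stable() {
    f=$1 interval=${2:-60}
    prev=-1
    size=$(stat -c %s "$f")     # file size in bytes
    while [ "$size" -ne "$prev" ]; do
        prev=$size
        sleep "$interval"
        size=$(stat -c %s "$f")
    done
}

# In the loop: wait_until_stable "$TARGET/$FILENAME" before calling split.
```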
| Split behaving weird in bash |
1,377,610,923,000 |
I have of bash script with a loop watching a directory recursively:
while true
do
if path=`inotifywait -q -r -e create --format %w%f $watchpath`; then
#modify file
fi
done
If I'm not mistaken this comes with this problem:
If many files are created in that directory, or the machine is busy with other tasks, a file could be created before the loop reaches inotifywait again, which would mean it gets missed.
Is there a way to mitigate that? Perhaps there is a way to continuously "watch" and process a stream/feed of modified files instead?
|
One way is to run inotifywait in monitor mode, e.g:
inotifywait -m -q -r -e create --format '%w%f' "$watchpath" |
while read -r path; do
: # do something with path
done
There will still be a race between processes though; I'm not sure there is a way to avoid race conditions using shell utilities.
Even the man page lists this under caveats.
The inotify API identifies affected files by filename. However, by
the time an application processes an inotify event, the filename may
already have been deleted or renamed.
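That caveat is worth guarding against in the loop body: by the time an event is read, the path may already be gone, so a simple existence check (the names below are placeholders) avoids acting on vanished files:

```shell
process_path() {
    p=$1
    if [ -e "$p" ]; then
        echo "processing: $p"          # the real modification would go here
    else
        echo "skipping vanished: $p"
    fi
}

process_path /no/such/file.txt   # prints: skipping vanished: /no/such/file.txt
```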
| Reliability of inotifywait loop |