1,382,994,068,000
I have a fresh install of Ubuntu 16.04.1 with nginx installed, and when dpkg installed nginx, it registered the boot-time config in two locations. Systemd location: the systemd config, which states to start the nginx daemon on boot (the "multi-user" target) % ls -l /etc/systemd/system/multi-user.target.wants/nginx.service /etc/systemd/system/multi-user.target.wants/nginx.service -> /lib/systemd/system/nginx.service SysV init location: the init config, which states to start the nginx daemon on boot (run level 5) % ls -l /etc/rc5.d/S02nginx lrwxrwxrwx 1 root root 15 Apr 2 23:27 /etc/rc5.d/S02nginx -> ../init.d/nginx If I disable nginx, systemd gives some output indicating that some kind of backward-compatibility actions are occurring: % sudo systemctl disable nginx.service Synchronizing state of nginx.service with SysV init with /lib/systemd/systemd-sysv-install... Executing /lib/systemd/systemd-sysv-install disable nginx insserv: warning: current start runlevel(s) (empty) of script `nginx' overrides LSB defaults (2 3 4 5). insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `nginx' overrides LSB defaults (0 1 6). This removes BOTH of the symlinks above. Why is it set up this way? Why isn't there just one or the other, either the new systemd config or the old SysV init?
As jordanm comments, this is inherited from Debian where different init systems are supported. Not only that, but you can change your init system without reinstalling, and expect your configuration to survive — including which services are enabled or disabled. That’s the reason why the systemd and sysvinit setups are kept in sync. (Note that at least some of the features being used are provided by upstream systemd and aren’t Debian- or Ubuntu-specific.)
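To make the "kept in sync" behavior concrete, here is a toy sketch of what enable/disable amount to at the filesystem level. Everything runs in a throwaway temp directory with stand-in paths; the real systemctl and update-rc.d machinery of course does much more:

```shell
# Toy model of enable/disable keeping the systemd and SysV views consistent.
# All paths are stand-ins inside a temp dir, not the real /etc or /lib.
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system/multi-user.target.wants" \
         "$root/etc/rc5.d" "$root/lib/systemd/system" "$root/etc/init.d"
touch "$root/lib/systemd/system/nginx.service" "$root/etc/init.d/nginx"

enable_svc() {  # "enable": create one symlink per init system
  ln -sf "$root/lib/systemd/system/nginx.service" \
         "$root/etc/systemd/system/multi-user.target.wants/nginx.service"
  ln -sf ../init.d/nginx "$root/etc/rc5.d/S02nginx"
}

disable_svc() { # "disable": remove BOTH symlinks together
  rm -f "$root/etc/systemd/system/multi-user.target.wants/nginx.service" \
        "$root/etc/rc5.d/S02nginx"
}
```

Because disabling removes both links in one step, the two init systems can never disagree about whether the service is enabled.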
Ubuntu 16.04.1: Why are some programs started by both systemd AND initV systems?
The official Gentoo Dockerfile contains this line: RUN sed -e 's/#rc_sys=""/rc_sys="docker"/g' -i /etc/rc.conf As of the present time, Gentoo's default init is OpenRC. If I run the docker image with CMD /sbin/init, issuing OpenRC-type commands gives the response You are attempting to run an openrc service on a system which openrc did not boot. ...and, indeed, strings /sbin/init | grep -i "sysvinit" gives SYSVINIT However, also from the above link, OpenRC is based on sysvinit, so that could be correct. If I run the docker image with CMD /sbin/openrc, the image will not start, claiming dependency errors. If I want to run multiple processes under OpenRC, should I change /etc/rc.conf back to "" (Nothing special)? Is there something special about the "docker" RC system? Does it, perhaps, presume no init system and instead expect only one running process? What does a "docker" RC in /etc/rc.conf mean in terms of what init system is used?
If you look at many of the boot runlevel init scripts, such as /etc/init.d/hostname, you will see a block as follows: depend() { keyword -docker -lxc -prefix -systemd-nspawn } This states that the init script should NOT be used automatically on any of those system types (you can see the manpage openrc-run(8) for the full explanation of keyword). The scripts may still be started manually, but will not be considered during automatic init dependency building. I don't recall offhand the behavior if the script is explicitly added to a runlevel.
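To survey which scripts on a system opt out like this, you can grep their depend() blocks. A sketch, run here against a stand-in script in a temp directory so it works anywhere; on a real OpenRC system you would point it at /etc/init.d/* instead:

```shell
# List init scripts whose depend() block carries the -docker keyword.
tmpdir=$(mktemp -d)
cat > "$tmpdir/hostname" <<'EOF'
depend() {
    keyword -docker -lxc -prefix -systemd-nspawn
}
EOF
# -l prints only matching filenames; "--" stops grep parsing "-docker" as a flag
grep -l -- '-docker' "$tmpdir"/*
```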
What effect does rc_sys="docker" have?
I have a Raspberry Pi, and I would like to run a Node.js script (running a server), and then open Chrome once the server has launched. Currently, I launch my Node.js script via a script in /etc/init.d, and Chromium via a line in /etc/xdg/lxsession/LXDE/autostart. The problem is that Chromium is launched BEFORE my server is up; thus, it displays an error and I have to manually refresh the page to make it work. Do you have any tips on how to handle this situation? pi@legalpi ~ $ uname -a Linux legalpi 4.0.7+ #802 PREEMPT Wed Jul 8 17:35:23 BST 2015 armv6l GNU/Linux
Invoke chromium on a local HTML file that looks like this: <script> function vico_func() { location = "URL_to_your_server"; } setTimeout(vico_func, 3000); </script> setTimeout(some_function, delay) is like the at command — it schedules the function to be called in the future, after a delay, which is expressed in milliseconds.  So the above will cause chromium to go to your server page three seconds after it is started. You can condense this a little, using an anonymous function: <script> setTimeout(function() { location = "URL_to_your_server"; }, 3000); </script>
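An alternative to a fixed three-second delay (my own suggestion, not part of the answer above) is to poll until the server actually responds, and only then start the browser. A small shell sketch; the URL and chromium invocation in the usage comment are placeholders:

```shell
# Retry a command once per second until it succeeds or the attempt budget runs out.
wait_for() {
  tries=$1; shift
  until "$@" 2>/dev/null; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 1
  done
}
# Hypothetical usage on the Pi (placeholder port and browser command):
#   wait_for 30 curl -sf http://localhost:3000/ && chromium http://localhost:3000/
```

Unlike a fixed delay, this starts the browser as soon as the server is ready, and gives up cleanly if the server never comes up.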
Launch node & chrome after system boot
Theoretical question, but for example: is it possible to hibernate on a laptop and boot into that image on a desktop which could have an otherwise identical configuration in terms of distro/config files? The practical application for this would be to transfer all running programs from a laptop to a desktop for greater performance, or vice versa for portability. Is it possible to "pretend" to hibernate the OS and to copy all changed files to another computer? I understand that this might not be practical, but I want to hear your thoughts on whether it is even possible using current technologies.
This is indeed possible through the magic of virtualization. See, for example, https://www.usenix.org/legacy/event/hotos09/tech/full_papers/kozuch/kozuch_html/index.html and https://en.wikipedia.org/wiki/Live_migration, which contains a list of virtual machine managers that support live migration.
Transfer running instance of OS to another machine
I have a live-boot USB with Linux Mint 19.1 Cinnamon. I placed the boot image with Rufus and it has worked without problems on other machines. Now I have built a PC with various parts I've found on my shelf. Specs: Motherboard: Gigabyte GA-970A-DS3P Rev 1.0 (BIOS Version F1) RAM: Kingston HyperX 8G (4G+4G kit) CPU: AMD AM3+ FX-6300 Six-core processor GPU: MSI GTX 1060 Gaming X 6G PSU: Corsair VS550 80 Plus White Storage: Kingston A400 120G SSD When I try to use the Live-USB, I can get to the Grub menu where I select whether to launch Mint normally or in compatibility mode. I tried both ways, but the Linux Mint logo just stays there loading, and when I press arrows or escape to show the background loading messages, it shows the following parts: Failed to mount '/dev/sda': Operation not permitted and The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option and /init line 7 can't open /dev/sdb no medium found Since sda is mentioned alongside the Windows-related message, I believe it is the Kingston SSD and sdb is the USB stick. The Kingston SSD does not contain a Windows installation, but I used it as a secondary SSD in my Windows laptop as an installation drive for games. I did format it before removing it from the laptop. I removed the SSD and tried to boot again (with no other storage media attached, just the USB itself); then it gave me just this message: /init line 7 can't open /dev/sda no medium found Here sda must be the USB stick since the SSD is now removed. I tried a different USB stick with a different distro (Zorin OS), but it did not work; the same error message still appeared. There is no "Secure boot" option in the BIOS and I made sure that the 1st boot-order device is the USB stick.
Any ideas why this happens? Since removing the SSD and switching USB sticks did not help, it's not the storage media. And the CPU/GPU wouldn't/shouldn't affect this case anyway. That leaves the motherboard BIOS and RAM, but the RAM sticks are still quite new (I'll run memtest86 just in case). But I am running out of ideas where to look after these mentioned efforts. UPDATE: I wiped the SSD with the Windows Diskpart utility (CLEAN ALL command, total wipe, all sectors zeroed); now I don't get the NTFS-related error, but I still cannot boot to Mint or other distros. I took pictures of the messages (images not included here). dev sr0 is the CD/DVD drive, but I don't even have one connected, so that message shouldn't be the reason. A CD/DVD drive is not mandatory for booting. I can't make any sense of the rest of the messages. UPDATE: I may have found a potential solution. I managed to boot to the desktop after switching the USB stick from USB 3.0 to USB 2.0; for some reason USB 3.0 is not recognized at boot. Then I had a new problem when the mouse and keyboard did not respond after reaching the desktop. Turns out that some specific Gigabyte motherboards need to have IOMMU (input-output memory management unit) enabled; after enabling the IOMMU, the mouse and keyboard worked fine. So far this issue seems to be solved. I'll try to actually install a distro and boot to it, and update the situation again after that. UPDATE: OK, I managed to install Pop!_OS 19.10 to the PC now, though it first did not want to do it (somehow Rufus does not flash the image properly; I had to use Etcher). SUMMARY: NTFS problem => just wipe the disk medium not found => change from USB 3.0 to USB 2.0 Mouse/keyboard not responding after boot => enable IOMMU (might be a Gigabyte-specific solution)
OK, I managed to install Pop!_OS 19.10 to the PC now, though it first did not want to do it (somehow Rufus does not flash the image properly, had to use Etcher). SUMMARY: NTFS-problem => just wipe the disk Medium not found => change from USB3.0 to USB2.0 Mouse/Keyboard not responding after boot => enable IOMMU (might be Gigabyte-specific solution) Install failing => flash the USB with different tool (from Rufus to Etcher)
Linux Mint Live USB won't boot: /init line 7, can't open sdb, no medium found
1,513,338,265,000
Whenever I boot my gentoo laptop, openrc hangs forever in the "Caching service dependencies..." stage. This causes my computer to be unbootable unless I use a sysrq key to kill it and manually boot the system. Using ps as a diagnostic tool, I found that the grep and cut programs (children of a script gendepends.sh) were hanging, using 0% CPU. Killing these programs allowed the boot process to continue properly, after invoking openrc default. Anyway, I could use many methods to fudge around this problem, but I'd like to know the most likely cause and fix it properly. Here's the relevant output of pstree when I try to resolve dependencies manually: | | `-doas /lib/rc/bin/rc-depend -u | | `-rc-depend -u | | `-gendepends.sh /lib64/rc/sh/gendepends.sh | | `-gendepends.sh /lib64/rc/sh/gendepends.sh | | `-gendepends.sh /lib64/rc/sh/gendepends.sh | | |-cut -d = -f 2 | | |-grep pid | | |-tr -d \\" | | `-tr -d [:space:] In gendepends.sh, these commands are not mentioned anywhere, so I assume they are invoked from another script which was sourced by it. EDIT: I've fixed the problem now. It turned out to be caused by an old init script with unresolvable dependencies which was still lying around in my initscripts directory for some reason. Deleting the script solved the problem. Thanks for the suggestions.
These commands don't hang out of nowhere; they must have been invoked by some init script, but perhaps with wrong parameters or expecting non-existent data. I can imagine that they wait for some input which just isn't provided. In the output of ps -ef you find the ID of the parent process (PPID), which is probably the culprit; you could also try pstree for a better overview. Try to figure out the script line where the command(s) are invoked, and perhaps you can figure out why the process (and, as a result, the complete boot process) hangs. If you're not able to pinpoint the issue, then add the output of ps -ef to your question (you may shorten it to the hanging command and its parents up to PID 0), and the init script if one is involved.
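Walking up the parent chain can be automated with a small helper built on ps alone (my own sketch; demonstrated here on the current shell rather than a hung boot process):

```shell
# Print a process and each of its ancestors up to PID 1, using only ps.
ancestors() {
  pid=$1
  while [ -n "$pid" ] && [ "$pid" -ge 1 ]; do
    ps -o pid=,comm= -p "$pid"   # one line per process: PID and command name
    [ "$pid" -eq 1 ] && break    # reached init, stop
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
  done
}
ancestors $$   # e.g. this shell, then its parents up to PID 1
```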
Gentoo's openrc hangs forever in "caching service dependencies..."
I have a Stretch system on which I would like to replace agetty with ngetty (for various reasons: I have no use for serial lines, and I like the way ngetty can be configured, for example). I know how to do that with runit or sysvinit, but I can't find where the info is with systemd. I can find nothing which seems related in /etc (the inittab file is simply not used for the related lines), but there seem to be related files in /lib/systemd/system/. I must admit I do not feel comfortable hacking things in this folder, so what would be the cleanest way to do that in Debian? Thanks.
Seems like you may be on a virtual environment where getty is useless. You may switch to mingetty (the default at Amazon AWS now), which uses minimal resources and still lets you look at the "Console Logs" (via the Amazon VM GUI ...eeeek). To switch from agetty to ngetty or mingetty (you just need one): # apt install mgetty # apt install mingetty To tell Debian to start using your new getty, update your /sbin/getty symbolic link to (pick one): # cd /sbin # rm getty # ln -s mgetty getty # ln -s mingetty getty BONUS: If in a cloud-based environment, you really don't care about multiple consoles, so you may even reduce the # of consoles to just 1 (for viewing console logs on the Amazon CLI). To do this: Edit /etc/default/console-setup and replace: ACTIVE_CONSOLES=/dev/tty[1-6] with... ACTIVE_CONSOLES=/dev/tty[1-1] Cheers...
how to change the getty binary in Debian Stretch?
So, I have a problem with the partition setup of my laptop. I will try to include as many details as possible in order to make it easier to help. In the past, I had an Ubuntu 15.10 system on my laptop with 2 identically sized drives. These were both identically formatted with 2 partitions each, one for /boot and one for /. Both partitions were raid1'd together with mdadm. On the boot raid I then had a btrfs file system for /boot. On the / raid I had a LUKS volume with an LVM volume inside it. On the LVM I then had a btrfs partition. This setup worked quite well, but I wanted to change it: first, I wanted to have atomic backups, so the boot partition had to go, so that I would be able to snapshot the entire OS at once. Second, I wanted to encrypt /boot, too. Third, I wanted to get rid of LVM. And fourth, I didn't want bit rot, so mdadm needed to go in favor of btrfs-raid. So my idea for a better suited system was as follows: I would have only 1 partition on each drive, each containing a LUKS container, with a btrfs-raid1 inside. I did that, moved the system over to the new partitions, added the cryptodisk stuff to /etc/default/grub, corrected crypttab, corrected fstab, made sure the initramfs was OK, updated grub, installed it to the drives once more, uninstalled mdadm and lvm2 from the system, moved away their config files I knew of (in /etc) and rebooted. I used external hard drives to balance around my btrfs file system during the procedure, and a USB stick with an Ubuntu system of the same version as my system's. When I rebooted, grub came up asking me for the first unlock; I did it, it apparently didn't care about the second LUKS container, and booted through fine anyway (I guess because it only needs to read, where 1 disk is enough). The init began and it asked me to unlock the first container again (as expected). I did it, it continued, and instead of unlocking the second one, it came up with "cryptsetup: lvm is missing".
I double-checked fstab and crypttab, as well as /etc/default/grub, and it all seemed fine. Do you have any hints on why this error could occur? Also, if I use the same IDs to mount and chroot into my system from USB, everything works fine. So the system itself is perfectly fine.
I've answered the question myself now! The problem was that the crypttab entry for the second container was invalid. Even though I double-checked, I missed the error, and update-initramfs didn't complain either. What do I take away from this? Always triple- or quadruple-check such critical things, as it can often save you (and the people trying to help you) a lot of hassle ;).
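For reference, a crypttab entry for a second LUKS container follows the usual four-field layout. A sketch with a hypothetical target name and a placeholder UUID (not the asker's actual values):

```
# /etc/crypttab: <target name>  <source device>  <key file>  <options>
cryptdata2  UUID=<uuid-of-second-luks-container>  none  luks
```

A malformed field anywhere on such a line can silently break unlocking at boot, as happened here.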
Cryptsetup: LVM is missing (on a system without LVM)
If you want to customize the colors in Emacs, specifying them in the .emacs init file, without installing any extra package, and without using a pre-made theme, something like this seems to work: (set-background-color "#003c3c") (set-foreground-color "#ffffff") (set-face-background 'fringe "#253c3c") (set-face-background 'cursor "#ffffff") (set-face-background 'region "#ff0000") (set-face-foreground 'font-lock-comment-face "#ff0000") (set-cursor-color "#00ff00") But for doing this, it would be useful to have a complete list of all the "keys" for these "key-value" pairs. (I don't know if "key" is the right term in Emacs-lingo.. just getting started with Emacs) Is such a complete list available anywhere? Or can I somehow generate one? (I currently use the GNU OSX version of Emacs in its own window, not in a terminal)
If you type M-x customize-face RET and then hit TAB, the completion window will provide a list of all faces, and you could copy the list from the completion window. Or you could hit RET, and then you would be brought into the Emacs face customization interface, where you can change the colors and save them. This does not involve any extra packages or themes, and has been part of Emacs a long time (so you don't need to worry about which version you have).
Emacs, complete list of color "keys"
I'm running Debian Wheezy on an ARM board. Right now I'm working around an issue with my network driver by running an ethtool command that limits the Ethernet interface to 100 megabit. However, the issue with the driver manifests itself as early as DHCP negotiation, so I need to run ethtool before dhclient runs. I've been trying to find a place where ethtool (or mii-tool, either way) can run before dhclient. So far all of the places I've tried (/etc/network/if-pre-up.d and /etc/dhcp/dhclient-enter-hooks.d) won't work because "eth0" isn't present yet. Is there a clean hook in the Debian network or system configuration where I can make changes to the Ethernet state before dhclient runs? Or am I trying to do something impossible here?
It should be possible to bring up the interface as "manual", then apply whatever arbitrary scripts you want to run, including sleeps to slow things down, and then call dhclient at the end. On Ubuntu it would look like this in /etc/network/interfaces: auto eth0 iface eth0 inet manual pre-up /etc/network/pre-up-scripts/eth0.sh (one way to do it) pre-up some-script-or-command (another way) up dhclient eth0 The man page for interfaces is quite helpful.
Executing a command after eth0 is available, but before DHCP client
In old RHEL 5.3, we used to define the number of terminals and their respawn settings in the /etc/inittab file as below: 1:2345:respawn:/sbin/mingetty tty1 2:2345:respawn:/sbin/mingetty tty2 3:2345:respawn:/sbin/mingetty tty3 4:2345:respawn:/sbin/mingetty tty4 ...etc. for 12 terminals. In new RHEL 6.4, we need to define the terminals in the /etc/sysconfig/init file as below: ACTIVE_CONSOLES="/dev/tty[1-9] /dev/tty10 /dev/tty11 /dev/tty12" Now, how can I turn off the respawn property for any terminal... say tty5?
Unfortunately this is more involved than just editing the /etc/inittab now. I found 2 examples that were helpful: Replacing TTY with a script in CentOS 6 RHEL 6 Tech Notes Deployment The gist, modify this file: /etc/init/start-ttys.conf: script . /etc/sysconfig/init for tty in $(echo $ACTIVE_CONSOLES) ; do [ "$RUNLEVEL" = "5" -a "$tty" = "$X_TTY" ] && continue if [ "$tty" == "/dev/tty5" ]; then initctl start no_respawn_tty TTY=$tty continue fi initctl start tty TTY=$tty done end script Then create the corresponding script, /etc/init/no_respawn_tty.conf: # tty - getty # # This service maintains a getty on the specified device. stop on runlevel [S016] instance $TTY exec /sbin/mingetty $TTY usage 'tty TTY=/dev/ttyX - where X is console id' The changes should be seen immediately, I don't think you need to restart anything.
How to disable respawn for terminal?
We have an embedded version of MeeGo Linux running on an x86 chipset that currently uses X11 as the window technology. For various reasons we want to remove X11 from the mix (along with mutter; we are using clutter as a graphics toolkit). However, our main web browser needs to run in an X11 window. So far we have kept using X11 for this reason. But we would like to run clutter just on the OpenGL layers. I could start up X11 by running init 5, but would like to do it in a more gentle fashion. Is there a way of starting and exiting X11 for this? Thanks. Update to answer questions - 4/05/2012: startx does not appear to be on the system...? Not sure how X starts without this. There is no .xsession either. You want to start X, and then exit immediately? Yes and no. I only want to exit X once the browser exits. Do you want the X window to display directly on the screen? In terms of the browser, yes. The browser is the only app that uses X11, and it is a full-screen application (i.e. no 'window' type of scaling, moving, etc.). Do you need a specific web browser, or will any do? Yes, a very specific version. Do you need to run the browser and clutter at the same time? No. Once the browser has been launched, it takes full control until exited.
startx is just a script that wraps xinit and sets up an environment. You can probably copy it from just about any regular Linux install and customize it to your needs. If you're also missing xinit, all it does is run /usr/bin/X :0 and xterm when invoked without options (it's only slightly fancier when wrapped by startx). In other words, the lowest level way to run X is to run /usr/bin/X :0. After that simply run clients and connect them to that display. X automatically exits when the last client disconnects.
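The "start X, run the one client, tear X down when the client exits" cycle can be wrapped in a small script. A sketch (the X server and browser command lines in the comments are placeholders; the function is demonstrated in this document with harmless stand-in commands, since no X server is assumed to be present):

```shell
# Run a background server command only for the lifetime of one client command.
run_with_server() {
  server_cmd=$1 client_cmd=$2
  $server_cmd &                 # e.g. "/usr/bin/X :0"
  server_pid=$!
  $client_cmd                   # e.g. "env DISPLAY=:0 mybrowser"; blocks until exit
  rc=$?
  kill "$server_pid" 2>/dev/null   # client is done, stop the server
  wait "$server_pid" 2>/dev/null
  return "$rc"
}
# Hypothetical real usage (command names are placeholders):
#   run_with_server "/usr/bin/X :0" "env DISPLAY=:0 mybrowser"
```

Since X also exits on its own when its last client disconnects, the explicit kill is mostly a safety net for servers or clients that linger.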
How to enable and disable X11 outside of init
I have a systemd service named webserver.service that is wanted by multi-user.target (enabled by default on the system). I have another service, under another target, named test.service that I want to run after webserver.service. Within test.service I'm adding: After=webserver.service Is that enough, or should I add the following Requires statement as well: Requires=webserver.service Trying with just the After statement, it seems to work as expected; I'm just worried about any potential race during system bootup.
Short answer: yes. Before and After in systemd are interpreted purely for timing; they don't start services. You generally want a Wants or Requires or BindsTo in order to start a service. People rarely have a purely-timing requirement. Long answer: The Wants link from the target unit (that was probably installed by the service's install section) is enough to start the required service. It's good practice though to capture all the relationships in the unit files. If Service A must have Service B, it should Require it even if the service runs on its own, so that a systemctl disable service_b doesn't cause Service A to silently fail later.
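Concretely, the [Unit] section of test.service would then carry both directives (service names as in the question); a minimal sketch:

```ini
[Unit]
# Requires= pulls webserver.service in when this unit starts;
# After= additionally orders this unit to wait until webserver.service is up.
Requires=webserver.service
After=webserver.service
```

Requires without After would still allow the two services to start in parallel, which is exactly the race the asker is worried about; the two directives are complementary, not redundant.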
Systemd: should I use wants/requires for already enabled service listed in After=
Probably this is a really naïve question, but I can't make this work by trying the methods I've found in the existing documentation or in other solutions. I have Alpine Linux installed on a Raspberry Pi, whose SD card is formatted to have the usual boot partition and an ext4 partition to host /; I added a swap partition since my Pi has not much RAM. The issue is that the swap partition does not activate at boot. As far as I'm aware, the conventional method to configure a dedicated swap partition is to declare it in the /etc/fstab file. This does not work, so my other approach was to try making a script in the /etc/init.d folder to force its activation. To my surprise, an init.d file already exists in this build to do exactly that, namely /etc/init.d/swap, which reads as follows. depend() { after clock root before localmount keyword -docker -jail -lxc -openvz -prefix -systemd-nspawn -vserver } start() { ebegin "Activating swap devices" case "$RC_UNAME" in NetBSD|OpenBSD) swapctl -A -t noblk >/dev/null;; *) swapon -a >/dev/null;; esac eend 0 # If swapon has nothing todo it errors, so always return 0 } stop() { ebegin "Deactivating swap devices" case "$RC_UNAME" in NetBSD|OpenBSD) swapctl -U -t noblk >/dev/null;; *) swapoff -a >/dev/null;; esac eend 0 } Somehow, this does not run properly, and neither does /etc/init.d/swap.apk-new, which has the exact same contents as /etc/init.d/swap. I know that /etc/fstab is properly configured, as running swapon -a >/dev/null activates the swap partition the intended way! Yet Alpine refuses to do this at boot despite it being declared already… Am I missing something? I know I can activate the swap manually each time I turn on the device, but I'm sure the system should do it automatically on boot. If it serves as any help, the line I added in /etc/fstab reads as follows. UUID=<my partition UUID number> none swap defaults 0 0 And swapon -a recognizes the partition.
This Alpine build was made using the sys install, and its specs are the following: OS: Alpine Linux v3.18 aarch64 Host: Raspberry Pi 3 Model B Rev 1.2 Kernel: 6.1.37-0-rpi Thanks in advance.
Doing something completely different, I ran into the solution, and it works! I feel extremely silly since it was something as trivial as running this in the terminal: rc-update add swap boot And now the swap activates as intended! I'll just leave this here in case anyone runs into a similar issue, I guess…
Alpine Linux in Raspberry Pi not activating swap partition on boot
In a minimal Busybox-based Linux system, which commands must be invoked as part of the init script to ensure all kernel modules for the current hardware are loaded?
After going down the rabbit hole, assuming a minimal initramfs with some drivers built into the kernel and others present as kernel modules along with all relevant depmod-generated metadata, here is what I found: Drivers built into the kernel are loaded before /init is invoked. Drivers built as modules must be loaded by /init as follows: first /sys and /proc must be mounted, then the existing hardware should be scanned and the relevant kernel modules should be loaded. The hardware scanning and module loading should normally be accomplished by a simple mdev -s invocation. Unfortunately that doesn't work as it should. One must therefore force this process to occur by invoking find /sys/ -name modalias | xargs sort -u | xargs -n 1 modprobe instead. After that all the drivers for the current hardware (and their dependencies) will have been loaded and initialized.
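A subtlety in that pipeline is worth spelling out: the first xargs passes the modalias filenames to sort -u, which sorts and dedupes their contents (the alias strings), and only then is each unique alias fed to modprobe. The sketch below demonstrates the dedup stage on stand-in files in a temp directory, since neither /sys nor modprobe is needed to see the effect:

```shell
# Demonstrate that "find | xargs sort -u" dedupes modalias *contents*,
# not filenames. Stand-in files replace the real /sys tree here.
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
printf 'usb:dev1\n' > "$tmp/a/modalias"
printf 'usb:dev1\n' > "$tmp/b/modalias"   # same alias from a second device
find "$tmp" -name modalias | xargs sort -u
# Prints "usb:dev1" once; on a real system you would append
# "| xargs -n 1 modprobe" to load one module per unique alias.
```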
How to load kernel modules for current hardware in init of minimal Busybox-based system
So, recently I was doing the Linux From Scratch project and had multiple terminals open. While continuing with it, I accidentally typed the line below in another terminal tab (as root), and it messed up my symlinks completely! I can't run any commands in bash. case $(uname -m) in i?86) ln -sfv ld-linux.so.2 $LFS/lib/ld-lsb.so.3 ;; x86_64) ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64 ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64/ld-lsb-x86-64.so.3 ;; esac I'm on Arch Linux; when I restarted the computer, a kernel panic also happened and it says: "switch_root: failed to execute /sbin/init: Too many levels of symbolic links." Any solutions? I hope someone can help.
What to recover The LFS variable was presumably unset when you ran this command. So it modified /lib64/ld-linux-x86-64.so.2 and /lib64/ld-lsb-x86-64.so.3. You've corrupted the dynamic loader. As a consequence, you can't run any dynamically linked program. Pretty much every program is dynamically linked, including bash, init, ln, etc. /lib64/ld-linux-x86-64.so.2 is the important one. It's the dynamic loader used by 64-bit Arch programs. The symbolic link is provided by the glibc package. From a working Linux system, run ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2 Note: the number 2.33 will change over time! Check what file /lib/ld-*.so exists on your system. /lib64/ld-lsb-x86-64.so.3 is for compatibility with programs not built for Arch. It's provided by the ld-lsb package. If this package is installed, restore the link: ln -snf ld-linux-x86-64.so.2 /lib/ld-lsb-x86-64.so.3 If ld-lsb is not installed, remove /lib/ld-lsb-x86-64.so.3. Self-contained recovery with advance planning When dynamic libraries are corrupted, you can still run statically linked executables. If you're running any kind of unstable or rolling-release system, I recommend having a basic set of statically linked utilities. (Not just a shell: a statically linked bash is of no help to create symbolic links, for instance.) Arch Linux doesn't appear to have one. You can copy the executable from Debian's busybox-static or zsh-static: both include a shell as well as built-in core utilities such as cp, ln, etc. With such advance planning, provided you still have a running root shell, you can run busybox-static and ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2 Or run zsh-static and zmodload zsh/files ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2 If you've rebooted and are stuck because /sbin/init won't start, boot into the static shell: follow the steps in Crash during startup on a recent corporate computer under “Useful debugging techniques:”, starting with “press and hold Shift”. 
On the Linux kernel command line, add init=/bin/busybox-static (or whatever the correct path is). Repairing from a recovery system Without advance planning, you'll need to run a working Linux system to repair yours. The Arch wiki suggests booting a monthly Arch image. You can also use SystemRescueCd. Either way, use your written notes, lsblk, fdisk -l, lvs, or whatever helps you figure out what your root partition is, and mount it with mount /dev/… /mnt. Then repair the symbolic link: ln -snf ld-2.33.so /mnt/lib/ld-linux-x86-64.so.2
switch_root: failed to execute /sbin/init: Too many levels of symbolic links
We recently moved our development infrastructure from our own old machines running Ubuntu 12.04 to Google Cloud instances running Ubuntu 18.04. Developers usually start some screens and run django servers within those screens. For example, one may create a screen screen -S webserver_5552 and run its django development application within the screen python manage.py runserver 0.0.0.0:5552 On our previous machines, we could detach the screen (ctrl+a d) and come back later (screen -r xxxx.webserver_5552): the django server process would still be here, up and running, and owned by the bash process of the screen. On the Google Cloud machine, this is, however, working differently and has been driving us crazy. We can still detach the screen, but if we come back later after a while, the django process is no longer owned by the bash process! Instead, it is owned by the init process (ppid set to 1 from ps). Usually, we got a backtrace from the django process before it received some signals and got its ownership changed, but that's all we got, and we can't figure out the root cause and how to prevent it: Traceback (most recent call last): File "manage.py", line 65, in <module> execute_from_command_line(sys.argv) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 359, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 294, in run_from_argv self.execute(*args, **cmd_options) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 58, in execute super(Command, self).execute(*args, **options) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 345, in execute output = self.handle(*args, **options) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 97, in handle self.run(**options) File "/home/testing/env/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 106, in run autoreload.main(self.inner_run, None, options) File "/home/testing/env/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 333, in main reloader(wrapped_main_func, args, kwargs) File "/home/testing/env/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 304, in python_reloader exit_code = restart_with_reloader() File "/home/testing/env/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 290, in restart_with_reloader exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ) File "/home/testing/odesk_android/../env/lib/python2.7/os.py", line 575, in spawnve return _spawnvef(mode, file, args, env, execve) File "/home/testing/odesk_android/../env/lib/python2.7/os.py", line 548, in _spawnvef wpid, sts = waitpid(pid, 0) OSError: [Errno 4] Interrupted system call This has been a particularly annoying behavior and we could not figure out the root cause (does it come from Ubuntu, GCP, some misconfiguration, ...?).
EDIT: I made a test by starting the screen and launching the django server:

testing@whova-qa-01:/home/simon_ninon_whova_com$ ps -ejf | grep 55530
testing  20764 19638 20764 19638  4 00:12 pts/18   00:00:01 ../env/bin/python manage.py runserver 0.0.0.0:55530
testing  20769 20764 20764 19638 12 00:12 pts/18   00:00:04 /home/testing/appium_android/../env/bin/python manage.py runserver 0.0.0.0:55530

As you can see, I started a django process ../env/bin/python manage.py runserver 0.0.0.0:55530 with PID=20764 and PPID=19638 (the bash process). This django process created a child /home/testing/appium_android/../env/bin/python manage.py runserver 0.0.0.0:55530 with PID=20769 and PPID=20764 (the original process I spawned).

Now, this morning, when I logged back onto the machine, before I reattached the screen, everything was still the same:

simon_ninon_whova_com@whova-qa-01:~$ ps -ejf | grep 55530
simon_n+  9026  9011  9025  9011  0 09:09 pts/9    00:00:00 grep --color=auto 55530
testing  20764 19638 20764 19638  0 00:12 pts/18   00:00:01 ../env/bin/python manage.py runserver 0.0.0.0:55530
testing  20769 20764 20764 19638  2 00:12 pts/18   00:13:56 /home/testing/appium_android/../env/bin/python manage.py runserver 0.0.0.0:55530

So when I re-attached the screen, I expected the issue not to appear. However, when I re-attached the screen: boom, the process was killed!

testing@whova-qa-01:~$ ps -ejf | grep 55530
testing   9085  9031  9084  9011  0 09:10 pts/9    00:00:00 grep --color=auto 55530
testing  20769     1 20764 19638  2 00:12 pts/18   00:13:59 /home/testing/appium_android/../env/bin/python manage.py runserver 0.0.0.0:55530

As you can see, the parent process was killed, and the child is still there, owned by the init process.
Interestingly, checking whether the original bash process is still alive shows that it is:

simon_ninon_whova_com@whova-qa-01:~$ ps aux | grep 19638
simon_n+  9315  0.0  0.0  14664  1016 pts/9    S+   09:16   0:00 grep --color=auto 19638
testing  19638  0.0  0.0  25360  7692 pts/18   Ss+  Feb27   0:00 /bin/bash

So it seems that re-attaching the screen keeps the bash process but causes the parent django process to be killed for some reason. Not sure what happens at that step? Note that if I start the server, detach, and reattach the screen within a short time, the problem is not triggered; it only happens after a while.
I found the root cause. When re-attaching the screen, a SIGWINCH signal gets sent to the parent django process. The process doesn't handle it and just crashes, leaving the child orphaned. This can then easily be re-triggered by resizing the terminal or using kill -28 PID. I am not sure why it only happens on the GCP instances; there may be something different in the environment (python version?). Anyway, that gives me more clues about where to look for a solution. Edit: After searching for a while, the root cause turned out to be one of our django dependencies importing readline in its source code. The python readline module is a binding to the gnureadline library, and it appears that the signal handling in gnureadline interferes with the signal handling done in python/django. Since it only happens on the GCP machines, and not on our previous machines, I suspect that the gnureadline installed on our GCP machine is different (either in version or in the compilation options used), leading to different signal handling behavior.
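For reference, the default disposition of SIGWINCH is "ignore", so a process that hasn't installed a handler (as readline does) simply survives it. A quick sketch demonstrating that:

```shell
# SIGWINCH's default action is to be ignored, so an unprepared process survives
# the same signal screen delivers on re-attach/resize
sh -c 'kill -WINCH $$; echo "survived SIGWINCH"'
```

If the signal were fatal by default, the echo would never run; the crash here really does come from the custom handler installed by the readline binding.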
Process ownership automatically changes to init process on GCP Ubuntu 18.04LTS
1,513,338,265,000
On Kali Linux 2, before you are greeted with the GUI login screen, there is some pre-GUI text that scrolls on the screen. It shows you what modules and programs are working correctly, which ones failed, etc. Well, OpenVas always failed, so I did some commands to make it not fail, and now there is no text at all. That may or may not be the reason that the text is gone. I just know that it was there before, and is gone now. So if you have any suggestions on how to get it back, please share. Thanks
You have to edit the /etc/default/grub file: remove quiet from GRUB_CMDLINE_LINUX_DEFAULT. After that, run update-grub.
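For example (a sketch run on a demo file; the variable name is the stock Debian/Kali one, but check your own /etc/default/grub before editing, and finish with update-grub as root):

```shell
# demo file standing in for /etc/default/grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > grub.demo
# strip "quiet" from the default kernel command line
sed -i 's/\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)quiet *\([^"]*"\)/\1\2/' grub.demo
cat grub.demo     # -> GRUB_CMDLINE_LINUX_DEFAULT="splash"
# on the real file, finish with: sudo update-grub
```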
Kali Linux 2.0 - Startup Text has disappeared
1,513,338,265,000
Given a simple program:

/* ttyname.c */
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

int main(int argc, char **argv)
{
    char *tty = NULL;

    tty = ttyname(fileno(stderr));
    if (tty == NULL) {
        fprintf(stderr, "%s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }
    printf("%s\n", tty);
    exit(EXIT_SUCCESS);
}

compile it as ttyname and invoke it as init; the result is as follows:

Inappropriate ioctl for device

which means the error code is ENOTTY. Why can fprintf(stderr, ....) output to the screen when stderr doesn't refer to a terminal device?
If you're invoking it as init then you're not getting output to the screen; the output is being sent to the kernel and the kernel is printing it to the screen. init is a special process You can think of this as similar to the following shell script: $ x=$(ttyname 2>&1) $ echo $x Inappropriate ioctl for device This is done via the /dev/console device; stdin/stdout/stderr for the init process are attached to this by the kernel. Writes to that device are handled by the kernel and sent to the current console device(s), which may be the current vty or a serial port or elsewhere.
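You can see the same ENOTTY behaviour from an ordinary shell with tty(1), which is essentially a wrapper around ttyname() on stdin:

```shell
# with stdin redirected away from a terminal, ttyname() has nothing to report
tty < /dev/null
echo "exit status: $?"
```

tty prints "not a tty" and exits with status 1, which is the shell-level analogue of the ENOTTY your program sees when run as init.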
Why can fprintf(stderr, ....) output to screen when stderr doesn't refer to a terminal device?
1,513,338,265,000
In sysvinit, telinit is a symlink to init. init runs as a daemon. Is telinit also run as a daemon? I don't have sysvinit installed on my Lubuntu. For comparison, systemctl plays a similar role to systemd as telinit does to init, and systemctl has a controlling terminal, so it is not running as a daemon, while systemd is run as a daemon. Thanks.
Whether a file is a symlink to another one has no bearing on how it runs. telinit, like systemctl, runs as a “normal” process.
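With sysvinit specifically, the same binary decides its role by checking whether it is process 1: only PID 1 acts as the init daemon; any other invocation behaves as telinit, sends its request to the running init, and exits. A trivial shell sketch of that check:

```shell
# sysvinit's init does (in C) roughly: if (getpid() != 1) act as telinit
if [ "$$" -ne 1 ]; then
    echo "not PID 1: behave like telinit (send a request to init and exit)"
fi
```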
Is telinit run as a daemon?
1,386,587,555,000
Exploring the amazing book How Linux Works by Brian Ward, I usually have no questions. But here is one. In "6.7.0 Shutting Down Your System" there is an ordered list of steps. After remounting the root file system read-only (6), buffered data is written out by the sync program (7). How is it possible to write data to a file system after mounting it read-only? Maybe it is an error, and the correct order is first write the buffers (7), then unmount (5) and remount read-only (6)?

1. init asks every process to shut down cleanly.
2. If a process doesn't respond after a while, init kills it, first trying a TERM signal.
3. If the TERM signal doesn't work, init uses the KILL signal on any stragglers.
4. The system locks system files into place and makes other preparations for shutdown.
5. The system unmounts all filesystems other than the root.
6. The system remounts the root filesystem read-only.
7. The system writes all buffered data out to the filesystem with the sync program.
8. The final step is to tell the kernel to reboot or stop with the reboot(2) system call. This can be done by init or an auxiliary program such as reboot, halt, or poweroff.

P.S. The book is amazing; this is the only question left unsolved over several chapters.
There's the filesystem driver (which translates blocks on some block storage medium into directories and files), and there's a caching layer beneath that (so that you can quickly write data to the storage medium, and continue doing other things, while the kernel actually takes care of getting the data written to the storage device in the background, because that typically is relatively slow). sync makes sure anything that's still in that caching layer is written to storage. So, this is about data "below" the filesystem, if you will.
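In other words, sync operates on the cache below the filesystem layer, so it still has work to do, and still works, after the read-only remount. A rough sketch of those two shutdown steps (the remount needs root on a real system, so it is left as a comment here):

```shell
# last steps of a classic shutdown, sketched:
#   mount -o remount,ro /   # no new writes can enter the filesystem
#   sync                    # flush dirty cache pages already queued for the device
# sync(1) itself is an ordinary, unprivileged command:
sync && echo "buffers flushed"
```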
Shut down actions order: write buffers after RO root remount
1,386,587,555,000
Let us say I have a custom init like this:

#!/bin/bash
sleep infinity

Which of these will load init?

(1)

#!/bin/bash
/sbin/init
sleep infinity

(2)

#!/bin/bash
exec /sbin/init
sleep infinity

I know that exec is supposed to start a new shell, but is it necessary?
It is actually very normal to invoke init from a script. Common bootloader scripts will properly mount the root disk and then run init. To invoke init as init, and not as telinit, it must be run as PID 1. Thus you need the exec. Thus, only your (2) script is potentially useful to run init. After an exec, the script is no longer running. Thus, the sleep infinity will not run. Anything run before the exec can be quite useful. If you just want something else running, you might want to just list it appropriately in /etc/inittab, or a script invoked from there (typically placed in /etc/init.d and symlinked into /etc/rc#.d).
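The "replace, don't fork" behaviour of exec is easy to see from an ordinary shell:

```shell
# exec replaces the current shell process in place; control never returns,
# so anything after the exec line is unreachable
bash -c 'echo "before exec"; exec echo "now this process is echo"; echo "never printed"'
```

This prints "before exec" and "now this process is echo", but never the last line, which is exactly why the sleep infinity after exec /sbin/init in script (2) is dead code.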
Can I call /sbin/init from init script
1,386,587,555,000
If you build a custom GNU/Linux system for an embedded device, do you need to execute mount -t proc proc /proc mount -t sysfs sysfs /sys somewhere in init process or is this done automatically by the kernel? I've read contradicting statements about this. An embedded Linux book advises to run the commands in init scripts while I've read somewhere that Systemd is not doing this as it is done by the kernel before userspace is created. What is actually true? Who mounts /proc and /sys?
If you have systemd, it does that automatically (and some extra mount points as well, including /dev/, /dev/shm, /dev/pts, /run and even /tmp). If you have a different init system, you'll have to do that according to its documentation, most likely manually using /etc/fstab or/and scripts. Here's what gets mounted on Fedora 38 with systemd automatically without any configuration files: debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime,seclabel) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) devtmpfs on /dev type devtmpfs (rw,nosuid,noexec,relatime,seclabel,size=32889888k,nr_inodes=8222472,mode=755) fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel) ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,seclabel,mode=700) ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,seclabel,mode=700) ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,seclabel,mode=700) selinuxfs on /sys/fs/selinux type selinuxfs (rw,nosuid,noexec,relatime) sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,size=13156600k,nr_inodes=819200,mode=755) tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=6578296k,nr_inodes=1644574,mode=700,uid=1000,gid=1000) tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=49337248k) tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev,relatime,seclabel)
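On a minimal system without systemd you would typically do this in an early boot script. A sketch (the filesystem types and mount points are the conventional ones; adjust to your init's layout):

```shell
#!/bin/sh
# early-boot sketch for a non-systemd init (e.g. busybox): mount the kernel
# pseudo-filesystems if nothing has mounted them yet
[ -e /proc/mounts ] || mount -t proc  proc  /proc
[ -d /sys/class ]   || mount -t sysfs sysfs /sys
[ -e /proc/mounts ] && echo "/proc is mounted"
```

Equivalently, proc and sysfs lines in /etc/fstab plus a mount -a in the boot scripts achieve the same thing.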
Who mounts /proc and /sys in GNU/Linux systems?
1,386,587,555,000
So I am working on building a minimal OS using busybox. What I want is to run my .net program from BIOS. But I am not sure whether Linux will run the .net program or not, so to clear my path I am using a C program instead of the .net program. I am generating the initrd.img file successfully. Now, before generating the initrd.img file, I want to integrate my hello.c program with the init file. This is the command I used to read the file, and it reads the C program's code successfully:

echo 'cat /etc/hello.c' >> init

Now I want to execute this hello.c, so I tried the following commands, but they do not work the way the cat command did:

echo 'gcc -o echo /etc/hello.c' >> init
echo 'chmod +x echo' >> init
echo './echo' >> init

This is the error I am getting:

/init: line 6: gcc: not found
chmod: echo: No such file or directory
/init: line 8: ./echo: not found
Your script is failing because you don’t have gcc in your initrd. You should not ship hello.c in your initrd; you should build the program and ship that instead in your initrd. You should also specify the full path to your program when attempting to run it.
How to integrate C program with init file?
1,386,587,555,000
I'm learning about how Linux works and for that I'm watching Tutorial: Building the Simplest Possible Linux System by Rob Landley. He basically goes through some steps to build a minimal system and around 20:00 he starts explaining about building a "hello world binary" that he will later use as the init program for the kernel to run as the very first program. My question is, why do I have to statically link the hello.c application I want to use as the init application for the kernel to run after booting (as mentioned at 21:39 and seen at 23:05)?
The init program can be anything that the internal kernel code which backs the execve system call can run. Many systems use a shell script, but it could even be a python script. The advantage of the init program being a statically linked binary is that it has less dependencies, so you don't require to have the run time linker and the shared libraries that it links to present. On a 64 bit x86 system you might need something like /lib64/ld-linux-x86-64.so.2 and /lib/x86_64-linux-gnu/libc.so.6 in your initial filesystem as well as the init program itself.
Why do I have to statically link a c program if I want to use it as the init program for the kernel?
1,386,587,555,000
I created a one-line shell script to send me some custom notifications, and it works as intended. I placed the script in /etc/init.d/ and ran update-rc.d scan defaults. After reboot, the notifier script works properly. However, systemd-analyze blame reports that the system is still booting for up to 5 minutes after I'm logged in, because the script is running tail on a log file (and will never end unless terminated externally). How can I get this init script to finish "booting" earlier? Is there a cleaner way to do this task?
Can you get the script to fork? Add a & to the line of code. Look up job control in the bash manual, to find out more. Job control does a little more than a fork (if in an interactive shell).
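A sketch of what that looks like (the watcher body is a stand-in; on the real system it would be your tail -F pipeline):

```shell
#!/bin/sh
# stand-in for: tail -F /var/log/some.log | while read line; do notify; done
watch_and_notify() {
    sleep 1
    echo "notification sent"
}
watch_and_notify &      # '&' backgrounds it, so init sees the script finish
echo "init script finished"
```

The script now returns immediately (so boot is no longer held up) while the backgrounded watcher keeps running.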
Init script still running after boot
1,386,587,555,000
I'm trying to install FreeBSD onto a VPS (OVH provider). So far, the third method from this response has come the closest to getting me where I want to go. I think OVH has a problem with nested virtualization, because the methods where I boot the installer from QEMU in rescue mode just haven't worked. The command:

# https://mfsbsd.vx.sk/files/images/12/amd64/mfsbsd-se-12.1-RELEASE-amd64.img | dd of=/dev/sda

actually completes successfully. When I reboot I even get to see the boot menu! But then, regardless of whether I boot in multiuser or single-user mode, I eventually get a message that says Panic: Going nowhere without my init!, followed by a vigorous round of reboots. And now I'm at a total loss. I assume that init et al. would be in the image already, so I assume that I must have sent dd to the wrong of=. Here's the output of lsblk from the rescue mode of my VPS:

NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0  2.5G  0 disk
└─sda1      8:1    0  2.5G  0 part /
sdb         8:16   0   20G  0 disk
├─sdb1      8:17   0 19.9G  0 part /mnt/sdb1
├─sdb14     8:30   0    4M  0 part
└─sdb15     8:31   0  106M  0 part /mnt/sdb15

Should I be writing the image somewhere other than /dev/sda?

Update: I wasn't having any luck getting mfsBSD to boot, and so I went back to trying nested virtualization. I'm now able to get the KVM started on my VPS, and I've successfully run bsdinstall. However, when I reboot out of rescue mode, I get a grub error. Still not running FreeBSD yet.

Further Update: My VPS is now running FreeBSD quite merrily thanks to a tip from @ClausAndersen. Here's how I did it:

Reboot in rescue mode from OVH's management panel. Once logged in (via SSH or KVM, either works), perform the following sequence of commands.

Unmount your original filesystem with umount /dev/sdb*. Note that the rescue system is mounted from /dev/sda. Don't touch /dev/sda.

Destroy your original filesystem and the partition it lives on with fdisk: fdisk -u /dev/sdb followed by a series of d until the partition table is empty, then w.
Install (or confirm that your rescue image has) the package xz-utils. Since my VPS started out life as an Ubuntu server, for me this meant apt-get install xz-utils.

Get a copy of a raw virtual machine image from FreeBSD.org, decompress it, and write it to /dev/sdb. From the command line in your rescue system, you would type:

wget -O - https://download.freebsd.org/ftp/snapshots/VM-IMAGES/12.1-STABLE/amd64/Latest/FreeBSD-12.1-STABLE-amd64.raw.xz | xz -dc | dd of=/dev/sdb bs=1M

(the -O - makes wget write to stdout so the pipe works). Then reboot and log in via KVM in the OVH control panel to configure your FreeBSD server.
Reboot in rescue mode from OVH's management panel. Once logged in (via SSH or KVM, either works), perform the following sequence of commands.

Unmount your original filesystem with umount /dev/sdb*. Note that the rescue system is mounted from /dev/sda. Don't touch /dev/sda.

Destroy your original filesystem and the partition it lives on with fdisk: fdisk -u /dev/sdb followed by a series of d until the partition table is empty, then w.

Install (or confirm that your rescue image has) the package xz-utils. Since my VPS started out life as an Ubuntu server, for me this meant apt-get install xz-utils.

Get a copy of a raw virtual machine image from FreeBSD.org, decompress it, and write it to /dev/sdb. From the command line in your rescue system, you would type:

wget -O - https://download.freebsd.org/ftp/snapshots/VM-IMAGES/12.1-STABLE/amd64/Latest/FreeBSD-12.1-STABLE-amd64.raw.xz | xz -dc | dd of=/dev/sdb bs=1M

(the -O - makes wget write to stdout so the pipe works). Then reboot and log in via KVM in the OVH control panel to configure your FreeBSD server.

Note that step 2 may not be necessary; I performed it out of an abundance of caution. YMMV.
Where should I `dd` mfsBSD to get it to boot correctly?
1,386,587,555,000
I was trying to find out how to run a script at startup and during shutdown, during which I learned that runlevel 6 corresponds to reboot in Ubuntu. When I opened /etc/rc6.d, every link's name started with K, which stands for "kill", I suppose. Why is that, if runlevel 6 means reboot?
The K does indeed stand for “kill”. The symlinks link all the init scripts which are supposed to be called to stop the corresponding service when the system switches to runlevel 6; this tries to ensure that all the system’s services are stopped correctly before the system reboots. Each link is called with a stop argument.
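A sketch of what init does when entering runlevel 6, using a temp dir standing in for /etc/rc6.d (the K02nginx script is a made-up example):

```shell
# simulate /etc/rc6.d with a temp dir; init runs each K?? link with "stop",
# in the order given by the two digits after the K
rcdir=$(mktemp -d)
printf '#!/bin/sh\n[ "$1" = stop ] && echo "nginx stopped"\n' > "$rcdir/K02nginx"
chmod +x "$rcdir/K02nginx"
for link in "$rcdir"/K*; do "$link" stop; done
```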
Why do all the links in /etc/rc6.d start with K if runlevel 6 corresponds to reboot?
1,386,587,555,000
Looking in my /etc/inittab file I see the following entry: ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now What do the -t1 and -a options mean? They do not appear in the manual for the shutdown command. I have also seen another /etc/inittab in a reference book that shows: ca::ctrlaltdel:/sbin/shutdown -r -t 4 now Since no runlevel is specified, does this mean that it is works for all run levels from 0 to 6? What does the "-t 4" mean? Also, is there a reason why the -a and -t options are not mentioned in the manual for the shutdown command?
Those options are options to the sysvinit version of shutdown (see the relevant manpage): -t sec Tell init(8) to wait sec seconds between sending processes the warning and the kill signal, before changing to another runlevel. -a Use /etc/shutdown.allow. This version of -t is also valid in your second example, which as you guess applies to any runlevel. The options are no longer mentioned in your version of the manual because that’s no doubt the systemd version, which doesn’t support them.
Understanding shutdown command in inittab
1,386,587,555,000
I have a couple of questions concerning qemu boot options.

1) When using the following argument:

init=/bin/sh

it works - but is the init process really replaced by an sh process?

qemu-system-x86_64 -hda output/images/rootfs.ext2 -kernel output/images/bzImage --append "root=/dev/sda console=ttyS0 rw init=/bin/sh" -serial stdio > /home/john/kernel_debug_mess.txt

2) If I use the following option, the system cannot boot:

init=/bin/ls

The following error pops up in the kernel log:

End kernel panic - not syncing: attempted to kill init! exitcode 0x000000000

When using ls - how can the crash be explained?
Yes. When you tell the kernel to use /bin/sh as init, then it does exactly what you tell it to. /bin/ls runs and then exits, so the kernel panics because there is no init process any more. init is supposed to be a long-lasting process.
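If you really want ls to run first, a hand-rolled wrapper can do the one-shot work and then hand PID 1 to something that stays alive (a sketch; on the real console the exec'd shell would give you a prompt):

```shell
#!/bin/sh
# hypothetical /init wrapper: run the one-shot command, then keep PID 1 occupied
/bin/ls /
exec /bin/sh    # PID 1 must never exit, or the kernel panics
```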
Qemu - substitute the init process
1,386,587,555,000
I have a very simple SysVinit service in /etc/rc.d:

#!/bin/bash

PIDFILE="/var/run/test.pid"

status() {
    if [ -f "$PIDFILE" ]; then
        echo 'Service running'
        return 1
    fi
    return 0
}

start() {
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")"; then
        echo 'Service already running'
        return 1
    fi
    echo 'Starting...'
    test &
    echo $! > "$PIDFILE"
    return 0
}

stop() {
    if [ ! -f "$PIDFILE" ] || ! kill -0 "$(cat "$PIDFILE")"; then
        echo 'Service not running'
        return 1
    fi
    echo 'Stopping...'
    kill -15 "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    return 0
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac

When the system starts, it starts the service. But when the system stops, it never calls the stop command. The only reason I can think of is that the system either thinks the service is not running or that it was not started correctly. But what are the requirements for that? Do you need to return a special exit code for the start command? Do I need to create a file in /var/lock/subsys to signal that it is active? Anything else that might cause the system to think the service did not start?
It looks like Synology moved from classic SysVinit to upstart in DSM 6 or so, and then to systemd in DSM 7. Both init systems provide backward compatibility for classic SysVinit-style start/stop scripts, but there are some quirks you should be aware of. If you have DSM 7.0 or newer, then after installing the script you probably should run systemctl daemon-reload, so systemd-sysv-generator should automatically create a .service file for it (maybe in /run/systemd). Then you can start your script with systemctl start <script name> - and in fact should do just that, instead of just running the script manually. systemd will be aware of the need to run <your script> stop job only if it has actually executed the corresponding start job. This is because systemd will set up each service as a separate control group of processes as it starts them up (and the administrator running the start script manually doesn't do that). This is something that is completely invisible to the services themselves (unless they specifically go looking for it), and any child processes of the services will inherit this control group membership. If a control group has no processes left in it, it will cease to exist automatically. When shutting down, systemd will just go through the existing control groups and will run the stop command for any non-default control group it finds. Any services that have been started without using systemctl start will be part of the "administrator's interactive session" control group rather than the "service X" control group, and will essentially be just killed without running the corresponding stop script. If you need features like an automatic restart for your service if it dies for some reason, you should consider using the appropriate "native" configuration method for the applicable init system: /etc/init/* files for Upstart in Synology DSM 6.x series /etc/systemd/system/*.service files for systemd in Synology DSM 7.x series and newer. 
These init systems have built-in automatic restart features you can use with just a little bit of configuration, rather than having to write a wrapper script to watch your service process yourself. Developer Guide for Synology DSM 7 Developer Guide for Synology DSM 6 Possibly helpful notes on configuring services for DSM 6 and 7
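For DSM 7, a minimal native unit might look like this (the names and paths are hypothetical; Restart= gives you the automatic-restart behaviour without a wrapper script):

```ini
# /etc/systemd/system/myservice.service  (hypothetical example)
[Unit]
Description=My service
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating it, run systemctl daemon-reload and manage it with systemctl start/stop/enable like any other unit.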
Stop not called for init rc.d service
1,386,587,555,000
I have an old embedded control system that still uses a 2.6 kernel and runs Debian 4. I am looking for a way to display a message whenever the superuser calls (interactively) reboot, shutdown, poweroff or halt. The message only needs to appear on the interactive terminal that the command is sent from. Essentially, I want to replace the classic "The system is going down for system halt NOW!" Is there an easy way to configure those binaries to do that, or will I need to resort to writing ugly wrappers?
Shutdown and so on usually progress through telinit setting the runlevel to 6 or 0, and this calls the kill scripts in /etc/rc6.d/K* so you could add a wall command to one of those scripts in the stop section.
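A sketch of such a kill script (the install path /etc/rc6.d/K00announce is an assumed example; echo keeps this runnable anywhere, but on the real box you would use wall so the message reaches every terminal):

```shell
# write a tiny kill script; install it early, e.g. as /etc/rc0.d/K00announce
# and /etc/rc6.d/K00announce on the real system
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/sh
case "$1" in
  stop)
    # on the real system: wall "The controller is going down NOW - save your work!"
    echo "The controller is going down NOW - save your work!"
    ;;
esac
EOF
chmod +x "$script"
"$script" stop
```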
How can I configure shutdown, reboot etc. to display a message on an old (2.6) Linux?
1,386,587,555,000
I'm trying to write an init, but can't figure out the reboot/poweroff thing. Apparently reboot is just a link to systemctl? (I'm using Arch.) So how does this work? init poweroff and the like work, but reboot/poweroff just seem to be links to systemctl.
Many programs behave differently depending on the name with which they are called. Something like systemctl inspects the value of argv[0] and behaves differently if it is reboot vs if it is systemctl. You can see this taken to the extreme with busybox, which is a single binary that provides almost an entire (minimal) userspace by symlinking all the commands to the single busybox binary.

You can do exactly the same thing with a shell script:

#!/bin/bash

if [[ $0 =~ foo ]]; then
    echo "running foo action"
elif [[ $0 =~ bar ]]; then
    echo "running bar action"
else
    echo "running default action"
fi

Assuming this is multicall.sh, if we set things up like this:

ln -s multicall.sh foo
ln -s multicall.sh bar

And then see:

$ ./foo
running foo action
$ ./bar
running bar action
$ ./multicall.sh
running default action

For systemctl in particular, the logic is implemented here:

int systemctl_dispatch_parse_argv(int argc, char *argv[]) {
        assert(argc >= 0);
        assert(argv);

        if (invoked_as(argv, "halt")) {
                arg_action = ACTION_HALT;
                return halt_parse_argv(argc, argv);
        } else if (invoked_as(argv, "poweroff")) {
                arg_action = ACTION_POWEROFF;
                return halt_parse_argv(argc, argv);
        } else if (invoked_as(argv, "reboot")) {
                if (kexec_loaded())
                        arg_action = ACTION_KEXEC;
                else
                        arg_action = ACTION_REBOOT;
                return halt_parse_argv(argc, argv);
        . . .
making an init: how exactly does the reboot command work?
1,386,587,555,000
I have a script in /etc/rc.d/init.d/ on a Red Hat 7 system that is provided by a vendor. This script is able to be started and stopped via systemctl, but it appears to not actually be a systemd unit file. The script depends on a drive being mounted on boot by a systemd unit file. However, this init script tries to start before the mount is finished, so it invariably fails. I have attempted a hack by adding a line to the beginning of the init script, that causes the script to sleep for 30 seconds before the rest of the script executes: sleep 30. However, the sleep functionality does not work all the time. Is there any way to make this init script depend on the systemd mount unit file being completed? Any better ways to accomplish this task than adding a sleep to the beginning of the init script? Thanks.
SysV init scripts are auto-converted by systemd into systemd unit files. See man systemd-sysv-generator. You would like to edit the generated unit to add a suitable dependency on the mount point. You can do this by creating a "drop-in" file with just a few extra lines. If your init file is called, say, /etc/rc.d/init.d/mysysv, then the generated unit will be called mysysv.service. Enter the command:

sudo systemctl edit mysysv

and you should be in your chosen editor (set the EDITOR environment variable) on a temporary file. Edit the file to contain something like:

[Unit]
# default timeout of 90 secs for dir to be mounted
JobTimeoutSec=600
RequiresMountsFor=/path/to/mount

and exit the editor cleanly. This creates the file /etc/systemd/system/mysysv.service.d/override.conf. When you now start the mysysv unit, this modification will make the job wait up to 600 seconds for the mount point to have something mounted on it before starting. Otherwise it fails with a timeout. The default wait time for a job is a system-global value of 90 seconds.
Require systemd service to be started before executing init.d script
1,386,587,555,000
Forgive me if this is a noob question, however, I just installed Artix with OpenRC, and while following the guide on setting up ALSA with OpenRC from the gentoo wiki, I was told to add the alsasound service to OpenRC using: rc-update add alsasound boot I was about to do this, until I realized that Pulseaudio and ALSA are actually both already running, despite me never explicitly running them nor adding them as services to OpenRC. Maybe I am confused and incorrect here, but shouldn't those programs not start unless I tell the init system (OpenRC) to start them? Is there a way that I can find out what is invoking ALSA and Pulseaudio to start, if it's not my init system? Apologies if I am confused about the way these sound applications and init systems operate, as this is my first time tinkering with them and trying to manually set them up.
aplay -l only needs the kernel modules, which can be autoloaded by the kernel if your sound card/chip is PCI-based or otherwise autodetectable by the kernel (or listed in device tree information if you have an ARM system, I guess?). But the autoloading may not take care of restoring your sound mixer settings, so everything will be using "factory default" volume settings, which may be quite low to protect the hearing of the users of headphones from accidental auditory assault. On systemd-based systems (which I'm more familiar with) pulseaudio is started either directly as a user service, or via socket activation using pulseaudio.socket systemd unit. On a system using OpenRC, PulseAudio might be started by the GUI session start-up scripts: it doesn't necessarily have to run as root. But PulseAudio also has an autospawning mechanism: if you start any PulseAudio client while the server is not running, the client will attempt to start the server automatically, unless /etc/pulse/client.conf or ~/.config/pulse/client.conf includes the setting autospawn = no.
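To turn off that autospawning for your user, a small sketch (this writes to the per-user client.conf; use /etc/pulse/client.conf instead to change it system-wide):

```shell
# disable PulseAudio client autospawn for the current user
conf="${XDG_CONFIG_HOME:-$HOME/.config}/pulse/client.conf"
mkdir -p "$(dirname "$conf")"
grep -q '^autospawn' "$conf" 2>/dev/null || echo 'autospawn = no' >> "$conf"
grep '^autospawn' "$conf"
```

With autospawn disabled, PulseAudio will only run when you (or your session scripts, or OpenRC) start it explicitly.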
ALSA and PulseAudio starting without being invoked by init system?
1,386,587,555,000
I am learning Linux, using Ubuntu. I wanted to remove network management from one of the run levels. I had done this correctly before, but now, no matter how hard I try, I can not remove a script from the desired run levels. the rc3 folder is empty so how can I work on run level 3?!
Yes, with version 15.04 Ubuntu switched to systemd. The rcX.d folders are mostly obsolete. You can use a configuration command like sudo systemctl disable network-manager.service to disable the network manager (which should leave the networking mostly unconfigured). There are no runlevels in systemd, but an equivalent called "targets". tecmint lists the mapping like this: Run level 0 is matched by poweroff.target. Run level 1 is matched by rescue.target. Run level 3 is emulated by multi-user.target. Run level 5 is emulated by graphical.target. Run level 6 is emulated by reboot.target. You can switch to a specific target via systemctl isolate multi-user.target. Symlinks usually exist, so you can also enter systemctl isolate runlevel3.target In order to remove a unit from a particular target, you can modify the unit's WantedBy directive. Please be aware that targets can depend on each other, so removing a unit from a target will also remove it from the targets that depend on it.
Are RC folders obsolete on Ubuntu?
1,386,587,555,000
Assume that I decide, for some reason, never to use the wait syscall again in any of the programs I write. Does that mean my memory will be cluttered with all the finished processes whose parent didn't wait for them? This is part of an academic assignment and I find the question a bit perplexing because both answers sound acceptable to me. This is how I answered it, and I simply want feedback on whether it is indeed true. If the parent process doesn't wait for its children before exiting, these children will be re-parented to the init process during the parent's exit call (inside the function forget_original_parent()). At some point, the init process will hold more processes than it can, with regard to memory limits. So not calling wait does indeed clutter memory. Also, I would love clarification on what happens in this case: does the machine shut down and exit the init process? What happens to all of the children of init if that's the case?
At some point, the init process will hold more processes than it can - regarding memory limits. Not quite: zombie processes (processes which have exited but haven’t been reaped) don’t occupy memory in their parent process; they occupy memory in the kernel’s process table. If your init is a “standard” init, it will reap zombie processes anyway, and you won’t run into any issues. If your init also ignores child processes, it still won’t run into limits which could cause it to be killed. The main limit which will come into play is the maximum number of processes; reaching that will prevent new processes from being created, which will immediately cause problems (processes are constantly being created). The system will keep on running, but you won’t be able to log in, and you’ll only be able to use existing shells etc.
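To see the process-table entry (not memory) that an unreaped child occupies, here is a small shell sketch, not taken from the answer: the parent execs into sleep and therefore never calls wait(), so its exited child sits in state Z in /proc until the parent dies and init reaps it.

```shell
# The parent backgrounds a no-op child, then replaces itself with `sleep 2`
# and never wait()s; the child that exited immediately remains in the
# kernel's process table as a zombie (state "Z").
sh -c ':& exec sleep 2' &
parent=$!
sleep 1
zombies=0
for stat in /proc/[0-9]*/stat; do
    # /proc/PID/stat fields: pid (comm) state ppid ...
    set -- $(cat "$stat" 2>/dev/null)
    [ "$3" = "Z" ] && [ "$4" = "$parent" ] && zombies=$((zombies + 1))
done
echo "zombie children of $parent: $zombies"
wait "$parent"   # once the parent exits, init adopts and reaps the zombie
```

After the final wait returns, the zombie entry is gone: the adoption-and-reap path the answer describes has run.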
Zombie processes and exiting init
1,386,587,555,000
Recently I encountered a problem: when entering init 1, it gives an error: init: must be run as PID 1. Entering ps shows that /sbin/init does have PID 1. How can I use init now?
You cannot use init. It is the wrong program for the job. You need to un-learn the idea that init can be invoked as a normal command. The init programs where this is/was true are not the init program that you have. There are 4 init programs where one can invoke it as a normal command, and you are not using any of them. Rather, you are using BusyBox init, which if it detects that it has been invoked as anything other than process #1 on the system, prints that message and exits. It has no functionality for other than as process #1. There is no telinit in BusyBox, either. Its init does not have a client/server interface over a FIFO. To shut down, you must do something that eventually results in SIGPWR, SIGUSR1, SIGUSR2, or SIGTERM being sent to process #1. Note that, as with other system management toolsets, "single-user mode" (a misnomer since 1995) is not a shutdown target. One does not shut down to such a mode, and BusyBox init is not actually involved in enacting such a mode. Rather, in systems using OpenRC on top of BusyBox init, this is a mode that is entirely the province of OpenRC mechanisms. openrc single changes to the mis-named "single" mode. (Using OpenRC's own init, which is not the case for you, there is a shutdown command that talks to it. But that's just a quite roundabout way of running openrc single, it turns out.) Alpine Linux is documenting an outdated OpenRC, note. OpenRC itself does not have a single directory any more. That was removed in 2019. Furthermore, the rc command changed to openrc in 2014. Further reading https://unix.stackexchange.com/a/463504/5132
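For reference, a hedged sketch of the default signal interface (these are BusyBox defaults as I understand them; /etc/inittab entries can remap each action, and this list is not exhaustive):

```shell
# Signals understood by BusyBox init when delivered to PID 1 (run as root):
#   kill -USR1 1    # halt
#   kill -USR2 1    # power off
#   kill -TERM 1    # reboot
#   kill -HUP  1    # re-read /etc/inittab
#   kill -QUIT 1    # run the inittab "restart" action
```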
init: must be run as PID 1
1,386,587,555,000
I am using an Ubuntu 4.x kernel with the corresponding Ubuntu initrd.img, and it works. But I want to use a custom initramfs inspired by the LFS (Linux From Scratch) initramfs. The kernel extracts and runs my init script successfully, including mounting sysfs. But /sys doesn't expose any trace of the available storage (two disks exist), and therefore it's not possible to initialize the kernel root. What is the problem? Does Ubuntu's add-on to the kernel (the /ubuntu directory) dictate any special policy for initrd?
On the working system, look at the device(s) in sysfs, and their device symlink. This points to the parent device, which may in turn have its own parent device, and so on. Write yourself a list of the device and all its parent devices. Then you can check all of them in the initramfs. You might be missing more requirements than just the two disk devices. Secondly, when you make your list of devices, look at the driver/module for each one and write down what it is. This tells you which kernel module is recognizing the device. udev is supposed to load the kernel modules for you. Unfortunately, the LFS initramfs takes systemd-udev and tries to run it without systemd. This is unfortunate because using systemd would let systemd-udev log any errors it encountered to the systemd journal; you could then check the journal for errors. I do not know whether udev error logging works in the LFS initramfs. Does Ubuntu's add-on to the kernel (the /ubuntu directory) dictate any special policy for initrd? No.
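The parent-walking can be scripted. A hedged sketch (walk_parents is an invented helper, not a standard tool): climb from a device directory towards the root of /sys/devices and print every level that a kernel driver has claimed, which gives you the module checklist for the initramfs.

```shell
# walk_parents DIR - print each ancestor directory that has a `driver`
# symlink, i.e. each level of the device chain claimed by a kernel driver.
walk_parents() {
    node=$(readlink -f "$1")
    while [ -n "$node" ] && [ "$node" != / ]; do
        if [ -e "$node/driver" ]; then
            printf '%s driver=%s\n' "$node" \
                "$(basename "$(readlink -f "$node/driver")")"
        fi
        node=$(dirname "$node")
    done
}

# On a real system you would call it on a disk's device link, e.g.:
#   walk_parents /sys/class/block/sda/device
```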
kernel sysfs doesn't recognize storage kobjects [closed]
1,386,587,555,000
I'm trying to run a script at startup as root. (Just sets up a root-owned directory in /tmp). Currently, I'm using this script to set up the boot hook and it appears to get the job done: #!/bin/sh -eu if [ 0 -eq $((${1:-0})) ]; then #install [ -x /etc/init.d/tmpsetup ] || { cat > /etc/init.d/tmpsetup <<'EOF' #!/bin/sh -eu [ $(id -u) -eq 0 ] umask 0222 mkdir -p /tmp/u/ EOF chmod a+rx /etc/init.d/tmpsetup update-rc.d tmpsetup defaults 99 } else #uninstall rm -f /etc/init.d/tmpsetup update-rc.d tmpsetup remove fi Is there a more portable/better way to do it? (It's to implement a /tmp per user feature. Should be part of an install script that adapts an existing system.)
I would've just put this into /etc/rc.local instead: umask 0222 && mkdir -p /tmp/u/ Making a service around this seems like overcomplicating things.
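Written out as a file, a minimal sketch might look like this (the mode of the created directory follows from the umask: 0777 & ~0222 = 0555):

```shell
#!/bin/sh
# /etc/rc.local sketch: the file must be executable, and it runs once at
# the end of the boot sequence on systems that still provide rc.local.
umask 0222              # strip write bits from anything created below
mkdir -p /tmp/u/        # ends up root-owned, mode 0555, when run at boot
ls -ld /tmp/u/
# a real rc.local should finish with: exit 0
```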
Portable way to run a simple script at startup
1,386,587,555,000
I need to update a raw UBI partition with a new UBIFS image from Linux userspace with superuser rights, however I'm getting EBUSY (Device or resource busy) error whenever I'm trying to open my corresponding /dev/ubiX_Y for writing, even if the current filesystem present on it is mounted as read-only. I suspect that an usual block partition with e.g. an ext4 filesystem could be opened for writing when it's mounted as read-only, seeing that utilities like zerofree and ext4magic work that way. That doesn't seem to be the case with UBI partitions. Theoretically I could either terminate the processes using the partition or attach to them and forcibly close all files on it before unmounting the partition completely, but seems I can do neither to the busybox init process which keeps constantly holding its /etc/inittab open. And yes, the partition in question is a root / mounted partition. I also could implement a kernel module which would do the dirty work, but I'd like to retain as much binary forward compatibility for my update utility and basically keep it as much kernel version agnostic as I can, thus solving it in such a manner is highly undesirable. Is there any other way I can do this?
If there is a line in /etc/inittab like: ::restart:/tmp/updater_stage2 then sending SIGQUIT to init will make it replace itself with /tmp/updater_stage2. To reload /etc/inittab after you have changed it, send SIGHUP. You can replace /etc/inittab with a bind mount: mount --bind /tmp/inittab /etc/inittab kill -HUP 1 sleep 1 kill -QUIT 1 If there is no /etc/inittab, or inittab support is not compiled in, init will fall back to running /sbin/init, so you will have to replace /sbin/init like: mkdir /tmp/old_sbin mount --bind /sbin /tmp/old_sbin cp -as /tmp/old_sbin /tmp/new_sbin ln -sf /tmp/updater_stage2 /tmp/new_sbin/init mount --bind /tmp/new_sbin /sbin kill -QUIT 1 You can then use pivot_root and chroot to replace the root filesystem, which you will then be able to unmount (after moving /tmp, /proc, etc.).
Opening raw UBI partition for writing on Linux if it's mounted and used by init
1,386,587,555,000
I am trying to restart my CentOS 6.7 system using the command line: init 6 But I need it to stay down for N seconds before starting back up again. I have been searching with Google, but I cannot find a variant of the init command that will do this.
The rtcwake utility (from the util-linux package, so it is available on CentOS too) is what you want; see its manual page. It is not good for very short sleeps (say less than 10 seconds), as it may take more time than that to put the system to sleep. The basic idea is that you program the real-time clock (RTC) chip as a wake source for N seconds in the future and then suspend, either to RAM or disk, or even switch the system off, e.g. rtcwake -m off -s 300 to power off and have the RTC wake the machine 300 seconds later.
How can I tell my system to shutdown, stay off for X seconds, then restart?
1,386,587,555,000
NOTE: I am running Red Hat 6.7 I have a service that is configured with the Linux init system to start a process as a service when the machine boots. This was done by doing this one-time configuration from the command line: ln -snf /home/me/bin/my_service /etc/init.d/my_service chkconfig --add my_service chkconfig --level 235 my_service on When the OS reboots, the service starts as expected. I ALSO need the service to be restarted if the service (my_service) crashes. From what I've read, all I need to do is add an entry to /etc/inittab that looks like this: mysvc:235:respawn:/home/me/bin/my_service_starter Where my_service_starter looks like: #!/bin/bash /home/me/bin/my_service start The my_service script looks like: #!/bin/bash "/usr/java/bin/java" /home/me/bin/my_service.jar start My understanding is that when the init system detects that my_service is not running, it will attempt to restart it by running "my_service_starter". However this does not seem to be working. i.e. the service does not start when the OS reboots. I need to understand how to tell the Linux init system to restart my service when the service crashes.
RedHat 6 uses upstart as the init system. At the very beginning of the provided inittab files are the lines: # inittab is only used by upstart for the default runlevel. # # ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM. You need to create a proper init definition in /etc/init (note: NOT /etc/init.d). eg (but may need debugging) /etc/init/myservice start on runlevel [2345] stop on runlevel [S016] respawn exec /home/me/bin/my_service_starter
inittab not restarting service after service crash in Red Hat 6.7
1,386,587,555,000
I know that when we run an application from a shell for a large website, we had better set ulimit for our shell. But most services are started by systemd/SysV. Do I need to set the ulimit in the service script (/etc/init.d)?
You would normally set the ulimit on the user the service runs as in something like /etc/security/limits.conf. For example, if the web service is running as www-data, you would add an entry for www-data to /etc/security/limits.conf setting the relevant limits. If the process runs as root then it's more complicated given the limits in /etc/security/limits.conf would then apply to all root owned processes. One issue with setting the limits in /etc/security/limits.conf is that it relies on processes going through the PAM stack. In the case of services and daemons which don't do that, then yes, modifying the relevant service scripts is an acceptable approach. It's probably necessary to do this on a per process basis, and depending on your distribution, service start scripts are usually package managed meaning you'll get conflicts every time you upgrade.
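To make the two approaches in the answer concrete, here is a hedged sketch (www-data and mydaemon are placeholder names, and the numbers are illustrative):

```shell
# In /etc/security/limits.conf (applied via PAM) the equivalent would be:
#   www-data  soft  nofile  8192
#   www-data  hard  nofile  65536
#
# In an init script that bypasses PAM, raise the limit in the script itself
# before starting the daemon. The soft limit can be raised up to the hard
# limit without any extra privileges:
hard=$(ulimit -Hn)
ulimit -Sn "$hard"
echo "nofile soft limit is now $(ulimit -Sn)"
# ...then start the daemon, e.g.: exec /usr/sbin/mydaemon
```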
Do I need set ulimit for system services, such as nginx.service(systemd)/nginx(sysv)?
1,386,587,555,000
I have the latest Kubuntu, with MySQL installed. I was looking into /etc/init and I see the following in /etc/init/mysql.conf: description "MySQL Server" author "Mario Limonciello <[email protected]>" start on runlevel [2345] stop on starting rc RUNLEVEL=[016] If I understand this correctly, mysql should start on level 2 and be up in all levels 2 up to 5. Then I did the following: Linux:/etc$ ls rc0.d/ K10unattended-upgrades K20kerneloops README S20sendsigs S30urandom S31umountnfs.sh S40umountfs S48cryptdisks S59cryptdisks-early S60umountroot S90halt Linux:/etc$ ls rc1.d/ K20kerneloops K20saned README S30killprocs S70dns-clean S70pppd-dns S90single Linux:/etc$ ls rc2.d/ README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local Linux:/etc$ ls rc3.d/ README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local Linux:/etc$ ls rc4.d/ README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local Linux:/etc$ ls rc5.d/ README S20kerneloops S50rsync S50saned S70dns-clean S70pppd-dns S75sudo S99grub-common S99ondemand S99rc.local I was expecting that mysqld would be listed in one of those directories. I mean, the services have their .conf files in /etc/init, and for each run level there is a link to the service executable to start/stop it. But why is there nothing for mysql? Please note that mysql is up and running: Linux:/etc$ ps -ef|grep mysql mysql 994 1 0 21:24 ? 00:00:08 /usr/sbin/mysqld jim 4396 4223 0 23:44 pts/8 00:00:00 grep --color=auto mysql
Ubuntu uses Upstart for its Init, which doesn't use /etc/rcX.d the way SysVInit does. More information: http://upstart.ubuntu.com/
How are the services exactly starting in (K)Ubuntu?
1,678,537,061,000
I had to move from Debian Jessie to Buster. The script that creates a small custom boot disc runs update-usbids to get the latest files to copy over to the build. However, it now says update-usbids: command not found. Looking around, people say it was removed in favour of systemd, but the boot disc still uses init (moving it to systemd is not realistic and would bloat it too much). So the question is: how do I update the usb.ids file so I can keep the boot disc up to date? If the file is hosted somewhere, could it just be downloaded using wget? TIA!
# For pci.ids sudo wget -O /usr/share/misc/pci.ids http://pciids.sourceforge.net/pci.ids # For usb.ids sudo wget -O /usr/share/misc/usb.ids http://www.linux-usb.org/usb.ids
How to get the latest usb.ids when update-usbids no longer exists?
1,678,537,061,000
I'm trying to understand the lifecycle of a process during a restart. For example, if we issue the restart command, will it: kill the process; remove or flush all the open files in its descriptors; close the TCP or Unix sockets; and then, on start, trigger the actual command again? Can someone help me understand this in a better way?
A SIGTERM signal will be sent to the server process, with the expectation that the process will exit. It is up to the process itself to catch the signal and do whatever is needed to exit gracefully, i.e. the process itself should take care of flushing files, closing network connections it has open, etc. If the process does not exit within a timeout limit, it is forcibly killed with a SIGKILL signal. The default value of the timeout is 90 seconds.
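To make "catch the signal and exit gracefully" concrete, here is a toy sketch (a stand-in script, not any real service) of the SIGTERM side of the contract:

```shell
# A stand-in "service" that traps SIGTERM, does its cleanup, and exits 0,
# so the service manager never has to escalate to SIGKILL.
sh -c 'trap "echo flushing buffers, closing sockets; exit 0" TERM
       while :; do sleep 1; done' &
pid=$!
sleep 0.2
kill -TERM "$pid"       # what the manager sends first
wait "$pid"
status=$?
echo "service exited with status $status"
```

A service that ignores SIGTERM, or takes longer than the timeout to clean up, is exactly the case where the manager falls back to SIGKILL.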
What happens when I restart a service via systemctl or init
1,678,537,061,000
If I do a: echo "foobar" > /etc/init.d/foobar chmod 744 /etc/init.d/foobar ln -s /etc/init.d/foobar /etc/rc.d/rc3.d/S99foobar on a SLES 11, then when will the "foobar" command launch during boot? as the last S99? or a normal start script format would be needed for that?
The SysVinit start/stop scripts are launched in alphanumerical order according to the sorting order of the default "C" (aka POSIX) locale, so S99foobar will start after any S99[a-e]* scripts but before any S99[g-z]* scripts. The scripts are launched by /etc/init.d/rc master script. The relevant code is essentially: for link in /etc/rc.d/rc3.d/S[0-9][0-9]*; do test -x "$link" || continue # omitted optimization: if previous runlevel already started this service, don't start it again # omitted logic: if $DO_CONFIRM is set, prompt for each service # omitted logging $link start; status=$? # omitted status reporting/logging logic based on value of $status done
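You can verify the ordering rule itself in a throwaway directory. This sketch (invented script names) shows that glob expansion under the C locale sorts byte-wise, which is exactly why S99foobar lands after S99[a-e]* and before S99[g-z]* scripts:

```shell
# Demonstrate the C-locale sort order that /etc/init.d/rc relies on.
export LC_ALL=C
dir=$(mktemp -d)
cd "$dir"
touch S10early S99apache S99foobar S99zebra
order=$(printf '%s\n' S[0-9][0-9]*)
printf '%s\n' "$order"
# prints: S10early, S99apache, S99foobar, S99zebra (one per line)
```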
When will S99 launch if it isn't in a normal form?
1,678,537,061,000
This might look stupid, but I need to know: what if initdefault is set to 0 or 6 on my RHEL 7 system? How do I revert it? As you can see, I'm no expert, so please give me a detailed explanation.
You need to access your Red Hat OS via emergency or rescue mode. To do this, when your OS is booting up and GRUB 2 prompts you for a boot-time selection, press 'e' on your boot choice to edit the boot-time parameters. Look for the line starting with 'linux16' and append the string "systemd.unit=rescue.target" (without quotes) at its end. Press Ctrl+X to boot with the parameter you provided. This will take you to rescue mode, where you will enter your root password. On RHEL 7, systemd handles runlevels. Enter: systemctl set-default multi-user.target #to boot in runlevel 3 or, if you want to boot into a graphical interface: systemctl set-default graphical.target #boot into runlevel 5 To check what runlevel you are currently at: systemctl get-default After changing the default runlevel target, reboot your system and it will boot into the new default run level. Here's a helpful link on how you can boot into emergency or rescue mode.
What if the initdefault is set to 0 or 6 in RHEL7. How to solve it?
1,678,537,061,000
After init 1, the SSH connection to a remote server was interrupted with the following error: packet_write_wait: Connection to UNKNOWN port 0: Broken pipe Now even root cannot connect: $ ssh root@remoteserver ssh: connect to host remoteserver port 22: Connection refused Is there any way to recover the SSH connection?
Switching to runlevel 1 kills all processes (except the top-level init/upstart command itself), including the SSH daemon. From http://www.debianadmin.com/debian-and-ubuntu-linux-run-levels.html: Run Level 1 is known as ‘single user' mode. A more apt description would be ‘rescue', or ‘trouble-shooting' mode. In run level 1, no daemons (services) are started. Hopefully single user mode will allow you to fix whatever made the transition to rescue mode necessary. The easiest way to get sshd running is to switch to a runlevel that starts it by default. In Ubuntu, that's any of 2, 3, 4 and 5. If you can't access the single-user shell to enter the init or telinit command, eg. because you were connected remotely, or it's hidden by the splash screen, then you're out of luck. The only option left is to reboot.
Recover ssh connexion after init 1
1,678,537,061,000
I'm working on an embedded system. I have multiple SD cards to hold copies of a Linux rootfs (the kernel is stored in NAND). On the original SD card, where the system lives and from which it is copied to the others, everything works nicely and the init services run as they should. But there is a problem on the systems copied to the other SD cards: the system runs, but it does not start the init services where, for example, networking and the sshd init needed by the application live. Two things: when I copied the system, not all files wanted to copy (especially from /dev/, but that is normal given the purpose of those files), but maybe other files weren't copied properly either? Second, I mount /var, /tmp and /var/tmp on tmpfs (RAM), but I don't think that's the problem, since it works fine on the original SD card. Maybe I shouldn't copy the rootfs and should do something else?
I had to do some copy/paste work. First, I downloaded the minimal ELDK distribution (which I'm using) and copied everything over with rsync (preserving permissions, ownership and links, e.g. with rsync -aHAX --numeric-ids). Next I rsynced the copy of the system and put it on the SD card on the fresh system. Everything worked.
Embedded Linux and Init problem - Init won't start
1,678,537,061,000
I've been searching for this a little while now: How do I, when I change from runlevel 2 to runlevel 5, start f.e. proftpd? When I go back to runlevel 2, the service should be stopped again. So - Start ftp-server when changing from runlevel 2 to 5 - Stop ftp-server when changing back (Sidenote: the ftp-server is not allowed to boot on startup, so that shouldn't change either) The closest thing I found was this: # update-rc.d -n <service> start 2 . stop 2 . ofcourse, that's not correct. Any ideas?
If you look at man update-rc.d you can see some examples. Here's what you probably want: update-rc.d proftpd start 80 5 . stop 20 0 1 2 3 4 6 . The 80 and 20 are just to make proftpd start later than most other services. You may need to remove existing links first with: update-rc.d -f proftpd remove. If you have a newer version of the OS, the above may seem to work, but will not take your options into account. Instead you will need to edit the /etc/init.d/proftpd file and change the headers there to something like this: #!/bin/sh ### BEGIN INIT INFO # Provides: proftpd # Required-Start: $all # Required-Stop: $all # Default-Start: 5 # Default-Stop: 0 1 2 3 4 6 # X-Interactive: false # Short-Description: proftpd ### END INIT INFO And run update-rc.d proftpd defaults instead. This is because later update-rc.d just call insserv to do the work, and all dependencies and start/stop are now worked out automatically, and you cannot change them. Check you have the right links with ls -l /etc/rc*/*proftpd. Eg output: lrwxrwxrwx 1 root root /etc/rc0.d/K01proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc1.d/K01proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc2.d/K01proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc3.d/K01proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc4.d/K01proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc5.d/S04proftpd -> ../init.d/proftpd lrwxrwxrwx 1 root root /etc/rc6.d/K01proftpd -> ../init.d/proftpd
FTP server to start when changing runlevel 2 to 5
1,678,537,061,000
My os is mint 17.2 First off, when I start it with: sudo /etc/init.d/lsyncd start it starts. But when I reboot my system it isn't started by default. How do I have it start at boot time? I had previously had it as an Upstart job, but that wasnt working at startup either. Here are my files/settings: /etc/init.d/lsyncd #! /bin/sh ### BEGIN INIT INFO # Provides: lsyncd # Required-Start: $remote_fs # Required-Stop: $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: lsyncd daemon init script # Description: This script launches the lsyncd daemon. ### END INIT INFO # Author: Ignace Mouzannar <[email protected]> PATH=/sbin:/usr/sbin:/bin:/usr/bin DESC="synchronization daemon" NAME=lsyncd DAEMON=/usr/bin/$NAME CONFIG=/etc/lsyncd/lsyncd.conf.lua PIDFILE=/var/run/$NAME.pid DAEMON_ARGS="-pidfile ${PIDFILE} ${CONFIG}" SCRIPTNAME=/etc/init.d/$NAME NICELEVEL=10 # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Exit if config file does not exist [ -r "$CONFIG" ] || exit 0 # Read configuration variable file if it is present [ -r /etc/default/$NAME ] && . /etc/default/$NAME # Define LSB log_* functions. # Depend on lsb-base (>= 3.0-6) to ensure that this file is present. . /lib/lsb/init-functions # # Function that starts the daemon/service # do_start() { start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON \ --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE \ --nicelevel $NICELEVEL --exec $DAEMON -- \ $DAEMON_ARGS \ || return 2 } # # Function that stops the daemon/service # do_stop() { start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME RETVAL="$?" [ "$RETVAL" = 2 ] && return 2 start-stop-daemon --stop --quiet --oknodo --exec $DAEMON [ "$?" = 2 ] && return 2 # Many daemons don't delete their pidfiles when they exit. 
rm -f $PIDFILE return "$RETVAL" } # # Function that sends a SIGHUP to the daemon/service # do_reload() { start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME return 0 } case "$1" in start) log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) log_end_msg 0 ;; 2) log_end_msg 1 ;; esac ;; stop) log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) log_end_msg 0 ;; 2) log_end_msg 1 ;; esac ;; status) status_of_proc $DAEMON $NAME && exit 0 || exit $? ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2 exit 3 ;; esac : /etc/lsyncd/lsyncd.conf.lua settings { nodaemon = false, statusFile = "/tmp/lsyncd.stat", statusInterval = 1, logfile = "/var/log/lsyncd/lsyncd.log", statusFile = "/var/log/lsyncd/lsyncd-status.log" } sync { default.rsync, source = "/home/user/bin_pers/data", target = "/home/user/test", delay=0, rsync = { perms = true, owner = true, archive = true } } empty files /var/log/lsyncd/lsyncd.log /var/log/lsyncd/lsyncd-status.log
As it turned out, it was not starting because one of the sync rules involved a hard drive that was not mounted at startup. Mounting it and then restarting lsyncd did the trick.
lsyncd won't start at startup
1,678,537,061,000
This problem occurs on Debian Jessie x86 with systemd. It leads to an incomplete boot sequence at init 2 because network-manager won't start, which leaves the whole system unusable: NetworkManager[785]: segfault at e7394845 ip b74ab7a1 sp b7548810 error 7 in libgnutls-deb0.so.28.41.0[b746f000+13a000]
It turned out I had interrupted an upgrade process earlier. I manually reinstalled the network-manager package (e.g. with apt-get install --reinstall network-manager).
segfault in libgnutls - Debian won't complete boot
1,678,537,061,000
My Lenovo Y500 intel i7 nvidia gt 560m LinuxMint 14 x64 halts during startup at: stopping samba auto-reload integration No error shown. No login prompt. I accidentally executed this in the wrong terminal: sudo dpkg-divert --local --rename --add /sbin/initctl sudo ln -s /bin/true /sbin/initctl then I tried to fix it with: mv /sbin/initctl /initctl dpkg-divert --remove /sbin/initctl How can I make my system start properly again?
Solution: boot from a live CD, chroot into the installation, then: mv /sbin/initctl /initctl dpkg-divert --remove /sbin/initctl apt-get install --reinstall initctl
LinuxMint 14 x64 halts during startup at: stopping samba auto-reload integration
1,678,537,061,000
I'm new to init scripts, but the one I'm using I've copied almost verbatim (I did have to change a few things around from the source I found to work with Fedora). The daemon initializes fastcgi just fine, which was a great victory. However, the init script itself never finishes running, and never returns [ok], even though the program is initialized. It just hangs at "Starting PHP FastCGI" with a blinking cursor. I can't ^C out of it, either. The init script is below. This is a Fedora14 server. #!/bin/sh #chkconfig 3 85 15 #processname: php-fcgi . /etc/rc.d/init.d/functions php_cgi="/usr/bin/php-cgi" prog=$(basename $php_cgi) bind=/tmp/php.socket php_fcgi_children=15 php_fcgi_max_requests=1000 user=root php_cgi_args="- USER=$user PATH=/usr/bin PHP_FCGI_CHILDREN=$php_fcgi_children PHP_FCGI_MAX_REQUESTS=$php_fcgi_max_requests $php_cgi -b $bind" RETVAL=0 start() { echo -n "Starting PHP FastCGI: " daemon /usr/bin/env $php_cgi_args RETVAL=$? echo "$prog." } stop() { echo -n "Stopping PHP FastCGI: " killall -q -w -u $user $php_cgi RETVAL=$? echo "$prog." } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; *) echo "Usage: php-fcgi {start|stop|restart}" exit 1 ;; esac exit $RETVAL
Sounds like the php-fcgi process doesn't daemonize by default, which means it will stay in the foreground and block progress just like you have observed. Most applications intended to run as daemons have an option that will cause it to daemonize (in short; run as a background process). If the php-fcgi process does not have such an option you will have to explicitly run it in the background instead, replace the deamon line with something like this: /usr/bin/env $php_cgi_args >/dev/null 2>&1 & If the process does any kind of direct logging to standard out or error that you want to be able to look at replace /dev/null above with the name of a log file.
Init Script initializing daemonized process, but won't return [ok]
1,678,537,061,000
There are many times when I'd like to have a certain daemons run when a user logs in and killed when a user logs out. I'd like these daemons to be restarted if the daemon exits unexpectedly and I'd like a handy way to view the daemon status and what not. I want the daemon process to be owned by the user. Basically, I want systemd and systemctl, but for sessions and not for the system. Is there such a thing? I'd imagine this is already part of systemd, but I haven't found anything regarding it. Usually folks recommend adding services to ~/.profile, but this seems like a really poor init system.
Yes, systemd has a user service manager which takes care of user-scoped services. You can control it using the same systemctl commands you’d apply to system services, but with an extra --user option.
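As a sketch (the unit and program names are made up), a per-user service lives under ~/.config/systemd/user/ and is managed with systemctl --user:

```ini
# ~/.config/systemd/user/mydaemon.service  (hypothetical names throughout)
[Unit]
Description=Example per-user daemon

[Service]
ExecStart=%h/bin/mydaemon
# restart the daemon if it exits unexpectedly
Restart=on-failure

[Install]
# pulled in when the per-user service manager starts at login
WantedBy=default.target
```

Then systemctl --user enable --now mydaemon.service starts it at login, and systemctl --user status mydaemon.service shows its state. By default the user manager stops your services when your last session ends; loginctl enable-linger changes that behaviour if you want them to outlive logout.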
Is there a systemd equivalent for user sessions?
1,678,537,061,000
On OSes with Systemd, reboot and halt are symlinks to systemctl. On OSes with SysvInit, what are reboot and halt symlinks to? Is it telinit? Or are they themselves executable files, not symlinks? Thanks.
On Debian Jessie (for example), reboot may come from sysvinit-core ( https://packages.debian.org/jessie/sysvinit-core ) Downloading the file ( https://packages.debian.org/jessie/amd64/sysvinit-core/download ) $ mkdir X $ cd X $ ar x ../sysvinit-core_2.88dsf-59_amd64.deb $ xz -dc < data.tar.xz | tar tvf - | egrep 'reboot|halt' -rwxr-xr-x root/root 18776 2015-04-06 14:44 ./sbin/halt -rw-r--r-- root/root 1753 2015-04-06 14:44 ./usr/share/man/man8/halt.8.gz lrwxrwxrwx root/root 0 2015-04-06 14:44 ./sbin/poweroff -> halt lrwxrwxrwx root/root 0 2015-04-06 14:44 ./sbin/reboot -> halt lrwxrwxrwx root/root 0 2015-04-06 14:44 ./usr/share/man/man8/reboot.8.gz -> halt.8.gz lrwxrwxrwx root/root 0 2015-04-06 14:44 ./usr/share/man/man8/poweroff.8.gz -> halt.8.gz So we can see that halt is a separate program, and poweroff and reboot are symlinks to that.
On OSes with SysvInit, are `reboot` and `halt` symlinks to some executables?
1,678,537,061,000
Kali version 2016.2, 64-bit, full version. Kali installation: main OS on an SSD. An error message: [ 2.691529] radeon 0000:01:00.0 VCE init error (-22). Solutions tried, with no result: reinstalling gdm3 / X; also apt-get update, apt-get upgrade -y, apt-get install -f gdm3. Nothing worked, and I also tried more solutions from the web. The graphical install works fine, but when the system comes up it stays in text mode and I have to move to tty2. Specs: AMD Radeon R5 M430 and Intel HD Graphics 620. I tried this as well; it did not work.
I reinstalled Windows, but this time using UEFI, and then ran the Kali installation in UEFI rather than Legacy mode. That solved my problem.
After a clean installation GUI not working-AMD GPU
1,678,537,061,000
In https://manpages.debian.org/stretch/sysvinit-core/init.8.en.html /sbin/telinit is linked to /sbin/init. It takes a one-character argument and signals init to perform the appropriate action. ... Init listens on a fifo in /run, /run/initctl, for messages. Telinit uses this to communicate with init. Does the first sentence mean that telinit is a symlink to init? If yes, is it correct that telinit and init are run in the same process (e.g. maybe by some file lock) ? If yes, how can telinit communicate with init using FIFO or signals? For comparison, in Systemd, systemd and systemctl are different program files. Does telinit perform the same role to init in sysvinit, as systemctl to systemd? Thanks.
It is a symlink, but programs can look at how they are called and perform different actions. This is extremely common in the Unix world. And so when you run the telinit command, it runs in its own process space, separate from the init process. It sends a message to the init process. This may be sent via a FIFO, or by a signal, depending on compile-time options.
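The pattern is easy to demonstrate with a script and a symlink (the names here are made up; the point is only the argv[0] dispatch that init/telinit rely on):

```shell
dir=$(mktemp -d)
cd "$dir"

# One program that checks the name it was invoked under
cat > multicall.sh <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    telinit) echo "acting as telinit" ;;
    *)       echo "acting as init" ;;
esac
EOF
chmod +x multicall.sh
ln -s multicall.sh telinit    # a second name for the same file

out_init=$(./multicall.sh)    # invoked by its real name
out_telinit=$(./telinit)      # invoked via the symlink
```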
In sysvinit, do `telinit` and `init` run in the same process?
1,678,537,061,000
Gilles wrote an excellent reply about how the Linux kernel shuts down at https://unix.stackexchange.com/a/122667/674 I was wondering how the Linux OS shuts down (in the cases of both sysvinit and systemd)? I am expecting something similar to the boot sequence of a Linux OS. I am particularly wondering how the processes notify each other: by some signal like SIGTERM and SIGKILL, or some other interprocess communication mechanism? Thanks. Related: What is the difference between "when the operating system shuts down" and "when the kernel shuts down"? When the operating system shuts down, how does a service manager know that it should send SIGTERM and SIGKILL to its services?
With both sysvinit and systemd, shutting the operating system down starts by notifying the init process (the process with pid 1) that the system should shut down (or reboot, or power off). sysvinit does this by using the /run/initctl FIFO to communicate with init, asking it to switch to the corresponding runlevel. See the init manpage for a brief overview. systemd supports a variety of ways to do this. Various signals can be used to request a shutdown, reboot, etc., in various flavours; it’s also possible to request this over d-bus (the busctl manpage explains how to explore that). Once pid 1 has been asked to shut down, it follows its configuration and specification and goes through all the appropriate steps for it. This typically includes notifying all users that a shutdown is in progress, shutting down all the running services (in a managed way with systemd; using shutdown scripts in various forms with sysvinit), syncing the mounted file systems, possibly unmounting them, killing all remaining processes (with the usual TERM then KILL sequence), and finally calling the kernel’s reboot system call with the appropriate parameters. This describes the general sequence. There are many more twists to all this, including access control (with Polkit), various available hooks, kexec, sudden power-off handling, CtrlAltDel handling... The systemd documentation covers many of these details.
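As a concrete illustration of the "notify pid 1" step (these commands really do shut a machine down, so treat them as a sketch, not something to paste into a session you care about):

```shell
# sysvinit: telinit writes a request into the /run/initctl FIFO
telinit 0               # ask init to switch to runlevel 0 (halt)

# systemd: several equivalent front ends to the same request
systemctl poweroff      # the usual command
kill -RTMIN+4 1         # SIGRTMIN+4 starts poweroff.target (see systemd(1))
busctl call org.freedesktop.login1 /org/freedesktop/login1 \
    org.freedesktop.login1.Manager PowerOff b false   # via D-Bus/logind
```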
How does Linux operating system shut down? [closed]
1,678,537,061,000
It seems that a server is not necessarily running as a daemon, e.g. X server. If I am not correct, please let me know. Is a daemon necessarily a server? Is there a daemon which is not a server? I guess there are quite a few, and I am not sure if the init processes under sysvinit and systemd are such examples. Thanks.
Yes. Anything that performs a task without being requested to do so by a client is such a daemon, i.e. a daemon that is not serving clients. I've recently played around with SSHGuard, a daemon that parses connection logs and blocks abusive hosts. This is not a server. The DHCP client daemon that many Unices run variations of is not a server. NTP is often implemented as a daemon that can function without being a server (only as a leaf-node client).
Is there a daemon which is not a server? [closed]
1,678,537,061,000
I have a lot of questions about scripts. How do I configure a script so that:
1. it automatically starts when the computer is turned on?
2. I can start and stop the script from the console?
3. after closing the console, the script keeps working?
That depends on what OS you are running. A program that starts when a computer is turned on is generally called a service. The traditional Unix way is to use an rc script. If your system uses systemd, that should still be supported; see How does systemd use /etc/init.d scripts? All scripts can be started from the console by design: just use their full path, or have their directory in your PATH and use their name. Stopping a script can be done with Ctrl+C if it is running in the foreground, or with ps and kill interactively from another shell, or better, pkill when available. Depending on the signal and the script, it can be terminated gracefully or not. A script launched with nohup and running in the background is unaffected when the console from which it was launched is closed.
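A minimal sketch of that last point (script and file names here are made up): a job started with nohup in the background keeps running, and keeps writing its output, independently of the console that launched it.

```shell
dir=$(mktemp -d)

# A stand-in for a long-running script
cat > "$dir/worker.sh" <<'EOF'
#!/bin/sh
echo "started"  >  "$1"
sleep 1
echo "finished" >> "$1"
EOF
chmod +x "$dir/worker.sh"

# nohup detaches it from the terminal's hangup signal;
# & puts it in the background so the console is free (or closable).
nohup "$dir/worker.sh" "$dir/out.txt" >/dev/null 2>&1 &
wait $!    # here we wait only so we can inspect the result below
```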
Linux bash scripting [closed]
1,393,437,350,000
So I received a warning from our monitoring system on one of our boxes that the number of free inodes on a filesystem was getting low. df -i output shows this: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/xvda1 524288 422613 101675 81% / As you can see, the root partition has 81% of its inodes used. I suspect they're all being used in a single directory. But how can I find where that is at?
I saw this question over on Stack Overflow, but I didn't like any of the answers, and it really is a question that should be here on U&L anyway. Basically an inode is used for each file on the filesystem. So running out of inodes generally means you've got a lot of small files laying around. So the question really becomes, "what directory has a large number of files in it?" In this case, the filesystem we care about is the root filesystem /, so we can use the following command:

{ find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n; } 2>/dev/null

This will dump a list of every directory on the filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom. In my case, this turns up the following:

1202 /usr/share/man/man1
2714 /usr/share/man/man3
2826 /var/lib/dpkg/info
306588 /var/spool/postfix/maildrop

So basically /var/spool/postfix/maildrop is consuming all the inodes. Note: this answer does have three caveats that I can think of. It does not properly handle anything with newlines in the path. I know my filesystem has no files with newlines, and since this is only being used for human consumption, the potential issue isn't worth solving; one can always replace the \n with \0 and use the -z options for the sort and uniq commands above, as follows:

{ find / -xdev -printf '%h\0' | sort -z | uniq -zc | sort -zk1rn; } 2>/dev/null

Optionally you can add head -zn10 to the command to get the top 10 most-used directories. It also does not handle the case where the files are spread out among a large number of directories. This isn't likely though, so I consider the risk acceptable. It will also count hard links to the same file (so using only one inode) several times, though again, that is unlikely to give false positives. The key reason I didn't like any of the answers on the Stack Overflow question is that they all cross filesystem boundaries.
Since my issue was on the root filesystem, this means it would traverse every single mounted filesystem. Throwing -xdev on the find commands wouldn't even work properly. For example, the most upvoted answer is this one:

for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

If we change this instead to

for i in `find . -xdev -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

then even though /mnt/foo is a mount, it is also a directory on the root filesystem, so it'll turn up in find . -xdev -type d, and then it'll get passed to the ls -a $i, which will dive into the mount. The find in my answer instead lists the directory of every single file on the mount. So basically, with a file structure such as:

/foo/bar
/foo/baz
/pop/tart

we end up with

/foo
/foo
/pop

So we just have to count the number of duplicate lines.
Find where inodes are being used
1,393,437,350,000
From the article Anatomy of the Linux file system by M. Tim Jones, I read that Linux views all the file systems from the perspective of a common set of objects and these objects are superblock, inode, dentry and file. Even though the rest of the paragraph explains the above, I was not that comfortable with that explanation. Could somebody explain to me these terms?
First and foremost, and I realize that it was not one of the terms from your question, you must understand metadata. Succinctly, and stolen from Wikipedia, metadata is data about data. That is to say that metadata contains information about a piece of data. For example, if I own a car then I have a set of information about the car but which is not part of the car itself. Information such as the registration number, make, model, year of manufacture, insurance information, and so on. All of that information is collectively referred to as the metadata. In Linux and UNIX file systems metadata exists at multiple levels of organization as you will see. The superblock is essentially file system metadata and defines the file system type, size, status, and information about other metadata structures (metadata of metadata). The superblock is very critical to the file system and therefore is stored in multiple redundant copies for each file system. The superblock is a very "high level" metadata structure for the file system. For example, if the superblock of a partition, /var, becomes corrupt then the file system in question (/var) cannot be mounted by the operating system. Commonly in this event, you need to run fsck which will automatically select an alternate, backup copy of the superblock and attempt to recover the file system. The backup copies themselves are stored in block groups spread through the file system with the first stored at a 1 block offset from the start of the partition. This is important in the event that a manual recovery is necessary. You may view information about ext2/ext3/ext4 superblock backups with the command dumpe2fs /dev/foo | grep -i superblock which is useful in the event of a manual recovery attempt. Let us suppose that the dumpe2fs command outputs the line Backup superblock at 163840, Group descriptors at 163841-163841. 
We can use this information, and additional knowledge about the file system structure, to attempt to use this superblock backup: /sbin/fsck.ext3 -b 163840 -B 1024 /dev/foo. Please note that I have assumed a block size of 1024 bytes for this example. An inode exists in, or on, a file system and represents metadata about a file. For clarity, all objects in a Linux or UNIX system are files; actual files, directories, devices, and so on. Please note that, among the metadata contained in an inode, there is no file name as humans think of it, this will be important later. An inode contains essentially information about ownership (user, group), access mode (read, write, execute permissions), file type, and the data blocks with the file's content. A dentry is the glue that holds inodes and files together by relating inode numbers to file names. Dentries also play a role in directory caching which, ideally, keeps the most frequently used files on-hand for faster access. File system traversal is another aspect of the dentry as it maintains a relationship between directories and their files. A file, in addition to being what humans typically think of when presented with the word, is really just a block of logically related arbitrary data. Comparatively very dull considering all of the work done (above) to keep track of them. I fully realize that a few sentences do not provide a full explanation of any of these concepts so please feel free to ask for additional details when and where necessary.
What is a Superblock, Inode, Dentry and a File?
1,393,437,350,000
I want to see how many files are in subdirectories to find out where all the inode usage is on the system. Kind of like I would do this for space usage du -sh /* which will give me the space used in the directories off of root, but in this case I want the number of files, not the size.
find . -maxdepth 1 -type d | while read -r dir
do
    printf "%s:\t" "$dir"
    find "$dir" -type f | wc -l
done

Thanks to Gilles and xenoterracide for safety/compatibility fixes.

The first part, find . -maxdepth 1 -type d, will return a list of all directories in the current working directory. (Warning: -maxdepth is a GNU extension and might not be present in non-GNU versions of find.) This is piped to...

The second part, while read -r dir; do, begins a while loop: as long as the pipe coming into the while is open (which is until the entire list of directories is sent), the read command will place the next line into the variable dir. Then it continues...

The third part, printf "%s:\t" "$dir", will print the string in $dir (which is holding one of the directory names) followed by a colon and a tab (but not a newline).

The fourth part, find "$dir" -type f, makes a list of all the files inside the directory whose name is held in $dir. This list is sent to...

The fifth part, wc -l, counts the number of lines that are sent into its standard input.

The final part, done, simply ends the while loop.

So we get a list of all the directories in the current directory. For each of those directories, we generate a list of all the files in it so that we can count them all using wc -l. The result will look like:

./dir1: 234
./dir2: 11
./dir3: 2199
...
How do I count all the files recursively through directories
1,393,437,350,000
I had a problem (new to me) last week. I have a ext4 (Fedora 15) filesystem. The application that runs on the server suddenly stopped. I couldn't find the problem at first look. df showed 50% available space. After searching for about an hour I saw a forum post where the guy used df -i. The option looks for inodes usage. The system was out of inodes, a simple problem that I didn't realize. The partition had only 3.2M inodes. Now, my questions are: Can I make the system have more inodes? Should/can it be set when formatting the disk? With the 3.2M inodes, how many files could I have?
It seems that you have a lot more files than is normally expected. I don't know whether there is a solution to change the inode table size dynamically. I'm afraid you need to back up your data, create a new filesystem, and restore your data. To create a new filesystem with such a huge inode table, you need to use the '-N' option of mke2fs(8). I'd recommend using the '-n' option first (which does not create the fs, but displays useful information) so that you can get the estimated number of inodes. Then, if you need to, use '-N' to create your filesystem with a specific number of inodes.
How can I increase the number of inodes in an ext4 filesystem?
1,393,437,350,000
Let's say when I do ls -li inside a directory, I get this: 12353538 -rw-r--r-- 6 me me 1650 2013-01-10 16:33 fun.txt As the output shows, the file fun.txt has 6 hard links; and the inode number is 12353538. How do I find all the hard links for the file i.e. files with the same inode number?
The basic premise is to use:

find /mount/point -mount -samefile /mount/point/your/file

On systems with findmnt you can derive the mount point like this:

file=/path/to/your/file
find "$(findmnt -o TARGET -cenT "$file")" -mount -samefile "$file"

It's important not to search from / (unless the target file is on that filesystem) because inode numbers are reused in each mounted filesystem.
List all files with the same inode number?
1,393,437,350,000
It is well-known that empty text files have zero bytes. However, each of them contains metadata, which according to my research is stored in inodes, and does use space. Given this, it seems logical to me that it is possible to fill a disk purely by creating empty text files. Is this correct? If so, how many empty text files would I need to fill a disk of, say, 1 GB? To do some checks, I ran df -i, but this apparently shows the % of inodes being used(?) rather than how much they weigh.

Filesystem           Inodes    IUsed    IFree    IUse% Mounted on
udev                 947470    556      946914   1%    /dev
tmpfs                952593    805      951788   1%    /run
/dev/sda2            28786688  667980   28118708 3%    /
tmpfs                952593    25       952568   1%    /dev/shm
tmpfs                952593    5        952588   1%    /run/lock
tmpfs                952593    16       952577   1%    /sys/fs/cgroup
/dev/sda1            0         0        0        -     /boot/efi
tmpfs                952593    25       952568   1%    /run/user/1000
/home/lucho/.Private 28786688  667980   28118708 3%    /home/lucho
This output suggests 28786688 inodes overall, after which the next attempt to create a file in the root filesystem (device /dev/sda2) will return ENOSPC ("No space left on device"). Explanation: on the original *nix filesystem design, the maximum number of inodes is set at filesystem creation time. Dedicated space is allocated for them. You can run out of inodes before you run out of space for data, or vice versa. The most common default Linux filesystem ext4 still has this limitation. For information about inode sizes on ext4, look at the manpage for mkfs.ext4. Linux supports other filesystems without this limitation. On btrfs, space is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes). Of course you can still run out of disk space by creating too much metadata / directory entries. Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown. "XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule. "BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h" – Peter Cordes in a comment to this answer
Can I run out of disk space by creating a very large number of empty files?
1,393,437,350,000
When I edit a file in the vi editor, the inode value of the file changes. But when edited with the cat command, the inode value does not change.
Most likely, you have the backup option set on, and backupcopy set to "no" or "breakhardlink". With those settings, Vim backs up the original file by renaming it and then writes a new file under the old name, so the file you end up with is a new inode. Writing with cat (via > or >>) goes into the existing file in place, so its inode is unchanged.
Why does inode value change when we edit in "vi" editor?
1,393,437,350,000
I find that under my root directory, there are some directories that have the same inode number: $ ls -aid */ .*/ 2 home/ 2 tmp/ 2 usr/ 2 var/ 2 ./ 2 ../ 1 sys/ 1 proc/ I only know that the directories' names are kept in the parent directory, and their data is kept in the inode of the directories themselves. I'm confused here. This is what I think when I trace the pathname /home/user1. First I get into the inode 2 which is the root directory which contains the directory lists. Then I find the name home paired with inode 2. So I go back to the disk to find inode 2? And I get the name user1 here?
They're on different devices. If we look at the output of stat, we can also see the device the file is on:

# stat / | grep Inode
Device: 801h/2049d    Inode: 2    Links: 24
# stat /opt | grep Inode
Device: 803h/2051d    Inode: 2    Links: 5

So those two are on separate devices/filesystems. Inode numbers are only unique within a filesystem, so there is nothing unusual here. On ext2/3/4, inode 2 is also always the root directory, so we know they are the roots of their respective filesystems. The combination of device number + inode is likely to be unique over the whole system. (There are filesystems that don't have inodes in the traditional sense, but I think they still have to fake some sort of a unique identifier in their place anyway.) The device numbers there appear to be the same as those shown on the device nodes, so /dev/sda1 holds the filesystem that / is on:

# ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Sep 21 10:45 /dev/sda1
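The same device-plus-inode pairing is easy to check with GNU stat: a hard link shares both numbers with its target, while a separate file on the same filesystem shares only the device number.

```shell
dir=$(mktemp -d)
touch "$dir/a"
ln "$dir/a" "$dir/b"     # hard link: same inode on the same device
touch "$dir/c"           # unrelated file: same device, different inode

# %d = device number, %i = inode number (GNU coreutils stat)
id_a=$(stat -c '%d:%i' "$dir/a")
id_b=$(stat -c '%d:%i' "$dir/b")
id_c=$(stat -c '%d:%i' "$dir/c")
```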
Why do the directories /home, /usr, /var, etc. all have the same inode number (2)?
1,393,437,350,000
Say I am running a software, and then I run package manager to upgrade the software, I notice that Linux does not bring down the running process for package upgrade - it is still running fine. How does Linux do this?
The reason is that Unix does not lock an executable file while it is executed, and even where it does, as Linux can, the lock applies to the inode, not the file name. That means a process keeping it open is accessing the same (old) data even after the file has been deleted (unlinked, actually) and replaced by a new one with the same name, which is essentially what a package update does. That is one of the main differences between Unix and Windows. The latter cannot update a file that is locked, and since it is missing a layer between file names and inodes, it is a major hassle to update or even install some packages, usually requiring a full reboot.
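The mechanism is easy to reproduce in a shell: hold a file open, replace the name, and observe that the open descriptor still reads the old inode, which is exactly what lets a running program survive its own upgrade.

```shell
dir=$(mktemp -d)
echo "old version" > "$dir/prog"

exec 3< "$dir/prog"      # stand-in for "a process has this file open"

rm "$dir/prog"                       # the upgrade unlinks the old name...
echo "new version" > "$dir/prog"     # ...and installs a new inode under it

old=$(cat <&3)           # the held-open descriptor still sees the old data
new=$(cat "$dir/prog")   # the name now refers to the new inode
exec 3<&-                # release the old inode; now it is truly gone
```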
Why does a software package run just fine even when it is being upgraded?
1,393,437,350,000
I want to know how many files I have on my filesystem. I know I can do something like this: find / -type f | wc -l This seems highly inefficient. What I'd really like is to do is find the total number of unique inodes that are considered a 'file'. Is there a better way? Note: I would like to do this because I am developing a file synchronization program, and I would like to do some statistical analysis (like how many files the average user has total vs how many files are on the system). I don't, however, need to know anything about those files, just that they exist (paths don't matter at all). I would especially like to know this info for each mounted filesystem (and it's associated mount point).
The --inodes option to df will tell you how many inodes are reserved for use. For example:

$ df --inodes / /home
Filesystem       Inodes  IUsed    IFree IUse% Mounted on
/dev/sda1       3981312 641704  3339608   17% /
/dev/sda8      30588928 332207 30256721    2% /home

$ sudo find / -xdev -print | wc -l
642070
$ sudo find /home -print | wc -l
332158
$ sudo find /home -type f -print | wc -l
284204

Notice that the number of entries returned from find is greater than IUsed for the root (/) filesystem, but is less for /home; both agree to within about 0.06%. The reason for the discrepancies is hard links and similar situations. Remember that directories, symlinks, UNIX domain sockets and named pipes are all 'files' as far as the filesystem is concerned. So using find's -type f flag is wildly inaccurate, from a statistical viewpoint.
How can I find the number of files on a filesystem?
1,393,437,350,000
This is a bit of a theoretical question, but it's important to use proper names for things. In UNIX/Linux file systems, .. points to the parent directory. However, we know that hard links cannot point to directories, because that has the potential to break the acyclic graph structure of the filesystem and cause commands to run in an endless loop. So, is .. really a hard link (like .)? That would make it a special type of hard link, not subject to the directory restriction, but which for all purposes behaves like one. Or is that a special inode mapping, hardcoded into the filesystem, which ought not be called a hard link?
It depends on the filesystem. Most filesystems follow the traditional Unix design, where . and .. are hard links, i.e. they're actual directory entries in the filesystem. The hard link count of a directory is 2 + n where n is the number of subdirectories: that's the entry in the directory's parent, the directory's own . entry, and each subdirectory's .. entry. The hard link count is updated each time a subdirectory is created, removed or moved in or out of the directory. See Why does a new directory have a hard link count of 2 before anything is added to it? for a more detailed explanation. A few filesystems deviate from this tradition, in particular btrfs. we know that hard links cannot point to directories This is imprecise wording. More precisely, you can't create a hard link to a directory using the ln utility or the link system call or a similar method, because the kernel prevents you. Calling mkdir does create a hard link to the parent of the new directory. It's the only way to create a new hard link to a directory on a filesystem (and conversely removing a directory is the only way to remove a hard link to a directory). Also, note that it's misleading to think of hard links in terms of “pointing to” a primary file. Hard links are not directional, unlike symbolic links. When a file has multiple hard links, they're equivalent. After the following sequence: mkdir a b touch a/file ln a/file b/file there is nothing in the filesystem that makes b/file secondary to a/file. The two directory entries both refer to the same file. They're both hard links to the file.
Is '..' really a hard link?
1,393,437,350,000
Unix file systems usually have an inode table, and the number of entries in this table is usually fixed at the time the file system is created. This sometimes leads to people with plenty of disk space getting confusing error messages about no free space, and even after they figure out what the problem is, there is no easy solution for what to do about it. But it seems (to me) that it would be very desirable to avoid this whole mess by allocating inodes on demand, completely transparently to users and system administrators. If you're into cute hacks, you could even make the inode table itself be a file, and thus reuse the code you already have that finds free space on the disk. If you're lucky, you might even end up with the inodes near the files themselves, without explicitly trying to achieve this result. But nobody (that I know of) actually does this, so there's probably a catch that I'm missing. Any idea what it might be?
Say you did make the inode table a file; then the next question is... where do you store information about that file? You'd thus need "real" inodes and "extended" inodes, like an MS-DOS partition table. Given, you'd only need one (or maybe a few — e.g., to also have your journal be a file). But you'd actually have special cases, different code. Any corruption to that file would be disastrous, too. And consider that, before journaling, it was common for files that were being written e.g., when the power went out to be heavily damaged. Your file operations would have to be a lot more robust vs. power failure/crash/etc. than they were on, e.g., ext2. Traditional Unix filesystems found a simpler (and more robust) solution: put an inode block (or group of blocks) every X blocks. Then you find them by simple arithmetic. Of course, then it's not possible to add more (without restructuring the entire filesystem). And even if you lose/corrupt the inode block you were writing to when the power failed, that's only losing a few inodes — far better than a substantial portion of the filesystem. More modern designs use things like B-tree variants. Modern filesystems like btrfs, XFS, and ZFS do not suffer from inode limits.
Why is the inode table usually not resizable?
1,393,437,350,000
There is literally nothing on google that I can find that will help me answer this question. I presume it is passing some other parameter to ls -i?
Yes, the argument -i will print the inode number of each file or directory the ls command is listing. As you want to print the inode number of a directory, I would suggest using the argument -d to only list directories. For printing the inode number the directory /path/to/dir, use the following command line: ls -id /path/to/dir From man ls: -d, --directory list directory entries instead of contents, and do not derefer‐ ence symbolic links -i, --inode print the index number of each file
How do I find the inode of any directory?
1,393,437,350,000
If I run a command like this one: find / -inum 12582925 Is there a chance that this will list two files on separate mounted filesystems (from separate partitions) that happen to have been assigned the same number? Is the inode number unique on a single filesystem, or across all mounted filesystems?
An inode number is only unique on a single file system. One example you’ll run into quickly is the root inode on ext2/3/4 file systems, which is 2: $ ls -id / /home 2 / 2 /home If you run (assuming GNU find) find / -printf "%i %p\n" | sort -n | less on a system with multiple file systems you’ll see many, many duplicate inode numbers (although you need to take the output with a pinch of salt since it will also include hard links). When you’re looking for a file by inode number, you can use find’s -xdev option to limit its search to the file system containing the start path, if you have a single start path: find / -xdev -inum 12582925 will only find files with inode number 12582925 on the root file system. (-xdev also works with multiple start paths, but then its usefulness is reduced in this particular case.) It's the combination of inode number and device number (st_dev and st_ino in the stat structure, %D %i in GNU find's -printf) that identifies a file uniquely (on a given system). If two directory entries have the same inode and dev number, they refer to the same file (though possibly through two different mounts of a same file system for bind mounts). Some find implementations also have a -samefile predicate that will find files with the same device and inode number. Most [/test implementations also have a -ef operator to check that two files paths refer to the same file (after symlink resolution though).
Can two files on two separate filesystems share the same inode number? [duplicate]
1,393,437,350,000
I’m asking because string comparisons are slow, but indexing is fast, and a lot of scripts I write are in bash, which to my knowledge performs a full string lookup for every executable call. All those ls’s and grep’s would be a little bit faster without performing a string lookup on each step. Of course, this now delves into compiler optimization. Anyways, is there a way to directly invoke a program in Linux using only its inode number (assuming you only had to look it up once for all invocations)?
The short answer is no. The longer answer is that linux user API doesn't support accessing files by any method using the inode number. The only access to the inode number is typically through the stat() system call which exposes the inode number, which can be useful for identifying if two filenames are the same file, but is not used for anything else. Accessing a file by inode would be a security violation, as it would bypass permissions on the directories that contain the file linked to the inode. The closest you can get to this would be accessing a file by open file handle. But you can't run a program from that either, and this would still require opening the file by a path. (As noted in comments, this functionality was added to linux for security reasons along with the rest of the *at system calls, but is not portable. (yet? standards evolve.)) There's also numerous ways of using the inode number to find the file (basically, crawl the filesystem and use stat) and then run it normally, but this is the opposite of what you want, as it is enormously more expensive than just accessing the file by pathname and doesn't remove that cost either. Having said that, worrying about this type of optimization is probably moot, as Linux has already optimized the internal inode lookup a great deal. Also, traditionally, shells hash the path location of executables so they don't have to hunt for them from all directories in $PATH every time.
Does Linux support invoking a program directly via its inode number?
1,393,437,350,000
I have a filesystem with many small files that I erase regularly (the files are a cache that can easily be regenerated). It's much faster to simply create a new filesystem rather than run rm -rf or rsync to delete all the files (i.e. Efficiently delete large directory containing thousands of files). The only issue with creating a new filesystem to wipe the filesystem is that its UUID changes, leading to changes in e.g. /etc/fstab. Is there a way to simply "unlink" a directory from e.g. an ext4 filesystem, or completely clear its list of inodes?
Since you're using ext4 you could format the filesystem and then set the UUID to a known value afterwards. man tune2fs writes,

-U UUID
    Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this c1b9d5a2-f162-11cf-9ece-0020afc76f16.

And similarly, man mkfs.ext4 writes,

-U UUID
    Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…]

Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this

# <file system>  <mount point>  <type>  <options>           <dump>  <pass>
LABEL=root       /              ext4    errors=remount-ro   0       1
LABEL=backup     /backup        ext4    defaults            0       2

Such labels can be added with the -L flag for tune2fs and mkfs.ext4. They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But highly unlikely to be unique across multiple systems, so beware if swapping disks around.)
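A sketch of the reformat-but-keep-the-UUID cycle, run here against a loopback image file so no root is needed; on the real NAS you would point mkfs.ext4 at the block device instead (path and sizes are illustrative):

```shell
PATH=$PATH:/sbin:/usr/sbin      # mkfs/tune2fs often live outside a user PATH

img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"                                  # the "old" filesystem

uuid=$(tune2fs -l "$img" | awk '/Filesystem UUID/{print $3}')

# Wipe by reformatting, but hand mkfs the UUID we want to keep,
# so /etc/fstab entries keyed on UUID= stay valid:
mkfs.ext4 -q -F -U "$uuid" "$img"

tune2fs -l "$img" | grep 'Filesystem UUID'
```

Using `mkfs.ext4 -U` directly is equivalent to formatting and then running `tune2fs -U` on the unmounted filesystem.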
Reset ext4 filesystem without changing the filesystem UUID
1,393,437,350,000
The question is: why exactly does a directory not shrink after directory entries are removed? Is it due to how the ext4 filesystem is configured to retain directory metadata? Obviously, removing the directory and recreating it isn't a solution, since that deletes the original inode and creates a new one. What can be done to decrease the directory's size manually?
Quoting a developer (in a Linux kernel thread, ext3/ext4 directories don't shrink after deleting lots of files):

On Thu, May 14, 2009 at 08:45:38PM -0400, Timo Sirainen wrote:
> I was rather thinking something that I could run while the system was
> fully operational. Otherwise just moving the files to a temp directory +
> rmdir() + rename() would have been fine too.
>
> I just tested that xfs, jfs and reiserfs all shrink the directories
> immediately. Is it more difficult to implement for ext* or has no one
> else found this to be a problem?

It's probably fairest to say no one has thought it worth the effort. It would require some fancy games to swap out block locations in the extent trees (life would be easier with non-extent-using inodes), and in the case of htree, we would have to keep track of the index block so we could remove it from the htree index.

So it's all doable, if a bit tricky in terms of the technical details; it's just that the people who could do it have been busy enough with other things. It hasn't been considered high priority because most of the time directories don't go from holding thousands of files down to a small handful.

- Ted
Why directory with large amounts of entries does not shrink in size after entries are removed?
1,393,437,350,000
Device files are not files per se. They're an I/O interface to use the devices in Unix-like operating systems. They use no space on disk; however, they still use an inode, as reported by the stat command:

$ stat /dev/sda
  File: /dev/sda
  Size: 0           Blocks: 0          IO Block: 4096   block special file
Device: 6h/6d       Inode: 14628      Links: 1     Device type: 8,0

Do device files use physical inodes in the filesystem, and why do they need them at all?
The short answer is that it does only if you have a physical filesystem backing /dev (and if you're using a modern Linux distro, you probably don't). The long answer follows:

This all goes back to the original UNIX philosophy that everything is a file. This philosophy is part of what made UNIX so versatile, because you could directly interact with devices from userspace without needing to have special code in your application to talk directly to the physical hardware.

Originally, /dev was just another directory with a well-known name where you put your device files. Some UNIX systems still take this approach (I believe OpenBSD still does), and you can usually tell if a system is like this because it will have lots of device files for devices the system doesn't actually have (for example, files for every possible partition on every possible disk). This saves space in memory and time at boot at the cost of using a bit more disk space, which was a good trade-off for early systems because they were generally very memory constrained and not very fast. This is generally referred to as having a static /dev.

On modern Linux systems (and I believe also FreeBSD and possibly recent versions of Solaris), /dev is a temporary in-memory filesystem populated by the kernel (or udev if you use Systemd, because they don't trust the kernel to do almost anything). This saves some disk space at the price of some memory (usually less than a few MB) and a very small processing overhead. It also has a number of other advantages, with one of the biggest being that it's easier to detect hot-plugged hardware. This is generally referred to as having a dynamic /dev.

In both cases though, device nodes are accessed through the regular VFS layer, which by definition means they have to have an inode (even if it's a virtual one that just exists so that stuff like stat() works like it's supposed to).
From a practical perspective, this has zero impact on systems that use a dynamic /dev because they just store the inodes in memory or generate them as needed, and near zero impact where /dev is static because inodes take up near zero space on-disk and most filesystems either have no upper limit on them or provision way more than anybody is likely to ever need.
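Both halves of this are visible on a running system: the device node has an ordinary inode, while the filesystem holding it is typically memory-only (output is illustrative; on most modern Linux systems /dev is a devtmpfs):

```shell
# Every device node sits behind a normal VFS inode:
stat -c 'inode=%i  type=%F' /dev/null

# ...but the filesystem backing /dev is usually kept only in memory:
stat -f -c 'fs type: %T' /dev        # commonly tmpfs/devtmpfs
```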
Why do special device files have inodes?
1,393,437,350,000
On many *nix systems, like OS X and Ubuntu, we can see that the inode of the root directory is 2. Then what is inode 1 used for?
Inode 0 is used as a NULL value to indicate that there is no inode. Inode 1 is used to keep track of any bad blocks on the disk; it is essentially a hidden file containing the bad blocks. Those bad blocks are recorded using e2fsck -c. Inode 2 is used by the root directory, and marks the starting point for walking the filesystem's inodes.
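This is easy to check without touching a real disk: build a throwaway ext2 image and ask debugfs about inode 2 (assumes e2fsprogs is installed; no root is needed on an image file):

```shell
PATH=$PATH:/sbin:/usr/sbin
img=$(mktemp)
truncate -s 1M "$img"
mkfs.ext2 -q -F "$img"

# <2> addresses the inode by number; on a fresh filesystem it is the
# root directory:
debugfs -R 'stat <2>' "$img" 2>/dev/null | head -n 1
```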
Why does '/' have the inode 2?
1,393,437,350,000
Possible Duplicate: How can I increase the number of inodes in an ext4 filesystem? I have a homemade NAS with Debian Wheezy 64-bit. It has three disks - 2x2TB and 1.5TB - pooled together using RAID1/5 and LVM. The result is an LVM logical volume, about 3.16TB in size, formatted as ext4 and mounted as /home. However, I just found out that roughly 50GB of this capacity is used by inodes (the exact count being 212 459 520, at 256B each; or to put it another way - one inode for every 16k of the partition size). While 50GB in 3.16TB is about 1.5% of the total capacity, it's still a lot of space. Since this is a storage NAS, mostly used for multimedia, I don't ever expect the /home partition to have 212 million files in it. So, my question is this - is it possible to lower/change the number of inodes without actually re-creating the whole partition? If it is possible, I'd prefer to find a way to do so instead of moving 2TB of data around and waiting for RAID to re-sync again.
From the mke2fs man page: Be warned that it is not possible to expand the number of inodes on a filesystem after it is created, so be careful deciding the correct value for this parameter. So the answer is no. What you could do is shrink the existing ext4 volume (this requires unmounting the filesystem), use the free space to create a new ext4 volume with fewer inodes, copy the data, remove the old volume and extend the new volume to occupy all the space.
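Since the count is fixed at mkfs time, the practical fix is choosing it when you (re)create the filesystem. A sketch on an image file (sizes illustrative): -i sets bytes-per-inode (a bigger ratio means fewer inodes), while -N would set an absolute count instead:

```shell
PATH=$PATH:/sbin:/usr/sbin
img=$(mktemp)
truncate -s 16M "$img"

# One inode per 64 KiB of space instead of the default 16 KiB --
# roughly a quarter of the usual inode overhead, which suits a
# filesystem holding mostly large media files:
mkfs.ext4 -q -F -i 65536 "$img"

tune2fs -l "$img" | grep -i '^inode count'
```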
Is it possible to change Inode count on an ext4 filesystem? [duplicate]
1,393,437,350,000
I changed the /home directory to a different partition and couldn't access the files from it, something I have been able to solve from this question - How do you access the contents of a previous mount after switching to a different partition? If I had noted the directory's inode beforehand, would I have been able to use that alone to rename the directory?
You can rename a file (directory or whatever) using only knowledge of the inode using find, but if (a) the filesystem containing it is not mounted, or if (b) there is another filesystem mounted over a non-empty directory that contains the file you are interested in, the file is simply not accessible by your system. In case (a), you need to mount the filesystem before you can do anything to the contents, including renaming, and in case (b), you need to unmount the filesystem which is mounted "over the top of" the directory containing the file you want to rename.

It looks like you are asking about case (b). If I understand you correctly, you are trying to make your old /home directory (which is located on your root partition) accessible, while still using your new partition mounted at /home. If that's what you want, do the following:

Close all files and log out. Then log in as root (use a virtual terminal for this: press Ctrl-Alt-F2) and run the following:

umount /home
mv /home /home-old
mkdir /home
mount -a
ls /home
ls /home-old

If all is well, log out and log back in as yourself and all should be fine.

Incidentally, the command to rename a file using only knowledge of its inode (assuming the file is in the current directory) is:

find . -maxdepth 1 -inum 123456789 -exec mv {} mynewname \;

Where 123456789 is the inode number, of course. (Note that find determines the filename and its path and passes this info to mv; there is no way at all to rename a file without involving the existing filename in any way, but if it's just that you don't know the filename, it is quite simple.)
Is it possible to rename a file or directory using the inode?
1,393,437,350,000
In an ext4 filesystem, suppose that file1 has inode number 1, and that file2 has inode number 2. Now, regardless of any crtime timestamp that might be available, is it wrong to assume that file1 was created earlier than file2 only because inode 1 is less than inode 2?
Lower inode number doesn't prove older. A simple case that would change that sequence is deleting a file which would free the inode. That inode therefore becomes available for future use.
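A quick demonstration of the recycling (results vary by filesystem and its current state; on ext4 the freed inode is very often handed straight to the next file created):

```shell
dir=$(mktemp -d)

touch "$dir/older"
i1=$(stat -c %i "$dir/older")

rm "$dir/older"          # the inode goes back on the free list
touch "$dir/newer"       # the newer file may well receive that same inode
i2=$(stat -c %i "$dir/newer")

echo "older=$i1 newer=$i2"
```

Whenever the two numbers come out equal, the newer file has the "older" inode number, breaking any ordering assumption.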
Does inode number determine what files were created earlier than others?
1,393,437,350,000
I can do an ls -li to see a file's inode number, but how can I list the information inside a particular inode by using that inode number?
If you have an ext2/3/4 filesystem you can use debugfs for a low-level look at an inode. For example, to play without being root:

$ truncate -s 1M myfile
$ mkfs.ext2 -F myfile
$ debugfs -w myfile
debugfs:  stat <2>

Inode: 2   Type: directory    Mode:  0755   Flags: 0x0   Generation: 0    Version: 0x00000000
User:     0   Group:     0   Size: 1024
File ACL: 0    Directory ACL: 0
Links: 3   Blockcount: 2
Fragment:  Address: 0    Number: 0    Size: 0
ctime: 0x5722081d -- Thu Apr 28 14:54:53 2016
atime: 0x5722081d -- Thu Apr 28 14:54:53 2016
mtime: 0x5722081d -- Thu Apr 28 14:54:53 2016
BLOCKS:
(0):24
TOTAL: 1

The stat command takes an inode number inside <>.
How to see information inside inode data structure
1,393,437,350,000
What is the relation and the difference between xattr and chattr? I want to know, when I set a chattr attribute in Linux, what happens inside the Linux kernel and in the inode metadata.
The attributes as handled by lsattr/chattr on Linux, some of which can be stored by quite a few file systems (ext2/3/4, reiserfs, JFS, OCFS2, btrfs, XFS, nilfs2, hfsplus...) and even queried over CIFS/SMB (when with POSIX extensions), are flags: just bits that can be turned on or off to disable or enable an attribute (like immutable or archive...). How they are stored is file system specific, but generally as a 16/32/64 bit record in the inode.

The full list of flags is found on Linux native filesystems (ext2/3/4, btrfs...), though not all of the flags apply to all of them, and for other non-native FS, Linux tries to map them to equivalent features in the corresponding file system. For instance the simmutable flag as stored by OSX on HFS+ file systems is mapped to the corresponding immutable flag in Linux chattr. What flag is supported by what file system is hardly documented at all. Often, reading the kernel source code is the only option.

Extended attributes on the other hand, as set with setfattr or attr on Linux, store more than flags. They are attached to a file as well, and are key/value pairs that can be (both key and value) arbitrary arrays of bytes (though with limitations of size on some file systems). The key can be for instance: system.posix_acl_access or user.rsync.%stat.

The system namespace is reserved for the system (you wouldn't change the POSIX ACLs with setfattr, but rather with setfacl; POSIX ACLs just happen to be stored as extended attributes, at least on some file systems), while the user namespace can be used by applications (here rsync uses it for its --fake-super option, to store information about ownership or permissions when you're not superuser).

Again, how they are stored is filesystem specific. See Wikipedia for more information.
Difference between xattr and chattr
1,393,437,350,000
I recently discovered on a machine with RHEL6:

ls -lbi
917921 -rw-r-----. 1 alex pivotal 5245 Dec 17 20:36 application.yml
917922 -rw-r-----. 1 alex pivotal 2972 Dec 17 20:36 application11.yml
917939 -rw-r-----. 1 alex pivotal 3047 Dec 17 20:36 application11.yml
917932 -rw-r-----. 1 alex pivotal 2197 Dec 17 20:36 applicationall.yml

I was wondering how something like this can be achieved?
I was able to reproduce that behavior. See for example:

ls -lib
268947 -rw-r--r-- 1 root root  8 Dez 20 12:32 app
268944 -rw-r--r-- 1 root root 24 Dez 20 12:33 aрр

This is on my system (Linux debian 4.9.0-7-amd64 #1 SMP Debian 4.9.110-3+deb9u2 (2018-08-13) x86_64 GNU/Linux). I have a UTF-8 locale, and the character p in the above output is not the same in both lines, but it looks similar. In the first line it's a LATIN SMALL LETTER P and in the second line a CYRILLIC SMALL LETTER ER (see https://unicode.org/cldr/utility/confusables.jsp?a=p&r=None). This is just an example; it could be any character in the filename, even the dot.

When I use a UTF-8 locale, my shell gives the above output. But if I use a locale that does not have all Unicode characters, for example the default locale C, then the output looks as follows (you can change the locale by setting LC_ALL):

LC_ALL=C ls -lib
268947 -rw-r--r-- 1 root root  8 Dec 20 12:32 app
268944 -rw-r--r-- 1 root root 24 Dec 20 12:33 a\321\200\321\200

This is because the CYRILLIC SMALL LETTER ER is not present in ASCII.
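A reliable way to unmask such lookalike names, without switching locales, is to dump the raw bytes of the directory listing:

```shell
dir=$(mktemp -d)
touch "$dir/app"     # LATIN SMALL LETTER P
touch "$dir/aрр"     # CYRILLIC SMALL LETTER ER -- identical in many fonts

ls "$dir"                      # two seemingly identical names
ls "$dir" | od -c | head -n 4  # the 321 200 byte pairs betray the Cyrillic one
```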
Same file name different INODES
1,393,437,350,000
I have several files with encoding issues in their file names (German umlauts, burned on CD with Windows, read by Windows and synced to Linux with Seafile. Something, somewhere went wrong...). Bash and zsh only show "?" instead of umlauts, and stat shows something like:

$ stat Erg�nzung.doc
  File: ‘Erg\344nzung.doc’
  Size: 2609152     Blocks: 5096       IO Block: 4096   regular file
Device: 806h/2054d  Inode: 12321475    Links: 1

I can enter the filename only with autocompletion. How do I rename the file? The affected files seem to be unreadable by LibreOffice (or other programs for other file types); they complain about "No such file or device". I was thinking about mv --by-inode 12321475 Ergänzung.doc, but there's no --by-inode switch for mv. What else can I do?
You could try:

find . -inum 12321475 -exec mv {} new-filename \;

or

find . -inum 12321475 -print0 | xargs -0 -I{} mv {} new-filename

(the -I{} placeholder is needed so that mv receives the new name as its final argument; a plain xargs -0 mv -t new-filename would instead treat new-filename as a target directory). Generally I prefer xargs over exec. Google for why. It's tricky though. See Find -exec + vs find | xargs. Which one to choose?
"mv" file with garbled name by inode number?
1,393,437,350,000
A friend of mine who likes programming in the Linux environment, but doesn't know much about the administration of Linux, recently ran into a problem where his OS (Ubuntu) was reporting "out of disk space on XXX volume". But when he went to check the volume, there was still 700 GB left. After much time wasted, he was eventually able to figure out that he was out of inodes. (He was storing lots of little incremental updates from a backup system on this volume and burned through all his inodes.) He asked me why the Linux kernel reported the error message ("out of disk space") instead of properly reporting ("out of inodes"). I didn't know, so I figured I would ask StackExchange. Does anyone know why this happens, and why it hasn't been fixed after all these years? (I remember a different friend telling me about this problem in 1995.)
A single error number, ENOSPC, is used to report both situations, hence the same error message. To keep compliance with the ISO C and POSIX standards, the kernel developers have no choice but to use a single error number for both events. Adding a new error number would break existing programs. However, as sticking to traditional error messages is not AFAIK mandatory, nothing should forbid a developer from making the single message clearer, like for example: out of disk/inode space. Technically, whether you are out of inode space or out of data space is the same thing, i.e. it means there is not enough free disk space for the system call to succeed. I guess you weren't going to complain if your disk is reported as full while there are still free inode slots. Note that file systems like JFS, XFS, ZFS and btrfs allocate inodes dynamically, so they do not exhibit this issue anymore.
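Because the errno is shared, diagnosing ENOSPC from the shell means checking both resources; a quick sketch:

```shell
# Same ENOSPC, two possible causes -- compare both views:
df -h /     # data blocks: the "Use%" column
df -i /     # inodes: "IUse%" at 100% while Use% is low means out of inodes
```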
Why does the Linux kernel report "out of disk space" when in reality it is out of i-nodes
1,393,437,350,000
At first I used stat -c %i file (to help detect the presence of a jail), which seemed to work on any Linux distribution under the sun. On OS X I had to use ls -i file | cut -d ' ' -f 1. Is there some way to find the inode number of a file in a shell script which is portable across *nix platforms and does not depend on the notoriously capricious ls?
Possible solution: The POSIX spec for ls specifies -i, so maybe it's portable. Does anyone know of a popular implementation of ls which does not support this, or prints it in a different way from the following example:

$ ls -di /
2 /
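Wrapped up as a small helper relying only on that POSIX-specified behaviour (the function name is arbitrary; like most line-oriented parsing of ls, this breaks on file names containing newlines):

```shell
# Print the inode number of a file using only POSIX ls -d -i:
inode_of() {
    ls -di -- "$1" | awk '{ print $1; exit }'
}

inode_of /
```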
Portable way to find inode number
1,393,437,350,000
I'm aware that this article exists: Why are hard links only valid within the same filesystem? But it unfortunately didn't click with me.

https://www.kernel.org/doc/html/latest/filesystems/ext4/directory.html

I'm reading Operating System Concepts by Galvin and found some great beneficial resources like the Linux kernel documentation.

There can be many directory entries across the filesystem that reference the same inode number--these are known as hard links, and that is why hard links cannot reference files on other filesystems.

In the very beginning the author says this. But I don't understand the reason behind it.

Information contained in an inode:

Mode/permission (protection)
Owner ID
Group ID
Size of file
Number of hard links to the file
Time last accessed
Time last modified
Time inode last modified

https://www.grymoire.com/Unix/Inodes.html

Now, since the inode contains this information, what's the problem with letting hard links reference files on other filesystems? What would go wrong if a hard link referenced a file on another filesystem?

About hard links:

The term "hard link" is misleading, and a better term is "directory entry". A directory is a type of file that contains (at least) pairs consisting of a file name and an inode. Every entry in a directory is a "hard link", including symbolic links. When you create a new "hard link", you're just adding a new entry to some directory that refers to the same inode as the existing directory entry.

This is how I visualize what the directory concept looks like in an operating system. Each entry is a hard link according to the above quoted text.

The only problem that I can see is that multiple filesystems could have the same range of inode numbers (but I don't think so, as inode numbers are limited in an operating system). Also, why would it not be nice to add information about the filesystem in the inode itself? Wouldn't that be really convenient?
A "hard link" just is the circumstance that two (or more) entries in the hierarchy of your file system refer to the same underlying data structure. Your figure illustrates that quite nicely! That's it; that's all there is to it.

It's like if you have a cookbook with an index at the end, and the index says "Bread: see page 3", and "Bakery: see page 3". Now there are two names for what is on page 3. You can have as many index entries that point to the same page as you want. What does not work is having an index entry for something in another book. The other book simply doesn't exist within your current book, so referring to pages in it just can't work, especially because different versions of the other book could number pages differently over time.

Because a single filesystem can only guarantee consistency for itself, you cannot refer to "underlying storage system details" like inodes of other filesystems without it breaking all the time. So, if you want to refer to a directory entry that's stored on a different file system, you'll have to do that by the path. UNIX helps you with that through the existence of symlinks.

The only problem that I can see is that multiple filesystems could have the same range of inode numbers (but I don't think so, as inode numbers are limited in an operating system).

That's both untrue and illogical: I can ship you my hard drive, right? How would I ensure that the file system on my hard drive has no inode numbers you already used in one of the many file systems that your computer might have?

Also, why would it not be nice to add information about the filesystem in the inode itself? Wouldn't that be really convenient?

No. Think of a file system as an abstraction of "bytes on storage media": a file system in itself is an independent data structure containing data organized into files; it must not depend on any external data to be complete.
Breaking that will just lead to inconsistencies, because independence means that I can change inode numbers on file system A without having to know about file system B. Now, if B depended on A, it would be broken afterwards.
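Within one filesystem, the "second directory entry" view is easy to verify; across filesystems the kernel refuses outright with EXDEV, since the target inode number would be meaningless there. A sketch of the same-filesystem half:

```shell
dir=$(mktemp -d)
echo data > "$dir/a"
ln "$dir/a" "$dir/b"     # a second name for the very same inode

stat -c '%n inode=%i links=%h' "$dir/a" "$dir/b"
# An `ln` whose link-target directory sits on another mounted filesystem
# would instead fail with "Invalid cross-device link" (EXDEV).
```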
Why can't hard links reference files on other filesystems?
1,393,437,350,000
I am currently using backintime to take "snapshots" of my file system. It is similar to rsnapshot in that it makes hard links to unchanged files. I have recently run out of inodes on my EXT4 filesystem. df -hi reveals I have used 9.4 million inodes. A rough count of the number of current directories times the number of snapshots, plus the number of current files, suggests that I may in fact be using 9.4 million inodes. From what I understand, the EXT4 filesystem can support around 2^32 inodes. I am considering reformatting the partition to use all 4 billion or so inodes, but I am concerned that this is a bad idea. What are the drawbacks of having so many inodes in an EXT4 filesystem? Is there a better choice of filesystem for an application like this?
That is a really bad idea. Every inode consumes 256 bytes (may be configured as 128). Thus just the inodes would consume 1TiB of space. Other file systems like btrfs can create inodes dynamically. Use one of them instead.
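The arithmetic behind that 1 TiB figure:

```shell
# 2^32 inodes x 256 bytes each = 2^40 bytes of pure inode tables:
echo "$(( (1 << 32) * 256 / (1024 * 1024 * 1024) )) GiB"   # prints: 1024 GiB
```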
Drawbacks of increasing number of inodes in EXT4
1,393,437,350,000
I know that a directory is a file containing rows of the form “name = inode number”. When I request a path like /home/my_file.txt, the following steps take place:

1. Go to inode number 2 (the root directory's default inode).
2. Get the file to which inode #2 is pointing.
3. Search through this file and find the “home” entry. Get its inode number, for example 135.
4. Get the file to which inode #135 is pointing.
5. Search through this file and find the “my_file.txt” entry. Get its inode number, for example 245.
6. Get the file to which inode #245 is pointing.

The question: how is this process different when the home directory is the mount point of another filesystem, residing on another block device? When does the system understand that this directory is a mount point, and how does it do that? Where is this information stored - in the inode, in the directory file, or somewhere else?

For example, part of my root directory listing with inode numbers displayed:

ls -d1i /*/
inode #  name
 656641  /bin/
      2  /boot/
 530217  /cdrom/
      2  /dev/
 525313  /etc/
      2  /home/
 393985  /lib/

Here, the home and boot directories are mount points and reside on their own filesystems. Run my pseudocode algorithm (written above) and it gets stuck at step number 3 - in this case, home's inode number is 2, and that inode is located in another filesystem and on another block device.
Your description of the process isn't quite right. The kernel keeps track of which paths are mount points. Exactly how it does that varies between kernels, but typically the information is stored in terms of paths. For example the kernel remembers “/ is this filesystem, /media/cdrom is this filesystem, /proc is this filesystem”, etc.

Typically, rather than a table mapping path strings to data structures representing mounted filesystems, the kernel stores tables per directory. The data associated with a directory entry is classically called a dentry. There's a dentry for the root, and in each directory there's a dentry for each file in that directory that the kernel remembers. The dentry contains a pointer to an inode structure, and the inode contains a pointer to the filesystem data structure for the filesystem that the file is on. At a mount point, the associated filesystem is different from the parent dentry's associated filesystem, and there's additional metadata to keep track of the mount point.

So in a typical unix kernel architecture, the dentry for / contains a pointer to information about the root filesystem, in addition to a pointer to the inode containing the root directory; the dentry for /proc (assuming that it's a mount point) contains a pointer to information about the proc filesystem, etc. If /media/cdrom is a mount point but not /media, the kernel remembers in the dentry for /media that it isn't allowed to forget about it: remembering about /media isn't just a matter of caching for performance, it's necessary to remember the existence of the mount point /media/cdrom.

For Linux, you can find documentation in the kernel documentation, on this site and elsewhere on the web. Bruce Fields has a good presentation of the topic.

When the kernel is told to access a file, it processes the file name one slash-separated component at a time and looks up the component each time. If it finds a symbolic link, it follows it.
If it finds a mount point, no special processing is actually necessary: it's just that the inodes are attached to a different directory. The process does not use inode numbers, it follows pointers. Inode numbers are a way to give a unique identity to each file on a given filesystem outside of the kernel: on disk, and for applications. There are filesystems that don't have unique inode numbers; filesystem drivers normally try to make up one but that doesn't always work out, especially with network filesystems (e.g. if the server exports a directory tree which contains a mount point, there may be overlap between the set of inodes above and below that mount point). Rows that map name to inode number are the way a typical on-disk filesystem works if it supports hard links; filesystems that don't support hard links don't really need the concept of inode number. Note that information about mount points is stored only in memory. When you mount a filesystem, this does not modify the directory on top of which the filesystem is mounted. That directory is merely hidden by the root of the mounted filesystem.
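You can watch the switch happen from userspace: stat's %d is the device ID, and it changes exactly at a mount boundary (this sketch assumes GNU stat and that /proc is mounted, as it normally is on Linux):

```shell
# Same path walk, different filesystems -- the device ID jumps at
# the mount point:
stat -c 'dev=%d  %n' / /proc

# findmnt (util-linux, if installed) shows the kernel's mount table,
# i.e. the path -> filesystem map described above:
findmnt /proc || true
```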
How a mount point directory entry is different from a usual directory entry in a filesystem
1,393,437,350,000
I'm interested in the way Linux mmaps files into the main memory (in my context it's for executing, but I guess the mmap process is the same for writing and reading as well) and which size it uses. So I know Linux uses paging with usually 4kB page size (where in the kernel can I find this size?). But what exactly does this mean for the memory allocated: assume you have a binary of a size of a few thousand bytes, let's just say 5812B, and you execute it. What happens in the kernel: does it allocate 2*4kB and then copy the 5812B into this space, wasting >3KB of main memory in the 2nd page? It would be great if anyone knew the file in the kernel source where the page size is defined. My 2nd question is also very simple I guess: I assumed 5812B as a file size. Is it right that this size is simply taken from the inode?
There is no direct relationship between the size of the executable and the size in memory. Here's a very quick overview of what happens when a binary is executed:

The kernel parses the file and breaks it into sections. Some sections are directly loaded into memory, in separate pages. Some sections aren't loaded at all (e.g. debugging symbols). If the executable is dynamically linked, the kernel calls the dynamic loader, and it loads the required shared libraries and performs link editing as required. The program starts executing its code, and usually it will request more memory to store data.

For more information about executable formats, linking, and executable loading, you can read Linkers and Loaders by John R. Levine.

In a 5kB executable, it's likely that everything is code or data that needs to be loaded into memory except for the header. The executable code will be at least one page, perhaps two, and then there will be at least one page for the stack, probably one or more pages for the heap (other data), plus memory used by shared libraries.

Under Linux, you can inspect the memory mappings for an executable with cat /proc/$pid/maps. The format is documented in the proc(5) man page; see also Understanding Linux /proc/id/maps.
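Two of the question's specifics are visible from userspace without reading kernel source (in the source, the size comes from the per-architecture PAGE_SHIFT/PAGE_SIZE definitions, e.g. under arch/x86/include/asm/ on x86):

```shell
# The page size the question asks about:
getconf PAGESIZE          # 4096 on most x86 systems

# The live mappings of a process (here: this shell itself) -- note the
# separate regions for code, data, heap, stack and shared libraries:
head -n 5 /proc/self/maps
```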
Memory size for kernel mmap operation
1,393,437,350,000
I'm still confused about the concepts of the kernel and the filesystem. Filesystems contain a table of inodes used to retrieve the different files and directories on the different storage devices. Is this inode table part of the kernel? I mean, is the inode table updated when the kernel mounts another filesystem? Or is it part of the filesystem itself, which the kernel reads by somehow using a driver and the inode table's address?
There is some confusion here because kernel source and documentation is sloppy with how it uses the term 'inode'. The filesystem can be considered as having two parts -- the filesystem code and data in memory, and the filesystem on disk. The filesystem on disk is self-contained and has all the non-volatile data and metadata for your files. For most Linux filesystems, this includes the inodes on disk along with other metadata and data for the files.

But when the filesystem is mounted, the filesystem code also keeps in memory a cached copy of the inodes of files being used. All file activity uses and updates this in-memory copy of the inode, so the kernel code really only thinks about this in-memory copy, and most kernel documentation doesn't distinguish between the on-disk inode and the in-memory inode. Also, the in-memory inode contains additional ephemeral metadata (like where the cache pages for the file are in memory and which processes have the file open) that is not contained in the on-disk copy of the inode. The in-memory inode is periodically synchronized and written back to disk.

The kernel does not have all the inodes in memory -- just the ones of files in use and files that recently were in use. Eventually inodes in memory get flushed and the memory is released. The inodes on disk are always there.

Because file activity in unix is so tightly tied to inodes, filesystems (like vfat) that do not use inodes still have virtual inodes in kernel memory that the filesystem code constructs on the fly. These in-memory virtual inodes still hold file metadata that is synchronized to the filesystem on disk as needed.

In a traditional unix filesystem, the inode is the key data structure for a file. The filename is just a pointer to the inode, and an inode can have multiple filenames linked to it. In other filesystems that don't use inodes, a file can typically only have one name and the metadata is tied to the filename rather than an inode.
How Linux kernel sees the filesystems
1,393,437,350,000
I like the navigation and features of ncdu, but instead of ranking folders by size, I want to rank them by file-count. For example, folders containing more files are listed first, and you can navigate the hierarchy using your arrow keys. Are there any options to accomplish this? If not, I wonder how difficult it would be to modify the source code to provide the feature I'm wanting. Perhaps there is something else that does this already?
If you press C (capital “C”, i.e. Shift+C, or C with Caps Lock on) while in ncdu, the display will be sorted by file count rather than by size. Lower-case c will show the file count in addition to the size (regardless of the sort criterion). This feature was added in ncdu 1.10 (May 2013).
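If your ncdu predates 1.10, a rough stand-in is to count the files under each subdirectory yourself and sort descending. This sketch builds a tiny throwaway tree so it is self-contained; the directory names are invented for the demo:

```shell
# Rank immediate subdirectories of a target dir by file count, largest first.
tmp=$(mktemp -d)
mkdir -p "$tmp/many" "$tmp/few"
touch "$tmp/many/a" "$tmp/many/b" "$tmp/many/c"
touch "$tmp/few/a"

# For each subdirectory, count regular files beneath it,
# then sort numerically in reverse so the busiest dir comes first.
ranking=$(for d in "$tmp"/*/; do
    printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn)

top=$(printf '%s\n' "$ranking" | head -n 1 | awk '{print $2}')
printf '%s\n' "$ranking"
```

Unlike ncdu this is not interactive, but the same one-liner works on any directory you point it at.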
ncdu - Rank by File-Count instead of Size
1,393,437,350,000
Note: the question title says "vice versa", but that doesn't really mean anything here, since both names point to the same inode and it's not possible to say which is head and which is tail.

Say I have a file hlh.txt:

[root@FREL ~]# fallocate -l 100 hlh.txt

Now if I look at the change time for hlh.txt:

[root@FREL ~]# stat hlh.txt
  File: hlh.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:43:05.469703330 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500

Creating a hard link:

[root@FREL ~]# ln hlh.txt hlt.txt

Since hlh.txt and hlt.txt point to the same inode, creating the link updates the ctime of that shared inode, which is understood:

[root@FREL ~]# stat hlt.txt
  File: hlt.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:44:05.316842644 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500

But if I unlink the head file, that changes the ctime of the file as well. Why? All we did is delete one name; what significance does the change time have here internally? Why does it need to change?

[root@FREL ~]# unlink hlh.txt
[root@FREL ~]#
[root@FREL ~]# stat hlt.txt
  File: hlt.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:47:49.588364704 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500
This is a requirement on the unlink() library function by POSIX: Upon successful completion, unlink() shall mark for update the last data modification and last file status change timestamps of the parent directory. Also, if the file's link count is not 0, the last file status change timestamp of the file shall be marked for update. The standard document does not expand on this requirement. Since the link count is decreased by one, I'm assuming the ctime timestamp (the "last file status change timestamp") is updated to reflect the fact that the file's status changed.
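The requirement is easy to observe directly. This sketch assumes GNU coreutils stat (%Z prints the ctime in epoch seconds, %h the link count):

```shell
# Removing one of two hard links updates the surviving name's ctime,
# because the shared inode's status (its link count) changed.
tmp=$(mktemp -d)
touch "$tmp/hlh.txt"
ln "$tmp/hlh.txt" "$tmp/hlt.txt"

ctime_before=$(stat -c %Z "$tmp/hlt.txt")
sleep 2                              # %Z has one-second resolution
rm "$tmp/hlh.txt"                    # link count drops from 2 to 1
ctime_after=$(stat -c %Z "$tmp/hlt.txt")
```

After the `rm`, the ctime seen through the remaining name differs from the one recorded before, even though no data or mode bits changed.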
Deleting a hard link's tail file changes the change time of the head or vice versa. Why?
1,393,437,350,000
I hope I've got this right: a file's inode contains data such as the inode number, time of last modification, ownership, etc. – and also a »deletion time« entry. Which made me curious: deleting a file means removing its inode number, thus marking the storage space linked to it as available. There are tools to recover (accidentally) deleted files (e.g. from a journal, if available). And I know the stat command.

Question: what does a "deleted file" entry look like in the journal? My guess is a quite unspectacular-looking output, much like that of the stat command.

I know that deleting a file and trying to recover it would be a first-hand experience, but I'm not at a point where I could do this without outside help, and I want to understand exactly what I'm doing. Getting into data resurrection would be sidetracking for me at the moment, as I'm trying to get a firm grip on the basics... I'm not lazy, this isn't homework, this is for private study.
When a file or directory is "deleted", its inode number is removed from the directory which contains the file. You can see the list of inodes that a given directory contains using the tree command.

Example

$ tree -a -L 1 --inodes .
.
|-- [9571121]  dir1
|-- [9571204]  dir2
|-- [9571205]  dir3
|-- [9571206]  dir4
|-- [9571208]  dir5
|-- [9571090]  file1
|-- [9571091]  file2
|-- [9571092]  file3
|-- [9571093]  file4
`-- [9571120]  file5

5 directories, 5 files

Links

It's important to understand how hard links work. The tutorial titled Intro to Inodes has excellent details if you're just starting out in trying to get a fundamental understanding of how inodes work.

excerpt

Inode numbers are unique, but you may have noticed that some file name and inode number listings do show some files with the same number. The duplication is caused by hard links. Hard links are made when a file is linked into multiple directories. The same file exists in various directories on the same storage unit. The directory listing shows two files with the same number, which links them to the same physical file on the storage unit.

Hard links allow the same file to "exist" in multiple directories, but only one physical file exists. Space is then saved on the storage unit. For example, if a one-megabyte file is placed in two different directories, the space used on the storage unit is one megabyte, not two megabytes.

Deleting

That same tutorial also has this to say about what happens when an inode is deleted.

Deleting files causes the size and direct/indirect block entries to be zeroed, and the physical space on the storage unit is marked as unused. To undelete the file, the metadata is restored from the journal if one is used (see the Journal article). Once the metadata is restored, the file is once again accessible, unless the physical data has been overwritten on the storage unit.

Extents

You might also want to brush up on extents and how they work.
Again from the linux.org site, another good tutorial, titled Extents, will help you get the basics down. You can use the filefrag command to identify how many extents a given file/directory is using.

Examples

$ filefrag dir1
dir1: 1 extent found
$ filefrag ~/VirtualBox\ VMs/CentOS6.3/CentOS6.3.vdi
/home/saml/VirtualBox VMs/CentOS6.3/CentOS6.3.vdi: 5 extents found

You can get more detailed output by using the -v switch:

$ filefrag -v dir1
Filesystem type is: ef53
File size of dir1 is 4096 (1 block of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..       0:   38282243..  38282243:      1:             eof
dir1: 1 extent found

NOTE: Notice that a directory always consumes a minimum of 4K bytes.

Giving a file some size

We can take one of our sample files and write 1MB of data to it like this:

$ dd if=/dev/zero of=file1 bs=1k count=1k
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.00628147 s, 167 MB/s
$ ll | grep file1
-rw-rw-r--. 1 saml saml 1048576 Dec  9 20:03 file1

If we analyze this file using filefrag:

$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..     255:   35033088..  35033343:    256:             eof
file1: 1 extent found

Deleting and recreating a file quickly

One interesting experiment you can do is to create a file, such as file1 above, then delete it, and then recreate it. Watch what happens. Right after deleting the file, I re-run the dd ... command, and file1 shows up like this to the filefrag command:

$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..     255:          0..       255:    256:             unknown,delalloc,eof
file1: 1 extent found

After a bit of time (seconds to minutes) passes:

$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..     255:   38340864..  38341119:    256:             eof
file1: 1 extent found

The file finally shows up. I'm not entirely sure what's going on here, but it looks like it takes some time for the file's state to settle between the journal and the disk. Running stat commands shows the file with an inode, so it's there, but the data that filefrag uses hasn't been resolved, so we're in a bit of a limbo state.
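To see just the directory-entry side of deletion (independent of the journal), this sketch shows that rm removes the name-to-inode mapping; whether a recreated file gets the same inode number back is filesystem-dependent, so the sketch only prints both numbers rather than asserting they match. It assumes GNU coreutils stat:

```shell
tmp=$(mktemp -d)
echo data > "$tmp/f"
old_inode=$(stat -c %i "$tmp/f")

rm "$tmp/f"                          # the directory entry (name -> inode) is gone
gone=$([ -e "$tmp/f" ] || echo yes)  # the name no longer resolves

echo data > "$tmp/f"                 # a new file; its inode number may or may not be reused
new_inode=$(stat -c %i "$tmp/f")
echo "old=$old_inode new=$new_inode"
```

On an ext4 filesystem the number is often reused immediately; on others (e.g. tmpfs) you will usually see a fresh number.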
what does a "deleted file" entry look like in the journal
1,393,437,350,000
Softlinks are easily traceable to the original file with readlink etc., but I am having a hard time tracing hard links to the original file.

$ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash*

The above is as expected - cool --> both files point to the same inode, 1310813. (But the number of links shows as 1. From Gilles' answer, the reason for this can be understood.)

$ find / -samefile /bin/bash 2>/dev/null
/usr/bin/bash

The above is as expected - so no problems.

$ find / -samefile /usr/bin/bash 2>/dev/null
/usr/bin/bash

The above is NOT cool. How do I trace the original file or every hard link using the /usr/bin/bash file as reference?

Strange - the below did not help either:

$ find / -inum 1310813 2>/dev/null
/usr/bin/bash
First, there is no original file in the case of hard links; all hard links are equal. However, hard links aren't involved here, as indicated by the link count of 1 in ls -l's output:

$ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash*

Your problem arises because of a symlink: the bin symlink which points to usr/bin. To find all the paths in which bash is available, you need to tell find to follow symlinks, using the -L option:

$ find -L / -xdev -samefile /usr/bin/bash 2>/dev/null
/usr/bin/rbash
/usr/bin/bash
/bin/rbash
/bin/bash

I'm using -xdev here because I know your system is installed on a single file system; this avoids descending into /dev, /proc, /run, /sys etc.
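The effect of -L is easy to reproduce without touching the real /bin; this sketch builds a miniature merged-/usr layout in a temp directory (all paths here are invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/bin"
ln -s usr/bin "$tmp/bin"             # bin -> usr/bin, like a merged-/usr system
printf 'x' > "$tmp/usr/bin/bash"     # stand-in for the real binary

# Without -L, find does not descend through the bin symlink,
# so only the real path is reported:
find "$tmp" -samefile "$tmp/usr/bin/bash"

# With -L, the same inode is reachable (and reported) via both paths:
matches=$(find -L "$tmp" -samefile "$tmp/usr/bin/bash" | sort)
printf '%s\n' "$matches"
```

The first find prints one path, the second prints two: the real path under usr/bin and the aliased one through the bin symlink.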
How to effectively trace hardlink in Linux?