From my understanding of /etc/fstab options, noauto means that the device will not be mounted at boot time (or with mount -a).
Is there any situation where adding nofail changes the behaviour if noauto is already given, or is it totally redundant?
man systemd.mount(5) says:
With noauto, this mount will not be added as a dependency for local-fs.target or remote-fs.target. This means that it will not be mounted automatically during boot, unless it is pulled in by some other unit.
With nofail, this mount will be only wanted, not required, by local-fs.target or remote-fs.target. This means that the boot will continue even if this mount point is not mounted successfully.
What about automount situations?
|
Just for the record:
For an external USB disk which is usually not connected at startup, I have an fstab entry
/dev/disk/by-label/data /data xfs noauto,user,noatime 0 0
When booting there is no error, as noauto keeps the system from trying to mount. When I try to mount manually without the drive connected, I immediately get the error
~$ mount /data
mount: special device /dev/disk/by-label/data does not exist
~$
If I change the line in fstab to
/dev/disk/by-label/data /data xfs noauto,nofail,user,noatime 0 0
there is no error reported, even when the drive is not available:
~$ mount /data
~$
System: Ubuntu 16.04 with systemd.
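To summarize, a hedged sketch of how the two variants behave for this kind of removable drive (device and mount point are the ones from above; the observed behaviour is as described on this system):

```
# /etc/fstab - illustrative combinations for a usually-absent USB disk
# noauto alone: not mounted at boot; a manual mount errors if the disk is absent
/dev/disk/by-label/data  /data  xfs  noauto,user,noatime         0 0
# noauto,nofail: not mounted at boot; a manual mount fails silently if absent
#/dev/disk/by-label/data /data  xfs  noauto,nofail,user,noatime  0 0
```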
| /etc/fstab: meaning of "nofail" if "noauto" is already specified |
After doing some research about systemd units, I found two different kinds of mount unit: .mount and .automount. At first, it seemed logical to me that an automount unit would mount a filesystem automatically. However, as it turns out, when you enable a mount unit:
systemctl enable media-mydisk.mount
It will automatically be mounted at boot. I'm somewhat new to systemd, but this has been bugging me for quite some time already. I've also posted the code of the unit at the end.
So, my main question is: why does one need automounts if one can just enable a mount unit?
Here's my media-mydisk.mount if that makes any difference:
[Unit]
Description=My disk
[Mount]
What=/dev/sdb1
Where=/media/mydisk
Type=ext4
Options=defaults
[Install]
WantedBy=multi-user.target
I've searched the web but couldn't find any comparison between using an .automount and simply enabling a .mount
|
The "auto" part in automount does not refer to the boot process: automount units define mount points that are mounted on-demand, i.e. only when they are accessed.
Automount units are optional, but when they exist, corresponding mount units must also exist: the former are meant to add functionality to existing instances of the latter. From man systemd.mount:
Optionally, a mount unit may be accompanied by an automount unit, to allow on-demand or parallelized mounting.
And, from man systemd.automount:
For each automount unit file a matching mount unit file (see systemd.mount(5) for details) must exist which is activated when the automount path is accessed. Example: if an automount unit home-lennart.automount is active and the user accesses /home/lennart the mount unit home-lennart.mount will be activated.
A typical use case for automount units is mounting file systems (e.g. on remote, removable or encrypted media) that are not required during the boot process and may slow it down, or that may be unavailable at boot, but that you would still like to have managed by systemd.
A simple, illustrative-only example. Given the mnt-foo.mount unit
[Unit]
Description=foo mount
[Mount]
Where=/mnt/foo
What=/home/user/foo
Type=ext4
(for simplicity, foo is just a regular file formatted as ext4), and the mnt-foo.automount unit
[Unit]
Description=foo automount
[Automount]
Where=/mnt/foo
[Install]
WantedBy=multi-user.target
after the latter is activated (or enabled, and the system rebooted)
# systemctl start mnt-foo.automount
you will be able to check that /home/user/foo is not mounted anywhere yet—mount gives
$ mount | grep foo
systemd-1 on /mnt/foo type autofs (...)
and indeed /home/user/foo is only mounted on /mnt/foo as soon as you access the mount point:
$ ls /mnt/foo
$ mount | grep foo
systemd-1 on /mnt/foo type autofs (...)
/home/user/foo on /mnt/foo type ext4 (rw,relatime)
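As a small extension (not part of the example above, but a documented [Automount] option): automount units can also unmount the filesystem again after a period of inactivity via TimeoutIdleSec=. A sketch based on the same mnt-foo.automount:

```
[Unit]
Description=foo automount

[Automount]
Where=/mnt/foo
# unmount again after 60 seconds without access (illustrative value)
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
```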
| Mount vs automount systemd units: which one to use for what? |
systemd-analyze gives me different results depending on how many times I execute it. I'm running systemd-analyze verify mnt-HDDs.mount and getting:
local-fs.target: Found ordering cycle on HDDs-unlock.service/start
local-fs.target: Found dependency on sysinit.target/start
local-fs.target: Found dependency on systemd-update-done.service/start
local-fs.target: Found dependency on local-fs.target/start
local-fs.target: Job systemd-update-done.service/start deleted to break ordering cycle starting with local-fs.target/start
local-fs.target: Found ordering cycle on HDDs-unlock.service/start
local-fs.target: Found dependency on sysinit.target/start
local-fs.target: Found dependency on systemd-journal-catalog-update.service/start
local-fs.target: Found dependency on local-fs.target/start
local-fs.target: Job systemd-journal-catalog-update.service/start deleted to break ordering cycle starting with local-fs.target/start
local-fs.target: Found ordering cycle on HDDs-unlock.service/start
local-fs.target: Found dependency on sysinit.target/start
local-fs.target: Found dependency on systemd-machine-id-commit.service/start
local-fs.target: Found dependency on local-fs.target/start
local-fs.target: Job systemd-machine-id-commit.service/start deleted to break ordering cycle starting with local-fs.target/start
local-fs.target: Found ordering cycle on HDDs-unlock.service/start
local-fs.target: Found dependency on sysinit.target/start
local-fs.target: Found dependency on local-fs.target/start
local-fs.target: Job local-fs.target/start deleted to break ordering cycle starting with local-fs.target/start
sysinit.target: Found ordering cycle on plymouth-read-write.service/start
sysinit.target: Found dependency on local-fs.target/start
sysinit.target: Found dependency on mnt-HDDs.mount/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job plymouth-read-write.service/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on local-fs.target/start
sysinit.target: Found dependency on mnt-HDDs.mount/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job local-fs.target/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on systemd-update-done.service/start
sysinit.target: Found dependency on local-fs.target/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job systemd-update-done.service/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on systemd-machine-id-commit.service/start
sysinit.target: Found dependency on local-fs.target/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job systemd-machine-id-commit.service/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on systemd-tmpfiles-setup.service/start
sysinit.target: Found dependency on local-fs.target/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job systemd-tmpfiles-setup.service/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on plymouth-read-write.service/start
sysinit.target: Found dependency on local-fs.target/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job plymouth-read-write.service/start deleted to break ordering cycle starting with sysinit.target/start
sysinit.target: Found ordering cycle on local-fs.target/start
sysinit.target: Found dependency on HDDs-unlock.service/start
sysinit.target: Found dependency on sysinit.target/start
sysinit.target: Job local-fs.target/start deleted to break ordering cycle starting with sysinit.target/start
The units I created that are directly involved in mnt-HDDs.mount:
mnt-HDDs.mount:
[Unit]
Description=Mount unit for encrypted device /mnt/HDDs
After=HDDs-unlock.service
[Mount]
Where=/mnt/HDDs
What=/dev/mapper/cryptHDDB
Type=btrfs
Options=noatime,compress-force=zstd,autodefrag,flushoncommit
HDDs-unlock.service:
[Unit]
Description=HDDB and HDDC unlock
After=media-key.mount umount.target local-fs-pre.target
Before=local-fs.target
Conflicts=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
KillMode=none
ExecStart=/usr/bin/HDDs-unlock.sh
ExecStop=/usr/bin/HDDs-lock.sh
[Install]
RequiredBy=mnt-HDDs.mount
media-key.mount:
[Unit]
Description=HDDs key
StopWhenUnneeded=true
[Mount]
Where=/media/key
What=/dev/disk/by-id/usb-SMI_USB_DISK_AA00000000065845-0:0
Options=ro,offset=952320
DirectoryMode=0400
[Install]
RequiredBy=HDDs-unlock.service
There is an fstab entry that mounts /mnt/HDDs/@, which pulls in mnt-HDDs.mount as an automatic dependency. While rebooting in search of a solution, it sometimes made my boot skip some services, to the point of sometimes booting without network. Any light on solving this ordering cycle?
|
OK, after a lot of searching (and a lot of revisits to this answer here on Stack Exchange), it clicked that this answer was about the same problem I was facing.
Basically, mount units are implicitly ordered between local-fs-pre.target and local-fs.target, before basic.target; the problem is that services get an implicit Requires=basic.target and After=basic.target. The solution was to disable default dependencies:
[Unit]
....
DefaultDependencies=no
....
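For example, applied to the HDDs-unlock.service from the question, the [Unit] section would look like this (a sketch only; the [Service] and [Install] sections stay unchanged):

```
[Unit]
Description=HDDB and HDDC unlock
# Drop the implicit dependencies on basic.target that
# create the cycle with local-fs.target
DefaultDependencies=no
After=media-key.mount umount.target local-fs-pre.target
Before=local-fs.target
Conflicts=umount.target
```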
| How can I solve this ordering cycle in a mount unit? |
This is a followup to another question.
I figured out something is unmounting my device right after I mount it.
This device is being used by a database (Vertica), which is down and not using the directory while I'm running the mount command.
I'm trying to figure out:
Is systemd the one which unmounts the device?
How can I debug why is that happening?
How do I fix it?
Here's an example of what is happening:
[root@mymachine systemd]# mount -t ext4 /dev/xvdx /vols/data5; ls -la /vols/data5; sleep 5; ls -la /vols/data5
total 36
drwxr-xr-x 5 dbadmin verticadba 4096 Jul 23 2017 .
drwxr-xr-x 9 root root 96 Jul 16 18:52 ..
drwxrwx--- 503 dbadmin verticadba 12288 Jul 23 13:51 somedb
drwx------ 2 root root 16384 Nov 30 2016 lost+found
drwxrwxrwx 2 dbadmin verticadba 4096 Jun 20 08:32 tmp
total 0
drwxr-xr-x 2 root root 6 Jun 8 2017 .
drwxr-xr-x 9 root root 96 Jul 16 18:52 ..
[root@mymachine ~]#
fstab:
#
# /etc/fstab
# Created by anaconda on Mon May 1 18:59:01 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=29342a0b-e20f-4676-9ecf-dfdf02ef6683 / xfs defaults 0 0
/dev/xvdb swap swap defaults,nofail 0 0
/dev/xvdy /vols/data ext4 defaults 0 0
/dev/xvdx /vols/data5 ext4 defaults 0 0
Some more logs as per Filipe Brandenburger's suggestion:
Aug 01 16:55:19 mymachine kernel: EXT4-fs (xvdx): mounted filesystem with ordered data mode. Opts: (null)
Aug 01 16:55:19 mymachine systemd[1]: Unit vols-data5.mount is bound to inactive unit dev-xvdl.device. Stopping, too.
Aug 01 16:55:19 mymachine systemd[1]: Unmounting /vols/data5...
Aug 01 16:55:19 mymachine umount[353194]: umount: /vols/data5: target is busy.
Aug 01 16:55:19 mymachine umount[353194]: (In some cases useful info about processes that use
Aug 01 16:55:19 mymachine umount[353194]: the device is found by lsof(8) or fuser(1))
Aug 01 16:55:19 mymachine systemd[1]: vols-data5.mount mount process exited, code=exited status=32
Aug 01 16:55:19 mymachine systemd[1]: Failed unmounting /vols/data5.
|
OK, that was an interesting debugging experience... Thanks to Filipe Brandenburger for leading me to it!
Is systemd the one which unmounts the device?
Yes. journalctl -e shows a related message:
Aug 01 16:55:19 mymachine systemd[1]: Unit vols-data5.mount is bound to inactive unit dev-xvdl.device. Stopping, too.
Apparently I'm not the first one to encounter it. See this systemd issue:
systemd umounts manual mounts when it has a failed unit for that mount point #1741
How can I debug why is that happening?
Run journalctl -e for debugging.
How do I fix it?
This workaround worked for me: run the command below, then try mounting again.
systemctl daemon-reload
That's all, folks!
| How to check whether systemd is unmounting my device? (and why?) |
I have a CIFS mount of a single volume, which then has two subdirectories bind mounted. Upon boot, systemd complains of an "ordering cycle" and fails to mount one of the binds, but the other works fine. If I run mount -a, the missing bind is mounted. I have been able to recreate this behavior in a new VM.
/etc/fstab
//server.example.com /mnt/media cifs [snip] 0 0
/mnt/media/secure /var/www/media/secure none bind 0 0
/mnt/media/public /var/www/media/public none bind 0 0
The bind mounts are not altered; those are the actual names. I don't know how, but I think that may be significant, perhaps due to sort order: when I change the order in fstab, only the public bind mount fails; secure always works.
logs from journal
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Found dependency on var-www-media-public.mount/start
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Found dependency on mnt-media.mount/start
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Found dependency on network-online.target/start
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Found dependency on networking.service/start
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Found dependency on local-fs.target/start
Mar 19 14:06:45 ubuntu systemd[1]: local-fs.target: Breaking ordering cycle by deleting job var-www-media-public.mount/start
Mar 19 14:06:45 ubuntu systemd[1]: var-www-media-public.mount: Job var-www-media-public.mount/start deleted to break ordering cycle starting with local-fs.target/start
I've tried specifying x-systemd.requires=/mnt/media on the bind mount but it made no change. I am at a loss for where to go next with this issue.
|
I am not sure why even one of the bind mounts is able to work. Here is my suggestion why both might fail together, and how to fix it:
Your networking.service is ordered after local filesystems. The bind mounts are being treated as local filesystems. But, the bind mounts are also ordered after a network mount - systemd adds these logical dependencies automatically for bind mounts.
In that case you need to tell systemd that the bind mount is actually a network mount. There is an option deliberately for this sort of case: simply add the mount option _netdev to the bind mounts. This option is documented in man systemd.mount.
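Applied to the fstab from the question, that would be (a sketch; paths and the snipped CIFS options are as given above):

```
//server.example.com  /mnt/media             cifs  [snip]        0 0
/mnt/media/secure     /var/www/media/secure  none  bind,_netdev  0 0
/mnt/media/public     /var/www/media/public  none  bind,_netdev  0 0
```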
| systemd mount cycle for bind mount of cifs mount |
I'd like to mount a directory from a remote machine in my /home/stew/shared. After installing sshfs and using ssh-copy-id to my remote machine, I can do this:
stew@stewbian:~$ sshfs [email protected]:/path/to/remote-dir ~/shared
and then unmount with
stew@stewbian:~$ umount ~/shared
or
stew@stewbian:~$ fusermount -u ~/shared
Works great, but I'd like to mount this automatically when stew logs in, and unmount it when stew logs out. One working option is to use a systemd .service on the user bus:
# ~/.config/systemd/user/shared.service
[Unit]
Description=Mount ~/shared
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=sshfs %[email protected]:/path/to/remote-dir %h/shared
ExecStop=umount %h/shared
[Install]
WantedBy=default.target
systemctl --user {start,stop} shared.service also works great! But I'm wondering if .mount units would be more robust.
I tried using a mount unit like so:
# ~/.config/systemd/user/home-stew-shared.mount
[Unit]
Description=~/shared
[Mount]
What=%[email protected]:/path/to/remote-dir
Where=%h/shared
Type=fuse.sshfs
[Install]
WantedBy=default.target
Starting this mount unit works great, but stopping it causes this:
$ systemctl --user status home-stew-shared.mount
● home-stew-shared.mount - ~/shared
Loaded: loaded (/home/stew/.config/systemd/user/home-stew-shared.mount; static)
Active: active (mounted) (Result: exit-code) since Mon 2021-05-24 16:49:40 CEST; 6min ago
...
May 24 16:49:40 stewbian systemd[1046]: Unmounting ~/shared...
May 24 16:49:40 stewbian umount[22256]: umount: /home/stew/shared: must be superuser to unmount.
May 24 16:49:40 stewbian systemd[1046]: home-stew-shared.mount: Mount process exited, code=exited, status=32/n/a
May 24 16:49:40 stewbian systemd[1046]: Failed unmounting ~/shared.
I can run $ umount ~/shared myself to unmount the directory, which fails the unit.
Questions:
Is there a reason why I should prefer *.mount units over *.service units?
If I really should be using *.mount, is there a trick to getting this to work on the user-bus, or do I need to go to the system bus and figure out how to do lazy mounting and manually set UIDs and GIDs?
One of the nice things about using the *.service is that I can add this service to the skel, so each user will automount their own private shared directories which are effectively sync'd between all machines in the house. The *.mount files need the username in the filename to access the correct home.
|
I have the same issue on Ubuntu 21.04 with systemd 246.6 - when you try to unmount a mount unit, systemd first tries to find a umount helper at /sbin/umount.<type> (i.e. for sshfs it would be /sbin/umount.fuse.sshfs), and when that fails - it will call umount2(<where>) - and that will fail when it is run by the user's systemd.
I'm not sure why this works for @fra-san - I think they may have an unmount helper.
As to those questions:
You can do anything with a service unit, just as you could do anything with a SysV-style init script, but the idea of systemd is that it understands common system management tasks and offers a minimal descriptive syntax to achieve what you need without overcomplicating things (which makes them harder to maintain). If you can use a mount unit to mount file systems, that is preferable to basically writing a script. Of course, the infrastructure has to be able to support what you need to do, and while the current status of user mount units is much better than it was a few years ago, as of Ubuntu 21.04 it is still not 100% there for FUSE file systems.
To get fuse.sshfs user mount units to stop (unmount), I created an umount helper in /sbin/umount.fuse.sshfs as
#!/bin/sh
/bin/fusermount -u "$1"
then stopping the mount unit works well - systemd will call the umount helper and will correctly unmount the filesystem. (Don't call umount from the umount helper, because umount also calls helpers and you'd get into an infinite loop that consumes all PIDs.) This is likely not a great solution - systemd should do whatever umount does when you call it as a user (which I can't actually figure out) - but it works for me.
| Using systemd to mount remote filesystems in user-bus |
I have a systemd service with this declaration:
RuntimeDirectory=plex
RuntimeDirectoryMode=750
which creates the in memory directory /run/plex.
How would I list the capabilities of this mount point as I would do with mount -l?
|
The RuntimeDirectory= directive does not create a new mount, it only creates a new directory under the existing /run. So, in a way, you're just reusing space from the existing /run.
In other words, look for /run in mount -l output to see the options of the mountpoint where /run/plex lives.
You can also use the findmnt(8) command and pass it the full path with -T to show where the mount point is and its options. For example:
$ findmnt -T /run/plex
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,seclabel,mode=755
If you want to know how much space has been allocated for the in-memory tmpfs, you can use the df(1) command:
$ df -h /run/plex
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 612K 3.9G 1% /run
| list systemd RuntimeDirectory mounts |
I want to create a systemd mount unit equivalent to the following fstab line:
/dev/sdc1 /жышы ext4 defaults 1 2
Something like
жышы.mount
[Unit]
Description= /dev/sdc1 to /жышы
[Mount]
What=/dev/sdc1
Where=/жышы
Type=ext4
[Install]
WantedBy=multi-user.target
Yes, I tried to use systemd-escape for the unit file name and for Where=, but without success. My best approach was:
xd0xb6xd1x8bxd1x88xd1x8b.mount
[Unit]
Description= /dev/sdc1 to /жышы
[Mount]
What=/dev/sdc1
Where='/жышы'
Type=ext4
[Install]
WantedBy=multi-user.target
This variant almost works (no error for the unit file name), but it mounts /dev/sdc1 to the auto-created folder /xd0xb6xd1x8bxd1x88xd1x8b instead of /жышы.
Please help to fix this mess.
|
From man systemd.mount:
Mount units must be named after the mount point directories they control. Example: the mount point /home/lennart must be configured in a unit file home-lennart.mount. For details about the escaping logic used to convert a file system path to a unit name, see systemd.unit(5).
OK, so from man systemd.unit:
The escaping algorithm operates as follows: given a string, any "/" character is replaced by "-", and all other characters which are not ASCII alphanumerics, ":", "_" or "." are replaced by C-style "\x2d" escapes. In addition, "." is replaced with such a C-style escape when it would appear as the first character in the escaped string.
When the input qualifies as absolute file system path, this algorithm is extended slightly: the path to the root directory "/" is encoded as single dash "-". In addition, any leading, trailing or duplicate "/" characters are removed from the string before transformation. Example: /foo//bar/baz/ becomes "foo-bar-baz".
This escaping is fully reversible, as long as it is known whether the escaped string was a path (the unescaping results are different for paths and non-path strings). The systemd-escape(1) command may be used to apply and reverse escaping on arbitrary strings. Use systemd-escape --path to escape path strings, and systemd-escape without --path otherwise.
So, we run
systemd-escape --path /жышы
and get
\xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b
So, \xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b.mount is the right file name. The backslashes are important!
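For illustration only, here is a minimal shell approximation of that escaping algorithm (a sketch of what systemd-escape --path does; the real tool handles more edge cases, so use the real command in practice):

```shell
#!/bin/sh
# Approximate systemd-escape --path: '/' becomes '-'; anything that is
# not an ASCII alphanumeric, ':', '_' or '.' becomes a \xHH escape;
# a leading '.' is escaped as well.
escape_path() {
    # collapse duplicate slashes, strip leading/trailing ones
    p=$(printf '%s' "$1" | sed 's|//*|/|g; s|^/||; s|/$||')
    out=""
    for byte in $(printf '%s' "$p" | od -An -tx1 -v); do
        case $byte in
            2f) out="$out-" ;;                          # '/' -> '-'
            3[0-9]|4[1-9a-f]|5[0-9a]|6[1-9a-f]|7[0-9a]|3a|5f|2e)
                # ASCII alphanumerics, ':', '_', '.' pass through
                out="$out$(printf "\\$(printf '%03o' "0x$byte")")" ;;
            *)  out="$out\\x$byte" ;;                   # escape everything else
        esac
    done
    case $out in
        .*) out="\\x2e${out#.}" ;;                      # escape a leading '.'
    esac
    printf '%s\n' "$out"
}

escape_path /жышы           # \xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b
escape_path /foo//bar/baz/  # foo-bar-baz
```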
| How to mount folder with nonASCII (cyrillic) letters by systemd mount-unit? |
I have a server running on a RAID-1 mdraid on two SSDs with / encrypted.
After booting the machine, /boot, which is also the EFI partition, is not being mounted automatically, although it should be.
State after a regular boot:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931,5G 0 disk
├─sda1 8:1 0 500M 0 part
│ └─md127 9:127 0 499,9M 0 raid1
└─sda2 8:2 0 930G 0 part
└─md126 9:126 0 929,9G 0 raid1
└─root 253:0 0 929,9G 0 crypt /
sdb 8:16 0 931,5G 0 disk
├─sdb1 8:17 0 500M 0 part
│ └─md127 9:127 0 499,9M 0 raid1
└─sdb2 8:18 0 930G 0 part
└─md126 9:126 0 929,9G 0 raid1
└─root 253:0 0 929,9G 0 crypt /
$ cat /proc/cmdline
initrd=\intel-ucode.img initrd=\initramfs-linux.img root=UUID=2b9dc231-ed74-442c-841d-8254d3d7e91a rw rd.luks.options=bf4b8b61-5907-408c-8fcb-65f77e34cc86=keyfile-timeout=10s rd.luks.key=bf4b8b61-5907-408c-8fcb-65f77e34cc86=/keyfile:UUID=B332-1C72 rd.luks.name=bf4b8b61-5907-408c-8fcb-65f77e34cc86=root init=/usr/lib/systemd/systemd quiet
$ sudo mdadm --detail /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Fri Nov 13 12:32:44 2020
Raid Level : raid1
Array Size : 975043584 (929.87 GiB 998.44 GB)
Used Dev Size : 975043584 (929.87 GiB 998.44 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Jul 16 13:50:39 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : archiso:cryptroot
UUID : e4353e5f:188eaaa7:afbb8df6:a1a914bb
Events : 5329
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
2 8 18 1 active sync /dev/sdb2
$ sudo mdadm --detail /dev/md127
/dev/md127:
Version : 1.0
Creation Time : Fri Nov 13 15:41:39 2020
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Jul 16 13:46:13 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : archiso:efi
UUID : 457d2b2d:e0f98963:65fb2d50:43d742ad
Events : 150
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
2 8 17 1 active sync /dev/sdb1
$ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=4007800k,nr_inodes=1001950,mode=755,inode64)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=1613448k,nr_inodes=819200,mode=755,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
/dev/mapper/root on / type ext4 (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13578)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,nr_inodes=409600,inode64)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
none on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=806720k,nr_inodes=201680,mode=700,uid=1000,gid=1001,inode64)
$ cat /etc/fstab
# /dev/mapper/root LABEL=root
UUID=2b9dc231-ed74-442c-841d-8254d3d7e91a / ext4 rw,relatime 0 1
# tracefs
tracefs /sys/kernel/tracing tracefs rw,nosuid,nodev,noexec 0 0
# /dev/md127 LABEL=EFI
UUID=B332-1C72 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
$ cat /etc/mkinitcpio.conf
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES="piix ide_disk reiserfs"
MODULES=(vfat)
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=()
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
FILES=()
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS="base"
#
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS="base udev autodetect block filesystems"
#
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS="base udev block filesystems"
#
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS="base udev block mdadm encrypt filesystems"
#
## This setup loads an lvm2 volume group on a usb device.
# HOOKS="base udev block lvm2 filesystems"
#
## NOTE: If you have /usr on a separate partition, you MUST include the
# usr, fsck and shutdown hooks.
HOOKS=(base systemd autodetect keyboard sd-vconsole modconf block mdadm_udev sd-encrypt filesystems fsck)
# COMPRESSION
# Use this to compress the initramfs image. By default, gzip compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"
COMPRESSION="cat"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=""
$ sudo find /run/systemd/ -name *.mount
/run/systemd/generator/boot.mount
/run/systemd/generator/sys-kernel-tracing.mount
/run/systemd/generator/local-fs.target.requires/boot.mount
/run/systemd/generator/local-fs.target.requires/sys-kernel-tracing.mount
/run/systemd/generator/local-fs.target.requires/-.mount
/run/systemd/generator/-.mount
/run/systemd/units/invocation:sys-fs-fuse-connections.mount
/run/systemd/units/invocation:sys-kernel-config.mount
/run/systemd/units/invocation:tmp.mount
/run/systemd/units/invocation:sys-kernel-tracing.mount
/run/systemd/units/invocation:sys-kernel-debug.mount
/run/systemd/units/invocation:dev-mqueue.mount
/run/systemd/units/invocation:dev-hugepages.mount
/run/systemd/units/invocation:sysroot.mount
$ sudo cat /run/systemd/generator/boot.mount
# Automatically generated by systemd-fstab-generator
[Unit]
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab
Before=local-fs.target
After=blockdev@dev-disk-by\x2duuid-B332\x2d1C72.target
[Mount]
Where=/boot
What=/dev/disk/by-uuid/B332-1C72
Type=vfat
Options=rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro
$ cat /run/systemd/generator/local-fs.target.requires/boot.mount
# Automatically generated by systemd-fstab-generator
[Unit]
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab
Before=local-fs.target
After=blockdev@dev-disk-by\x2duuid-B332\x2d1C72.target
[Mount]
Where=/boot
What=/dev/disk/by-uuid/B332-1C72
Type=vfat
Options=rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro
dmesg
journal
When I run sudo mount -a after a reboot, the /boot partition is mounted without errors.
Why is /boot not mounted automatically? Why does it only work manually?
Update
$ systemctl status boot.mount
○ boot.mount - /boot
Loaded: loaded (/etc/fstab; generated)
Active: inactive (dead) since Thu 2021-08-05 10:07:28 CEST; 48s ago
Where: /boot
What: /dev/disk/by-uuid/B332-1C72
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
CPU: 5ms
Aug 05 10:07:28 offsrv systemd[1]: Unmounting /boot...
Aug 05 10:07:28 offsrv systemd[1]: boot.mount: Deactivated successfully.
Aug 05 10:07:28 offsrv systemd[1]: Unmounted /boot.
$ journalctl -au boot.mount
-- Journal begins at Fri 2021-07-23 02:09:41 CEST, ends at Thu 2021-08-05 10:08:02 CEST. --
Jul 26 19:53:05 offsrv systemd[1]: Unmounting /boot...
Jul 26 19:53:05 offsrv systemd[1]: boot.mount: Deactivated successfully.
Jul 26 19:53:05 offsrv systemd[1]: Unmounted /boot.
-- Boot c3526248def14c31869cf62cc450e680 --
Jul 26 19:53:26 offsrv systemd[1]: Unmounting /boot...
Jul 26 19:53:26 offsrv systemd[1]: boot.mount: Deactivated successfully.
Jul 26 19:53:26 offsrv systemd[1]: Unmounted /boot.
-- Boot 6b6baf91260148e5a3e70e94a48c26a9 --
Aug 03 14:06:56 offsrv systemd[1]: Unmounting /boot...
Aug 03 14:06:56 offsrv systemd[1]: boot.mount: Deactivated successfully.
Aug 03 14:06:56 offsrv systemd[1]: Unmounted /boot.
-- Boot 5bf6c69ef8e0452388e425f7fbd0d75e --
Aug 03 14:07:15 offsrv systemd[1]: Unmounting /boot...
Aug 03 14:07:15 offsrv systemd[1]: boot.mount: Deactivated successfully.
Aug 03 14:07:15 offsrv systemd[1]: Unmounted /boot.
Aug 05 10:07:06 offsrv systemd[1]: Unmounting /boot...
Aug 05 10:07:06 offsrv systemd[1]: boot.mount: Deactivated successfully.
Aug 05 10:07:06 offsrv systemd[1]: Unmounted /boot.
-- Boot 3845a0a027984e5995d42ac52c87813f --
Aug 05 10:07:28 offsrv systemd[1]: Unmounting /boot...
Aug 05 10:07:28 offsrv systemd[1]: boot.mount: Deactivated successfully.
Aug 05 10:07:28 offsrv systemd[1]: Unmounted /boot.
|
After some more tinkering, it turned out that the filesystem had a minor issue, which I detected and repaired using dosfstools:
$ sudo dosfsck /dev/md126
fsck.fat 4.2 (2021-01-31)
There are differences between boot sector and its backup.
This is mostly harmless. Differences: (offset:original/backup)
65:01/00
1) Copy original to backup
2) Copy backup to original
3) No action
[123?q]? 1
Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
1) Remove dirty bit
2) No action
[12?q]? 1
*** Filesystem was changed ***
The changes have not yet been written, you can still choose to leave the
filesystem unmodified:
1) Write changes
2) Leave filesystem unchanged
[12?q]? 1
/dev/md126: 17 files, 11536/127730 clusters
After that, /boot is mounted automatically (tested with several reboots).
It's a pity that systemd's corresponding mount unit did not log/report this issue, so I had no idea what was going on.
| /boot partition not mounted on boot |
1,487,837,187,000 |
I'm encountering an issue with a systemd service starting before a local partition is mounted, particularly after a power loss event. I've investigated and found that this problem arises due to a system reboot and subsequent fsck operation.
To easily replicate the problem, I've created a minimal proof of concept. Here are the details.
I have a systemd service called test.service with the following configuration:
[Unit]
Description=fsck order POC
After=network-online.target local-fs.target remote-fs.target swap.target
[Service]
ExecStart=/bin/bash -c "date > /mnt/storage/hello"
[Install]
WantedBy=multi-user.target
In my /etc/fstab, I've configured a mount point as follows:
/dev/sda1 /mnt/storage auto nosuid,nodev,nofail 0 2
Running sudo tune2fs -l /dev/sda1 | grep -i "mount count" returns:
Mount count: 1
Maximum mount count: 1
This indicates that /dev/sda1 undergoes fsck at every boot. However, the hello file is created in the root filesystem's /mnt/storage directory instead of the expected sda1 mount point.
As a result, I don't see anything in /mnt/storage after booting. However, I can see the hello file if I umount /mnt/storage.
My question is: What changes should I make in test.service to ensure that my script is executed after all filesystems are mounted and checked? (Note: The mount point's name might vary, as I don't have control over it when deployed at a client's site.)
The system is running Ubuntu. This problem does not happen on Debian.
Any insights or suggestions on resolving this issue would be greatly appreciated. Thank you in advance for your assistance!
|
solution from my comment
from systemd.mount manpage:
nofail
With nofail, this mount will be only wanted, not required,
by local-fs.target or remote-fs.target. Moreover the mount unit is not ordered
before these target units. This means that the boot will continue without
waiting for the mount unit and regardless whether the mount point can be
mounted successfully.
If you try it without that flag, the mount unit is ordered before local-fs.target, so units ordered After=local-fs.target (like the service above) will wait for the mount to complete.
First attempt to help (this wasn't the solution in this case):
Run:
systemctl list-units --type=mount
Identify your mount unit, and add it to After= in your systemd unit file.
Maybe helpful to identify the correct unit:
systemctl status <unit>
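Putting the quoted manpage text into practice: the fix in this case is simply to drop nofail from the fstab entry, so the mount becomes required by, and ordered before, local-fs.target. A sketch based on the asker's original line:

```
# /etc/fstab — without nofail, this mount is required by and ordered
# before local-fs.target, so the boot waits for it (and for its fsck)
/dev/sda1  /mnt/storage  auto  nosuid,nodev  0  2
```

With that in place, a service ordered After=local-fs.target will not start until the mount point is actually mounted.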
| Systemd service starting before filesystem mount after power loss |
1,487,837,187,000 |
According to the manual, I thought that systemd-remount-fs.service is responsible for parsing and applying /etc/fstab entries. So I tried to test it: I removed the ExecStart part (ExecStart=/lib/systemd/systemd-remount-fs) and rebooted the system. After booting and logging in, I still saw the fstab entries in the output of mount.
And now I am wondering if it's the kernel's job itself? How can I do a job before fstab entries get mounted (in case it's kernel's job)?
|
The kernel usually mounts the root filesystem at the very end of the boot sequence.
It is usually mounted readonly and irrespective of whatever mount option set as part of the /etc/fstab file.
Control is then, given to the init system.
As specified in the manual you linked to, systemd-remount-fs.service :
ignores normal file systems and only changes the root
file system (i.e. /), /usr/, and the virtual kernel API file systems
such as /proc/, /sys/ or /dev/.
You can also read that this service :
is usually pulled in by systemd-fstab-generator
systemd-fstab-generator is in fact responsible for instantiating the initial mount of filesystems according to fstab entries.
This will instantiate mount and swap units as necessary.
It is therefore normal that if you inhibit the automatic execution of systemd-remount-fs.service and reboot, you'll still see your filesystems mounted according to the /etc/fstab entries.
| What is parsing/applying /etc/fstab entries |
1,487,837,187,000 |
I have got two readonly root partitions (say roota and rootb) which operating system is installed. This is for a basic A/B partition update scheme and after updating my system these partitions are selected for boot in a roundrobin fashion.
I have two other partitions (say data1 and data2) and I would like to mount on of these partitions based on the partition I boot.
So, the scenario is like this:
I boot from roota, automatically data1 is mounted. I updated system writing updated image to rootb. I boot from rootb and data2 is automatically mounted. Again I updated system writing updated image to roota, I boot from roota and data1 is mounted... etc.
roota and rootb partitions are readonly (squashfs). data1 and data2 are rw partitions. How can I achieve this behavior in an elegant way for debian 11 bullseye?
|
No idea what your configuration is, but the script will essentially be something like this:
#! /bin/bash
default=/dev/partition1
root=$(mount | awk '$3 == "/" { print $1 }') # device mounted at / — verify this works for you
test "$root" = "/dev/partitionB" && default=/dev/partition2
mount "$default" /mnt/somewhere
| Dynamically select which partition to mount based on root partition |
1,487,837,187,000 |
How can I mount, from the command line, a mount point that is available in the desktop/file manager under the Removable Media category but has never been clicked, and so has not yet been recognized from the shell?
Only after that first click has been done can it be recognized and used in the shell with commands.
|
On a systemd system you likely have udisksctl, from the udisks2 package. Quoting the udisks(8) manual:
udisks provides interfaces to enumerate and perform operations on disks and storage devices. Any application (including unprivileged ones) can access the udisksd(8) daemon via the name org.freedesktop.UDisks2 on the system message bus[1].
Use
$ udisksctl status
to list the devices you can act upon,
$ udisksctl info --block-device /dev/sdXn
to inspect them (either the block device sdX or one of its partitions, sdXn) and
$ udisksctl mount --block-device /dev/sdXn
to mount a volume. The command will output the full path of its mount point. Finally, to unmount and power-off a device:
$ udisksctl unmount --block-device /dev/sdXn
$ udisksctl power-off --block-device /dev/sdX
See also
How to mount an image file without root permission?
Mounting from dolphin vs commandline
Eject / safely remove vs umount
| mount point available in desktop Removable Media using shell command line |
1,487,837,187,000 |
Two RAID SSDs partitioned as follows.
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: BF152674-71D5-491B-8C35-09F3AA0015EE
Device Start End Sectors Size Type
/dev/sda1 2048 2099199 2097152 1G EFI System
/dev/sda2 2099200 4196351 2097152 1G Linux filesystem
/dev/sda3 4196352 759171071 754974720 360G Linux LVM
/dev/sda4 759171072 885000191 125829120 60G Linux LVM
/dev/sda5 885000192 918554623 33554432 16G Linux swap
Disk /dev/mapper/vg_root-root: 48 GiB, 51535413248 bytes, 100655104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk /dev/mapper/vg_u-u: 288 GiB, 309233451008 bytes, 603971584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
The /u partition is mounted as remote-fs by systemd.
# ll /run/systemd/generator/remote-fs.target.requires/
total 0
lrwxrwxrwx 1 root root 10 Sep 26 21:32 u.mount -> ../u.mount
This causes issues when services start after reboot, as /u is not mounted until network services have started.
I have engineered a workaround by inserting the following for affected services.
RequiresMountsFor=/u
I'm looking for a better solution in which /u is mounted as part of local-fs (as it should be), to avoid future issues.
Any suggestions?
|
Provided nothing on /u vitally depends on /, you can force /u to mount before other local filesystems. One method would be to add the x-systemd.before=local-fs.target option in fstab. E.g.:
/dev/mapper/vg_u-u /u xfs defaults,x-systemd.before=local-fs.target 0 0
Then force systemd to regenerate its unit files:
sudo systemctl daemon-reload
| Local filesystem mounted as remote by systemd in OL8.6 |
1,487,837,187,000 |
I have an inotify-based service that backs up my LAN's git directory to Dropbox. I tried keeping the git directory in Dropbox, but I have multiple git clients and so often get error files there.
In this early stage of development, this is a fairly busy and chatty system service that wants to log to a ram drive. I don't want to use /tmp because other applications depend on having space there.
To create the ram drive in my fstab I have this :
tmpfs /mnt/ram tmpfs nodev,nosuid,noexec,nodiratime,size=1024M 0 0
I need to be sure that the ram drive is mounted before the backup service starts. I want to put a condition to the service that delays its start.
I see suggestions that people use the *.mount unit as a precondition, but I don't see any file in /lib/systemd/system that gives me the name of the unit I need.
How can I identify this mount? Is there another approach?
|
On Arch, at least, systemd mounts generated from /etc/fstab are deployed to /run/systemd/generator
For example on my system, with the listing below I can add to my service file
[Unit]
Description=backup logging to temp
After=mnt-ram.mount
ls -la /run/systemd/generator
:> ls -la
total 32
-rw-r--r-- 1 root root 362 Jun 20 17:01 -.mount
drwxr-xr-x 5 root root 260 Jun 20 17:01 .
drwxr-xr-x 22 root root 580 Jun 21 04:40 ..
-rw-r--r-- 1 root root 516 Jun 20 17:01 boot.mount
drwxr-xr-x 2 root root 120 Jun 20 17:01 local-fs.target.requires
drwxr-xr-x 2 root root 80 Jun 20 17:01 local-fs.target.wants
-rw-r--r-- 1 root root 168 Jun 20 17:01 mnt-3T.automount
-rw-r--r-- 1 root root 515 Jun 20 17:01 mnt-3T.mount
-rw-r--r-- 1 root root 168 Jun 20 17:01 mnt-4T.automount
-rw-r--r-- 1 root root 515 Jun 20 17:01 mnt-4T.mount
-rw-r--r-- 1 root root 260 Jun 20 17:01 mnt-ram.mount
-rw-r--r-- 1 root root 349 Jun 20 17:01 mnt-sda.mount
drwxr-xr-x 2 root root 80 Jun 20 17:01 remote-fs.target.requires
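As an alternative to hunting for the generated unit name, systemd also offers RequiresMountsFor=, which both pulls in and orders the service after the mount units needed for the listed path (this directive is my suggestion, not part of the original answer):

```
[Unit]
Description=backup logging to temp
RequiresMountsFor=/mnt/ram
```

This keeps working even if the mount point is later renamed in fstab, since you name the path rather than the generated unit.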
| systemd service to start after mount of ram drive |
1,487,837,187,000 |
Can multiple instances of Unit= exist in a systemd.path or systemd.timer unit? Or, must one instead specify multiple instances of the path or timer unit, each with a single instance of Unit=? I haven't been able to find or derive any guidance elsewhere.
The former obviously is easier.
The specific application is to have a path unit activate two mount units.
In particular, the path unit monitors a virtual machine's log file, which is quiet until the VM runs. The mounts are of shares on the virtual machine and are defined in the host's fstab entries, each of which uses the x-systemd.requires= mount option to specify the path unit, so that the mounts don't occur until the virtual machine is running. This works well with a single share.
So, the more specific questions are (a) whether the path unit knows to simply propagate the mount units as instructed, leaving the mount units to mount the shares, or gets confused and can only propate a single mount unit; or (b) whether calling the same path unit twice in fstab creates conflicts or errors when the path unit has many Unit= directives (i.e., by re-creating all the mount points specified) or simply is an expression of a dependency.
Many thanks.
|
man systemd.timer says:
Unit=
The unit to activate when this timer elapses. The argument is a unit name, whose suffix is not ".timer". If not specified, this value defaults to a service that has the same name as the timer unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the timer unit are named identically, except for the suffix.
man systemd.path similarly says:
Unit=
The unit to activate when any of the configured paths changes. The argument is a unit name, whose suffix is not ".path". If not specified, this value defaults to a service that has the same name as the path unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the path unit are named identical, except for the suffix.
Neither of these suggest that you can have multiple Unit= lines or multiple arguments per Unit= line. Even if you try it and find it works, it's not guaranteed that it will work in future releases of systemd because it would be undocumented behaviour.
Therefore it's safest to create a single *.path/*.timer for each unit you need to trigger, even if it means identical *.path or *.timer units. There are probably already several *.timer units with OnCalendar=daily on your system.
Honestly, it would be a little scary to trigger two independent services if I touch a single path. It invites race conditions. You could consider changing your service to use multiple ExecStartPre= or ExecStartPost= to sequence the operations, ensuring they always happen in a deterministic order.
| Multiple Instances of Unit= in Path or Timer Unit? |
1,487,837,187,000 |
I'm trying to mount an external usb drive to raspberry pi 4b with debian 11 bullseye.
Whatever I've tried so far to set mount options gets ignored.
/etc/fstab
UUID="9f32de87-6800-4585-a5c5-e6a3946ba2bb" /data ext4 defaults,nofail 0 0
UUID="9f32de87-6800-4585-a5c5-e6a3946ba2bb" /data ext4 rw,suid,dev,exec,auto,nouser,async,nofail 0 0
PARTUUID=20df08a4-01 /data ext4 rw,suid,dev,exec,auto,nouser,async,nofail 0 0
systemd mount unit
root@srv:/etc/systemd/system# cat data.mount
[Unit]
Description=Mount /data with systemd
[Mount]
What=/dev/disk/by-uuid/9f32de87-6800-4585-a5c5-e6a3946ba2bb
Where=/data
Type=ext4
Options=rw,suid,dev,exec,auto,nouser,async,nofail
[Install]
WantedBy=multi-user.target
mount command
root@srv:~# mount -t ext4 -o rw,suid,dev,exec,auto,nouser,async,nofail /dev/sda1 /data
Output is always:
root@srv:~# mount -l | grep data
/dev/sda1 on /data type ext4 (rw,relatime) [data]
I know that most of the options are in the ext4 defaults mount option included, but also other options I tried are completely ignored.
Any hints on how to do this? Are there any constraints with USB drives here that I'm missing?
Thanks
|
async, suid, dev and exec are the default states for an ext4 mount, so only the non-default options (sync, nosuid, nodev and/or noexec) may be displayed.
auto and nouser affect mainly the mount command itself, and these are also the default states for these options. Normally all /etc/fstab entries that are not specifically marked with a noauto option will be mounted if/when mount -a is executed; once the filesystem is mounted, the auto/noauto option has already fulfilled its purpose and so there is no reason for the kernel to track it.
If user was specified, the mount command would have to keep track who mounted the filesystem (classically in /etc/mtab if it's a regular file; nowadays in /run/mount/libmount instead) as only root or the user who originally mounted the filesystem will be allowed to unmount it. But with nouser, the default classic Unix behavior of "only root can mount/unmount filesystems" prevails.
Out of all these options you've specified, nofail is the only non-default one, and it also only affects the the mounting process, causing it to not report an error if this filesystem cannot be mounted. Once the filesystem has been successfully mounted, the kernel has no reason to track the state of that option.
The reason the rw and relatime options are explicitly displayed is essentially historical: showing the rw/ro state explicitly is a long-standing practice, and relatime highlights the fact that the handling of the atime timestamp is not done strictly the classic Unix way. The other alternatives for relatime would be noatime (which can cause problems with e.g. the classic way to detect if you have unread email or not in /var/mail), and strictatime which would enforce the classic Unix behavior (and cause lots of mostly-unnecessary small write operations, harming SSD life and preventing disks from going into power-saving states). relatime has been the default since kernel version 2.6.30.
So your mount options are not actually getting ignored: you are just specifying a set of options that is essentially equivalent to the default way to mount the filesystem.
| mount options ignored - debian 11 bullseye on raspberry with ext. usb drive |
1,487,837,187,000 |
Since I haven't booted into Windows in over a month and use Linux frequently (without restarting except to boot back into Linux immediately), why did I discover it mounted?
If I hadn't been looking through my disks for used and free space I never would have even noticed.
Then again, recently I'd siphoned a portion of the Windows partition to feed the Linux partition. That's the only interaction I've had with the Windows partition in quite some time. I've also checked up on it twice since.
A little later, I noticed that Windows was mounted while my Linux distro was running. Is this normal? I can't help but think that something else must be involved, but I truly haven't made any new installs or significant changes other than the Mint kernel and other updates.
Linux is also my primary Operating system using the Grub for a boot menu and Windows far down the list.
|
Yes. It is normal for every OS to mount all the partitions on its startup disk.
| Why is a Windows partition mounted while Linux is in use? |
1,281,507,991,000 |
I am trying to learn Linux system programming, which is the best book to learn this?
|
Linux Systems Programming
You can also refer to this link.
| What is the best book to learn Linux system programming? [closed] |
1,281,507,991,000 |
What's the difference between slow system calls and fast system calls? I have learned that a slow system call can block, and that if the process catches a signal, the caught signal may wake up (interrupt) the blocked system call, but I can't exactly understand this mechanism. Any examples would be appreciated.
|
There are in fact three gradations in system calls.
Some system calls return immediately. “Immediately” means that the only thing they need is a little processor time. There's no hard limit to how long they can take (except in real-time systems), but these calls return as soon as they've been scheduled for long enough.
These calls are usually called non-blocking. Examples of non-blocking calls are calls that just read a bit of system state, or make a simple change to system state, such as getpid, gettimeofday, getuid or setuid. Some system calls can be blocking or non-blocking depending on the circumstances; for example read never blocks if the file is a pipe or other type that supports non-blocking reads and the O_NONBLOCK flag is set.
A few system calls can take a while to complete, but not forever. A typical example is sleep.
Some system calls will not return until some external event happens. These calls are said to be blocking. For example, read called on a blocking file descriptor is blocking, and so is wait.
The distinction between “fast” and “slow” system calls is close to non-blocking vs. blocking, but this time from the point of view of the kernel implementer. A fast syscall is one that is known to be able to complete without blocking or waiting. When the kernel encounters a fast syscall, it knows it can execute the syscall immediately and keep the same process scheduled. (In some operating systems with non-preemptive multitasking, fast syscalls may be non-preemptive; this is not the case in normal unix systems.) On the other hand, a slow syscall potentially requires waiting for another task to complete, so the kernel must prepare to pause the calling process and run another task.
Some cases are a bit of a gray area. For example a disk read (read from a regular file) is normally considered non-blocking, because it's not waiting for another process; it's only waiting for the disk, which normally takes only a little time to answer, but won't take forever (so that's case 2 above). But from the kernel's perspective, the process has to wait for the disk driver to complete, so it's definitely a slow syscall.
| Difference between slow system calls and fast system calls |
1,281,507,991,000 |
Is there an easy way to find out which header file a C function declaration is in? cding into /usr/include and running (grep -E 'system.*\(' *.h -R) works with some trial and error, but isn't there an easier way to do this?
|
$ man 2 read
...
READ(2) Linux Programmer's Manual READ(2)
NAME
read - read from a file descriptor
SYNOPSIS
#include <unistd.h>
...
| How to find the header file where a c function is defined? |
1,281,507,991,000 |
Can someone explain, in an easy-to-understand way, the concept of memory mappings (achieved by the mmap() system call) in Unix-like systems? When do we require this functionality?
|
Consider: two processes can have the same file open for reading & writing at the same time, so some kind of communication is possible between the two.
When process A writes to the file, it first populates a buffer inside its own process-specific memory with some data, then calls write which copies that buffer into another buffer owned by the kernel (in practise, this will be a page cache entry, which the kernel will mark as dirty and eventually write back to disk).
Now process B reads from same point in the same the file; read copies the data from the same place in the page cache, into a buffer in B's memory.
Note that two copies are required: first the data is copied from A into the "shared" memory, and then copied again from the "shared" memory into B.
A could use mmap to make the page cache memory available directly in its own address space. Now it can format its data directly into the same "shared" memory, instead of populating an intermediate buffer, and avoiding a copy.
Similarly, B could mmap the page directly into its address space. Now it can directly access whatever A put in the "shared" memory, again without having to copy it into a separate buffer.
(Obviously some kind of synchronization is required if you really want to use this scheme for IPC, but that's out of scope).
Now consider the case where A is replaced by the driver for whatever device this file is stored on. By accessing the file with mmap, B still avoids a redundant copy (the DMA or whatever into the page cache is unavoidable, but it doesn't need to be copied again into B's buffer).
There are also some drawbacks, of course. For example:
if your device and OS support asynchronous file I/O, you can avoid blocking reads/writes using that ... but reading or writing a mmapped page can cause a blocking page fault which you can't handle directly (although you can try to avoid it using mincore etc.)
it won't stop you trying to read off the end of a file, or help you append to it, in a nice way (you need to check the length or explicitly truncate the file larger)
| Concept of memory mapping in Unix like systems |
1,281,507,991,000 |
I'm trying to wrap my head around how shells move the screen cursor (moving around input with the arrow keys and such).
I've been doing a lot of testing, and I haven't found any system call that allows for movement of the screen cursor in a terminal. Is this done using specific terminal extensions? If so, is there any documentation as to how this is achieved for, say, xterm?
This has been driving me crazy, any light you can shed on this is appreciated.
|
There is no such system call, or set of system calls. It's all done by convention.
Back in the old days, when a "terminal" was a large clunky piece of equipment linked to a computer via a cable, "smart terminals" would do things like move the cursor, or draw a line, or fill in a polygon with colors. The smart terminal usually did this via "escape sequences". A program issued a special sequence of byte values, usually beginning with ASCII 0x1b, 'ESC', to move a cursor, or color a word, or do some special thing other than imitating a teletypewriter.
That's the "convention". Nowadays, a "terminal" is almost always just one window among many on a bitmapped screen, xterm, rxvt, etc etc. It's up to xterm (or rxvt or whatever) to read the bytes coming from "the program" running in it, and interpret those bytes according to a given convention. I just typed this in one of my xterm windows:
6 % echo $TERM
xterm
But I know that old, old xterms used to say "vt100" in that case. Any program that "moves the cursor" has to output escape sequences for the "xterm" convention to move the cursor in an xterm instance.
You could do man curses and man terminfo for more information.
PS
People have done whole windowing systems with escape sequences. See MGR as an example.
| Cursor movement in terminals |
1,281,507,991,000 |
I am currently working on implementing my own UNIX shell in C. The basics of the shell are working; you can find the source code here.
Right now my next task is to implement a logging feature for the shell. So here's my question:
What are the UNIX/Linux best practices to do that? The simple way would be to open a file and write to it every time. But I was wondering if there is a function that would take care of that for me, helping me produce ... "cleaner"... code.
Thank you in advance for your help, I'm quite new to programming, but I want to do things right.
PS: If you have any general remarks about the rest of the code, they'd be greatly appreciated :-)
|
Typically command history is saved to a hidden file in the user's home directory. Other than that, you can log to stderr or syslog.
| Implementing a Unix shell in C: Logging [closed] |
1,281,507,991,000 |
I have a machine dual booted with Arch Linux and Ubuntu (16.04).
I have recently started using the Kakoune text editor, and noticed that its startup time is drastically different depending on which OS I am using. However I believe the underlying problem is not due to kakoune directly.
On startup, kakoune runs a bunch of shell scripts to enable integration with x11 and tmux, git, syntax highlighting/colorschemes, etc. This can be disabled to just load the 'vanilla' editor using the -n flag.
The command: kak -e q will start kakoune, run all startup scripts and exit immediately.
On Arch:
time kak -e q takes 1 second
time kak -n -e q (no shell scripts) finishes in under 20 millis.
On Ubuntu:
time kak -e q takes about 450 millis
time kak -n -e q is again under 20 millis
After trimming the fat and removing some of the startup scripts I did see an improvement on both OS's proportional to the amount removed.
I ran some benchmarks with UnixBench and found that the major differences between the two systems are seen in the 'process creation' and 'shell scripts' tests.
The shells scripts test measures the number of times per minute a process can start and reap a set of one, two, four and eight concurrent copies of a shell scripts where the shell script applies a series of transformation to a data file.
Here is the relevant output. Units in 'loops per second' more is better:
Process creation (1 parallel copy of tests)
Arch: 3,822
Ubuntu: 5,297
Process creation (4 parallel copies of tests)
Arch: 18,935
Ubuntu: 30,341
Shell Scripts (1 concurrent) (1 parallel copy of tests)
Arch: 972
Ubuntu: 5,141
Shell Scripts (1 concurrent) (4 parallel copies of tests)
Arch: 7,697
Ubuntu: 24,942
Shell Scripts (8 concurrent) (1 parallel copy of tests)
Arch: 807
Ubuntu: 2,257
Shell Scripts (8 concurrent) (4 parallel copies of tests)
Arch: 1,289
Ubuntu: 3,001
As you can see the Ubuntu system performs much better.
I have tested using different login shells, terminal emulators, recompiling kakoune, removing unneeded software to clean up the disk, etc. I am certain this is the bottleneck.
My question is: what can I do to further investigate this and improve performance of the Arch Linux system to match Ubuntu? Should I look into tuning the kernel?
Additional notes:
both systems use the same type of filesystem (ext4)
I tend to use the Archlinux system more, and have noticed performance degrading over time
Arch is on /dev/sda1 and is ~200GB. Ubuntu is on /dev/sda2, ~500GB. 1TB HDD.
Arch uname -a: Linux ark 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64 GNU/Linux
Ubuntu uname -a: Linux sierra 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Thanks
|
Debian and Ubuntu use dash as /bin/sh, it's somewhat faster than Bash:
$ time for x in {1..1000} ; do /bin/bash -c 'true' ; done
real 0m1.894s
$ time for x in {1..1000} ; do /bin/sh -c 'true' ; done
real 0m1.057s
That's around the same area (in ratio) as your numbers.
Changing /bin/sh to dash instead of Bash in Debian and Ubuntu was largely because of performance:
The major reason to switch the default shell was efficiency. bash is an excellent full-featured shell ... However, it is rather large and slow to start up and operate by comparison with dash.
(https://wiki.ubuntu.com/DashAsBinSh)
| Process creation time, shell script and system call overhead |
1,281,507,991,000 |
I have recently purchased myself an HP rack server for use as a personal fileserver. This server currently lives under my bed as I have nowhere else to put it. For those not aware (as I was not fully) this server is VERY LOUD.
I need to be able to access my files a lot of the time during the day, and due to the situation of my server, turning it off every night at the wall (it likes to suddenly spring into action for no apparent reason) isn't really an option. I would really like if the server could remain powered on all the time, but when not in use enter a sleep state such that the fans turn off, if nothing else, over LAN. The server also runs Debian.
If this kind of setup can't happen for whatever reason, I could settle for the machine shutting down at a certain time of day (or night) and starting up again in the morning, or something to that effect.
I have very little idea about how to go about such a task, other than to use wake/sleep-on-LAN.
|
It seems after trial and tribulation of innumerable ways to get my server to do what it's told, the best way to solve my problem of it being loud is just to put it in the garage and hope no water damage occurs during cold nights (which it shouldn't, as the server will be on 24/7).
Thanks to everyone who offered actual technical help, but it seems what I wanted ideally cannot be done.
| How can I make my Linux server sleep and wake on LAN when not in use? |
1,281,507,991,000 |
From Robert Love's Linux System Programming (2007, O'Reilly), this is what is given in the first paragraph (Chapter 1, Page 10):
The file position’s maximum value is bounded only by the size of the C type used to store it, which is 64-bits in contemporary Linux.
But in the next paragraph he says:
A file may be empty (have a length of zero), and thus contain no valid bytes. The maximum file length, as with the maximum file position, is bounded only by limits on the sizes of the C types that the Linux kernel uses to manage files.
I know this might be very, very basic, but is he saying that the file size is limited by the FILE data type or the int data type?
|
He's saying it's bound by a 64-bit type, which has a maximum value of (2 ^ 64) - 1 unsigned, or (2 ^ 63) - 1 signed (1 bit holds the sign, +/-).
The type is not FILE; it's what the implementation uses to track the offset into the file, namely off_t, which is a typedef for a signed 64-bit type.1 (2 ^ 63) - 1 = 9223372036854775807. If a terabyte is 1000 ^ 4 bytes, that's ~9.2 million TB. Presumably the reason a signed type is used is so that it can hold a value of -1 (for errors, etc), or a relative offset.
Functions like fseek() and ftell() use a signed long, which on 64-bit GNU systems is also 64-bits.
1. See types.h and typesizes.h in /usr/include/bits.
| FILE size limitation according to Robert Love's textbook |
1,281,507,991,000 |
Is there any SDK for developers under Linux? I mean something more than man pages; it would be more like the MSDN Library or Windows SDK under Windows, where all the programming documentation and developer guides are documented and structured in one place.

|
For GNOME, you can check out devhelp or help.gnome.org.
But as everyone has already stated, there is no single SDK for "Linux". You need to have some idea of what you want before we can recommend specific developer resources.
| SDK for Developer |
1,281,507,991,000 |
I have recently started programming in a UNIX environment. I need to write a program which creates an empty file, with the name and size given on the terminal, using these commands
gcc foo.c -o foo.o
./foo.o result.txt 1000
Here result.txt means the name of the newly created file, and 1000 means the size of the file in bytes.
I know for sure that the lseek function moves the file offset, but the trouble is that whenever I run the program it creates a file with the given name, yet the size of the file is 0.
Here is the code of my small program.
#include <unistd.h>
#include <stdio.h>
#include <fcntl.h>
#include <ctype.h>
#include <sys/types.h>
#include <sys/param.h>
#include <sys/stat.h>
int main(int argc, char **argv)
{
    int fd;
    char *file_name;
    off_t bytes;
    mode_t mode;

    if (argc < 3)
    {
        perror("There is not enough command-line arguments.");
        //return 1;
    }
    file_name = argv[1];
    bytes = atoi(argv[2]);
    mode = S_IWUSR | S_IWGRP | S_IWOTH;
    if ((fd = creat(file_name, mode)) < 0)
    {
        perror("File creation error.");
        //return 1;
    }
    if (lseek(fd, bytes, SEEK_SET) == -1)
    {
        perror("Lseek function error.");
        //return 1;
    }
    close(fd);
    return 0;
}
|
If you're seeking after the end of the file, you have to write at least one byte at that position:
write(fd, "", 1);
to have the OS fill the hole with zeros.
So if you want to create an empty file of a certain size 1000 with lseek, do:
lseek(fd, 999, SEEK_SET); //<- err check
write(fd, "", 1); //<- err check
ftruncate is probably better and it seems to create sparse files without any fuss too:
ftruncate(fd, 1000);
| Trouble with creating an empty file using C programming language in UNIX environment |
1,281,507,991,000 |
I am trying my hand at Linux signals, and have created the scenario described below:
Initially block all SIGINT signals using sigprocmask().
If the sender sends the SIGUSR1 signal, then unblock SIGINT for the rest of the process's life.
The first step is successfully implemented, but I am not able to permanently unblock (or change) the process mask using sigprocmask().
What am I doing wrong?
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>   /* for getpid(), sleep() */

sigset_t block_list, unblock_list;

void sigint_handler(int sig)
{
    printf("Ouch!!\n");
}

void sigusr1_handler(int sig)
{
    sigemptyset(&unblock_list);
    sigprocmask(SIG_SETMASK, &unblock_list, NULL);
}

int main(int argc, char **argv)
{
    int count;

    signal(SIGINT, &sigint_handler);
    signal(SIGUSR1, &sigusr1_handler);

    sigemptyset(&block_list);
    sigaddset(&block_list, SIGINT);
    sigprocmask(SIG_SETMASK, &block_list, NULL);

    for(count=1; ;++count)
    {
        printf("Process id: %ld\t%d\n", (long)getpid(), count);
        sleep(4);
    }
    exit(EXIT_SUCCESS);
}
$kill -n SIGINT <pid>
$kill -n SIGUSR1 <pid> //This call should unblock SIGINT for the rest of the process life, but it only unblocks it one time. Every time, I have to call $kill -n SIGUSR1 <pid> again to unblock SIGINT.
Note: Error handling has been removed for simplicity.
|
The kernel will restore the signal mask upon returning from a signal handler. This is specified by the standard:
When a thread's signal mask is changed in a signal-catching function
that is installed by sigaction(), the restoration of the signal mask
on return from the signal-catching function overrides that change
(see sigaction()). If the signal-catching function was installed
with signal(), it is unspecified whether this occurs.
On Linux, signal(2) is just a deprecated compatibility wrapper for sigaction(2), and that does also occur when using signal(2).
| Why below code is not able to unblock SIGINT signal |
1,281,507,991,000 |
Yeah, I know there's actkbd, which allows assigning global keyboard shortcuts that work everywhere, including the text console and a graphical session, but I don't want to run an extra daemon (long unmaintained as well) for a single keyboard shortcut. I want something a lot simpler, with no configuration options, and with the absolute minimum amount of code.
The task is to run a command when this key combination is pressed:
Win + End -> systemctl suspend
This is probably worth posting in stackoverflow.com instead but I'm not entirely sure.
|
So, Linux has quite a nice framework for such things: uinput; evdev is a nice interface to that that doesn't hide anything. It's slim.
For basically all Linux distros there's a python3-evdev package (at least that's the package name on debian, ubuntu and fedora).
Then, it's a few lines of code to write your daemon; this is very much just slightly modified example code, where I added some explanations so you know what you're doing
#!/usr/bin/python3
# Copyright 2022 Marcus Müller
# SPDX-License-Identifier: BSD-3-Clause
# Find the license text under https://spdx.org/licenses/BSD-3-Clause.html
from evdev import InputDevice, ecodes
from subprocess import run

# /dev/input/event8 is my keyboard. you can easily figure that one
# out using `ls -lh /dev/input/by-id`
dev = InputDevice("/dev/input/event8")
# we know that right now, "Windows Key is not pressed"
winkey_pressed = False
# suspending once per keypress is enough
suspend_ongoing = False

# read_loop is cool: it tells Linux to put this process to rest
# and only resume it, when there's something to read, i.e. a key
# has been pressed, released
for event in dev.read_loop():
    # only care about keyboard keys
    if event.type == ecodes.EV_KEY:
        # check whether this is the windows key; 125 is the
        # key code for the windows key
        if event.code == 125:
            # now check whether we're releasing the windows key (val=0)
            # or pressing (1) or holding it for a while (2)
            if event.value == 0:
                winkey_pressed = False
                # clear the suspend_ongoing (if set)
                suspend_ongoing = False
            if event.value in (1, 2):
                winkey_pressed = True
        # We only check whether the end key is *held* for a while
        # (to avoid accidental suspend), and only suspend once per
        # Win keypress. Key code for END is 107
        elif (winkey_pressed and not suspend_ongoing
              and event.code == 107 and event.value == 2):
            run(["systemctl", "suspend"])
            # disable until win key is released
            suspend_ongoing = True
And that's it. Your daemon in 16 lines of code.
You can directly run it with sudo python, but you probably want to start it automatically:
Save it as a file /usr/local/bin/keydaemon, sudo chmod 755 /usr/local/bin/keydaemon to make it executable. Add a file /usr/lib/systemd/system/keydaemon.service with contents
[Unit]
Description=Artem's magic suspend daemon
[Service]
ExecStart=/usr/local/bin/keydaemon
[Install]
WantedBy=multi-user.target
With sudo systemctl enable --now keydaemon you can then make sure the daemon is started (instantly, and on every future boot).
| A simple global keyboard shortcut handler |
1,281,507,991,000 |
Is there a good description of exactly what each of the flags you can pass to personality(2) does? I'm particularly interested in STICKY_TIMEOUTS, but a general detailed description of all of them would be nice.
I've googled a bunch for this and can't find it. And I like knowing these things.
This is a very specific programming question, but it's also very Unix/Linux specific. I wasn't sure if it should go here or on StackOverflow.
|
According to man 2 personality:
STICKY_TIMEOUTS (since Linux 1.2.0)
With this flag set, select(2), pselect(2), and ppoll(2) do not modify the returned time‐out argument when interrupted by a signal handler.
You can read the rest of the man page for detailed descriptions of each of the available flags.
| What does the STICKY_TIMEOUTS flag for personality(2) do? |
1,281,507,991,000 |
Good day.
I have a little problem.
I have this function in bash script
function denyCheck(){
    file=$(readlink -f $(which $(printf "%s\n" "$1")))
    if [[ -x $file ]];then
        return false
    else
        return true
    fi
}
denyCheck "firefox"
I pass this function a string, which is a command name; the function resolves the original path of the command (for example: /usr/lib/firefox/firefox.sh), and if the file is executable it should return true, else false.
But the problem is... the parameter (in the example "firefox") gets run as a command and opens the Firefox browser.
How can I prevent the parameter from being run as a command?
Thank you very much.
|
Should rather be:
cmd_canonical_path() {
  local cmd_path
  cmd_path=$(command -v -- "$1") &&
    cmd_path=$(readlink -e -- "$cmd_path") &&
    [ -x "$cmd_path" ] &&
    REPLY=$cmd_path
} 2> /dev/null
And then use for instance as:
if cmd_canonical_path firefox; then
  printf '%s\n' "firefox found in $REPLY executable"
else
  printf >&2 '%s\n' "no executable file found for firefox"
fi
In shells, all commands have an exit status that indicates either success (0) or failure (any other number whose values can help discriminate between different types of failure).
For instance, the [ command, when passed -x, /path/to/some/file and ] as arguments will return true if the process that executes it has execute permission for the /path/to/some/file file (or directory).
A function can also have an exit status, and that's the exit status of the last command it ran before it returned (or the number passed to return if called there).
The success or failure status of commands is checked by shell constructs such as if/until/while statements or its boolean operators || and &&. Commands are the shells' booleans if you like.
cmd1 && cmd2 && cmd3 returns success / true is all of cmd1, cmd2 and cmd3 succeed. And cmd2 is not even executed if cmd1 fails, same for cmd3.
So above cmd_canonical_path succeeds if command -v -- "$1" (the standard builtin¹ command to lookup a command and output its path²; while which was some similar command but intended for csh users only) and then that readlink -e (which works like readlink -f to canonicalise³ a path but will return failure if it can't while readlink -f will do a best effort) succeeds and then that the current process has execute permission for that file, and then (and only then) that that path is successfully assigned to the REPLY⁴ variable (plain assignments are always successful).
If not, the function will return failure: the exit status of the first command that failed in that cmd && cmd && cmd chain.
You'll also notice that $REPLY is left alone and not modified if the canonical path to the command cannot be determined or it's not executable by you.
Some other problems with your approach:
That $(printf '%s\n' "$1") doesn't make much sense. In sh/bash, it's the same as $1 (at least with the default value of $IFS) and is equally wrong as it's not quoted. If you want to pass the contents of the first positional parameter to a command, the syntax is cmd "$1".
When you want some arbitrary data you pass to a command to be treated as a non-option arguments, you need to pass it after the -- argument that marks the end of options: which -- "$1", readlink -f -- "$(...)".
return takes a (optional) number as argument (the exit code for the function), not true/false (you also seem to have them reversed as you said the function was to return true if the file was executable). If not passed a number, the function's exit status will be that of the last command run in the function. So if cmd; then return 0; else return 1; fi is better written as just cmd; return. Though here you don't even need the return since cmd ([[ -x $file ]] in your case) is the last command in your function.
You're blindly passing the output of which (incorrectly as you forgot the -- and to quote the $(...)) to readlink without having first checked that it succeeded.
As a final note, had you used zsh instead of bash, you could have done:
firefox_path==firefox
firefox_canonical_path=$firefox_path:P
To get the path of the firefox command. =firefox expands to the path of the firefox command and aborts the shell with an error if it can't be found. You can intercept that error in an always block if need be. The :P modifier does a realpath() equivalent on the parameter it's applied to, similar to what GNU readlink -f does. No need to check for execute permissions as zsh's =cmd operator would only expand to a path that is executable.
¹ and as it's a builtin of the shell, you'll find its documentation in the shell's manual. For bash, see info bash command for instance.
² with the caveat that command -v cmd outputs cmd is cmd is a function or shell builtin which could be a problem here.
³ a canonical absolute path starts with /, has no symlink component (they've all be resolved to their target), no . / .. component, no sequences of two or more /s.
⁴ $REPLY is commonly used (though that's only a convention) as the default variable for a function or shell builtin to return a value in. Alternatively, your function could output the result, and the caller would run firefox_path=$(cmd_canonical_path firefox).
| How to pass a parameter in a command without run the parameter content? |
1,281,507,991,000 |
Say I'm implementing a programming language which has an interactive mode, and that interactive mode reads some ~/.foo_rc file in the user's home directory. The file contains code in that language which can be used to customize some preferences. The language isn't sandboxed when reading this file; the file can do "anything".
Should I bother doing a permission check on the file? Like:
$ foo -i
Not reading ~/.foo_rc because it is world-writable, you goof!
P.S. you don't even own it; someone else put it there.
> _
I'm looking at the Bash source and it doesn't bother with permission checks for ~/.bash_profile (other than that it exists and is readable, preconditions for doing anything with it at all).
[Update]
After considering thrig's answer, I implemented the following check on the file:
If the file is not owned by the effective user ID of the caller, then it is not secure.
If the file is writable to others, then it is not secure.
If the file is writable to the owning group, then it is not secure if the group includes users other than caller. (I.e. the group must either be empty, or have the caller as its one and only member).
Otherwise it is deemed secure.
Note that the group check makes no assumptions about any correspondence between numeric user ID's and group ID's, or their names; it is based on checking that the name of the user is listed as the sole member.
(A note is added to the documentation for the function which performs this check that it's subject to a time-of-check to time-of-use race condition. After the check is applied, an innocent superuser can extend the group with additional members, who may be malicious, and modify the file by the time it is accessed.)
|
Reasonable and prudent, provided there are clear warnings on what file is failing, and why, so the user can fix the permissions issue. bash probably dates from a more trusting (and prank-ridden) day. Note that user files can legitimately be group writable, if the site has a policy of each user going into a group that only that user is in, otherwise not.
(Parent directory checks may also be prudent, to detect chmod 777 $HOME goofs.)
| Permission check on profile file in home directory: should it be done? |
1,281,507,991,000 |
I need to check the filesystem type on a thumb drive in my C++ application. It must be done before mounting a new partition. I would also prefer not to call the system() function. I tried to use the following test code:
#include <blkid/blkid.h>
#include <stdio.h>

int main()
{
    blkid_probe pr;
    const char *ptname;
    const char* devname = "/dev/sdb1";

    pr = blkid_new_probe_from_filename(devname);
    if (!pr)
        printf("faild to open device\n");
    else
    {
        blkid_probe_enable_partitions(pr, true);
        blkid_do_fullprobe(pr);
        blkid_probe_lookup_value(pr, "PTTYPE", &ptname, NULL);
        printf("%s partition type detected\n", ptname);
        blkid_free_probe(pr);
    }
}
When I plug in a thumb drive with NTFS, this piece of code shows that my partition is dos. When I plug in a thumb drive with FAT or ext4, the code returns a strange string, the same for these two filesystems:
AWAVI��AUATL�%� .
What causes these strange outputs? Maybe there is a better way to check a filesystem?
Thank you in advance for any help.
|
If you are interested in what filesystem is on sdb1 you should use USAGE to first check whether it is a filesystem and then get filesystem type with TYPE.
You'll need to set these flags to enable filesystem lookup:
blkid_probe_set_superblocks_flags(probe, BLKID_SUBLKS_USAGE | BLKID_SUBLKS_TYPE |
BLKID_SUBLKS_MAGIC | BLKID_SUBLKS_BADCSUM);
About your result: you need to check return value of blkid_probe_lookup_value (or use blkid_probe_has_value first), you are not initializing ptname to NULL so you'll get garbage if the lookup fails. And the lookup will fail, because partitions don't have PTTYPE. (I'm not sure why it worked for you with NTFS, in my case both NTFS and ext4 partition don't have PTTYPE.)
Version with usage and type could look like this
#include <blkid/blkid.h>
#include <stdio.h>
#include <string.h>

int main()
{
    blkid_probe pr;
    const char *value = NULL;
    const char* devname = "/dev/sdb1";
    int ret = 0;

    pr = blkid_new_probe_from_filename(devname);
    if (!pr)
        printf("failed to open device\n");
    else
    {
        blkid_probe_enable_partitions(pr, 1);
        blkid_probe_set_superblocks_flags(pr, BLKID_SUBLKS_USAGE | BLKID_SUBLKS_TYPE |
                                          BLKID_SUBLKS_MAGIC | BLKID_SUBLKS_BADCSUM);
        blkid_do_fullprobe(pr);

        ret = blkid_probe_lookup_value(pr, "USAGE", &value, NULL);
        if (ret != 0) {
            printf("lookup failed\n");
            return 1;
        } else
            printf("usage: %s\n", value);

        if (strcmp(value, "filesystem") != 0) {
            printf("not filesystem\n");
            return 1;
        }

        ret = blkid_probe_lookup_value(pr, "TYPE", &value, NULL);
        if (ret != 0) {
            printf("lookup failed\n");
            return 1;
        } else
            printf("type: %s\n", value);

        blkid_free_probe(pr);
    }
}
| blkid: blkid_probe_lookup_value() - strange partition types |
1,281,507,991,000 |
The program in general: I want to implement a telnet program.
On the client side, the user sends their login name and password, and if they are correct he starts to send commands to the server.
On the server side I check the user's authentication against the system, and if it succeeds I must run their default shell (which is stored in the passwd file) to execute their commands.
The problem is that when I tried to run the shell using the system() C function, the shell is run and then closed, because system() uses fork() and exec(), and after it finishes the shell opened for the user is closed.
How can I run a new shell in my process and keep it running for subsequent command execution?
|
You can't just use system. You have to fork and exec yourself and use pipes in both directions. You'd also have to use some kind of terminal interface if you wanted to catch Ctrl key input.
| telnet implementation using C [closed] |
1,281,507,991,000 |
I have searched a lot on the internet for the algorithm that calculates %us, %sy, %id, ... in the output of the top command, but cannot find any documentation.
Some documents like this or this calculate CPU utilization, but their output does not match the output of the top command (it differs a lot!).
How do the top or mpstat commands calculate CPU statistics?
|
I calculate it with this formula:
result=(CurrentUse-PrevUse)*100/(CurrentTotal-PrevTotal)
This is an example script that calculates [us, sys, idle] of the CPU.
#!/bin/bash

prev_total=0
prev_idle=0
prev_us=0
prev_sys=0

while true
do
    # read the aggregate "cpu" line, the first line of /proc/stat
    read -r _ us ni sy id io irq si st g gn < /proc/stat
    total=$((us + ni + sy + id + io + irq + si + st + g + gn))

    let "diff_total=$total-$prev_total"
    let "diff_idle=$id-$prev_idle"
    let "diff_us=$us-$prev_us"
    let "diff_sys=$sy-$prev_sys"

    let "result_us=$diff_us * 100 / $diff_total"
    let "result_idle=$diff_idle * 100 / $diff_total"
    let "result_sys=$diff_sys * 100 / $diff_total"

    echo -en "\rCpu us:$result_us% sys:$result_sys% idle:$result_idle%\b\b"

    prev_total=$total
    prev_idle=$id
    prev_us=$us
    prev_sys=$sy
    sleep 1
done
The output looks like this:
Cpu us:1% sys:0% idle:97%
| CPU statistics calculation algorithm |
1,281,507,991,000 |
I've been working on a routing protocol and looking at legacy code for different routing protocols. I constantly come across macros that are very hard to locate in the header files, because those files include ~20-50 other headers. Besides looking a macro up on the Internet, is there any way to find their definitions in the man pages?
For instance: the INADDR_ALLHOSTS_GROUP macro, which I eventually found in "netinet/in.h", but which the man pages never discuss. Is there a way to use the man pages when searching for such things, or would I need to go another way?
|
I posed this question to my mentor and he said to use grep, so I tried it out and I succeeded. Grep is an absolutely amazing tool! The code I used to find the macro was grep -rl "INADDR_ALLHOSTS_GROUP" * and I ran it from the /usr/include directory.
| How to find documentation about macros in the manpages? |
1,281,507,991,000 |
I wrote a C++ program using some libraries called LinBox, Givaro, and GMP.
Now, because my computer is too slow, I want to run my program on a supercomputer.
I am not very familiar with networks and my programming skills are not very high. I managed to upload the data that the program uses and the C++ program itself. But of course the supercomputer does not have the libraries I need, so I cannot compile/link.
Can you tell me how to proceed to get my program working, or can you give me a good reference where I can learn to run C++ programs on supercomputers?
I am using the supercomputer brutuswiki.ethz.ch/brutus/Getting_started_with_Euler
|
If the HPC doesn't have the required libs, you have 2 options:
Ask the admins to install the required libs
Build a static executable, which contains all the libs.
If it is possible to go with option (2), just compile it on your machine, then upload it to the HPC and run it as is.
I suspect that unless you have MPI/PGAS as part of your code, the performance gain would not be great - supercomputers are for the most part a cluster of "ordinary" nodes with fast interconnects.
Being able to run concurrently is what makes an app take advantage of HPC.
| Use supercomputer to run a program [closed] |
1,485,910,569,000 |
Wanting to play around with Trusted Platform Module stuff, I installed TrouSerS and tried to start tcsd, but I got this error:
TCSD TDDL ERROR: Could not find a device to open!
However, my kernel has multiple TPM modules loaded:
# lsmod | grep tpm
tpm_crb 16384 0
tpm_tis 16384 0
tpm_tis_core 20480 1 tpm_tis
tpm 40960 3 tpm_tis,tpm_crb,tpm_tis_core
So, how do I determine if my computer is lacking TPM vs TrouSerS having a bug?
Neither dmidecode nor cpuid output anything about "tpm" or "trust". Looking in /var/log/messages, on the one hand I see rngd: /dev/tpm0: No such file or directory, but on the other hand I see kernel: Initialise system trusted keyrings and according to this kernel doc trusted keys use TPM.
EDIT: My computer's BIOS setup menus mention nothing about TPM.
Also, looking at /proc/keys:
# cat /proc/keys
******** I--Q--- 1 perm 1f3f0000 0 65534 keyring _uid_ses.0: 1
******** I--Q--- 7 perm 3f030000 0 0 keyring _ses: 1
******** I--Q--- 3 perm 1f3f0000 0 65534 keyring _uid.0: empty
******** I------ 2 perm 1f0b0000 0 0 keyring .builtin_trusted_keys: 1
******** I------ 1 perm 1f0b0000 0 0 keyring .system_blacklist_keyring: empty
******** I------ 1 perm 1f0f0000 0 0 keyring .secondary_trusted_keys: 1
******** I------ 1 perm 1f030000 0 0 asymmetri Fedora kernel signing key: 34ae686b57a59c0bf2b8c27b98287634b0f81bf8: X509.rsa b0f81bf8 []
|
TPMs don't necessarily appear in the DMI tables (which is what dmidecode reads), but the modules do print a message when they find a supported module; for example
[ 134.026892] tpm_tis 00:08: 1.2 TPM (device-id 0xB, rev-id 16)
So dmesg | grep -i tpm is a good indicator.
The definitive indicator is your firmware's setup tool: TPMs involve ownership procedures which are managed from the firmware setup. If your setup doesn't mention anything TPM-related then you don't have a TPM.
TPMs were initially found in servers and business laptops (and ChromeBooks, as explained by icarus), and were rare in desktops or "non-business" laptops; that’s changed over the last few years, and Windows 11 requires a TPM now. Anything supporting Intel TXT has a TPM.
| How to determine if computer has TPM (Trusted Platform Module) available |
1,485,910,569,000 |
I'm experiencing system freezes and looking in the journal I see kernel (4.14.15-1-MANJARO) errors such as:
kernel: tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80
kernel: tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80
(Yes the message is repeated, with exactly the same timestamp)
A bit later, I get:
tpm tpm0: A TPM error (379) occurred attempting get random
I'm running the latest version of firmware (v3.05) for my Asus UX330. My kernel is:
4.16.0-1-MANJARO #1 SMP PREEMPT Wed Mar 21 09:02:49 UTC 2018 x86_64 GNU/Linux
Is there any workaround besides praying for an updated UEFI / BIOS firmware from Asus?
|
I emailed Asus support and they say that the laptop only supports Windows.
You could consider disabling TPM if it is not being used - please comment if you work out how to do this.
| ACPI region does not cover the entire command/response buffer |
1,485,910,569,000 |
I am currently aware of two recent methods to bind a LUKS encrypted root partition to a TPM2: systemd-cryptenroll and clevis. Both of them seem to release the encryption key after successfully checking the PCRs the key was sealed against.
But I don't like the idea of the volume being decrypted without user interaction. I'd rather have a solution like it is offered by BitLocker for Windows: Either TPM and an additional PIN or a recovery key.
Even though I searched the web quite exhaustively I was not able to find any hints in this direction. Is anybody aware of a solution?
EDIT: There is a --recovery-key option for systemd-cryptenroll. I'm only concerned with the question how to get an additional PIN requirement when using the TPM.
|
2022-05-21 - systemd v251
Support for TPM2 + PIN has been merged in systemd-cryptenroll and is available as part of release v251.
Changes in disk encryption:
systemd-cryptenroll can now control whether to require the user to
enter a PIN when using TPM-based unlocking of a volume via the new
--tpm2-with-pin= option.
Option tpm2-pin= can be used in /etc/crypttab.
Source
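For illustration, a matching /etc/crypttab line could look like this (a sketch — the volume name and UUID are placeholders, and it assumes the key was enrolled with systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes):

```
# <name>  <device>                           <keyfile>  <options>
root      UUID=<uuid-of-the-luks-partition>  none       tpm2-device=auto,tpm2-pin=yes
```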
| LUKS + TPM2 + PIN |
1,485,910,569,000 |
I have an Ubuntu 20.04 machine that I am trying to configure for disk encryption. I am trying to set up auto unlock, but my configuration has not worked so far, and I am always prompted for a password.
To do this I followed the following steps:
sudo apt-get update and sudo apt-get install cryptsetup
Check /dev/nvme0n1p3 -> sudo cryptsetup luksDump /dev/nvme0n1p3 -> No Tokens or Keyslots
Install clevis, clevis-luks, clevis-dracut, clevis-udisks2, clevis-systemd, clevis-tpm2
sudo clevis luks list -d /dev/nvme0n1p3 -> Empty
echo <my password> | sudo clevis luks bind -d /dev/nvme0n1p3 tpm2 '{ "pcr_bank":"sha256", "pcr_ids": "7,11" }'
sudo dracut -fv --regenerate-all
Check sudo clevis luks list -d /dev/nvme0n1p3 -> 1: tpm2 '{"hash":"sha256","key":"ecc","pcr_bank":"sha256","pcr_ids":"7,11"}'
lsblk -o NAME,UUID,MOUNTPOINT ->
├─nvme0n1p1 <uuid1> /boot/efi
├─nvme0n1p2 <uuid2> /boot
└─nvme0n1p3 <uuid3>
└─dm_crypt-0 <uuid4>
└─ubuntu--vg-ubuntu--lv <uuidd5> /
cat /etc/crypttab -> dm_crypt-0 UUID=<uuid3> none luks
When booting I do not notice any errors for cryptsetup, luks, or tpm2. Googling around and checking other questions, I have also tried:
sudo systemctl enable clevis-luks-askpass.path
update-initramfs -c -k all -> Runs successfully
My fstab file doesn't actually list the encrypted partition:
cat /etc/fstab ->
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/<Some id which I don't know the origin of> / ext4 defaults 0 1
# /boot was on /dev/nvme0n1p2 during curtin installation
/dev/disk/by-uuid/<uuid2> /boot ext4 defaults 0 1
# /boot/efi was on /dev/nvme0n1p1 during curtin installation
/dev/disk/by-uuid/<uuid1> /boot/efi vfat defaults 0 1
/swap.img none swap sw 0 0
I've also tried manually adding the partition to fstab, but it did not work.
No matter what I try, it always asks for password on boot.
What could I do to fix this?
|
I was missing clevis-initramfs, which needed to be installed. Once added, the auto unlocking worked.
| Ubuntu 20.04 clevis-luks setup auto unlocking not working |
1,485,910,569,000 |
TPMs are supposed to solve a chicken and egg problem of where to store unencrypted disk encryption keys such that someone can't simply pop another hard drive in the machine, boot a different OS and read the keys right off the disk / flash / BIOS / ...
AFAIK TPMs basically do this by checking what software booted and, if that software doesn't match a preset hash, it will remain locked and refuse to give out the disk encryption keys.
I've read numerous articles pointing to the fact that systemd can help embed my LUKS keys in a TPM with systemd-cryptenroll. But these only speak of embedding the key in the TPM, not of preventing attackers from reading those keys.
Where I'm stuck is figuring out how to ensure there's a solid chain of trust from BIOS firmware to login ensuring that if the OS is tampered with it will either not boot, or the TPM will refuse to hand over the encryption key.
For example there's not much use in encrypting my hard drive if someone at the terminal could simply press E at the grub prompt and boot linux with init=/bin/bash to give themselves a root login without needing a password. Encryption would be utterly pointless in that situation.
I'm stuck on two fairly specific points:
What does a typical systemd-based distribution (Debian or Ubuntu) do to lock the TPM in the first place? What files does this protect from tampering?
What other things in the boot sequence must I harden from tampering?
eg: grub EFI binary, grub.cfg in EFI, grub passwordles editing boot entries, initramfs, ...
|
Where I'm stuck is figuring out how to ensure there's a solid chain of trust from BIOS firmware to login ensuring that if the OS is tampered with it will either not boot, or the TPM will refuse to hand over the encryption key.
That's built into the process; it's the --tpm2-pcrs= option that you give to systemd-cryptenroll.
The TPM keeps an in-memory "event log" of various events provided to it by system firmware, by the bootloader, and occasionally by the OS. Each event "measures" some part of the system – e.g. the firmware will log a hash of the grubx64.efi executable as it is loaded; GRUB will log every command and (probably?) log a hash of the kernel image being executed; the kernel will log a hash of the initrd and a hash of the command line – and each event "extends" a hash chain that's also stored in the TPM's memory.
The destination of all those measurements is a set of PCRs, "platform configuration registers", each of which holds a SHA hash. Each event extends one of the PCRs using the formula:
new_value = H(old_value || event_data_hash)
so that the final hash in the PCR authenticates the entire event chain for that PCR, exactly like how a single Git commit hash authenticates the entire history of commits leading up to it. (Although there is only one event log, each event only updates a single PCR, so each of the PCRs acts as an independent hash chain.)
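To make the extend rule concrete, here is a small sketch that simulates a SHA-256 PCR bank with ordinary shell tools (a simplification: a real TPM hashes the raw bytes, while this hashes the hex strings, and the event names are made up):

```shell
# A PCR starts out as 32 zero bytes (shown here as 64 hex zeros).
pcr=$(printf '0%.0s' $(seq 1 64))

# extend: new_value = H(old_value || event_data_hash)
extend() {
    printf '%s%s' "$pcr" "$1" | sha256sum | awk '{print $1}'
}

# Measure two "events" in order; the final value depends on both.
ev1=$(printf 'grubx64.efi contents' | sha256sum | awk '{print $1}')
ev2=$(printf 'kernel cmdline' | sha256sum | awk '{print $1}')
pcr=$(extend "$ev1")
pcr=$(extend "$ev2")
echo "$pcr"
```

Re-running the same events in the same order always reproduces the same final value, which is exactly what lets a sealing policy check "did the same software boot".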
On their own, the PCRs do absolutely nothing, but the "seal" operation can be used to specify a policy that may require specific PCRs to have specific values; if the policy is not met, the TPM will refuse to unseal the data.
So whenever systemd-cryptenroll asks the TPM to seal (encrypt) the LUKS volume key, it includes a policy based on whatever --tpm2-pcrs= you have specified. What exactly is being secured depends on which PCRs you select.
For example, PCR[4] will contain firmware-generated events that measure every .efi executable that's being run. (This might include the Linux kernel if it's being run as an "EFI stub" with its own embedded bootloader.) This can get annoying as the hashes will change on every upgrade, so Windows with BitLocker often ties the tamper protection to Secure Boot – it binds to PCR[7] which instead contains events for the certificates that were used to sign the executables; anything signed by Microsoft's "official Windows first-party certificate" will pass.
Then, if you also wanted to measure the initramfs, with modern (5.17+) Linux kernels you would include PCR[9] in the policy. For the kernel cmdline, systemd-boot would update PCR[8]. (I'm not sure what GRUB uses or when.) Recently systemd developers have put more emphasis on the "Unified Kernel Images" that contain the kernel and the initramfs and the command line bundled together; the entire bundle would be authenticated as a single .efi binary.
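For example, an enrollment binding the volume key to the Secure Boot policy (PCR 7) would look roughly like this (the partition path is a placeholder for your LUKS device):

```shell
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3
```

You would then add tpm2-device=auto to the options field of the volume's /etc/crypttab entry so it can be unlocked from the TPM at boot.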
(I have an old project that's able to dump the event log in text format, in addition to its regular functions.)
Specifically for Linux there's a non-authoritative guide to which PCR indexes are used for which purposes here.
On Ubuntu, the package tpm2-tools includes the command tpm2_eventlog which can be used to read the TPM's event log via the kernel:
tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements
This can help determine exactly what content was used to generate the PCR values and therefore which files and settings are effectively protected from tampering.
| How must I configure Debian or Ubuntu to ensure there's a chain of trust from TPM to Login? |
1,485,910,569,000 |
We currently have UEFI booting up GRUB which boots up Linux. We need to implement secureboot. We're using a TPM to store our keys. Does GRUB2 support TPM - I read the only version of GRUB that supports TPM, i.e. TrustedGRUB does not support UEFI.
Is there a GRUB version that supports TPM? Or, is the only alternative to replace GRUb with LinuxBoot, i.e. UEFI ->(Secureboot) LinuxBoot -> (secureboot) Linux Kernel
instead of
UEFI -> (Secureboot) GRUB -> (secureboot) Linux Kernel
Are there any significant advantages in using LinuxBoot over GRUB?
|
grub2 supports TPM in the sense that it updates the PCRs to include GRUB measurements, and it supports Secure Boot. Subsequent bootloader pieces (including clevis) can use the PCRs to verify the GRUB binary, kernel and initrd binaries, and kernel command line have not been tampered with.
| Does GRUB2 support TPM with UEFI? |
1,485,910,569,000 |
How to find out if TPM device supports "TPM 2.0 FIFO Interface" (TCG_TIS) and "TPM 2.0 FIFO Interface - (SPI)" (TCG_TIS_SPI), when they don't specify it?
I'm particularly interested in the TPM SLB 9665 Xenonboard, as it has JTPM1, which is on my board (a Supermicro H11SSL-i, which recommends the Infineon AOM-TPM-9665, which is actually the TPM SLB 9665 Xenonboard).
|
From your third link you can get this PDF in which a product summary table specifies that only products with a sales code starting with "SLB 9670" will use the SPI interface. As you are talking about SLB 9665, that excludes TCG_TIS_SPI.
On my system, I have an Asus-branded Infineon SLB 9665 TPM which works fine with the TCG_TIS driver in Linux.
Note that there seems to be two possible LPC connector types: a 20-pin connector with pin 4 removed (known as 20-1), and a 14-pin connector with smaller pins and a finer pitch, known as 14-1.
According to the manual of your motherboard (as mentioned in your previous question in the Security SE), you would need one with the 20-1 style connector. And the page you linked with the specifications of the TPM SLB 9665 Xenonboard explicitly says it has the 20-pin connector, so both physical and driver compatibility seem to be covered.
| TPM 2.0 device which supports Linux kernel via TCG_TIS or TCG_TIS_SPI |
1,485,910,569,000 |
For passwordless decryption of a LUKS volume I want to use clevis with my TPM 2.0 module. This module is recognised in Debian Testing (bullseye): /dev/tpm0 and /dev/tpmrm0 exist (so that I am able to run the necessary clevis commands in Debian).
However, the clevis initramfs scripts fail. Having investigated this in an init=premount shell, I discovered that in initramfs the /dev/tpm* devices mentioned above do not exist. How can I change this? Using Debian, I generate my initramfs with initramfs-tools.
|
Make sure that the kernel modules that drive the TPM get loaded within initramfs by listing them in /etc/initramfs-tools/modules. Then the initramfs udev should create the devices for you.
First, run lsmod | grep tpm to find your TPM driver module(s). For me, the output looks like this:
# lsmod |grep tpm
tpm_tis 16384 0
tpm_tis_core 20480 1 tpm_tis
tpm 61440 2 tpm_tis,tpm_tis_core
rng_core 16384 2 tpm
tpm_tis is the driver for the most common TPM implementations on x86 hardware. From the output, we can see that it depends on other modules: tpm_tis_core, tpm and rng_core. The lsmod list is built up from the bottom up, so the optimal loading order would be to load rng_core first.
So, to make sure these modules get loaded in initramfs, you would add four lines to the /etc/initramfs-tools/modules file:
rng_core
tpm
tpm_tis_core
tpm_tis
(This is probably overkill. I think initramfs-tools can now handle module dependencies automatically, so just mentioning tpm_tis alone would probably be enough. But I like to specify the modules explicitly to minimize the need to retry things...)
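To check what the dependency resolution would pull in without actually loading anything, modprobe can print the chain (assuming the tpm_tis module is available for your kernel):

```shell
modprobe --show-depends tpm_tis
```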
Once you've edited the /etc/initramfs-tools/modules file, you'll need to recreate your initramfs file. In Debian, that's easiest done with update-initramfs -u.
The next step would be to reboot and use the init=premount shell again to confirm that the /dev/tpm* devices now get created for you.
| How to get /dev/tpm* in initramfs? |
1,485,910,569,000 |
I want to load unlock my LUKS partition (root file system) at boot time using a TPM 2.0.
I've had no success using a keyscript=/path/to/script in my /etc/crypttab file; however, I made progress using methods I found here.
I am using dracut to build initial ram fs images.
So in /usr/lib/dracut/modules.d/90crypt, I have made modifications to several files (according to the guide I linked):
module-setup.sh
# gives me access to these binaries at boot time in initramfs
function install() {
# existing code
# ...
inst /sbin/tpm2_nvread
inst /bin/tail
inst /bin/perl
inst /sbin/resourcemgr
}
cryptroot-ask.sh
resourcemgr &
# yum is only at tpm2-tools 1.1.0, so I can't read keys to a file
# this is my solution to grab from tpm, and convert the spaced out hex to binary
function gettpmkeyfile() {
key=`tpm2_nvread -x 0x1500001 -a 0x40000001 -s 32 -o 0 | tail -n 1`
key=${key//[[:blank:]]/}
key=`echo $key | /bin/perl -ne 's/([0-9a-f]{2})/print chr hex $1/gie'`
printf $key
}
gettpmkeyfile | cryptsetup luksOpen $device $luksname --key-file=-
/etc/dracut.conf
omit_dracutmodules+="systemd"
add_dracutmodules+="crypt"
I know the binaries are loaded properly, I've added the keys from the TPM with luksAddKey, and I've tested my function on the command line in a shell after booting with a passphrase.
The problem I have is that the tpm2_nvread is throwing an error about the resource manager failing to initialize (error 0x1).
I noticed however that in a normal bootup, the resource manager fails here too, but doesn't prevent me from using the tpm2-tools commands.
I've tried upgrading to the latest kernel from elrepo (4.something), and I've added kernel drivers with dracut like so:
dracut --add-drivers tpm_crb --force
This doesn't seem to help.
Any advice on how I can get tpm2_nvread to work in the initrd?
|
I found a solution!
I added /sbin/strace to my installed binaries in module-setup.sh so I could manually inspect the tpm2_nvread command in the dracut shell. It turns out the error was that my network was unreachable.
The tpm2 commands use libtcti to communicate with the TPM, which uses a socket at 127.0.0.1:2323.
Now, as far as why the loopback was down, I'm not sure. My guess is either 90crypt in dracut runs before networking is available, or something to do with the fact that I disabled systemd.
So I added /sbin/ifup to module-setup.sh, and added this to my cryptroot-ask.sh:
ifup lo inet loopback
sleep 3
Not sure if I need the sleep but I put it anyways.
| How can I execute tpm2_nvread in the initramfs image created by dracut for centOS 7? |
1,485,910,569,000 |
I have Debian and Linux 5.x kernel. I get the following error:
# /etc/init.d/tpm2-abrmd status
● tpm2-abrmd.service - TPM2 Access Broker and Resource Management Daemon
Loaded: loaded (/lib/systemd/system/tpm2-abrmd.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2019-11-27 08:45:01 +0330; 2s ago
Process: 5385 ExecStart=/usr/sbin/tpm2-abrmd (code=exited, status=1/FAILURE)
Main PID: 5385 (code=exited, status=1/FAILURE)
/dev/tpm0 file doesn't exist.
Does Linux need TPM?
Is it mandatory for Linux?
If it's needed, how can I solve my problem?
|
No, Linux doesn’t need a TPM (of any version).
Some programs which run on Linux do need a TPM; that’s the case of tpm2-abrmd which is the source of your error. If you don’t have a TPM (version 2) there’s no point in keeping that package installed, you should remove it.
(tpm2-abrmd implements the TCG access broker and resource manager specification, i.e. it multiplexes access to a TPM2, allowing multiple applications to share it, and therefore is only useful if a TPM2 is available in some form.)
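On Debian, removing the package would look like this (purge also removes its configuration):

```shell
sudo apt purge tpm2-abrmd
```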
| Is TPM2 mandatory on Linux? |
1,485,910,569,000 |
I have enabled TPM 2.0 using bios.
$ [ -c /dev/tpmrm0 ] && echo "TPM 2.0"
TPM 2.0
When I am trying to install tpm-tools, it is giving the following error:
% sudo apt install tpm-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
tpm-tools is already the newest version (1.3.9.1-0.2ubuntu3).
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Setting up trousers (0.3.14+fixed1-1build1) ...
Job for trousers.service failed because the control process exited with error code.
See "systemctl status trousers.service" and "journalctl -xe" for details.
invoke-rc.d: initscript trousers, action "start" failed.
● trousers.service - LSB: starts tcsd
Loaded: loaded (/etc/init.d/trousers; generated)
Active: failed (Result: exit-code) since Wed 2021-02-10 03:59:26 AEST; 3ms ago
Docs: man:systemd-sysv-generator(8)
Process: 7414 ExecStart=/etc/init.d/trousers start (code=exited, status=30)
Feb 10 03:59:26 blueray-i5 systemd[1]: Starting LSB: starts tcsd...
Feb 10 03:59:26 blueray-i5 trousers[7414]: * Starting Trusted Computing daemon tcsd
Feb 10 03:59:26 blueray-i5 trousers[7414]: /etc/init.d/trousers: 32: [: /dev/tpm0: unexpected operator
Feb 10 03:59:26 blueray-i5 tcsd[7420]: TCSD TDDL[7420]: TrouSerS ioctl: (25) Inappropriate ioctl for device
Feb 10 03:59:26 blueray-i5 tcsd[7420]: TCSD TDDL[7420]: TrouSerS Falling back to Read/Write device support.
Feb 10 03:59:26 blueray-i5 tcsd[7420]: TCSD TCS[7420]: TrouSerS ERROR: TCS GetCapability failed with result = 0x1e
Feb 10 03:59:26 blueray-i5 trousers[7414]: ...fail!
Feb 10 03:59:26 blueray-i5 systemd[1]: trousers.service: Control process exited, code=exited, status=30/n/a
Feb 10 03:59:26 blueray-i5 systemd[1]: trousers.service: Failed with result 'exit-code'.
Feb 10 03:59:26 blueray-i5 systemd[1]: Failed to start LSB: starts tcsd.
dpkg: error processing package trousers (--configure):
installed trousers package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of tpm-tools:
tpm-tools depends on trousers; however:
Package trousers is not configured yet.
dpkg: error processing package tpm-tools (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
trousers
tpm-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
So, I tried to start trousers service. It is giving the following message:
% systemctl start trousers.service
Job for trousers.service failed because the control process exited with error code.
See "systemctl status trousers.service" and "journalctl -xe" for details.
% systemctl status trousers.service
● trousers.service - LSB: starts tcsd
Loaded: loaded (/etc/init.d/trousers; generated)
Active: failed (Result: exit-code) since Wed 2021-02-10 04:04:56 AEST; 23s ago
Docs: man:systemd-sysv-generator(8)
Process: 9114 ExecStart=/etc/init.d/trousers start (code=exited, status=30)
Feb 10 04:04:56 blueray-i5 systemd[1]: Starting LSB: starts tcsd...
Feb 10 04:04:56 blueray-i5 trousers[9114]: * Starting Trusted Computing daemon tcsd
Feb 10 04:04:56 blueray-i5 trousers[9114]: /etc/init.d/trousers: 32: [: /dev/tpm0: unexpected operator
Feb 10 04:04:56 blueray-i5 tcsd[9120]: TCSD TDDL[9120]: TrouSerS ioctl: (25) Inappropriate ioctl for device
Feb 10 04:04:56 blueray-i5 tcsd[9120]: TCSD TDDL[9120]: TrouSerS Falling back to Read/Write device support.
Feb 10 04:04:56 blueray-i5 tcsd[9120]: TCSD TCS[9120]: TrouSerS ERROR: TCS GetCapability failed with result = 0x1e
Feb 10 04:04:56 blueray-i5 trousers[9114]: ...fail!
Feb 10 04:04:56 blueray-i5 systemd[1]: trousers.service: Control process exited, code=exited, status=30/n/a
Feb 10 04:04:56 blueray-i5 systemd[1]: trousers.service: Failed with result 'exit-code'.
Feb 10 04:04:56 blueray-i5 systemd[1]: Failed to start LSB: starts tcsd.
What can I do?
|
Addressing the comment made by the OP here, in which they want to take the code here and rewrite it into a neater form.
I'm repeating the code here just in case it would disappear from the external site:
if [ ! -e /dev/tpmrm ]
then
log_warning_msg "device driver not loaded, skipping."
exit 0
fi
for tpm_dev in /dev/tpmrm; do
TPM_OWNER=$(stat -c %U $tpm_dev)
if [ "x$TPM_OWNER" != "xtss" ]
then
log_warning_msg "TPM device owner for $tpm_dev is not 'tss', this can cause problems."
fi
done
if [ ! -e /dev/tpm0 ]
then
log_warning_msg "device driver not loaded, skipping."
exit 0
fi
for tpm_dev in /dev/tpm0; do
TPM_OWNER=$(stat -c %U $tpm_dev)
if [ "x$TPM_OWNER" != "xtss" ]
then
log_warning_msg "TPM device owner for $tpm_dev is not 'tss', this can cause problems."
fi
done
Sorting out the formatting and rewriting it into a single loop:
for tpm_dev in /dev/tpmrm /dev/tpm0; do
if [ ! -e "$tpm_dev" ]; then
log_warning_msg "device driver not loaded, skipping."
continue
fi
TPM_OWNER=$(stat -c %U "$tpm_dev")
if [ "$TPM_OWNER" != "tss" ]; then
log_warning_msg "TPM device owner for $tpm_dev is not 'tss', this can cause problems."
fi
done
It's unclear whether the exit 0 that the original script contained should still be executed if a device file doesn't exist. I opted for using continue instead to skip ahead to the next device path (since the message said "skipping").
The only other things I changed were to remove the antiquated safe-guard with x in the second test and to add a set of missing double quotes.
Alternatively, without continue:
for tpm_dev in /dev/tpmrm /dev/tpm0; do
if [ -e "$tpm_dev" ]; then
TPM_OWNER=$(stat -c %U "$tpm_dev")
if [ "$TPM_OWNER" != "tss" ]; then
log_warning_msg "TPM device owner for $tpm_dev is not 'tss', this can cause problems."
fi
else
log_warning_msg "device driver not loaded, skipping."
fi
done
| can not start trousers service - giving error 'TrouSerS ioctl: (25) Inappropriate ioctl for device' |
1,485,910,569,000 |
System: Fedora 37, Gnome 43
I enabled LUKS encryption on setup and enabled auto-decrypt via TPM 2 by following an article from Fedora Magazine. Auto-decrypt works, but while it decrypts it shows the passphrase screen until the system boots. How can I hide this screen?
|
This worked for me. After reading up and working out what was asking for the password, I tried adding a sleep to the ask-password service. I tried a few values: with 8 seconds, the boot screen showed and then switched to the password prompt before continuing; with 15 seconds, the prompt didn't show up at all.
It also means that if the TPM decrypt fails, the password prompt will eventually show up.
To hide the password prompt on startup for 15 seconds (enough time for the auto-decrypt), edit systemd-ask-password-plymouth.service in /usr/lib/systemd/system and add:
[Service]
ExecStartPre=/bin/sleep 15
Don't forget to run dracut -f to rebuild the initramfs, and then rebind your disk to the TPM chip depending on which PCR IDs you used.
| I Have LUKS Enabled And Integrated With TPM 2. How To Hide Passphrase Screen? |
1,485,910,569,000 |
At reboot, with USB sticks inserted, the TPM will not allow passphraseless booting of the server. With a USB HDD inserted passphraseless booting of the server is possible.
Our servers are running Centos 8.3 with Linux kernel version 4.18.0-240.
TPM2 modules are used with LUKS version 1 encryption of an LVM group consisting of several partitions including / /home swap etc. LUKS header slot 0 is used for the passphrase, and slot 1 for the TPM.
The LUKS header slot 1 is bound to PCR values 0, 1, 2 and 3: the BIOS (0), BIOS configuration (1), Option ROM (2) and Option ROM configuration (3).
From what I read, the Option ROM consists of the firmware that is loaded during the POST boot process. If any changes occur from the state of the system when the TPM was signed, the TPM won't allow the system to boot without a passphrase. As USB sticks have firmware that might get loaded during the boot process, I initially thought that binding only to PCR values 0 and 1, i.e. without the Option ROM, would solve the problem. This did not work.
Any advice on why it won't boot from the TPM with a USB stick attached will be appreciated.
|
We have figured out which PCR value changes after a USB stick gets inserted and the system reboots. By not binding to that PCR value we managed to boot off the TPM with a USB stick inserted.
In our case PCR 1 (BIOS config) kept changing when a USB stick was inserted and the system rebooted.
The command we used to query the PCR values was tpm2_pcrread. This lists the PCR values in sha1 and sha256. We redirected the stdout of the PCR values to a file before and after the changes and using diff command we monitored the changes between each file.
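The workflow sketched above looks roughly like this (illustrative; tpm2_pcrread needs access to the TPM, so run it as root if necessary):

```shell
tpm2_pcrread > pcrs-before.txt
# insert the USB stick and reboot, then:
tpm2_pcrread > pcrs-after.txt
diff pcrs-before.txt pcrs-after.txt
```

Whichever PCR index shows up in the diff is the one to leave out of the binding.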
| Not booting off TPM with USB disk inserted |
1,485,910,569,000 |
I have this issue with latest fedora 35 beta.
Clevis encrypt does not work, although I can see the TPM being active in the logs. I tried the enable operation from the BIOS with no luck.
Please, see details here:
dmesg | grep -i tpm
[ 0.000000] efi: ACPI=0x45bfe000 ACPI 2.0=0x45bfe014 TPMFinalLog=0x45ac5000 SMBIOS=0x439e3000 SMBIOS 3.0=0x439e1000 MEMATTR=0x3f8dc018 ESRT=0x3f8ea298 MOKvar=0x3f8df000 RNG=0x439e4b18 TPMEventLog=0x39f43018
[ 0.008084] ACPI: SSDT 0x0000000045BE1000 00077B (v02 INSYDE Tpm2Tabl 00001000 INTL 20160422)
[ 0.008086] ACPI: TPM2 0x0000000045BE0000 00004C (v04 INSYDE TGL-ULT 00000002 ACPI 00040000)
[ 0.008128] ACPI: Reserving TPM2 table memory at [mem 0x45be0000-0x45be004b]
[ 1.192488] tpm_tis NTC0702:00: 2.0 TPM (device-id 0xFC, rev-id 1)
sudo echo hi | clevis encrypt tpm2 '{}' > my.jwe
Place your finger on the fingerprint reader
ERROR:tcti:src/tss2-tcti/tcti-device.c:442:Tss2_Tcti_Device_Init() Failed to open specified TCTI device file /dev/tpmrm0: Permission denied
ERROR:tcti:src/tss2-tcti/tctildr-dl.c:154:tcti_from_file() Could not initialize TCTI file: device
ERROR:tcti:src/tss2-tcti/tctildr.c:428:Tss2_TctiLdr_Initialize_Ex() Failed to instantiate TCTI
Error executing command: TPM error: response code not recognized
EDIT: Opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=2018978
|
Fixed with latest Fedora 35 Update from 04.11.2021
| TPM support does not work on Fedora 35 |
1,590,065,378,000 |
I recently needed a single webcam to be shared simultaneously by 3 applications (a web browser, a videoconferencing app, and ffmpeg to save the stream).
It's not possible to simply share the /dev/video* stream because as soon as one application is using it, the others cannot, and anything else will get a "device or resource busy" or equivalent.
So I turned to v4l2-loopback with the intention of mirroring the webcam to 3 loopbacks.
Using 3 loopbacks does work as expected, but what has really surprised me is it turns out I don't actually need 3 loopbacks, but only 1.
If I create a single loopback and feed it with ffmpeg, then the single mirrored loopback can be used by all 3 applications at the same time, with no "device or resource busy" issue.
So this is even better than I planned, and there is no practical problem I need help with.
But my question is, how is this possible with the loopback? And why not using the original source directly?
Example command to create the single loopback:
sudo modprobe v4l2loopback video_nr=30 exclusive_caps=1 card_label="loopback cam"
Example command using ffmpeg to mirror /dev/video5 to the loopback (/dev/video30). This will default to raw, but recent builds of ffmpeg can use an alternative stream like MJPEG, the behaviour is the same regardless:
ffmpeg -f v4l2 -i /dev/video5 -codec copy -f v4l2 /dev/video30
After doing this, try to access /dev/video30 with multiple applications, here are some examples:
ffmpeg -f v4l2 -i /dev/video30 -codec libx264 recordstream.mp4
ffplay -f video4linux2 -i /dev/video30
System info in case it's relevant:
Ubuntu 20.04
Kernel: 5.4.0-31-generic
package: v4l2loopback-dkms 0.12.3-1
|
It's by design. First, let it be said that multiple processes can open the /dev/video0 device, but only one of them will be able to issue certain controls (ioctl()) until streaming starts.
Those V4L2 controls define things like bitrate. After you start streaming, the kernel will not let you change them and returns EBUSY (Device or resource busy) if you try. See this note in the kernel source. That effectively blocks other consumers, since you should set those before you start streaming.
What does v4l2loopback do differently? It adds logic and data structures for multiple openers and by default will not try to apply new controls, by providing its own setter.
Note that v4l2loopback needs multiple openers, at least two to be useful: one reader and one writer.
| Why can multiple consumers access a *single* v4l2-loopback stream from a webcam |
1,590,065,378,000 |
I would like to join some videoconference, but I don't own a webcam and the conference software requires one.
So my question is, can I create a dummy one? I don't care what the cam will cast, I just need to appear to have one.
|
There is loopback device for that:
https://github.com/umlaeute/v4l2loopback
Just add device with modprobe and stream to it with ffmpeg or gstreamer whatever video you want, or anything else for that matter:
https://github.com/umlaeute/v4l2loopback/wiki
| How to create dummy webcam? |
1,590,065,378,000 |
I have a Logitech Webcam C930e on /dev/video0. I can use this for doing video conferences (e.g. jitsi). However, the video from this webcam is too high and too broad. I would like to have a "cropped" version of /dev/video0 that does not show the seaside picture on the wall.
First, I tried to set v4l2 options to achieve this, but did not succeed:
$ v4l2-ctl -d /dev/video0 --get-cropcap
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 640, Height 360
Default : Left 0, Top 0, Width 640, Height 360
Pixel Aspect: 1/1
$ v4l2-ctl -d /dev/video0 --get-selection target=crop_bounds
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 360, Flags:
$ v4l2-ctl -d /dev/video0 --set-selection target=crop_bounds,flags=crop,top=10,left=10,width=100,height=100
VIDIOC_S_SELECTION: failed: Inappropriate ioctl for device
After that, I followed another idea: I tried to use v4l2loopback to create another device /dev/video2. After that I would have tried to use ffmpeg to connect /dev/video0 to /dev/video2 (see https://github.com/umlaeute/v4l2loopback/wiki and https://video.stackexchange.com/questions/4563/how-can-i-crop-a-video-with-ffmpeg).
So now, I am out of ideas. Can someone give advice?
|
The lines below create a loopback video device /dev/video5. After that, ffmpeg is used to connect /dev/video0 to /dev/video5, cropping and horizontally flipping the stream on its way.
sudo apt-get install v4l2loopback-dkms
sudo modprobe v4l2loopback video_nr=5
ffmpeg -i /dev/video0 -f v4l2 -pix_fmt yuv420p -filter:v "hflip,crop=400:400:0:0" /dev/video5
| How to create a v4l2 device that is a cropped version of a webcam? |
1,590,065,378,000 |
I am trying to create a virtual webcam from OBS (26.1.1) so I can feed it to Zoom. I am on Linux Mint 20.1 Cinnamon, version 4.8.6, kernel 5.4.0-64-generic.
I did:
sudo apt-get install v4l2loopback-dkms
sudo apt-get install v4l2loopback-utils
but v4l2loopback is not showing up as an option in zoom
I went to the v4l2loopback github page, where it suggested I should build it from scratch and install it into my kernel. I tried to build from scratch and immediately ran into problems with the make command.
make -C /lib/modules/`uname -r`/build M=/home/berggren/Downloads/v4l2loopback-main modules
make[1]: Entering directory '/usr/src/linux-headers-5.4.0-64-generic'
Building modules, stage 2.
MODPOST 1 modules
make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-64-generic'
make -C utils
make[1]: Entering directory '/home/berggren/Downloads/v4l2loopback-main/utils'
cc -I.. v4l2loopback-ctl.c -o v4l2loopback-ctl
v4l2loopback-ctl.c:1:10: fatal error: sys/types.h: No such file or directory
1 | #include <sys/types.h>
| ^~~~~~~~~~~~~
compilation terminated.
make[1]: *** [<builtin>: v4l2loopback-ctl] Error 1
make[1]: Leaving directory '/home/berggren/Downloads/v4l2loopback-main/utils'
make: *** [Makefile:85: utils/v4l2loopback-ctl] Error 2
I didn't go further because I wasn't sure I was going in the right direction with that.
Could someone explain the correct procedure for installing v4l2loopback?
|
Installing v4l2loopback-dkms will install the modules on your system (at least: if all goes well), but it will not load the modules for you. So, you need to manually load the module with something like
modprobe v4l2loopback
In order for zoom to use the device, you will first have to attach OBS Studio to it.
You probably need to pass the exclusive_caps=1 option when loading the module, in order for zoom to recognise it.
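A sketch of reloading the module with that option (video_nr and card_label are arbitrary illustrative values):

```shell
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=10 card_label="OBS Virtual Camera"
```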
| How to best install v4l2loopback on Linux Mint? |
1,590,065,378,000 |
I run a debian 10 with kernel 5.9.0.0
I installed v4l2loopback from the official repo, as in sudo apt install v4l2*, which installed
sudo apt install v4l2*
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'v4l2loopback-source' for glob 'v4l2*'
Note, selecting 'v4l2ucp' for glob 'v4l2*'
Note, selecting 'v4l2loopback-dkms' for glob 'v4l2*'
Note, selecting 'v4l2loopback-modules' for glob 'v4l2*'
Note, selecting 'v4l2loopback-utils' for glob 'v4l2*'
I have linux-headers-5.9.0-0.bpo.2-amd64 installed, and
uname -a
Linux debian 5.9.0-0.bpo.2-amd64 #1 SMP Debian 5.9.6-1~bpo10+1 (2020-11-19) x86_64 GNU/Linux
When I try to modprobe for v4l2 though, this is what happens:
sudo modprobe v4l2loopback
modprobe: FATAL: Module v4l2loopback not found in directory /lib/modules/5.9.0-0.bpo.2-amd64
Folder exist, I can't see this module in it though. I tried purging v4l2, reinstalling, rebooting, nothing.
Any help?
Thanks!
EDIT: When trying to install them I actually got an error; here is the full output:
sudo apt install v4l2loopback-dkms v4l2loopback-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
v4l2loopback-dkms v4l2loopback-utils
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/54.6 kB of archives.
After this operation, 153 kB of additional disk space will be used.
Selecting previously unselected package v4l2loopback-dkms.
(Reading database ... 378603 files and directories currently installed.)
Preparing to unpack .../v4l2loopback-dkms_0.12.1-1_all.deb ...
Unpacking v4l2loopback-dkms (0.12.1-1) ...
Selecting previously unselected package v4l2loopback-utils.
Preparing to unpack .../v4l2loopback-utils_0.12.1-1_all.deb ...
Unpacking v4l2loopback-utils (0.12.1-1) ...
Setting up v4l2loopback-dkms (0.12.1-1) ...
Loading new v4l2loopback-0.12.1 DKMS files...
Building for 5.9.0-0.bpo.2-amd64
Building initial module for 5.9.0-0.bpo.2-amd64
Error! Bad return status for module build on kernel: 5.9.0-0.bpo.2-amd64 (x86_64)
Consult /var/lib/dkms/v4l2loopback/0.12.1/build/make.log for more information.
dpkg: error processing package v4l2loopback-dkms (--configure):
installed v4l2loopback-dkms package post-installation script subprocess returned error exit status 10
Setting up v4l2loopback-utils (0.12.1-1) ...
Processing triggers for man-db (2.8.5-2) ...
Errors were encountered while processing:
v4l2loopback-dkms
E: Sub-process /usr/bin/dpkg returned an error code (1)
The output is not very eloquent as to what the problem may be. I tried sudo dpkg --configure v4l2loopback-dkms but got the same error.
|
I'll answer my own question.
Since my kernel (5.9.0) was installed from buster-backports while v4l2loopback came from the plain buster repo, the two were out of sync.
I solved it by installing v4l2loopback from buster-backports as well, and it works fine.
| modprobe: FATAL: Module v4l2loopback not found in directory |
1,590,065,378,000 |
Im using OBS with v4l2sink and v4l2loopback to edit my video for a remote trainig.
The preview in OBS looks fine, but the video has some serious color shifts in any tool I use to display the v4l2loopback device I'm directing the sink to.
View from OBS:
View from Browser:
You can see that all colors have a green shadow about half the grid width.
Is there any setting that I could change to fix this?
The video format selected in the V4l2sinkProperties is YUV420 as all others result in "format not supported"
I run Ubuntu 20.04.1 LTS (Linux 5.4.0-42-generic x86_64). OBS Studio is 25.0.8 installed via apt. obs-v4l2sink and v4l2loopback are built and installed from the current GitHub sources.
|
After reinstalling everything, it works fine for me now. So I have no idea which setting I fiddled with to create this problem :/
| Why are my colors separated/shifted when using obs/v4l2sink/v4l2loopback? |
1,590,065,378,000 |
I've created a couple of v4l2loopback devices for use as virtual webcams, and have been able to get Chrome to recognize them via navigator.mediaDevices.enumerateDevices(). I've also been able to construct gstreamer pipelines to send video and image data to these virtual webcams. What I haven't been able to do is designate any of these devices as front-facing, as reported by InputDeviceInfo.getCapabilities(). Is this possible to do with v4l2loopback parameters? Is it possible by configuring my gstreamer pipeline somehow?
|
The generic v4l2 standard doesn't know anything about "front-facing" and "back-facing" or "side-facing" cameras.
Such an attribute mostly (only?) makes sense when it comes to smartphones.
It doesn't make sense for my good old analog camera nor my USB Webcam nor my builtin laptop webcam, all of which I've used with v4l2.
It doesn't make much sense for my panoramic camera either (as the front/back dichotomy is too coarse in this case), but that is not supported by v4l2 anyhow...
So:
v4l2 doesn't expose a standard property to communicate that camera orientation
as a consequence, v4l2loopback does neither
I've never seen any GStreamer stream that would expose the camera orientation either, but it seems there was discussion about such a thing (and obviously it was turned down due to the lack of a standardized source https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/520)
i guess the answer is: "no, not possible"
| how do I set properties of a v4l2loopback device and make them visible to my web browser? |
1,590,065,378,000 |
I'm trying to switch from one advertised resolution/framerate to a different one on the fly, preferably while other applications are consuming the v4l2loopback feed. As an example, I feed a 1920x1080 black screen video into /dev/video2, and then open it in vlc. This works fine:
$ ffmpeg -f lavfi -i color=c=black:s=1920x1080:r=25/1 -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2
$ ffmpeg -f v4l2 -list_formats all -i /dev/video2
ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 10.1.0 (GCC)
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avisynth --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
[video4linux2,v4l2 @ 0x55864f9b06c0] Raw : yuv420p : Planar YUV 4:2:0 : 1920x1080
/dev/video2: Immediate exit requested
However, killing the old feed and then streaming a different resolution into the device does not change the advertised capabilities, and just scrambles the screen in vlc.
$ ffmpeg -f lavfi -i color=c=black:s=1280x800:r=25/1 -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2
# The list_formats options are still the same (only 1920x1080)
# vlc shows a green instead of a black screen
Is it possible to change this on the fly?
|
as long as the device is open, its resolution (and format) are fixed.
so the answer to your question is: no, you can't change these settings on the fly.
(unless you quit all consumers (in your case: VLC) before starting the new ffmpeg; but that's not what i would call "on the fly")
this is a limitation of the V4L2 API, where you cannot signal a resolution/format change to any attached application.
there also seems to be an issue with ffmpeg, which should either
adjust the output frame-size if it cannot change it
or refuse to run
(it could also be a bug in the v4l2loopback module (not properly reporting back that the framesize didn't change), but with GStreamer it seems to work as expected, so I'm not sure about it)
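A back-of-the-envelope check (my own arithmetic, not taken from the module source) shows why the picture degrades: a consumer that opened the device at 1920x1080 keeps expecting frames of the original size, while the new producer writes smaller ones.

```shell
# YUV420 stores a full-resolution Y plane plus two chroma planes at
# quarter resolution each, i.e. w*h*3/2 bytes per frame.
old_frame=$(( 1920 * 1080 * 3 / 2 ))   # what VLC still expects
new_frame=$(( 1280 * 800 * 3 / 2 ))    # what the new ffmpeg writes

echo "consumer expects $old_frame bytes per frame"   # 3110400
echo "producer writes  $new_frame bytes per frame"   # 1536000
```

Since the byte layout no longer matches the geometry the consumer negotiated, it reinterprets the data, which typically shows up as a scrambled or solid-green picture (all-zero YUV data tends to decode to green).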
| How do I change the resolution/capabilities of a v4l2loopback device on the fly? |
1,590,065,378,000 |
Running: Arch Linux, using package "v4l2loopback-dkms"
Software I'm trying to get running:
https://github.com/fangfufu/Linux-Fake-Background-Webcam
This software uses v4l2loopback, which I've set up a few times successfully.
I have two files to load v4l2loopback on boot, with these contents:
The first file simply loads the module on boot.
/etc/modules-load.d/v4l2loopback.conf
Contents: v4l2loopback
This second file creates a dummy output device at /dev/video2
/etc/modprobe.d/linux-fake-background.conf
Contents: options v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"
However, I do not have a /dev/video2 which these files should create. The "video_nr=2" is what should make it map directly to /dev/video2
The module is loaded, trying to unload it results in this error:
$ sudo modprobe -r v4l2loopback
modprobe: FATAL: Module v4l2loopback is in use.
However, if I try to manually create a video output, it just simply hangs for hours with seemingly no progress or errors:
$ sudo modprobe v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"
Nothing happens, and I've let it sit there for over 60 minutes
I've been reading the README, they state that /sys/devices/virtual/video4linux should contain a list of the devices, but I don't even have the video4linux folder. I've tried reinstalling the v4l2loopback package to no avail.
I also already have my linux-headers installed.
I have tried rebooting.
|
After installing a kernel update to 5.18.2, everything was suddenly working. I'm assuming it was something with that version.
| v4l2loopback /dev/video2 Not being created |
1,590,065,378,000 |
I'm trying to build v4l2loopback by simply entering:
make
Building v4l2-loopback driver...
make -C /lib/modules/`uname -r`/build M=/home/user/dev/labs/v4l2loopback modules
make[1]: Entering directory '/usr/lib/modules/5.13.0-22-generic/build'
make[1]: *** No rule to make target 'modules'. Stop.
make[1]: Leaving directory '/usr/lib/modules/5.13.0-22-generic/build'
make: *** [Makefile:46: v4l2loopback.ko] Error 2
uname -r
5.13.0-22-generic
ls -l /usr/src/linux-headers-$(uname -r)
total 1772
drwxr-xr-x 3 root root 4096 Dec 20 00:26 arch
lrwxrwxrwx 1 root root 41 Nov 9 15:21 block -> ../linux-hwe-5.13-headers-5.13.0-22/block
lrwxrwxrwx 1 root root 41 Nov 9 15:21 certs -> ../linux-hwe-5.13-headers-5.13.0-22/certs
lrwxrwxrwx 1 root root 42 Nov 9 15:21 crypto -> ../linux-hwe-5.13-headers-5.13.0-22/crypto
lrwxrwxrwx 1 root root 49 Nov 9 15:21 Documentation -> ../linux-hwe-5.13-headers-5.13.0-22/Documentation
lrwxrwxrwx 1 root root 43 Nov 9 15:21 drivers -> ../linux-hwe-5.13-headers-5.13.0-22/drivers
lrwxrwxrwx 1 root root 38 Nov 9 15:21 fs -> ../linux-hwe-5.13-headers-5.13.0-22/fs
drwxr-xr-x 4 root root 4096 Dec 20 00:26 include
lrwxrwxrwx 1 root root 40 Nov 9 15:21 init -> ../linux-hwe-5.13-headers-5.13.0-22/init
lrwxrwxrwx 1 root root 39 Nov 9 15:21 ipc -> ../linux-hwe-5.13-headers-5.13.0-22/ipc
lrwxrwxrwx 1 root root 42 Nov 9 15:21 Kbuild -> ../linux-hwe-5.13-headers-5.13.0-22/Kbuild
lrwxrwxrwx 1 root root 43 Nov 9 15:21 Kconfig -> ../linux-hwe-5.13-headers-5.13.0-22/Kconfig
drwxr-xr-x 2 root root 4096 Dec 20 00:26 kernel
lrwxrwxrwx 1 root root 39 Nov 9 15:21 lib -> ../linux-hwe-5.13-headers-5.13.0-22/lib
lrwxrwxrwx 1 root root 44 Nov 9 15:21 Makefile -> ../linux-hwe-5.13-headers-5.13.0-22/Makefile
lrwxrwxrwx 1 root root 38 Nov 9 15:21 mm -> ../linux-hwe-5.13-headers-5.13.0-22/mm
-rw-r--r-- 1 root root 1783838 Nov 9 15:21 Module.symvers
lrwxrwxrwx 1 root root 39 Nov 9 15:21 net -> ../linux-hwe-5.13-headers-5.13.0-22/net
lrwxrwxrwx 1 root root 43 Nov 9 15:21 samples -> ../linux-hwe-5.13-headers-5.13.0-22/samples
drwxr-xr-x 7 root root 12288 Dec 20 00:26 scripts
lrwxrwxrwx 1 root root 44 Nov 9 15:21 security -> ../linux-hwe-5.13-headers-5.13.0-22/security
lrwxrwxrwx 1 root root 41 Nov 9 15:21 sound -> ../linux-hwe-5.13-headers-5.13.0-22/sound
drwxr-xr-x 4 root root 4096 Dec 20 00:26 tools
lrwxrwxrwx 1 root root 42 Nov 9 15:21 ubuntu -> ../linux-hwe-5.13-headers-5.13.0-22/ubuntu
lrwxrwxrwx 1 root root 39 Nov 9 15:21 usr -> ../linux-hwe-5.13-headers-5.13.0-22/usr
lrwxrwxrwx 1 root root 40 Nov 9 15:21 virt -> ../linux-hwe-5.13-headers-5.13.0-22/virt
Installed the headers with:
sudo apt install linux-headers-$(uname -r)
What else am I missing?
|
/lib/modules/$(uname -r)/build isn’t supposed to be a directory, it’s supposed to be a symbolic link to /usr/src/linux-headers-$(uname -r). If you fix that with
sudo rmdir "/lib/modules/$(uname -r)/build"
sudo ln -s "/usr/src/linux-headers-$(uname -r)" "/lib/modules/$(uname -r)/build"
your build should work.
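If you want to rehearse the rmdir-and-symlink repair before touching /lib/modules, the same pattern can be tried in a throwaway directory first (all paths below are invented for the demo):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/usr/src/linux-headers-demo"   # stand-in for the real headers
mkdir -p "$demo/lib/modules/demo/build"       # the wrong empty directory

rmdir "$demo/lib/modules/demo/build"          # rmdir only removes empty dirs
ln -s "$demo/usr/src/linux-headers-demo" "$demo/lib/modules/demo/build"

readlink "$demo/lib/modules/demo/build"       # now points at the headers dir
```

Note that rmdir refuses to remove a non-empty directory, which is a useful safety net here: if your real build directory isn't empty, it probably isn't the broken leftover this answer assumes.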
| Problem with make when building v4l2loopback |
1,590,065,378,000 |
I installed the v4l2loopback kernel module on my machine and enabled it with sudo modprobe v4l2loopback exclusive_caps=1.
I created a camera on /dev/video0 with the rust bindings and started piping a static image to the camera (command from the wiki):
sudo ffmpeg -loop 1 -re -i 60828015.jpg -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video0
I'm running this on a VPS with no desktop environment, so I quickly spun up a Docker container with noVNC and a desktop environment to test everything (sudo docker run --rm -it --device /dev/video0 --privileged -p 8090:8080 theasp/novnc). Everything looks fine and if I run ffplay /dev/video0, I can successfully view the image in the camera. The problem now is with other applications.
Cheese doesn't even detect the camera. Chromium detects the camera but cannot use it:
[15749:15755:1024/025614.520951:ERROR:v4l2_capture_delegate.cc(1138)] Dequeued v4l2 buffer contains invalid length (11441 bytes).
Not sure what I'm doing wrong and why Chromium cannot read from the camera.
kernel version: 5.4.0-164-generic
v4l2loopback version: commit 5bb9bed on the main branch (latest one right now)
module paramaters: exclusive_caps=1
FFmpeg pipe logs:
ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, image2, from 'input.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 1854 kb/s
Stream #0:0: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 360x287 [SAR 96:96 DAR 360:287], 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[swscaler @ 0x55b19c9fab80] deprecated pixel format used, make sure you did set range correctly
Output #0, video4linux2,v4l2, to '/dev/video0':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 360x287 [SAR 1:1 DAR 360:287], q=2-31, 30996 kb/s, 25 fps, 25 tbn, 25 tbc
Metadata:
encoder : Lavc58.54.100 rawvideo
frame= 2277 fps= 25 q=-0.0 size=N/A time=00:01:31.08 bitrate=N/A speed= 1x
|
From https://github.com/umlaeute/v4l2loopback/wiki/Faq
Depending on the color encoding, odd-sized frames can be problematic (eg YUV420p requires that the U and V planes are downsampled by a factor of 2, which works best if the width and height can be divided by 2). See also Issue #561
I changed the image size and the error disappeared.
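To see the mismatch concretely: in YUV420 the U and V planes are half the width and half the height of the Y plane, so the usual w*h*3/2 buffer size only works out consistently when both dimensions are even. A rough check for the 360x287 image above (my own arithmetic, not Chromium's actual validation code):

```shell
w=360; h=287
y=$(( w * h ))                    # full-resolution luma plane
cw=$(( (w + 1) / 2 ))             # chroma plane width, rounded up
ch=$(( (h + 1) / 2 ))             # chroma plane height, rounded up
padded=$(( y + 2 * cw * ch ))     # what a producer may actually emit
naive=$(( w * h * 3 / 2 ))        # what a strict consumer may expect

echo "padded planes: $padded"     # 155160
echo "naive w*h*3/2: $naive"      # 154980
# the two disagree whenever w or h is odd, so length checks can fail
```

With an even height (e.g. 288) the two formulas agree, which matches the observation that resizing the image makes the error disappear.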
| Chromium cannot read camera: Dequeued v4l2 buffer contains invalid length #561 |
1,590,065,378,000 |
I'm trying to replicate the following command in OSX:
sudo modprobe v4l2loopback video_nr=3 card_label="TestVideo"
So far I've tested the following advice here but I keep getting the error erroneous pipeline: no element "videotestsrc".
My first question is: are the two commands v4l2loopback and gst-launch-1.0 equivalent?
My second question is whether there any ways to use v4l2loopback in OSX?
|
My first question is: are the two commands v4l2loopback and gst-launch-1.0 equivalent?
no. they are not.
gst-launch-1.0 is a command that launches a GStreamer pipeline (as specified by its arguments). GStreamer is a generic, cross-platform media framework
v4l2loopback is not a command at all.
My second question is whether there any ways to use v4l2loopback in OSX?
no.
the v4l2loopback (which is an abbreviation for "video for linux (version 2) loopback") integrates tightly with the linux kernel.
the kernel is the core component of your operating system. a Linux kernel is not compatible with a Darwin kernel (which is what runs your OSX)
| Replicating v4l2loopback command in OSX Big Sur? |
1,364,380,590,000 |
On Linux, the openat syscall can be used to create files and to test for their existence. Speaking in terms of the C/C++ memory model, creating a file and verifying its existence creates a synchronizes-with relationship. What I need to know is whether these synchronizations are all sequentially-consistent with each other. (I certainly hope so, but I haven't actually seen this documented anywhere.)
For example, given processes p1 and p2, and paths A and B:
if p1 does this: create(A), then create(B)
and p2 does this: try to open(B), then try to open(A)
and no other processes interfere with A or B, is it possible for p2 to open B successfully but fail to find A?
If it makes a difference, we can assume all operations are within one filesystem.
|
Only for files in the same directory.
There are 6 rules:
read access. Locking rules: caller locks directory we are accessing.
The lock is taken shared.
object creation. Locking rules: same as above, but the lock is taken
exclusive.
object removal. Locking rules: caller locks parent, finds victim,
locks victim and calls the method. Locks are exclusive.
rename() that is not cross-directory. Locking rules: caller locks
the parent and finds source and target. In case of exchange (with
RENAME_EXCHANGE in flags argument) lock both. In any case,
if the target already exists, lock it. If the source is a non-directory,
lock it. If we need to lock both, lock them in inode pointer order.
Then call the method. All locks are exclusive.
NB: we might get away with locking the source (and target in exchange
case) shared.
link creation. Locking rules:
lock parent
check that source is not a directory
lock source
call the method.
All locks are exclusive.
cross-directory rename. The trickiest in the whole bunch. Locking
rules:
lock the filesystem
lock parents in "ancestors first" order.
find source and target.
if old parent is equal to or is a descendent of target fail with -ENOTEMPTY
if new parent is equal to or is a descendent of source fail with -ELOOP
If it's an exchange, lock both the source and the target.
If the target exists, lock it. If the source is a non-directory, lock it. If we need to lock both, do so in inode pointer order.
call the method.
All ->i_rwsem are taken exclusive. Again, we might get away with locking
the source (and target in exchange case) shared.
The rules above obviously guarantee that all directories that are going to be
read, modified or removed by method will be locked by caller.
Locking enforces linearizability, so operations on a single directory are totally ordered. However, read access (1), object creation (2), and object removal (3) don't take any broader locks than the directory lock, so there are no guarantees about ordering of directory operations in different directories; different observers may see the directories' linear histories interleaved in different ways.
| Is file creation totally ordered? |
1,364,380,590,000 |
I followed the steps here and compiled the kernel successfully in usermode:
https://btrfs.wiki.kernel.org/index.php/Debugging_Btrfs_with_GDB
But when I start ./linux in various ways it always gives me a very similar error:
pc@linux-94q0:~/linux-4.11-rc4> ./linux root=/mnt
Core dump limits :
soft - 0
hard - NONE
Checking that ptrace can change system call numbers...OK
Checking syscall emulation patch for ptrace...OK
Checking advanced syscall emulation patch for ptrace...OK
Checking environment variables for a tempdir...none found
Checking if /dev/shm is on tmpfs...OK
Checking PROT_EXEC mmap in /dev/shm...OK
Adding 33251328 bytes to physical memory to account for exec-shield gap
Linux version 4.11.0-rc4 (pc@linux-94q0) (gcc version 4.8.5 (SUSE Linux) ) #1 Fri Mar 31 12:40:07 CEST 2017
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16087
Kernel command line: root=/mnt
PID hash table entries: 256 (order: -1, 2048 bytes)
Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)
Memory: 26140K/65240K available (3518K kernel code, 770K rwdata, 948K rodata, 114K init, 195K bss, 39100K reserved, 0K cma-reserved)
NR_IRQS:15
clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns
Calibrating delay loop... 6966.47 BogoMIPS (lpj=34832384)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
Checking that host ptys support output SIGIO...Yes
Checking that host ptys support SIGIO on close...No, enabling workaround
devtmpfs: initialized
Using 2.6 host AIO
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
futex hash table entries: 256 (order: 0, 6144 bytes)
xor: measuring software checksum speed
8regs : 19749.600 MB/sec
8regs_prefetch: 17312.000 MB/sec
32regs : 18694.400 MB/sec
32regs_prefetch: 17317.600 MB/sec
xor: using function: 8regs (19749.600 MB/sec)
NET: Registered protocol family 16
raid6: int64x1 gen() 4139 MB/s
raid6: int64x1 xor() 2318 MB/s
raid6: int64x2 gen() 3758 MB/s
raid6: int64x2 xor() 2685 MB/s
raid6: int64x4 gen() 3413 MB/s
raid6: int64x4 xor() 2153 MB/s
raid6: int64x8 gen() 2865 MB/s
raid6: int64x8 xor() 1626 MB/s
raid6: using algorithm int64x1 gen() 4139 MB/s
raid6: .... xor() 2318 MB/s, rmw enabled
raid6: using intx1 recovery algorithm
clocksource: Switched to clocksource timer
VFS: Disk quotas dquot_6.6.0
VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
NET: Registered protocol family 2
TCP established hash table entries: 512 (order: 0, 4096 bytes)
TCP bind hash table entries: 512 (order: 0, 4096 bytes)
TCP: Hash tables configured (established 512 bind 512)
UDP hash table entries: 256 (order: 1, 8192 bytes)
UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
NET: Registered protocol family 1
console [stderr0] disabled
mconsole (version 2) initialized on /home/pc/.uml/y33GMV/mconsole
Checking host MADV_REMOVE support...OK
workingset: timestamp_bits=62 max_order=13 bucket_order=0
io scheduler noop registered
io scheduler deadline registered (default)
io scheduler mq-deadline registered
NET: Registered protocol family 17
Initialized stdio console driver
Console initialized on /dev/tty0
console [tty0] enabled
Initializing software serial port version 1
console [mc-1] enabled
Failed to initialize ubd device 0 :Couldn't determine size of device's file
Btrfs loaded, crc32c=crc32c-generic, debug=on
VFS: Cannot open root device "/mnt" or unknown-block(0,0): error -6
Please append a correct "root=" boot option; here are the available partitions:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
CPU: 0 PID: 1 Comm: swapper Not tainted 4.11.0-rc4 #1
Stack:
6381bd80 60066344 602a250a 62cab500
602a250a 600933ba 6381bd90 60297e6f
6381beb0 60092b41 6381be30 60380ea1
Call Trace:
[<600933ba>] ? printk+0x0/0x94
[<6001c4d8>] show_stack+0xfe/0x158
[<60066344>] ? dump_stack_print_info+0xe1/0xea
[<602a250a>] ? bust_spinlocks+0x0/0x4f
[<602a250a>] ? bust_spinlocks+0x0/0x4f
[<600933ba>] ? printk+0x0/0x94
[<60297e6f>] dump_stack+0x2a/0x2c
[<60092b41>] panic+0x173/0x322
[<60380ea1>] ? klist_next+0x0/0xa6
[<600929ce>] ? panic+0x0/0x322
[<600cac33>] ? kfree+0x0/0x8a
[<600f01da>] ? SyS_mount+0xae/0xc0
[<600933ba>] ? printk+0x0/0x94
[<600f012c>] ? SyS_mount+0x0/0xc0
[<60002378>] mount_block_root+0x356/0x374
[<6029e3f9>] ? strcpy+0x0/0x18
[<60002432>] mount_root+0x9c/0xa0
[<6029e543>] ? strncmp+0x0/0x25
[<60002614>] prepare_namespace+0x1de/0x238
[<600eb9d3>] ? SyS_dup+0x0/0x5e
[<60001ee1>] kernel_init_freeable+0x300/0x31b
[<600933ba>] ? printk+0x0/0x94
[<603835e9>] kernel_init+0x1c/0x14a
[<6001b140>] new_thread_handler+0x81/0xa3
Aborted (core dumped)
I have now tried everything I could think of to satisfy the ./linux root= option but nothing seems to work.
I created a root filesystem with https://buildroot.org/, passed that as .gz, .tar, .tar.gz, uncompressed folder
I put the contents of the buildroot.org into the btrfs loop device, then right clicked in the disks utility and created an .img file. Tried starting with that.
Of course I tried all the usual options I could think of, like ./linux root=/mnt, ./linux root=/dev/loop0
I don't know what else to try. Why is this not working?
I tried finding out what the -6 error code means, but it seems all the Linux kernel error codes are positive numbers.
https://gist.github.com/nullunar/4553641
I really don't know what else to do. I guess I could start reading up for hours and hours about what exactly the ubd stuff means, but I was really hoping somebody could just tell me what I need to pass on the command line, as my interest right now is only in debugging btrfs, not Linux in general.
|
From https://www.linux.com/news/how-run-linux-inside-linux-user-mode-linux :
./linux-2.6.19-rc5 ubda=FedoraCore5-x86-root_fs mem=128M
The ubda parameter in this command is giving the UML kernel the name of a file to use to create the virtual machine's /dev/ubda virtual block device, which will be its root filesystem.
| How do I run the usermode linux kernel? |
1,364,380,590,000 |
I study with details the structure of the VFS superblock and I noticed the field
struct hlist_head s_pins;
Even though I made an extensive search it was not possible to find info about this. I only found that this is defined and used in fs_pins.c and in functions as pin_insert & etc cetera, but there is no information about its usage and its role. In fact, I found a PIN control subsystem, but I don't know if this is the same since it seems to be associated to hardware pins and not to file systems.
|
These pins are used by the accounting subsystem: they ensure that acct_pin_kill is called when file systems are unmounted or remounted, so that accounting can take appropriate action. (Accounting writes information to a file, so it needs to know when that file will no longer be writable.)
Pins were intended as a more general-purpose way for code to be attached to mounts, but that ended up never quite getting there.
| Usage and role of s_pins field in the VFS superblock |
1,364,380,590,000 |
I have:
$ find 1 2 -printf '%i %p\n'
40011805 1
40011450 1/t
40011923 1/a
40014006 1/a/e
40011217 1/a/q
40011806 2
40011458 2/y
40011924 2/a
40013989 2/a/e
40013945 2/a/w
I want:
<inode> <path>
any 2
40011450 2/t
40011458 2/y
any 2/a
40014006 2/a/e
40011217 2/a/q
40013945 2/a/w
How do I do it?
|
Already answered.
Here is version adapted to this task:
D=$(readlink -f "2"); (cd "1" && find . -type f -print0 | cpio --pass-through --null --link --make-directories "$D") && rm -Rf 1
After this command I have exactly what I wanted:
$ find 1 2 -printf '%i %p\n'
find: `1': No such file or directory
40011806 2
40011450 2/t
40011458 2/y
40011924 2/a
40011217 2/a/q
40014006 2/a/e
40013945 2/a/w
Read notes about usage in the original answer (linked above).
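If cpio isn't available, the same merge can be sketched with find and ln (GNU find is assumed, since {} is substituted inside arguments); here rehearsed in a scratch directory with made-up file names:

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir -p 1/a 2/a
echo q > 1/a/q; echo e1 > 1/a/e
echo e2 > 2/a/e; echo w > 2/a/w

# recreate 1's directory tree under 2, then hard-link every
# file that 2 doesn't already have (existing files in 2 win)
(cd 1 && find . -type d -exec mkdir -p ../2/{} \;)
(cd 1 && find . -type f ! -exec test -e ../2/{} \; -exec ln {} ../2/{} \;)
rm -rf 1

find 2 -printf '%i %p\n'   # 2/a/q keeps the inode the file had under 1
```

Because ln creates hard links, no file data is copied; like the cpio variant, this only works within a single filesystem.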
| How do I merge (without copying) two directories? [duplicate] |
1,364,380,590,000 |
I'm attempting to help out a roommate who has installed Kubuntu on his Chromebook using Crouton (it's basically just a fancy chroot run within ChromeOS).
I helped him get the Docker daemon running, using some advice from this issue on the Docker github: https://github.com/docker/docker/issues/1863. That involved using the flag --storage-driver=vfs. AUFS tools are installed according to apt, but I guess there's some additional support that ChromeOS is lacking.
Anyways, the first pull he did failed because it filled the remainder of his SSD (about 8gb). I pulled the same image onto a blank Docker install on my laptop, and the entire /var/lib/docker directory consumed 1.2gb.
Is the fact that we're using vfs causing this? There's a literal order of magnitude difference in storage space used. I'm not overly familiar with Docker, but the other thought I had was that it uses system libraries when available but will pull anything not installed.
TL;DR - A Docker image takes up ~700 MB on my machine, over 8 GB on a friend's. We'd like to be able to pull one Docker image without resorting to external storage. Is there anything we can do?
|
I'm working through this same issue now. This happens because vfs is not a true union file system (like aufs), so every incremental layer in the image you restore is stored at its full size.
See this issue for more details:
https://github.com/docker/docker/issues/14040
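The blow-up can be simulated in a scratch directory: a union-style driver shares unchanged files between layers (hard links only approximate that here), while vfs stores a full copy per layer. The layer count and file size below are invented for the demo:

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir base
dd if=/dev/zero of=base/blob bs=1M count=5 status=none  # one 5 MB "layer" file

mkdir vfs aufs
for i in 1 2 3; do
    cp -r  base "vfs/layer$i"    # vfs-style: full copy per layer
    cp -rl base "aufs/layer$i"   # union-ish: hard links, data stored once
done

du -sh vfs aufs   # roughly 15M for vfs vs 5M for the hard-linked tree
```

With many layers the multiplier grows accordingly, which is consistent with the order-of-magnitude difference observed in the question.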
| Docker in Crouton - VFS consuming astronomical amounts of space |
1,364,380,590,000 |
[~]$ stat -c %i /
2
As you can see above, the inode of / is 2, but the "First inode" reported for /dev/sda2 is 11.
[~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 350G 67G 266G 21% /
tmpfs 12G 44M 12G 1% /dev/shm
[~]$ sudo tune2fs -l /dev/sda2 | grep 'First inode'
First inode: 11
Can any one help me to understand this difference?
|
The value in the superblock shown by tune2fs is the first inode number usable for new files, while the root directory must always exist when the file system is created.
The kernel’s Ext4 documentation lists the inode numbers which are used internally by file systems features.
| Why are the first inode of the `/` mounted partition and inode of `/` different? |
1,364,380,590,000 |
Today I read through a document about file systems (http://tekrants.me/2014/07/14/linux-file-system-write/), and the term "kernel page" is mentioned several times in the article. I am now quite confused about kernel memory versus user memory.
I understand that the kernel address space and each user address space are different, and that their virtual-to-physical mappings are independent of each other. Does that mean memory mapped into the kernel address space cannot be mapped into any user address space?
The article I mentioned above basically talks about the use of the page cache. When the operating system is asked to load data or code from disk into the page cache, where do the pages holding that data and code come from? And can those pages be accessed by user processes?
|
The kernel manages the memory, so kernel code has access to both kernel and user space. When talking about "kernel space" one usually means pages which are used exclusively by the kernel.
"User space" is not a single entity. Each process has its own address space, possibly partially overlapping with other processes.
The cache is governed by the kernel and cannot be accessed by user-space code. Of course, kernel can transfer pages from kernel space to user space if needed.
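One way to observe the page cache from user space (without accessing its contents) is /proc/meminfo, which reports how much file data the kernel currently caches; a rough sketch (the numbers will vary, and other activity on the machine also moves them):

```shell
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

# create and read a 20 MB file; its data lands in the page cache
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=20 status=none
cat /tmp/cache-demo > /dev/null

after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached moved by roughly $(( after - before )) kB"
rm -f /tmp/cache-demo
```

This only shows the cache's size; the cached pages themselves stay under kernel control, exactly as described above.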
| What is kernel memory and user memory? (question about the term kernel page and the page cache) |
1,364,380,590,000 |
In my GFS clusters I use the CDPN feature to have separate chrooted /dev/log directories on separate cluster nodes:
/home/ftpuser/foo:
lrwxrwxrwx 1 root root 18 Sep 26 2010 dev -> .sys/@hostname/dev
/home/ftpuser/foo/.sys:
drwx--x--x 3 root root 3864 Sep 26 2010 server1.example.com
drwx--x--x 3 root root 3864 Sep 26 2010 server2.example.com
drwx--x--x 3 root root 3864 Sep 26 2010 server3.example.com
/home/ftpuser/foo/.sys/server2.example.com:
drwx--x--x 2 root root 3864 Sep 25 09:34 dev
/home/ftpuser/foo/.sys/server2.example.com/dev:
srw-rw-rw- 1 root root 0 Sep 25 09:23 log
/home/ftpuser/foo/dev: (transparently picking 1 subdir depending on node name)
srw-rw-rw- 1 root root 0 Sep 25 09:23 log
I use this so the rsyslog daemons on the nodes don't interfere with each other. It works because @hostname in a path is replaced with the hostname of the host that interprets it, so different hosts get different directories. The clusters are active on all nodes simultaneously.
My questions:
Is there a way to get corresponding functionality on an NFS share?
Could it in theory be implemented in the linux kernel on all filesystems (via a mount option so it doesn't break stuff by default)?
This question is similar but not identical to this one: NFS file with same name but different content depending on host
|
I don't think CDPN exists for NFS, but you can achieve something roughly equivalent with basic tools. The limitation is that you have to put all your node-specific files in the same location (or at least you have to keep a list of locations), you can't use the @hostname feature anywhere you like.
Mount a local filesystem on all nodes at the same location, e.g. /local. On that filesystem, create a symbolic link whose target varies between nodes and points to the node-specific area of the remote filesystem. You don't need any local storage for that, it can be an in-memory filesystem; since it only needs to store one symbolic link, the overhead is tiny.
mount -t tmpfs -o noexec,nodev,nosuid,mode=755,nr_inodes=2,nr_blocks=2 local-redirect /local
ln -s "/nfs/.sys/$HOSTNAME" /local/storage
Use /local/storage where you would use .sys/@hostname in your example.
A different, Linux-specific approach is to make a bind mount on each node. Have an empty directory on the shared filesystem, and bind-mount @hostname to it after mounting the NFS filesystem.
mount --bind "/.nfs/sys/$HOSTNAME" /nfs/.sys/@hostname
| Is there a way to achieve context-dependent path names (CDPN) on NFS? |
1,364,380,590,000 |
I have an eMMC backed Linux 3.10 development device (Android) and I am trying to better understand when Linux is actually reading from the eMMC as opposed to the page cache. Specifically, I am interested in the ELF loading process and I cannot explain the following results.
I have placed an ELF file on the filesystem that is run by init and does not exit. The ELF file data resides in blocks 0x22d930-0x22da78 and I have modified the kernel to log any eMMC read accesses to those blocks. During boot, the log shows that the entire ELF file is read from the eMMC.
mmc read block: 0x22d930, num_blocks: 0x20
mmc read block: 0x22d9f0, num_blocks: 0x88
mmc read block: 0x22d950, num_blocks: 0xa0
This makes sense to me as I would expect the full ELF to be read from eMMC when init first fork/execve's my ELF (0x20 + 0x88 + 0xa0 = 0x22da78 - 0x22d930).
My confusion arises when I drop the page cache and kill my process. When I kill my process, init is configured to automatically restart the process through another fork/execve. By dropping the page cache, I would expect the full ELF to again be read from the eMMC. However, I only see partial eMMC read accesses after issuing the following command.
echo 3 > /proc/sys/vm/drop_caches && kill -9 <pid>
mmc read block: 0x22d960, num_blocks: 0x10
mmc read block: 0x22d988, num_blocks: 0x8
mmc read block: 0x22d9a0, num_blocks: 0x8
mmc read block: 0x22d9c0, num_blocks: 0x40
mmc read block: 0x22da40, num_blocks: 0x18
mmc read block: 0x22da60, num_blocks: 0x18
My process is successfully restarted with a new pid, but I cannot understand why Linux did not reread the full ELF from disk? Linux does not even reread the ELF header, which I know is the first file read done by execve.
Are parts of the original file mapping from my original process still in some cache/RAM? The following command only shows my process.
lsof | grep <ELF name>
I would appreciate any help in explaining this behavior and where my perceived logic is at fault.
|
The problem appears to be timing between init restarting the process and the page cache being dropped. Instead of issuing kill -9 <pid> && echo 3 > /proc/sys/vm/drop_caches, the proper sequence appears to be the following.
stop <service>
echo 3 > /proc/sys/vm/drop_caches
start <service>
Once the service is stopped, the file mapping is no longer locked in the page cache and can now be dropped. This forces Linux to reread the ELF from disk when the service is started again.
| Reading ELF file after dropping caches |
1,364,380,590,000 |
I'm learning Linux procfs, which utilizes a virtual file system where operations like open, read, write, and release are processed by functions registered to it.
I've left open and release as null pointers by mistake, and when I try to read the content of the file with Python code like:
with open("/proc/testfile", "r") as f:
content = f.read()
The program got stuck, and I see an error in the kernel dmesg that a null pointer was dereferenced, which is expected, as open points to NULL.
However, the cat command from GNU coreutils can do the job, giving me output like
$ cat /proc/testfile
testoutput
which means that cat has not invoked the open function but directly invokes read (write can also be done).
In my understanding, open() will return a file descriptor which is further be used to deal with files, and read() requires a file descriptor to continue.
How is this done inside cat?
|
You can use strace to see what system call some command is making:
$ strace cat /proc/version
[...]
openat(AT_FDCWD, "/proc/version", O_RDONLY) = 3
[...]
read(3, "Linux version 5.16.0-5-amd64 (de"..., 131072) = 181
write(1, "Linux version 5.16.0-5-amd64 (de"..., 181) = 181
[...]
$ strace python3 -c 'open("/proc/version")'
[...]
openat(AT_FDCWD, "/proc/version", O_RDONLY|O_CLOEXEC) = 3
[...]
Apart from that O_CLOEXEC flag, which should not have any bearing on your issue, both make the exact same system call on my system.
| How can `cat` read a file without a file descriptor? |
1,364,380,590,000 |
While studying VFS, this question popped into my head.
Is it okay to think of VFS as a module?
The reason why I thought that is that the VFS abstracts actual file management for kernel/user space. This seemed like something a device driver would do, and it got me thinking.
But then again, if the VFS is something that is statically compiled into the kernel, I guess it cannot be regarded as a module.
|
You don't specify which operating system you are asking about, but the answer is likely to be the same for all of the mainstream general purpose ones.
TL;DR: The VFS is not a module.
In general, the VFS is too integral to the basic functionality of the kernel to be able to be configured as an (optional) module. Everything to do with files and pathnames and mount points and filesystems is basically hooked in to the VFS. Every system call that takes a pathname or file descriptor, from open() to rename() to execve(), hooks in to the VFS. Without that last one you cannot, well... run any software.
There are operating systems that do not have a VFS or where the VFS is an optional component, but then those operating systems don't have the concept of files with names. Think microcontrollers, like the "operating system" in your digital thermostat.
| is VFS a module? |
1,364,380,590,000 |
Using debugfs -R 'stat <inode_nr>' /dev/sda1 returns a result with a field crtime, which I believe represents the creation date of the file pointed to by inode number inode_nr. I use this on an ext4 fs.
I know that the inode stores access_time, modification_time and change_time, but not the birth time of a file.
So my question is where is the creation time stored or how does the debugfs command retrieve it?
|
If the filesystem records file creation time (not all do), it's stored in the inode along with the rest of the file metadata like modification and change times. It can be retrieved with the fairly recently added statx(2) system call, in the stx_btime field of the struct statx that it populates. Note that there's no easy-to-use wrapper for it provided by glibc; you have to make the syscall directly.
debugfs probably examines the inode structures directly, though.
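If you just want to see the value from a shell, GNU coreutils stat exposes the same information (it uses statx() where available) — a hedged sketch; the %W format is GNU-specific and prints 0 on filesystems that don't record a birth time:

```shell
# Query a file's birth (creation) time with GNU stat.  %W prints the
# birth time as seconds since the epoch (0 if unknown); %w prints it
# human-readable.
f=$(mktemp)
birth=$(stat -c %W "$f")
echo "birth timestamp: $birth"
rm -f "$f"
```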
| Where is the file creation time (birth) stored in linux? |
1,364,380,590,000 |
As per my understanding, the kernel maintains 4 tables.
Per process FD table.
System wide open file table struct file
Inode (in-memory) table struct vnode
Inode (on-disk) table.
struct file has one field named struct file_operations f_ops; which contains FS-specific operations like ext2_read() and ext2_write().
struct vnode also has one field, struct vnodeops v_op;, which contains FS-specific operations too.
My question is why we have similar functionalities inside both? Or am I getting something wrong?
Are things different in Unix and Linux? Because I did not find struct vnode inside Linux's fs.h
Reference: https://www.usna.edu/Users/cs/wcbrown/courses/IC221/classes/L09/Class.html
Diagram (from "Unix internals new frontiers" book)
|
Okay, I have found the answer.
In previous versions of Unix like SVR4, struct file does not contain a file_operations field, and all operations (e.g. read, write, etc.) are contained in vnode->v_op.
However, in the case of Linux, struct file contains a file_operations field, which has functions like open, read, write, etc., and struct inode (similar to vnode) contains an inode_operations field, which has operations like lookup, link, unlink, symlink, rmdir, mkdir, rename, etc.
| struct file_operations vs struct vnodeops |
1,364,380,590,000 |
Yesterday I did a kernel update on a virtual machine. When I restarted the machine unfortunately I got the error Kernel panic - not syncing VFS
After some searching I found out I should go to GRUB and just select the old kernel and everything would be fine.
Well, I selected the old kernel but got the following error:
VFS: Cannot open root device "mapper/vg_cpanel-lv_root"
So after some digging I found this question somewhere else.
I went to rescue mode and ran vim /boot/grub/device.map; only one line appears:
(fd0) /dev/fd0 (nothing more).
I did an fdisk -l:
But there ends the train for me, unfortunately.
Following are two screenshots of my grub command:
I hope someone can point me in the right direction.
|
Well I did "solve" it. Still don't know how.
I contacted a sysadmin friend of mine who pointed me to this answer. I followed it but did not get past the second command.
I reset the system and somehow we got a third option in GRUB which did work (magic), so now the system is running again.
So boys and girls don't forget to backup your stuff
| Centos VFS cannot open root device |
1,447,153,121,000 |
I have a script including multiple commands. How can I group commands to run together? (I want to make several groups of commands. Within each group, the commands should run in parallel (at the same time). The groups should run sequentially, waiting for one group to finish before starting the next group.)
... i.e.
#!/bin/bash
command #1
command #2
command #3
command #4
command #5
command #6
command #7
command #8
command #9
command #10
how can I run every 3 commands together? I tried:
#!/bin/bash
{
command #1
command #2
command #3
} &
{
command #4
command #5
command #6
} &
{
command #7
command #8
command #9
}&
command #10
But this didn't work properly ( I want to run the groups of commands in parallel at the same time. Also I need to wait for the first group to finish before running the next group)
The script is exiting with an error message!
|
The commands within each group run in parallel, and the groups run sequentially, each group of parallel commands waiting for the previous group to finish before starting execution.
The following is a working example:
Assume 3 groups of commands as in the code below. In each group the three commands are started in the background with &.
The 3 commands will be started almost at the same time and run in parallel while the script waits for them to finish.
After all three commands in the third group exit, command 10 will execute.
$ cat command_groups.sh
#!/bin/sh
command() {
echo $1 start
sleep $(( $1 & 03 )) # keep the seconds value within 0-3
echo $1 complete
}
echo First Group:
command 1 &
command 2 &
command 3 &
wait
echo Second Group:
command 4 &
command 5 &
command 6 &
wait
echo Third Group:
command 7 &
command 8 &
command 9 &
wait
echo Not really a group, no need for background/wait:
command 10
$ sh command_groups.sh
First Group:
1 start
2 start
3 start
1 complete
2 complete
3 complete
Second Group:
4 start
5 start
6 start
4 complete
5 complete
6 complete
Third Group:
7 start
8 start
9 start
8 complete
9 complete
7 complete
Not really a group, no need for background/wait:
10 start
10 complete
$
| Run commands in parallel and wait for one group of commands to finish before starting the next |
1,447,153,121,000 |
Doesn't the bash shell already run the commands one by one and wait for the executed command to finish? So when and why do we need the wait command?
|
You use wait if you have launched tasks in background, e.g.
#!/bin/bash
task1 &
task2 &
task3 &
wait
echo done
In this example the script starts three background tasks. These will run concurrently in the background and the wait will wait for all three tasks to finish. Once wait returns, the script continues with processing the echo done.
As pointed out in a comment, wait can be given a job number (wait %3) or a pid (wait 1234). While it is easy (using jobs or ps) in interactive bash to find those, it might be more difficult in batch mode.
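For example, wait with an explicit PID also reports that particular job's exit status — a small sketch, where sh -c 'exit 3' stands in for any failing background command:

```shell
# Start one failing and one succeeding background job, then collect
# each one's exit status with an explicit wait <pid>.
sh -c 'exit 3' & pid1=$!
sleep 0 & pid2=$!
wait "$pid1"; status1=$?   # wait returns the waited-for job's exit status
wait "$pid2"; status2=$?
echo "statuses: $status1 $status2"
```

This prints `statuses: 3 0`: the shell remembers the status of each background job until it is collected.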
| When and why do we need the `wait` command on bash? |
1,447,153,121,000 |
I know this question has been already asked & answered, but the solution I found listens for space and enter:
while [ "$key" != '' ]; do
read -n1 -s -r key
done
Is there a way (in bash) to make a script that will wait only for the space bar?
|
I suggest to use only read -d ' ' key.
-d delim: continue until the first character of DELIM is read, rather
than newline
See: help read
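Since the one-liner is interactive, here is a hedged way to exercise it non-interactively: pipe some bytes in (the strings are arbitrary) and observe that read returns as soon as the first space arrives:

```shell
# Simulate keystrokes "h e l l o <space> ..." on stdin; read -d " "
# returns the moment the space is seen, with the preceding characters
# stored in $typed.
result=$(printf 'hello more' | bash -c 'read -r -d " " typed && echo "returned after: $typed"')
echo "$result"
```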
| Press SPACE to continue (not ENTER) |
1,447,153,121,000 |
If I have the following shell script
sleep 30s
And I hit Ctrl+C when the shell script is running, the sleep dies with it.
If I have the following shell script
sleep 30s &
wait
And I hit Ctrl+C when the shell script is running, the sleep continues on, and now has a parent of 1.
Why is that? Doesn't bash propagate Ctrl+C to all the children?
EDIT:
If I have the following script
/usr/bin/Xvfb :18.0 -ac -screen 0 1180x980x24 &
wait
where I am spawning a program, this time Ctrl+C on the main process kills the Xvfb process too.
So how/why is Xvfb different from sleep?
In the case of some processes I see that they get reaped by init, in some cases they die. Why does sleep get reaped by init? Why does Xvfb die?
|
tl;dr; the Xvfb process sets a signal handler for SIGINT and exits when it receives such a signal, but the sleep process doesn't, so it inherits the "ignore" state for SIGINT as it was set by the shell running the script before executing the sleep binary.
When a shell script is run, the job control is turned off, and background processes (the ones started with &) are simply run in the same process group, with SIGINT and SIGQUIT set to SIG_IGN (ignored) and with their stdin redirected from /dev/null.
This is required by the standard:
If job control is disabled (see the description of set -m) when the shell executes an asynchronous list, the commands in the list shall inherit from the shell a signal action of ignored (SIG_IGN) for the SIGINT and SIGQUIT signals.
If the signal disposition is set to SIG_IGN (ignore), that state will be inherited through fork() and execve():
Signals set to the default action (SIG_DFL) in the calling process
image shall be set to the default action in the new process image.
Except for SIGCHLD, signals set to be ignored (SIG_IGN) by the calling
process image shall be set to be ignored by the new process image.
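That inherited SIG_IGN disposition can be observed directly from a script — a sketch, where the short sleeps are just arbitrary safety margins for timing:

```shell
# In a non-interactive shell, an async child starts with SIGINT set to
# SIG_IGN, and sleep does not change it.
sleep 30 & pid=$!
sleep 0.2                        # give the child time to start
kill -INT "$pid"                 # ignored: SIG_IGN survived execve()
sleep 0.2
if kill -0 "$pid" 2>/dev/null; then survived=yes; else survived=no; fi
kill -TERM "$pid"                # SIGTERM was not ignored, so this kills it
wait "$pid"
status=$?                        # 128 + 15 (SIGTERM) = 143
echo "survived SIGINT: $survived, status after SIGTERM: $status"
```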
| sleep, wait and Ctrl+C propagation |
1,447,153,121,000 |
In this script, that pulls all git repositories:
#!/bin/bash
find / -type d -name .git 2>/dev/null |
while read gitFolder; do
if [[ $gitFolder == *"/Temp/"* ]]; then
continue;
fi
if [[ $gitFolder == *"/Trash/"* ]]; then
continue;
fi
if [[ $gitFolder == *"/opt/"* ]]; then
continue;
fi
parent=$(dirname $gitFolder);
echo "";
echo $parent;
(git -C $parent pull && echo "Got $parent") &
done
wait
echo "Got all"
the wait does not wait for all git pull subshells.
Why is it so and how can I fix it?
|
The issue is that the wait is run by the wrong shell process. In bash, each part of a pipeline is running in a separate subshell. The background tasks belong to the subshell executing the while loop. Moving the wait into that subshell would make it work as expected:
find ... |
{
while ...; do
...
( git -C ... && ... ) &
done
wait
}
echo 'done.'
You also have some unquoted variables.
I would get rid of the pipe entirely and instead run the loop from find directly, which gets rid of the need to parse the output from find.
find / -type d -name .git \
! -path '*/Temp/*' \
! -path '*/opt/*' \
! -path '*/Trash/*' \
-exec sh -c '
for gitpath do
git -C "$gitpath"/.. pull &
done
wait' sh {} +
Or, using -prune to avoid even entering any of the subdirectories we don't want to deal with,
find / \( -name Temp -o -name Trash -o -name opt \) -prune -o \
-type d -name .git -exec sh -c '
for gitpath do
git -C "$gitpath"/.. pull &
done
wait' sh {} +
As mentioned in comments, you could also use xargs to have greater control over the number of concurrently running git processes. The -P option (for specifying the number of concurrent tasks) used below is non-standard, as are -0 (for reading \0-delimited pathnames) and -r (for avoiding running the command when there's no input). GNU xargs and some other implementations of this utility have these options though. Also, the -print0 predicate of find (to output \0-delimited pathnames) is non-standard, but commonly implemented.
find / \( -name Temp -o -name Trash -o -name opt \) -prune -o \
-type d -name .git -print0 |
xargs -t -0r -P 4 -I {} git -C {}/.. pull
I'm sure GNU parallel could also be used in a similar way, but since this is not the main focus of this question I'm not pursuing that train of thought.
| Why wait in this script is not executed after all subshells? |
1,447,153,121,000 |
I can't find any documentation that explains my observations in sufficient detail. After I run the code below, I perform a kill -SIGINT $my_pid from a different shell. I will correctly see #### received trap 2 the first two times. However, the wait command gets interrupted by each signal. Why?
#!/bin/bash
for s in {0..64}
do
trap "echo '#### received trap $s'" $s
done
./code &
pid=$!
my_pid=$$
wait $pid
wait $pid
|
3.7.6 Signals
When Bash is waiting for an asynchronous command via the wait builtin, the reception of a signal for which a trap has been set will cause the wait builtin to return immediately with an exit status greater than 128, immediately after which the trap is executed.
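A common consequence is the retry-loop pattern: call wait again until the child has really exited — a sketch using SIGUSR1, where the sentinel value 129 just forces the first loop iteration:

```shell
# Each trapped signal makes wait return with a status > 128, so keep
# re-invoking wait while the child is still alive.
trap 'trapped=yes' USR1
sleep 1 & pid=$!
( sleep 0.3; kill -USR1 $$ ) &   # deliver a signal while we sit in wait
status=129                       # sentinel: run the loop body at least once
while [ "$status" -gt 128 ] && kill -0 "$pid" 2>/dev/null; do
  wait "$pid"
  status=$?                      # >128 if interrupted, real status otherwise
done
echo "trap ran: ${trapped-no}, final status: $status"
```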
| Why is the wait $pid command interrupted by any signal to the waiting process? |
1,447,153,121,000 |
#!/usr/bin/env bash
sleep 3 && echo '123' &
sleep 3 && echo '456' &
sleep 3 && echo '999' &
If I run this, and send SIGINT by pressing control-c via terminal, it seems to still echo the 123... output. I assumed this is because it's somehow detached?
However if I add a wait < <(jobs -p) (wait for all background jobs to finish) to the end of the script, if I run it then, and send the SIGINT then the 123... output is not displayed.
What explains this behavior? Is wait somehow intercepting the signal and passing it to the background processes? Or is it do with some sort of state of whether a process is "connected" or not to a terminal?
I found one possibly relevant question on this but I couldn't figure out how it relates to the above behaviour: Why SIGCHLD signal was not ignored when using wait() functions?
|
Signal interception? propagation? No
wait does not intercept the signal. The shell does not pass it. The signal is neither intercepted nor propagated. The subshells running your three "background" commands either get the signal directly or not.
You can test with the following script:
#!/usr/bin/env bash
printf 'My PID is %s.\n' "$$"
sleep 15 && echo '123' &
sleep 15 && echo '456' &
sleep 15 && echo '999' &
wait < <(jobs -p)
The script behaves like your script with wait. I mean you can terminate it and the three "background" jobs with Ctrl+C.
Run it again and don't hit Ctrl+C. The script will tell you its own PID ($$). Now if you send SIGINT from another terminal (kill -s INT <pid_here>) then it will terminate the script but not the three jobs.
But if you send SIGINT to the entire process group (kill -s INT -- -<pid_here>) then the script and the jobs will get it. The same happens when you hit Ctrl+C and your terminal is set to send an interrupt signal upon the keystroke (it normally is, stty -a includes intr = ^C), the entire group receives the signal.
Who gets the signal?
It's about the foreground process group. One of the tasks a shell does is informing the terminal which process group is in the foreground.
A terminal may have a foreground process group associated with it. This foreground process group plays a special role in handling signal-generating input characters […].
A command interpreter process supporting job control can allocate the terminal to different jobs, or process groups, by placing related processes in a single process group and associating this process group with the terminal. […]
(source)
This is what really happens in your case:
Initially you're in an interactive shell. The shell is the leader of its own process group (with process group ID equal to the PID of the shell). The terminal recognizes this process group as the foreground process group.
The said shell has job control enabled. (In general shells that support job control enable it by default when running interactively.) When you start the script (or basically any non-builtin), the shell makes it the leader of another process group. The terminal is informed the new process group should now be considered the foreground process group. The interactive shell puts itself in the background this way.
From the shebang we know it's bash interpreting the script. Non-interactive Bash starts with job control disabled by default. Commands run by it will not become leaders of their own process groups (unless the command itself changes its process group; it's possible but it doesn't happen in your case). The three subshells and the three sleep processes belong to the same process group as the running script.
When the script terminates, the terminal is informed the process group of the interactive shell should now be considered the foreground process group (again). If any process from the process group of the script is still running, it will no longer be in the foreground process group.
This explains the behavior you observed:
If the script is running when you hit Ctrl+C then it and basically all its descendants will receive SIGINT. It doesn't matter if the script waits because of wait or because of some command executed without &. What matters is its process group is still the foreground process group and the (grand)children belong to the group.
If the script is no longer running when you hit Ctrl+C then its descendants (if any) will not receive SIGINT because they do not belong to the current foreground process group.
Playing with job control
Note you can alter this behavior by disabling or enabling job control.
If you disable job control (set +m) in the interactive shell and run the script then the shell will run the script without making it the leader of a process group. There is no job control in the script. The script and basically all its children will belong to the process group of the interactive shell. This group will be the foreground process group the entire time. Upon Ctrl+C all the processes (including the interactive shell) will receive SIGINT, regardless whether the script is still running or not.
If you enable job control (set -m) in the script itself then the three subshells will be put in their respective process groups. In similar circumstances a command without & would become a process group leader and the terminal would be informed about the new foreground process group. But your commands are with &, they will become leaders but the foreground process group won't change. Upon Ctrl+C they won't receive SIGINT, regardless of whether the script is still running or not, and regardless if the interactive shell has job control enabled.
Notes
The meaning of & separator or terminator is often described as "run in background", but it's not equivalent to "run not in the foreground process group". Commands run with & can stay in the foreground process group (e.g. if job control is disabled). Commands run without & can leave the foreground process group. What you can be sure is & means "run asynchronously".
You may be surprised by the fact that an interactive shell puts itself in the background while running commands. This really happens. Run these commands in an interactive Bash:
set -m
trap 'echo "Signal received."' INT
sleep 999
Ctrl+C
You will see the sleep was interrupted but the shell did not receive the signal. This is because the shell had put sleep in a separate process group which was the foreground process group at the moment of the keystroke. The shell was not in the (then) foreground process group, this means it was in the background.
Now change set -m to set +m and run again:
set +m
trap 'echo "Signal received."' INT
sleep 999
Ctrl+C
With job control disabled, sleep will run in the process group of the shell. The group will be the foreground process group the entire time. You will see a message from the trap.
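You can watch the process groups themselves with ps — a sketch assuming a procps-style ps, with /tmp/pgdemo.sh as a throwaway path; the script runs in a fresh non-interactive bash so job control starts disabled:

```shell
cat > /tmp/pgdemo.sh <<'EOF'
sleep 2 & pid_off=$!             # job control off: shares the script's group
set -m                           # enable job control
sleep 2 & pid_on=$!              # now the job leads its own process group
set +m
my_pg=$(ps -o pgid= -p "$$")
pg_off=$(ps -o pgid= -p "$pid_off")
pg_on=$(ps -o pgid= -p "$pid_on")
kill "$pid_off" "$pid_on" 2>/dev/null
echo $my_pg $pg_off $pid_on $pg_on   # unquoted: squash ps padding
EOF
out=$(bash /tmp/pgdemo.sh)
echo "pgids (script, no-jc child, jc child's pid, jc child's pgid): $out"
```

The first two numbers match (without job control the child stays in the script's group), and the last two match (with job control the child's PGID equals its own PID).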
| What determines whether a script's background processes get a terminal's SIGINT signal? |
1,447,153,121,000 |
I know that if a subprocess does not get reaped properly, it will become a zombie, and you can see it with the ps command.
Also, the wait [pid] command will wait for a subshell running in the background until it finishes, and reap it.
I have a script like this:
#!/bin/bash
sleep 5 &
tail -f /dev/null
My question is: I don't use wait after sleep 5 &, and the parent shell will never terminate because of tail, so why doesn't sleep 5 & become a zombie? I see it disappear from ps after it finishes; I'm not sure who reaps it.
|
This is one of the pitfalls with having a shell mimic the OS API. It does create confusion. In this case you are confusing wait() (linux API) with wait (bash function).
Simply enough, Bash takes care of reaping processes for you, you don't generally need to think about this. The wait bash command has no effect on reaping processes.
When your child job terminates, bash will be informed via a SIGCHLD signal. When that happens, bash will stop whatever it was doing for a moment and reap the terminated child. It then goes back to whatever it was doing.
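A quick way to see this — a sketch; note that kill -0 would still succeed on an unreaped zombie, so "reaped" here really means bash collected the child:

```shell
# A child that exits while the shell is busy elsewhere gets reaped
# without any explicit wait.
sleep 0.2 & pid=$!
sleep 0.6                        # foreground work; the child exits meanwhile
if kill -0 "$pid" 2>/dev/null; then state=present; else state=reaped; fi
echo "background child is: $state"
```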
| zombie process reap without "wait" |
1,447,153,121,000 |
Bash 5.0 includes a new -f option for wait:[1]
j. The `wait' builtin now has a `-f' option, which signfies to wait until the
specified job or process terminates, instead of waiting until it changes
state.
What does wait -f $pid do as opposed to the default wait $pid? Under what conditions is the -f option needed?
|
The change description is accurate, but somewhat obscure since wait is generally thought of as waiting for a process to finish.
Try this:
sleep 60&
wait %1
then in another terminal,
kill -STOP ${pid}
replacing ${pid} with sleep’s pid (as output when it was put in the background). wait will exit, because the job’s state changed.
With -f, wait will wait for the job or process to really terminate; used above, it wouldn’t exit with kill -STOP, and would wait for the process to be resumed (kill -CONT) and finish running.
| What does the `-f' option do for `wait' versus the default behaviour? |
1,447,153,121,000 |
I've got an old Mac with 24 cores, and I'd like to run several hundred/thousands one-core jobs automatically. I've made a bash script that runs the processes in the background, but if I set too many going at once the computer freezes (apparently 300 is okay, 400 too much...).
Ideally, what I'd like to do is run 24, then when one finishes, the 25th, then when the next finishes, the 26th, and so on. Unfortunately each job can take a different, variable run time, so I can't use some kind of cron to set them going at staggered times.
I've seen some things with "wait", but I'm not sure if I sent 24 then, say, 976 with a wait command, would it give me the desired behaviour, or would it just run 976 in series after the first of the 24 finish?
EDIT: Thanks, this could very well be a duplicate, but as I see that question's answers only point towards parallel, can I please continue to explore here how to do it with xargs?
Reason for this, is that the Mac in question is currently on another continent and I absolutely need it to work for the next few days and run all these jobs - installing something always has the potential to mess up the machine, and so I don't want to install parallel at this point while I can't physically get to it. But it has xargs in bash, so I'm exploring using that.
Thus far, I've rewritten my bash script to meet what appears to be the situation expected by both xargs and parallel, that I can run it with a variety of input. So now, what I have is a bash script that runs my jobs on each file in a folder. I've currently tried:
ls -d myfolder/* | xargs -P 2 -L 1 ~/bin/myscript.sh
But this still seems to run them all simultaneously, thus I'm not sure what I've done wrong. (here I'm using max 2 just so I can keep looking and testing! I put only 4 in the folder - didn't want to send hundreds by accident)
FINAL EDIT: Ahah!!! MUCH later I figured out what I'd done wrong. xargs was likely running my script in parallel, but not the program I'd written the script to run. I wrote a script because I hadn't been able to figure out how to insert the filename into the arguments list, which expected parameter=value pairs. I eventually figured out how I could do this with the -I flag in xargs. This finally worked:
ls -d myfolder/* | xargs -I foo -P 2 -L 1 myprogram arg1 arg2 arg3=foo arg4
(I think -I and -L 1 are redundant, but as it works I'm not messing with it...)
Here, foo was replaced in the arguments list to myprogram with each filename. I note that one reason it took me ages to figure out is most instructions with -I use {} as the element to replace, and for some reason on my Macs it couldn't handle that. So I thought -I wasn't working, but it worked fine with foo.
|
I encountered a similar problem recently. As far as I know you have two options:
xargs -0 -P 24 -L 1
and
Gnu Parallel
For example, to convert every flac file found by the find command to ogg I tried running:
find -name "*.flac" -print0 | xargs -0 -P 24 -L 1 oggenc
This runs up to -P 24 processes at a time using -L 1 lines from the find command. I'm sure you can use this to customize it to your needs but we will need more details from your question.
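For a smaller, self-contained illustration of -P — a sketch with made-up job numbers; -I {} processes one input line per job, and xargs itself only returns after every job has finished:

```shell
# Run at most 3 of the 8 jobs concurrently; the output file is
# complete once xargs returns.
outfile=$(mktemp)
printf '%s\n' 1 2 3 4 5 6 7 8 |
  xargs -P 3 -I{} sh -c 'sleep 0.1; echo "job $1 done"' sh {} > "$outfile"
count=$(wc -l < "$outfile")
echo "completed $count jobs"
rm -f "$outfile"
```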
| queue-like behaviour for multiple one-core jobs on single machine? [duplicate] |
1,447,153,121,000 |
I'm currently reading M. Bach's "THE DESIGN OF THE UNIX® OPERATING SYSTEM".
I read about the main shell loop.
Look at the if (/* piping */) block. If I understood correctly, piping allows treating the 1st command's output as the 2nd command's input. If so, why isn't there code that makes the 2nd command wait for the 1st to terminate? Without this, piping seems nonsensical: the 2nd command can start executing without its input being ready.
|
the 2nd command can start executing without its input being ready.
It does. There's nothing wrong with that.
In a pipeline producer | consumer, the two sides run concurrently¹. The consumer does not wait for the producer to finish. It doesn't even care if the producer has started. All the consumer needs is a place to read input from. This place exists as soon as the pipe has been created by the pipe call.
Reading from a pipe is a blocking operation. If no data has been written to the pipe yet, the reader blocks. The reader will be unblocked when data is written to the pipe. More generally, the reader blocks if no data is available on the pipe. Reading data from the pipe consumes it. Therefore it doesn't matter whether the producer has started writing by the time the consumer starts reading. The consumer will just wait until the producer writes some data.
The consumer receives data as soon as it becomes available.² It typically reads and processes the data in chunks. Most consumers do not need to have all the data before they can start processing it. If the consumer does need to have all the data available, it'll store it in memory or in a temporary file and wait for the end of the input.
Since the producer and the consumer are separate processes, they are executed concurrently. The fact that one of them may be running does not prevent the other from running. If both the producer and the consumer want CPU time, the kernel will share the CPU between them (and between any other process that wants CPU time). So even while the consumer is initializing, or while it's processing some data, the producer can also run and produce more data.
¹ You can say they run in parallel. That's not technically correct, but close enough.
² In practice, the producer may buffer data internally. But as soon as the producer actually writes data to the pipe, the consumer can read it.
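A quick way to see this is a pipeline whose producer delays its output: the consumer starts immediately, blocks on the empty pipe, and unblocks as soon as data is written.

```shell
# The consumer runs right away, but its read from the pipe blocks
# until the producer writes; it never waits for the producer to finish.
{ sleep 1; echo "data"; } | { echo "consumer started before producer wrote"; cat; }
```

The first line is printed immediately; "data" follows about a second later, once the producer writes it.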
| Why doesn't the 2nd command wait for the output of the 1st (piping)? |
1,447,153,121,000 |
I'm running a script which automates snapraid for my NAS server. It's a script I found online and it was working without issue on Debian 9. I updated to Debian 10 last week and the script now hangs in places it wasn't on Debian 9. I think I've narrowed the issue down to the tee and wait commands.
I've shortened the script to the below snippet for testing and the issue is happening, this is why I think it's tee and wait.
#!/bin/bash
# location of the snapraid binary
SNAPRAID_BIN="/usr/local/bin/snapraid"
# redirect all output to screen and file
> $TMP_OUTPUT
exec 3>&1 4>&2
# NOTE: Not preferred format but valid: exec &> >(tee -ia "${TMP_OUTPUT}" )
exec > >(tee -a "${TMP_OUTPUT}") 2>&1
# run the snapraid DIFF command
echo "###SnapRAID DIFF [`date`]"
$SNAPRAID_BIN diff
# wait for the above cmd to finish
wait
echo
echo "DIFF finished [`date`]"
I ran htop also to see what was happening and the processes created from the snapraid commands just don't end.
wait is doing exactly what it should, wait, but why did this work before and not now?
Full script is here: https://pastebin.com/gJqnz875
|
The tee inside the process substitution will not exit until it gets an EOF on its stdin or some error happens.
And, since its stdin is a pipe, it will only get an EOF on its stdin when all the handles to its writing end are closed.
So you'll have to save the original stdout and stderr, and then, before the wait, redirect them back to the originals; dup'ing fds via new>&old causes the fd's previous target to be closed.
exec {out}>&1 {err}>&2
exec > >(tee -a output) 2>&1
...
exec >&$out 2>&$err
wait $(pgrep -P "$$")
Note that only in newer versions of bash does wait also wait for processes running in a > >(...) process substitution; that's why I used wait $(pgrep -P "$$") instead of simply wait (pgrep -P finds processes by their parent). Also see here for this and other pitfalls related to > >(...).
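Putting the pieces together, here is a self-contained sketch of the pattern (the log-file path here is arbitrary, chosen just for the demo):

```shell
#!/usr/bin/env bash
log=$(mktemp)                    # arbitrary log file for the demo

exec {out}>&1 {err}>&2           # save the original stdout/stderr
exec > >(tee -a "$log") 2>&1     # mirror all output into the log

echo "running under tee"

exec >&"$out" 2>&"$err"          # restore; this closes tee's stdin pipe
wait $(pgrep -P $$)              # wait for tee (and any other children)

echo "tee has exited; log contains: $(cat "$log")"
```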
| Script hanging when using tee and wait, why? |
1,447,153,121,000 |
I'd like to create a function that will wait until a job is completed before starting a new process. I'm aware of the wait command that is built into bash, but it only works for child processes. It will fail if I start in a different bash session. I wrote a bash function using kill -0 that takes a pid and waits until the given process completes. However, it doesn't seem to work. Say the pid of a particular job sleep 60 & is 4569, when I type ./pidwait.sh 4569, nothing prints out. What am I doing wrong?
#!/usr/bin/env bash
function pidwait()
{
while kill -0 "$1"; do
echo "Process $1 still running..."
sleep 1
done
echo 'Process $1 is done'
}
|
You are creating a function in your script but never using it. You could source that script and then call pidwait 4569 (no ./) or add a call to the function inside the script:
#!/usr/bin/env bash
function pidwait()
{
while kill -0 "$1" 2>/dev/null; do    # silence kill's error once the process is gone
echo "Process $1 still running..."
sleep 1
done
echo "Process $1 is done"
}
pidwait "$1"
Additionally the use of single quotes in your second echo command will cause the literal $1 to be printed out instead of the value of the first parameter.
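For reference, here is the fixed function run end to end as a self-contained example, using a short sleep as a stand-in for the watched process (and silencing kill's error output once the process is gone):

```shell
#!/usr/bin/env bash
pidwait()
{
    while kill -0 "$1" 2>/dev/null; do
        echo "Process $1 still running..."
        sleep 1
    done
    echo "Process $1 is done"
}

sleep 2 &                 # a short-lived stand-in for the real job
pidwait "$!"
```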
| Bash function that will wait for a process to be finished before starting a new one |
1,447,153,121,000 |
I wonder if I can get some help with a project I'm working on. I have a Synology NAS. I found a community package that autoruns a script of my creation any time a USB drive is plugged in. My script copies image and movie files from all USB drives/SanDisk cards listed in the script into a specific folder on the Synology. The autorun package runs the script every time each drive is plugged in. The problem is that if I plug in four USB drives one after the other within 15 seconds, it copies all four drives four times. Instead I want it to wait 15 seconds to allow me to plug in all USBs, and then copy all drives once.
My script is:
#!/bin/bash
#
var=$(date +"%FORMAT_STRING")
now=$(date +"%m_%d_%Y_%s")
printf "%s\n" $now
today=$(date +%m-%d-%Y-%s)
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB1/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB2/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB3/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB4/usbshare4-2/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
My goal is to have the script wait a given time (say 15 seconds), to allow me to plug in all four USBs. Then after 15 seconds, run the code one time. I guess I need to check if the code is already running for any of the USBs plugged in. Terminate the current script if so, copy files if not.
I found this; I'm wondering if I can tweak it and add it to mine to check if any other instances of my script are running, and terminate if so... or copy files if not:
if [ `ps -ef | grep "script.sh" | grep -v grep | wc -l` -gt 1 ] ; then
echo "RUNNING...."
else
echo "NOT RUNNING..."
fi
Any chance anyone would help with a solution?
Thanks in advance!
|
You wrote
today=$(date +%m-%d-%Y-%s)
...
It appears the answer could be as simple as
today=$(date +%m-%d-%Y-%s)
sleep 15
...
Consider putting those rsyncs inside a bash function named sync_all.
Then you could run everything twice -- immediately, and after a delay.
sync_all
sleep 15
sync_all
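A hypothetical sync_all built from the rsync commands in the question could look like this (the paths and file-type list are taken from the question; the -d test skips drives that aren't plugged in):

```shell
#!/usr/bin/env bash
sync_all() {
    local today dest src
    today=$(date +%m-%d-%Y-%s)
    dest="/volume1/KingstonSSD/Camera_Loads/Sandisk-${today}"
    for src in /volumeUSB1/usbshare /volumeUSB2/usbshare \
               /volumeUSB3/usbshare /volumeUSB4/usbshare4-2; do
        [ -d "$src" ] || continue   # skip drives that aren't mounted
        rsync -avz --prune-empty-dirs --include "*/" \
              --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} \
              --exclude="*" "$src/" "$dest"
    done
}
```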
The cool thing about rsync is it is "mostly free" to run it multiple times, since it will notice if a file has already been copied. You don't need to coordinate among rsyncs.
Suppose a pair of rsync processes are simultaneously trying to copy a giant ReadMe file from /here to /there. Do they both open /there/ReadMe? No! Instead they invent a new random filename like /there/.123 or /there/.456, copy into that, and at the end do an atomic rename() to mv the numeric filename to /there/ReadMe. They're both good copies, so it doesn't matter which one wins. There will be no file truncation or corruption, just a few more writes hitting the disk than absolutely necessary.
Certainly you can exclude other competing instances, if you like. The c-news shlock utility is a convenient way to do so. The first script to run will write a lock file which says "keep out!", and subsequent scripts will notice it and immediately exit, before attempting to do any work.
Asking ps "am I already running?" will give you a hint about existing background processes. But using a proper locking primitive will guarantee that at most a single participant is within the critical section at any instant. The ps approach will always be somewhat racy.
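If shlock isn't available on the NAS, the flock utility from util-linux gives the same guarantee. A minimal sketch (the lock-file path is an arbitrary choice):

```shell
#!/bin/bash
lock=/tmp/usb-copy.lock        # arbitrary lock path

exec 9>"$lock"                 # open the lock file on fd 9
if ! flock -n 9; then          # non-blocking: fail if someone holds the lock
    echo "Another copy is already running; exiting."
    exit 0
fi

sleep 15                       # give the remaining USB drives time to appear
# ... run the rsync commands here (e.g. the sync_all function) ...
```

The lock is held for the lifetime of fd 9, so it is released automatically when the script exits, even on a crash.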
| How to add file markers to check if script is already running |
1,447,153,121,000 |
In my tests, I always get the correct result so far with this:
[fabian@manjaro ~]$ sleep 10 & echo $!
[1] 302657
302657
But sleep and echo are getting executed simultaneously here, so I would expect that it can sometimes happen that echo executes before the value of $! is set properly. Can this happen? Why doesn't it so far for me?
My ultimate goal: Execute two tasks in parallel and then wait for both before moving on. The current plan is to use something like foo & bar; wait $!; baz. Will this always work or can it sometimes wait for an arbitrary older background process (or nothing at all, if $! is empty)?
|
So
sleep 10 & echo $!
is two commands
sleep 10 &
echo $!
That they're on the same line doesn't change this.
So the shell will fork() a new process and put the process ID of the new process into $!. Then the new process will run the sleep and the other process will run the echo.
So you can be sure $! will always hold the PID of the new process, even if that process fails; it's the result of the fork().
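Applied to the stated goal of running two tasks in parallel and then waiting for both, a sketch using hypothetical task functions:

```shell
#!/usr/bin/env bash
task_a() { sleep 1; echo "a done"; }   # stand-ins for foo and bar
task_b() { sleep 2; echo "b done"; }

task_a & pid_a=$!
task_b & pid_b=$!

wait "$pid_a" "$pid_b"   # or plain `wait` to wait for all children
echo "both finished"
```

Because $! is set by the fork() before the command on the next line runs, each pid_* assignment reliably captures the right child.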
| Will "$!" reliably return the correct ID with "&"? |
1,447,153,121,000 |
My question is about signals and handling them inside the operating system kernel.
I know that every process has its own signal_handler() table: a 31-bit array for signals (pending_signals). When a signal arrives, do_signal() is invoked, and it calls the relevant signal_handler() routine, which runs in user mode and not in kernel mode (why is that, by the way?).
Let's suppose a process got some signal, i.e. some bit in its signal array is on. Who writes to this array? (I guess it is the process that raised the signal — the process whose context we are currently in.) Therefore, the flow is as follows:
A raises a signal and writes it to B's signal array (before returning to user mode?). Then, in the same context (without switching to B), B's handler for this specific signal is invoked (when do we switch to user mode?), after which we return to A, check if it needs rescheduling, and continue…
The second thing is what happens when the signal is SIGCHLD; I suppose that it should occur somewhere in do_exit(), which is invoked by the child process.
And the last thing: how does waitpid(pid_t num) work?
How does the parent ignore all the SIGCHLD signals from its other children and care only about a specific child?
If there is a good source for reading about this stuff, that would be great (I didn't find one).
|
While reading your question, I noticed you said that the signal handler of B is run in the context of A. That does not sound correct; it would lead to a security loophole. Signal handlers are always called in the context of the owning process.
This then answers your 2nd part about SIGCHLD.
You also ask why signal handlers run in user space. This is because we don't want to allow processes to inject code into the kernel. If they could, they would be god of the system.
What process is the signal handler run in?
If a process calls exec, and the code that is loaded contains a signal_handler, and if this signal handler runs, then (like all the other code that was loaded) it is run in this process. You will not find code from one process, unexpectedly, running in another.
| how signals are handled in linux kernel [closed] |
1,447,153,121,000 |
I'm facing an unexpected behaviour of the wait builtin.
~
❯ sleep 1 &
[1] 72009
~
❯
[1] + 72009 done sleep 1
~
❯ wait 72009
~
❯ echo $?
0
Although the PID doesn't exist anymore wait still exits with zero exit status.
Questions
What is the reason for this behaviour?
How does wait work? What does it do behind the scenes?
|
Bash’s wait returns 0 in this case because it “remembers” that process 72009 was one of its children, and it remembers its children’s exit codes, and that exit code was 0. (The documentation is somewhat misleading here since it mentions “active” processes explicitly.)
Behind the scenes, wait determines whether a given process identifier corresponds to one of the shell’s children, perhaps in a job; if so, it checks whether the process is still running. If it’s still running, it waits for it to finish. Once it’s finished, it determines the corresponding exit code (which could be for the process only, or for the job as a whole), and returns that. There’s a lot of additional complexity to deal with signals correctly, process substitution, controlling terminals etc. but that’s not relevant here.
The exit code is remembered (at least) in the job table. You can see this in action by running two commands with different exit codes (false & and true &), and waiting for the respective process identifiers. As long as the job table isn’t cleared, wait will give the correct exit code. Run wait with no argument to remove finished jobs from the job table, and you’ll see that you can no longer retrieve the exit codes of the earlier jobs.
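You can watch the remembered exit codes directly; the sleep just makes sure both children have already exited before wait runs:

```shell
#!/usr/bin/env bash
false & pid_false=$!
true  & pid_true=$!
sleep 1                      # both background jobs have exited by now

wait "$pid_false"; echo "exit status of false: $?"
wait "$pid_true";  echo "exit status of true: $?"
```

Even though neither process exists any more, wait reports 1 for the false job and 0 for the true job, from the remembered statuses.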
| Wait command works when pid doesn't exists |
1,447,153,121,000 |
I searched a lot but didn't find a solution, so it may be a silly question.
The format of waitpid is
pid_t waitpid (pid_t pid, int *status, int options)
The pid parameter specifies exactly which process or processes to wait for. Its values fall into
four camps:
< -1
Wait for any child process whose process group ID is equal to the absolute value of this value.
-1
Wait for any child process. This is the same behavior as wait( ).
0
Wait for any child process that belongs to the same process group as the calling process.
> 0
Wait for any child process whose pid is exactly the value provided.
Now the question is: what if the parent and child have different group IDs and the group ID of the child is 1? How do we use waitpid for this specific child? We can't use -1, since that waits for any child.
|
You can only wait for children from your process.
If the child changes its process group ID, the new process group ID can be used as a negative number with waitpid().
BTW: the function waitpid() has been deprecated since 1989. The modern function is waitid(), and it supports what you'd like:
waitid(idtype, id, infop, opts)
idtype_t idtype;
id_t id;
siginfo_t *infop; /* Must be != NULL */
int opts;
If you like to wait for a process group, use:
waitid(P_PGID, pgid, infop, opts);
So if you really have a process under process group ID 1, call:
waitid(P_PGID, 1, infop, opts);
But since init already uses this process group id, you would need to be the init process in order to able to have children under pgid 1.
This, however, will not work if you are on a platform that does not implement waitid() as a syscall but as an emulation on top of the outdated waitpid().
The advantages of waitid() are:
allows you to cleanly specify what to wait for (e.g. P_PID, P_PGID, P_ALL)
returns all 32 bits from the exit(2) parameter in the child back to the parent process.
allows you to wait with the flag WNOWAIT, which does not reap the child and keeps it in the process table for later.
BTW: The siginfo_t pointer in waitid() is identical to the second parameter of the signal handler function for SIGCHLD.
| Use waitpid for child having groupid 1 |
1,628,980,198,000 |
I already have a /boot partition on a USB stick and a LUKS partition on my computer, which corresponds to a full-disk encryption scheme with Ubuntu 21.
I want to put the header of my LUKS partition onto the USB (either on the /boot partition or on another new partition on my USB)
I have put the header (with cryptsetup luksHeaderBackup) in boot_header.luks on my boot partition (let's say on device /dev/sda3); then, in the crypttab file with the header= option, I tried the following:
/boot/boot_header.luks
/dev/sda3/boot_header.luks
/dev/sda3:/boot_header.luks and also /boot_header.luks:/dev/sda3 (to be sure)
and the same with the uuid of /dev/sda3 and also with /dev/disk/by-uuid/[uuid]
So I thought the device sda3 wasn't mounted as it should be, according to the doc of crypttab (if I understand it correctly)
Optionally, the path [of the file containing the header] may be followed by ":" and an /etc/fstab
device specification (e.g. starting with "UUID=" or similar);
in which case, the path is relative to the device file system
root. The device gets mounted automatically for LUKS device
activation duration only.
So I looked for mounting the boot partition before the execution of the cryptroot script with a custom script in local-bottom and init-bottom. And also as suggested here I tried to incorporate the header in the initramfs following this answer
But the result at boot time is always the same:
wrong value for 'header' option
I found that it was quite feasible with Arch, but is there a way to do the same with Ubuntu (without modifying an existing script like cryptroot)?
|
As pointed out in the comments by @A.B the solution is a raw partition that contains the header instead of the header file inside a partition (which is a hassle due to the need to mount the filesystem first)
To copy the header (around 16 MB for LUKS2) to a partition (/dev/sdb, larger than the header size), there are two options.
The first one is to copy the raw header with dd.
First, you need to find the offset of the data (since the header always starts at 0). For a LUKS device /dev/sda4, use cryptsetup luksDump /dev/sda4 and look for the offset line in the Data segments section.
Then find the filesystem block size with stat -fc %s /dev/sda4.
Finally, dd if=/dev/sda4 of=/dev/sdb bs=<fs_block_size> count=<data_offset>
The second one is to pack all the header data into a backup file that will be copied to the partition /dev/sdb.
Because having a backup file can lead to some security issues even if it is saved on your encrypted disk, it's better to create a ramdisk just for that file.
mkdir /tmp/header_backup
mount -t tmpfs -o size=512m tmpfs /tmp/header_backup
cryptsetup luksHeaderBackup /dev/sda4 --header-backup-file /tmp/header_backup/header.luks
dd if=/tmp/header_backup/header.luks of=/dev/sdb
umount /tmp/header_backup
Then in /etc/crypttab add the option header=/dev/sdb to the corresponding line (e.g. sda4_crypt [UUID] none luks,discard,header=/dev/sdb)
To erase the old LUKS header: cryptsetup luksErase /dev/sda4. That only wipes the keyslots but keeps all the metadata. If you want to completely wipe the header (it's not necessary), you will need to put another filesystem header on the partition in order to keep a UUID on it. But be aware that completely wiping the LUKS header may not result in a secure erase, depending on your storage device (SSD or HDD). For an SSD it's possible that the deleted blocks will stay in a queue until the device needs to allocate more space.
Otherwise, to wipe the header: get the filesystem block size (<fs_block_size>) with stat -fc %s /dev/sda4, the LUKS data offset (<luks_data_offset>) with cryptsetup luksDump /dev/sdb, the UUID of the partition /dev/sda4 (<uuid_sda4>), and then:
dd if=/dev/urandom of=/dev/sda4 bs=<fs_block_size> count=<luks_data_offset>
truncate -s 16M fs.img    # create the image file first; mkfs.ext4 needs an existing file
mkfs.ext4 fs.img
tune2fs -U <uuid_sda4> fs.img
dd if=fs.img of=/dev/sda4
The partition /dev/sdb will have the same UUID as the LUKS one, which may be a problem. You can change it, without messing up the LUKS process, with cryptsetup luksUUID /dev/sdb --uuid $(uuidgen)
And finally update the initramfs with update-initramfs -u -k all
| Detached LUKS header (on USB) for an existing full-disk encryption device with Ubuntu |
1,628,980,198,000 |
I recently installed Antergos (which is basically Arch) and set it to use full-disk encryption. Now, I want to migrate from encrypt to sd-encrypt because I want to be able to hibernate and I couldn't put a swap partition in the same LUKS volume.
Background
During the setup:
I used LUKS for / partition and swap partition,
because my main SSD is small, I wanted to be able to hibernate and I have 32GB of RAM I created the encrypted swap partition on the second drive,
I mounted swap partition (as well as another encrypted EXT4 partition from the second drive) using /etc/crypttab.
I tested that installation works, grub let me boot into both linux and dual booted Windows, on Linux boot it decrypts and mounts both encrypted drives.
However, I was getting an error about not finding a disk with the UUID of the swap drive, and the Arch manual confirmed that the encrypt hook, which I got from the installer, can handle only one encrypted partition during boot. If I want to handle more of them, I should move to sd-encrypt. However, even after reading the documentation, I am not certain what I have to do in order to migrate to sd-encrypt.
Details
HOOKS="base udev autodetect modconf block keyboard keymap encrypt resume filesystems fsck"
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=[encrypted swap UUID]"
GRUB_CMDLINE_LINUX=cryptdevice=/dev/disk/by-uuid/[/ UUID]:Arch_crypt
GRUB_ENABLE_CRYPTODISK=y
/etc/crypttab
swap_crypt /dev/disk/by-uuid/[/ UUID] password_file luks
data_crypt /dev/disk/by-uuid/[/ UUID] password_file luks
What else should I do after I change encrypt to sd-encrypt in HOOKS? Do I have to create an /etc/crypttab.initramfs and move swap_crypt there? Do I have to change luks to rd.luks? Both the swap partition and the / partition use the same password, so according to the documentation both should be mounted on boot after I enter the password once; is that right? The documentation mentions luks.* and rd.luks.* params and similar. Do I have to use them, and if so, where should I put them?
|
I don't use Grub myself (but Arch and sd-encrypt), but from my kernel options I guess you would have to transform your configuration to look like the following (don't forget to back up your old configuration before switching).
HOOKS="base systemd autodetect modconf block keyboard sd-vconsole sd-encrypt resume filesystems fsck"
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=[decrypted swap UUID]"
# I use resume=/dev/mapper/name-of-decrypted-device
GRUB_CMDLINE_LINUX=luks.uuid=[/ encrypted UUID] luks.uuid=[swap encrypted UUID]
GRUB_ENABLE_CRYPTODISK=y
/etc/crypttab
swap_crypt /dev/disk/by-uuid/[/ UUID] password_file luks
data_crypt /dev/disk/by-uuid/[/ UUID] password_file luks
Don't forget to run mkinitcpio -p linux or the equivalent to regenerate your initramfs once the modification of the HOOKS has been done, and to regenerate the grub.cfg file with grub-mkconfig -o /boot/grub/grub.cfg or something similar.
| Migrate Arch from encrypt to sd-encrypt |
1,628,980,198,000 |
Are there any full-disk encryption schemes that can be done without an initramfs, instead getting the encryption key from the kernel cmdline? I know this sounds insecure, as an attacker could just read the bootloader files; but due to this device's boot process, I have to manually enter the cmdline at every boot.
I already compile my own kernels for this arm64 device, so custom kernel configuration options aren't a problem for me.
|
No.
Well, normally F.D.E. has to be in hardware (not Linux); otherwise, where does the kernel come from? Assuming you've resolved that (perhaps related to your suggestion of a less typical boot process)...
It is not possible to mount the root fs from a block device decrypted by a command-line option. Nor is it possible to mount ecryptfs as the root: you must have set up the backing filesystem for the ecryptfs before you can mount ecryptfs...
(Technically there is a hacky option rootdelay=. But there isn't a boot option to mount two rootfs on top of each other, and there isn't a boot option to decrypt a block device with any scheme).
Typically /proc/cmdline can be read by any userspace process, so Linux does not encourage putting secret keys in it. Reconciling such an idea with the security needs implied by F.D.E. is challenging, but perhaps there is some contrived circumstance...
It almost sounds as if you want to pass the kernel a blob of userspace code, which can construct the storage stack in any way you choose, even ways which kernel developers would not approve of :-). You could pass the blob at boot time. Or you could have an option to build it into the kernel. We could call it an initial ramfs, or initramfs for short. Good news! Someone already implemented this kernel feature for you.
The question doesn't say why this 9-letter word must not be spoken. Since you're doing a custom compile, you can always patch in whatever name you like :-P.
(This is the more generic option. Technically for your case, you might use an unencrypted partition to hold the same code, but it's generally less convenient).
It does not have to be as large as a distribution initramfs. For example a quick search found this as a plausible starting point:
https://gist.github.com/packz/4077532
and you can build a custom busybox which only enables the applets the initramfs needs. It depends on how well the static linking works, but I'd really hope that the initramfs would be smaller than a kernel.
| Linux full-disk encryption without initramfs |