date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,483,543,885,000 |
I am using the ARM Foundation Model to emulate an ARMv8 environment (I need both TrustZone and KVM), and I run a KVM-enabled Linux on it as the host OS. The kernel version is 3.14.0 (I cloned it following the guide on virtualopensystems).
However, when I try to run a guest OS, I run into problems (with both kvm-tools and qemu-system-aarch64). Here is what I encountered:
All the tools and the kernel source code are from the guide mentioned above, and I use the prebuilt binaries, but I compiled the kernel myself.
KVM Tools:
I use the prebuilt kvm tool found on the virtualopensystems website, lkvm-static.
When I boot the guest kernel, it hangs at bootconsole [earlycon0] disabled. I did some googling but didn't find much. Some say it might be because of a wrong device tree, but there is no dtb file in the guide.
qemu-system-aarch64
Again, I use the prebuilt binaries.
When I boot the guest kernel, it seemed 'normal' until it gave messages like the following:
hwclock: can't open '/dev/misc/rtc': No such file or directory
modprobe: can't change directory to '3.14.0': No such file or directory
openvt: can't open '/dev/tty1': No such file or directory
The complete QEMU log is at the end of this question.
Another thing worth mentioning: the .config file of the host OS is NOT the original one. The original .config has the KVM feature but lacks support for the DMA-related functions that I need for my own purposes.
So I replaced it with the .config from a 3.18-rc kernel. Of course, the two are not the same at all, and when I run make I have to answer some configuration prompts manually; I accept the defaults for all of them. After compilation the TrustZone feature works normally and /dev/kvm is also generated, but when I try to run a guest OS I hit the problems above.
Could anyone please give me some suggestions?
Thanks a lot.
Tgn Yang
=========================================================
Here is the output when using QEMU:
root@FVP:/data/qemu-kvm ./qemu-system-aarch64 --enable-kvm --nographic --kernel
Image --drive if=none,file=disk_oe64.img,id=fs --device virtio-blk-device,drive=fs -m 512 -M virt --cpu host --append "earlyprintk conosole=ttyAMA0 mem=512M rootwait root=/dev/vda rw"
Initializing cgroup subsys cpu
Linux version 3.14.0 (hamayun@hamayun-laptop) (gcc version 4.8.1 (Ubuntu/Linaro4.8.1-10ubuntu7) ) #1 SMP PREEMPT Tue Apr 29 15:37:35 CEST 2014
CPU: AArch64 Processor [410fd000] revision 0
No earlyprintk arguments passed.
Memory limited to 512MB
psci: probing function IDs from device-tree
PERCPU: Embedded 11 pages/cpu @ffffffc01ffe7000 s16128 r8192 d20736 u45056
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 129280
Kernel command line: earlyprintk conosole=ttyAMA0 mem=512M rootwait root=/dev/vda rw
PID hash table entries: 2048 (order: 2, 16384 bytes)
Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
software IO TLB [mem 0x5b000000-0x5f000000] (64MB) mapped at [ffffffc01b000000-ffffffc01effffff]
Memory: 436272K/524288K available (4122K kernel code, 357K rwdata, 1748K rodata, 243K init, 284K bss, 88016K reserved)
Virtual kernel memory layout:
vmalloc : 0xffffff8000000000 - 0xffffffbbffff0000 (245759 MB)
vmemmap : 0xffffffbc00e00000 - 0xffffffbc01500000 ( 7 MB)
modules : 0xffffffbffc000000 - 0xffffffc000000000 ( 64 MB)
memory : 0xffffffc000000000 - 0xffffffc020000000 ( 512 MB)
.init : 0xffffffc00063d000 - 0xffffffc000679f00 ( 244 kB)
.text : 0xffffffc000080000 - 0xffffffc00063ca14 ( 5875 kB)
.data : 0xffffffc00067a000 - 0xffffffc0006d37d0 ( 358 kB)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
Preemptible hierarchical RCU implementation.
RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=1.
RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
NR_IRQS:64 nr_irqs:64 0
Architected cp15 timer(s) running at 100.00MHz (virt).
sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 2748779069440ns
Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=1000000)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024 (order: 1, 8192 bytes)
Mountpoint-cache hash table entries: 1024 (order: 1, 8192 bytes)
hw perfevents: enabled with arm/armv8-pmuv3 PMU driver, 1 counters available
Brought up 1 CPUs
SMP: Total of 1 processors activated.
devtmpfs: initialized
atomic64 test passed
regulator-dummy: no parameters
NET: Registered protocol family 16
vdso: 2 pages (1 code, 1 data) at base ffffffc000681000
hw-breakpoint: found 16 breakpoint and 16 watchpoint registers.
Serial: AMBA PL011 UART driver
9000000.pl011: ttyAMA0 at MMIO 0x9000000 (irq = 33, base_baud = 0) is a PL011 rev1
console [ttyAMA0] enabled
bio: create slab <bio-0> at 0
SCSI subsystem initialized
Switched to clocksource arch_sys_counter
NET: Registered protocol family 2
TCP established hash table entries: 4096 (order: 3, 32768 bytes)
TCP bind hash table entries: 4096 (order: 4, 65536 bytes)
TCP: Hash tables configured (established 4096 bind 4096)
TCP: reno registered
UDP hash table entries: 256 (order: 1, 8192 bytes)
UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
kvm [1]: HYP mode not available
futex hash table entries: 256 (order: 2, 16384 bytes)
NFS: Registering the id_resolver key type
Key type id_resolver registered
Key type id_legacy registered
nfs4filelayout_init: NFSv4 File Layout Driver Registering...
fuse init (API version 7.22)
9p: Installing v9fs 9p2000 file system support
msgmni has been set to 852
io scheduler noop registered
io scheduler cfq registered (default)
Serial: AMBA driver
blk-mq: CPU -> queue map
CPU 0 -> Queue 0
vda: unknown partition table
TCP: cubic registered
NET: Registered protocol family 17
9pnet: Installing 9P2000 support
Key type dns_resolver registered
regulator-dummy: disabling
kjournald starting. Commit interval 5 seconds
EXT3-fs (vda): using internal journal
EXT3-fs (vda): mounted filesystem with writeback data mode
VFS: Mounted root (ext3 filesystem) on device 254:0.
devtmpfs: mounted
Freeing unused kernel memory: 240K (ffffffc00063d000 - ffffffc000679000)
INIT: version 2.88 booting
Mounting local filesystems...
Starting udev
udevd[384]: starting version 182
Activating swap
Starting Bootlog daemon: bootlogd.
Initializing random number generator...
random: dd urandom read with 5 bits of entropy available
udev-cache: checking for /dev/shm/udev-regen... found.
Populating dev cache
Populating volatile Filesystems.
Checking for -/run/lock-.
Creating directory -/run/lock-.
Checking for -/var/volatile/log-.
Creating directory -/var/volatile/log-.
Checking for -/var/volatile/tmp-.
Creating directory -/var/volatile/tmp-.
Target already exists. Skipping.
Checking for -/var/lock-.
Creating link -/var/lock- pointing to -/run/lock-.
Checking for -/var/log-.
Creating link -/var/log- pointing to -/var/volatile/log-.
Checking for -/var/run-.
Creating link -/var/run- pointing to -/run-.
Checking for -/var/tmp-.
Creating link -/var/tmp- pointing to -/var/volatile/tmp-.
Checking for -/tmp-.
Creating link -/tmp- pointing to -/var/tmp-.
Checking for -/var/lock/subsys-.
Creating directory -/var/lock/subsys-
Checking for -/var/log/wtmp-.
Creating file -/var/log/wtmp-.
Checking for -/var/run/utmp-.
Creating file -/var/run/utmp-.
Checking for -/etc/resolv.conf-.
Creating link -/etc/resolv.conf- pointing to -/var/run/resolv.conf-.
Checking for -/var/run/resolv.conf-.
Creating file -/var/run/resolv.conf-.
Checking for -/var/log/boot-.
Creating file -/var/log/boot-.
Target already exists. Skipping.
Checking for -/var/run/sepermit-.
Creating directory -/var/run/sepermit-.
Checking for -/var/run/sshd-.
Creating directory -/var/run/sshd-.
Checking for -/var/log/lastlog-.
Creating file -/var/log/lastlog-.
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
System time was Thu Jan 1 00:00:06 UTC 1970.
Setting the System Clock using the Hardware Clock as reference...
hwclock: can't open '/dev/misc/rtc': No such file or directory
System Clock set. System local time is now Thu Jan 1 00:00:06 UTC 1970.
Thu Jun 18 11:03:04 UTC 2015
Saving the System Clock time to the Hardware Clock...
hwclock: can't open '/dev/misc/rtc': No such file or directory
Hardware Clock updated to Thu Jun 18 11:03:04 UTC 2015.
INIT: Entering runlevel: 5
Configuring network interfaces... ifconfig: SIOCGIFFLAGS: No such device
Starting OpenBSD Secure Shell server: sshd
generating ssh RSA key...
generating ssh ECDSA key...
generating ssh DSA key...
generating ssh ED25519 key...
done.
Starting rpcbind daemon...done.
starting statd: done
System time was Thu Jun 18 11:03:36 UTC 2015.
Setting the System Clock using the Hardware Clock as reference...
hwclock: can't open '/dev/misc/rtc': No such file or directory
System Clock set. System local time is now Thu Jun 18 11:03:36 UTC 2015.
modprobe: can't change directory to '3.14.0': No such file or directory
NFS daemon support not enabled in kernel
Starting syslogd/klogd: done
openvt: can't open '/dev/tty1': No such file or directory
Starting auto-serial-console: done
Stopping Bootlog daemon: bootlogd.
INIT: no more processes left in this runlevel
|
It turned out to be a problem with the Image: I recompiled the kernel and then it was OK.
But there are still some problems with kvm-tools; it couldn't perform its ioctl initialization normally.
| Unable to boot a kvm guest via qemu/kvm-tools |
1,483,543,885,000 |
I am new to VyOS development. I have written a patch which fetches info from the VyOS kernel module and writes it to a netlink socket. But the problem is I am not sure whether I should edit the kernel module code directly to call my defined function or whether I should write a patch. If I have to make a patch file, where do I place it in the kernel source tree? I have already made a patch file using the diff command.
|
After a long search I solved the problem I was facing. Here are my conclusions in case any of you gets stuck on the same problem.
Yes, you can edit the kernel module code directly in VyOS development, but this method is not much appreciated.
Yes, you can write patches for kernel modules too, and they should be in git format as described in How to write a VyOS Patch. Soon I will update this answer with where to place the .patch file in the VyOS kernel code.
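As an illustration of the git patch format the VyOS guide asks for, here is a throwaway rehearsal; the repository, file name, and commit message below are all invented for the example:

```shell
# Build a one-commit repo and emit the commit as a git-format patch.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'base'
echo 'patched line' > "$repo/module.c"
git -C "$repo" add module.c
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
    commit -qm 'module: my change'
# format-patch produces one mail-style .patch per commit, in the
# format that `git am` (and patch queues built on it) can apply:
patch=$(git -C "$repo" format-patch -1 --stdout)
printf '%s\n' "$patch" | grep '^Subject:'
rm -rf "$repo"
```

The grep line prints the mail-style subject of the generated patch, which is what distinguishes a git-format patch from plain diff output.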
To check the debugging output using dmesg, use the KERN_DEBUG log level (I am not sure about the others):
printk(KERN_DEBUG "%s: Debugging info\n", __FUNCTION__);
Moreover, to check modifications to the VyOS kernel you don't need to build a complete ISO file every time. You just need to run the following commands. (Note: each path is given relative to the main ISO build directory to avoid path problems.)
cd build-iso/
sudo make clean-linux-image
sudo make linux-image
Then
cd build-iso/pkgs/
Here you will find these Debian packages:
build-iso/pkgs/linux-image-3.13.11-1-amd64-vyos_999.dev_amd64.deb
build-iso/pkgs/linux-libc-dev_999.dev_amd64.deb
build-iso/pkgs/linux-vyatta-kbuild_999.dev_amd64.deb
Copy these files to an already installed VyOS system and install them there:
dpkg -i linux-image-3.13.11-1-amd64-vyos_999.dev_amd64.deb
dpkg -i linux-libc-dev_999.dev_amd64.deb
dpkg -i linux-vyatta-kbuild_999.dev_amd64.deb
Reboot the system and check your modifications using dmesg.
| How to write a Patch for VyOS kernel |
1,483,543,885,000 |
I have recently started learning kernel module programming using the book The Linux Kernel Module Programming Guide,
and I don't understand how this makefile works:
obj-m += hello-1.o
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
Note that I understand the basics of makefiles, but this one seems complex.
At the very least, shouldn't make clean just be rm hello-1.ko?
|
Here is a breakdown of what is going on in this makefile:
There is a list of objects someplace, add "hello-1.o" to the list.
obj-m += hello-1.o
Create a target called all that has no prerequisites. The recipe for all changes into the build directory under the directory named after the kernel release this system is running (which lives in the modules directory under /lib). While doing so, it passes the present working directory, the one we ran make from, as the variable M. make then uses the makefile in the directory it changed into to build the target modules.
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
This one is almost the same and is left as an exercise for the reader:
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
The reason clean isn't just an rm is that the kernel has a lot of independent parts that all work together. Writing one monolithic makefile would be painful at best and really unmaintainable. Therefore each logical part of the build tree has its own makefile that can be called from the coordinating makefile. Makefiles can get pretty hairy, so it's best to keep them focused and maintainable.
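The -C/M= mechanics described above can be rehearsed without a kernel tree at all; this throwaway makefile (directory and paths invented for the demo) just echoes the variable it receives:

```shell
# Stand-in for the kernel's top-level makefile: one `modules` target
# that shows which M= value it was handed.
dir=$(mktemp -d)
printf 'modules:\n\t@echo "kbuild sees M=$(M)"\n' > "$dir/Makefile"
# -C enters $dir before reading its Makefile; M=... arrives there as a
# make variable, exactly how M=$(PWD) tells kbuild where your module is:
result=$(make -C "$dir" M=/home/user/hello modules)
echo "$result"
rm -r "$dir"
```

The output includes make's "Entering directory" chatter plus the echoed line, showing that the real kernel makefile, not yours, does the actual building.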
| make for linux kernel module |
1,483,543,885,000 |
Can upgrading the kernel manually on Ubuntu 14.04 cause trouble with NVIDIA drivers? (One such trouble could be booting to a black screen.)
I had a lot of trouble installing the NVIDIA graphics drivers and getting them to work; I had to reinstall Ubuntu 14.04 four times.
A few of the causes were installing the nvidia*.run file and booting to a black screen, or installing Intel Graphics for Linux and booting to a black screen, etc. I am tired of solving those problems.
I want to know if there could be any problems before I upgrade the Linux kernel. Any advice would help.
|
It is not always safe: a newer kernel version may fail to build the NVIDIA DKMS modules, in which case the NVIDIA drivers will not work when you boot that kernel.
As of 25 April 2015, I tried to install a Linux 4.x kernel, which failed to work with NVIDIA, so I had to remove it.
| Is it safe to upgrade kernel manually on a system which is using NVIDIA drivers? |
1,483,543,885,000 |
I'm running the Angstrom distribution (console only) on a BeagleBoard-xM. The image was built with Narcissus, with the bootloader files (x-load/u-boot/scripts) added.
I noticed that the /usr/src directory is empty. I intend to install the kernel source packages. For this reason, I downloaded Linux kernel 2.6.32 from https://www.kernel.org. What should my next steps be? I've been searching for days, but I haven't found anything.
Any help would really be appreciated.
|
Just untar it (tar xf kernel-sources.tar.whatever) into /usr/src, or anywhere else for that matter.
Just remember that if you want to use the symlinks /lib/modules/kernel/{build,source} that point to the kernel build and source tree, respectively, as created by make modules_install, you'll need to either keep the sources at the same place, or update the symlinks accordingly.
That said, if you are planning anything else than just one-off kernel compilation, clone Linus' or any other git tree and use that instead of the tarballs. It can be interesting even for "just" configuring the kernel, since you can get .config versioning for free.
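The symlink bookkeeping mentioned above can be sketched with throwaway directories; every path below is a stand-in for the real /usr/src and /lib/modules locations:

```shell
# Mimic the build/source links that make modules_install creates, then
# repair the link after the sources move, using ln -sfn to replace it
# in place rather than leaving it dangling.
root=$(mktemp -d)
mkdir -p "$root/usr/src/linux-2.6.32" "$root/lib/modules/2.6.32"
ln -s "$root/usr/src/linux-2.6.32" "$root/lib/modules/2.6.32/build"
# Sources relocated: refresh the link to the new location.
mv "$root/usr/src/linux-2.6.32" "$root/usr/src/linux-custom"
ln -sfn "$root/usr/src/linux-custom" "$root/lib/modules/2.6.32/build"
target=$(readlink "$root/lib/modules/2.6.32/build")
echo "$target"
rm -r "$root"
```

On a real system the same ln -sfn invocation, run as root against /lib/modules/&lt;release&gt;/build and source, is all the "update the symlinks accordingly" step amounts to.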
| How to install kernel sources |
1,483,543,885,000 |
In general what is supposed to be in the Module section of the Xorg.conf file in order to get maximum performance from a graphic card?
At the moment my Module section is empty... also because I didn't find any useful doc about modules.
...and if it matters, my graphic card is an ATI Radeon HD 5700 Series and I've already installed the Catalyst 12.6 drivers on my Ubuntu 12.04 box.
|
As far as I know, those days are over.
There are driver settings which are supposed to improve performance, in case Xorg does not use these settings anyway:
Option "ColorTiling" "1"
Option "EnablePageFlip" "1"
Option "AccelMethod" "EXA"
I don't know exactly what they do; I got them off some forum a long time ago. The first two are supposed to improve 3D performance, the last one 2D performance.
YMMV
| Xorg.conf, which modules should be loaded for better performance? |
1,483,543,885,000 |
From the answers to this question I have discovered that the embedded Linux distribution provided to me by my hardware supplier was not built with loadable module support. If I am to use this distribution I have to be able to install drivers for some CANBUS hardware that will be attached. The source for the drivers is provided by the CANBUS part manufacturer, but since the OS I have been given does not have gcc installed and does not support loadable modules, I have no idea how to continue.
Is there anything at all I can do to try get around this problem? The alternative is to use DOS as the OS on the embedded device which I am very keen to avoid so any potential solutions would be gratefully received.
|
By definition, if the kernel does not support loadable modules, you cannot load a module.
As you have already been told, there is something you can do: install a kernel compiled by someone else or recompile a kernel, with loadable modules and all the extra drivers you like.
I recommend that you first try installing an existing Linux distribution. This is a lot easier than compiling your own kernel, especially if you don't have enough technical information about exactly what hardware is in it.
You do not need to have GCC installed on the device to recompile a kernel. The kernel is designed to make cross-compilation easy. In fact, since your device has an x86 processor, all you need to do is compile a kernel with the right options on your PC.
Determining the right options can be difficult, and putting the kernel in the right place to be booted can be difficult. Feel free to ask on this site if you need help with those. In your question, be sure to give as much information as you can about your device.
| Installing a .ko module on an embedded Linux system that does not support modules |
1,483,543,885,000 |
I have RedHat Enterprise Linux Server 7, and I downloaded Linux kernel version 4.12.10, which I am trying to compile, but when I execute the following command:
make modules
I get the following error:
scripts/sign-file.c:25:30: fatal error: openssl/opensslv.h: No such file or directory
Does anyone have an idea how to fix this, please?
|
You must install the OpenSSL development files (package openssl-devel).
| How to fix: fatal error: openssl/opensslv.h: No such file or directory in RedHat 7 |
1,483,543,885,000 |
I am trying to compile Linux kernel 5.15.64 but it fails. I have the config and use make -j4 && sudo make modules_install -j4, but this is the error I get:
make[1]: *** [kernel/Makefile:160: kernel/kheaders_data.tar.xz] Error 127
make: *** [Makefile:1896: kernel] Error 2
What is going wrong in the process?
|
There is a bug report (https://bugs.gentoo.org/701678) with the same messages. It was caused by having CONFIG_IKHEADERS=m in the configuration while not having cpio available.
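A quick sanity check can be sketched against a stand-in .config (the real file sits at the top of your kernel source tree): if CONFIG_IKHEADERS is y or m, the kheaders build step will shell out to cpio.

```shell
# Mimic the failing configuration and test for the option that drags
# cpio into the build; the temp file stands in for the tree's .config.
cfg=$(mktemp)
echo 'CONFIG_IKHEADERS=m' > "$cfg"
if grep -q '^CONFIG_IKHEADERS=[ym]' "$cfg"; then
    msg="kheaders enabled: install cpio or set CONFIG_IKHEADERS=n"
else
    msg="kheaders disabled"
fi
echo "$msg"
rm "$cfg"
```

Either fix works: install cpio from your distribution, or disable the option if you don't need /sys/kernel/kheaders.tar.xz.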
| Error trying to compile kernel 5.15 |
1,457,991,018,000 |
Following these guides (https://github.com/ToadKing/wii-u-gc-adapter/blob/master/README.md and https://dolphin-emu.org/docs/guides/how-use-official-gc-controller-adapter-wii-u/#Linux), I've tried these steps:
sudo su; apt update && apt upgrade -y && apt dist-upgrade -y && apt install jstest-gtk -y
echo 'SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0337", MODE="0666"' > /etc/udev/rules.d/51-gcadapter.rules
exit
cd ~/Downloads
sudo apt install libusb-dev libudev-dev -y
sudo modprobe uinput
git clone https://github.com/ToadKing/wii-u-gc-adapter
cd ~/Downloads/wii-u-gc-adapter
make
sudo ./wii-u-gc-adapter
jstest-gtk
When I do lsusb, I get Bus 001 Device 011: ID 057e:0337 Nintendo Co., Ltd but nothing shows up in jstest-gtk.
I unplugged/replugged the adapter and nothing.
I'm totally stumped. Any advice? See any obvious mistakes I'm making?
|
What worked for me was returning the off-brand hardware and buying the official adapter made by Nintendo. The off-brand YTEAM adapter doesn't seem to work with Linux.
| How do you get wii-u-gc-adapter working on 64-bit Debian? |
1,457,991,018,000 |
So I'm compiling the 6.0.3 kernel on Debian 11, and I've been given the task of producing the smallest kernel possible that boots and has an Internet connection.
I find myself at a point where I've compiled the kernel 89 times in total, and my kernel has 599 static modules and 0 loadable modules.
I'm using the command make nconfig and I've searched high and low for the section to disable the GUI, but I can't find it. My OS still boots with a GUI, and I want to disable that because I'm sure I could remove a lot of modules that way and make my kernel even smaller.
Does anybody know which section of the menu has this option?
EDIT:
The task is finished and I've ended up with 533 static modules + 0 dynamic modules.
I literally can't remove any more modules; the GUI still works, and there is no section in the menu to disable it.
You were all right, thanks!
|
To build a minimal kernel you should use make tinyconfig instead of make nconfig.
To disable the graphical interface, use:
sudo systemctl set-default multi-user.target
to revert back:
sudo systemctl set-default graphical.target
But it doesn't make the kernel smaller.
| How to disable GUI when compiling the linux kernel? |
1,457,991,018,000 |
/rtlwifi_new# make && make install
make -C /lib/modules/4.12.0-kali2-amd64/build M=/root/rtlwifi_new modules
make[1]: *** /lib/modules/4.12.0-kali2-amd64/build: No such file or directory. Stop.
Makefile:58: recipe for target 'all' failed
make: *** [all] Error 2
I am trying to increase my WiFi signal strength using these commands:
git clone https://github.com/lwfinger/rtlwifi_new
cd rtlwifi_new
git pull
make
sudo make install
sudo modprobe -rv rtl8723be
sudo modprobe -v rtl8723be ant_sel=0
sudo iw dev wlan0 scan | grep signal
sudo modprobe -rv rtl8723be
sudo modprobe -v rtl8723be ant_sel=1
sudo iw dev wlan0 scan | grep signal
The make command worked successfully before, but then I reinstalled my Linux and now it fails with the above-mentioned error.
|
You need to install the development tools and the headers for your kernel:
apt install build-essential
apt install linux-headers-4.12.0-kali2-amd64
| make command not working |
1,457,991,018,000 |
I want to install Debian 9 on the aforementioned laptop, but I know from experience that it will not be easy (I have tried Ubuntu in the past but only Mint worked; I suppose there is some incompatibility with the graphics card, which is NVIDIA).
I found online (https://wiki.debian.org/InstallingDebianOn/Dell/Inspiron7559/stretch)
that I have to add the following boot parameter (modprobe.blacklist=nouveau), but I am not sure how to do it. Any ideas?
|
I'm not sure if this solves all the problems you may experience during installation, but it answers the question of how to start up with the parameter set.
Two steps are required. The first is right after you've installed the OS: when booting up, you need to add this parameter to the boot entry by hand. Then, once the OS is up, you change the GRUB config file so that the parameter is set automatically at every bootup.
Here's a good guide for Ubuntu, which should be the same for Debian: https://askubuntu.com/a/19487/582808
In a nutshell, the first step is to edit the boot entry and add modprobe.blacklist=nouveau. Then, once in Debian, add the same parameter to the line with GRUB_CMDLINE_LINUX_DEFAULT and run:
sudo update-grub
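The GRUB_CMDLINE_LINUX_DEFAULT edit can be rehearsed on a copy before touching the real file (which lives at /etc/default/grub, needs root, and must be followed by update-grub); the sed pattern below assumes the usual quoted-value layout:

```shell
# Append the blacklist parameter inside the existing quotes of
# GRUB_CMDLINE_LINUX_DEFAULT, demonstrated on a temp file.
grub=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$grub"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 modprobe.blacklist=nouveau"/' "$grub"
edited=$(cat "$grub")
echo "$edited"
rm "$grub"
```

Once you are happy with the result on the copy, run the same sed (or edit by hand) against /etc/default/grub as root and finish with sudo update-grub.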
| Installation of Debian 9 in Dell Inspiron 7559 |
1,457,991,018,000 |
I'm new to Linux, and I'm using the Clear Linux distribution.
I'm trying to connect my DSLR to use it as a webcam.
I'm trying to build and install this module, and it's not working.
This is my output:
$ make
Building v4l2-loopback driver...
make -C /lib/modules/`uname -r`/build M=/home/yukehi/הורדות/v4l2loopback-master modules
make[1]: *** /lib/modules/5.3.11-868.native/build: No such file or directory. Stop.
make: *** [Makefile:43: v4l2loopback.ko] Error 2
And this is the makefile, from v4l2loopback-master;
I don't understand what I need to change here:
KERNELRELEASE ?= `uname -r`
KERNEL_DIR ?= /lib/modules/$(KERNELRELEASE)/build
PWD := $(shell pwd)
obj-m := v4l2loopback.o
PREFIX ?= /usr/local
BINDIR = $(PREFIX)/bin
MANDIR = $(PREFIX)/share/man
MAN1DIR = $(MANDIR)/man1
INSTALL = install
INSTALL_PROGRAM = $(INSTALL) -p -m 755
INSTALL_DIR = $(INSTALL) -p -m 755 -d
INSTALL_DATA = $(INSTALL) -m 644
MODULE_OPTIONS = devices=2
|
"make this module to install", and
"how to make a module install"
could combine into make modules_install, which is a special make target. These other two targets are included in make (if in the linux source dir):
* vmlinux - Build the bare kernel
* modules - Build all modules
That means make modules would compile all the configured modules.
But do you mean that kind of "making a module"?
Maybe it is enough to modprobe v4l2loopback after you have installed the package. That would be inserting, or adding, a module into the kernel manually.
| how to make a module install in linux |
1,377,261,966,000 |
Prior to doing some benchmarking work how would one free up the memory (RAM) that the Linux Kernel is consuming for its buffers and cache?
Note that this is mostly useful for benchmarking. Emptying the buffers and cache reduces performance! If you're here because you thought that freeing buffers and cache was a positive thing, go and read Linux ate my RAM!. The short story: free memory is unused memory is wasted memory.
|
Emptying the buffers cache
If you ever want to empty it you can use this chain of commands.
# free && sync && echo 3 > /proc/sys/vm/drop_caches && free
total used free shared buffers cached
Mem: 1018916 980832 38084 0 46924 355764
-/+ buffers/cache: 578144 440772
Swap: 2064376 128 2064248
total used free shared buffers cached
Mem: 1018916 685008 333908 0 224 108252
-/+ buffers/cache: 576532 442384
Swap: 2064376 128 2064248
You can signal the Linux Kernel to drop various aspects of cached items by changing the numeric argument to the above command.
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
The above are meant to be run as root. If you're trying to do them using sudo then you'll need to change the syntax slightly to something like these:
$ sudo sh -c 'echo 1 >/proc/sys/vm/drop_caches'
$ sudo sh -c 'echo 2 >/proc/sys/vm/drop_caches'
$ sudo sh -c 'echo 3 >/proc/sys/vm/drop_caches'
NOTE: There's a more esoteric version of the above command if you're into that:
$ echo "echo 1 > /proc/sys/vm/drop_caches" | sudo sh
Why the change in syntax? The /bin/echo program is running as root, because of sudo, but the shell that's redirecting echo's output to the root-only file is still running as you. Your current shell does the redirection before sudo starts.
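A common alternative to the sh -c quoting is tee, since tee itself opens the target file and so inherits sudo's privileges; the temp file below stands in for the root-only /proc path so the demonstration runs without root:

```shell
# tee receives "3" on stdin and performs the write itself, which is
# why `echo 3 | sudo tee /proc/sys/vm/drop_caches` works where a
# plain `sudo echo 3 > /proc/sys/vm/drop_caches` does not.
f=$(mktemp)
echo 3 | tee "$f" > /dev/null
written=$(cat "$f")
echo "$written"
rm "$f"
```

On a real system the drop_caches write then becomes: echo 3 | sudo tee /proc/sys/vm/drop_caches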
Seeing what's in the buffers and cache
Take a look at linux-ftools if you'd like to analyze the contents of the buffers & cache. Specifically if you'd like to see what files are currently being cached.
fincore
With this tool you can see what files are being cached within a given directory.
fincore [options] files...
--pages=false Do not print pages
--summarize When comparing multiple files, print a summary report
--only-cached Only print stats for files that are actually in cache.
For example, in /var/lib/mysql/blogindex:
root@xxxxxx:/var/lib/mysql/blogindex# fincore --pages=false --summarize --only-cached *
stats for CLUSTER_LOG_2010_05_21.MYI: file size=93840384 , total pages=22910 , cached pages=1 , cached size=4096, cached perc=0.004365
stats for CLUSTER_LOG_2010_05_22.MYI: file size=417792 , total pages=102 , cached pages=1 , cached size=4096, cached perc=0.980392
stats for CLUSTER_LOG_2010_05_23.MYI: file size=826368 , total pages=201 , cached pages=1 , cached size=4096, cached perc=0.497512
stats for CLUSTER_LOG_2010_05_24.MYI: file size=192512 , total pages=47 , cached pages=1 , cached size=4096, cached perc=2.127660
stats for CLUSTER_LOG_2010_06_03.MYI: file size=345088 , total pages=84 , cached pages=43 , cached size=176128, cached perc=51.190476
stats for CLUSTER_LOG_2010_06_04.MYD: file size=1478552 , total pages=360 , cached pages=97 , cached size=397312, cached perc=26.944444
stats for CLUSTER_LOG_2010_06_04.MYI: file size=205824 , total pages=50 , cached pages=29 , cached size=118784, cached perc=58.000000
stats for COMMENT_CONTENT_2010_06_03.MYI: file size=100051968 , total pages=24426 , cached pages=10253 , cached size=41996288, cached perc=41.975764
stats for COMMENT_CONTENT_2010_06_04.MYD: file size=716369644 , total pages=174894 , cached pages=79821 , cached size=326946816, cached perc=45.639645
stats for COMMENT_CONTENT_2010_06_04.MYI: file size=56832000 , total pages=13875 , cached pages=5365 , cached size=21975040, cached perc=38.666667
stats for FEED_CONTENT_2010_06_03.MYI: file size=1001518080 , total pages=244511 , cached pages=98975 , cached size=405401600, cached perc=40.478751
stats for FEED_CONTENT_2010_06_04.MYD: file size=9206385684 , total pages=2247652 , cached pages=1018661 , cached size=4172435456, cached perc=45.321117
stats for FEED_CONTENT_2010_06_04.MYI: file size=638005248 , total pages=155763 , cached pages=52912 , cached size=216727552, cached perc=33.969556
stats for FEED_CONTENT_2010_06_04.frm: file size=9840 , total pages=2 , cached pages=3 , cached size=12288, cached perc=150.000000
stats for PERMALINK_CONTENT_2010_06_03.MYI: file size=1035290624 , total pages=252756 , cached pages=108563 , cached size=444674048, cached perc=42.951700
stats for PERMALINK_CONTENT_2010_06_04.MYD: file size=55619712720 , total pages=13579031 , cached pages=6590322 , cached size=26993958912, cached perc=48.533080
stats for PERMALINK_CONTENT_2010_06_04.MYI: file size=659397632 , total pages=160985 , cached pages=54304 , cached size=222429184, cached perc=33.732335
stats for PERMALINK_CONTENT_2010_06_04.frm: file size=10156 , total pages=2 , cached pages=3 , cached size=12288, cached perc=150.000000
---
total cached size: 32847278080
With the above output you can see that there are several *.MYD, *.MYI, and *.frm files that are currently being cached.
Swap
If you want to clear out your swap you can use the following commands.
$ free
total used free shared buffers cached
Mem: 7987492 7298164 689328 0 30416 457936
-/+ buffers/cache: 6809812 1177680
Swap: 5963772 609452 5354320
Then use this command to disable swap:
$ swapoff -a
You can confirm that it's now empty:
$ free
total used free shared buffers cached
Mem: 7987492 7777912 209580 0 39332 489864
-/+ buffers/cache: 7248716 738776
Swap: 0 0 0
And to re-enable it:
$ swapon -a
And now reconfirm with free:
$ free
total used free shared buffers cached
Mem: 7987492 7785572 201920 0 41556 491508
-/+ buffers/cache: 7252508 734984
Swap: 5963772 0 5963772
| How do you empty the buffers and cache on a Linux system? |
1,377,261,966,000 |
How should one reload udev rules so that newly created ones take effect?
I'm running Arch Linux, and I don't have a udevstart command here.
Also checked /etc/rc.d, no udev service there.
|
# udevadm control --reload-rules && udevadm trigger
| How to reload udev rules without reboot? |
1,377,261,966,000 |
I can't seem to find any information on this aside from "the CPU's MMU sends a signal" and "the kernel directs it to the offending program, terminating it".
I assumed that it probably sends the signal to the shell and the shell handles it by terminating the offending process and printing "Segmentation fault". So I tested that assumption by writing an extremely minimal shell I call crsh (crap shell). This shell does nothing except take user input and feed it to the system() function.
#include <stdio.h>
#include <stdlib.h>
int main(){
char cmdbuf[1000];
while (1){
printf("Crap Shell> ");
fgets(cmdbuf, 1000, stdin);
system(cmdbuf);
}
}
So I ran this shell in a bare terminal (without bash running underneath). Then I proceeded to run a program that produces a segfault. If my assumptions were correct, this would either a) crash crsh, closing the xterm, b) not print "Segmentation fault", or c) both.
braden@system ~/code/crsh/ $ xterm -e ./crsh
Crap Shell> ./segfault
Segmentation fault
Crap Shell> [still running]
Back to square one, I guess. I've just demonstrated that it's not the shell that does this, but the system underneath. How does "Segmentation fault" even get printed? "Who" is doing it? The kernel? Something else? How does the signal and all of its side effects propagate from the hardware to the eventual termination of the program?
|
All modern CPUs have the capacity to interrupt the currently-executing machine instruction. They save enough state (usually, but not always, on the stack) to make it possible to resume execution later, as if nothing had happened (the interrupted instruction will be restarted from scratch, usually). Then they start executing an interrupt handler, which is just more machine code, but placed at a special location so the CPU knows where it is in advance. Interrupt handlers are always part of the kernel of the operating system: the component that runs with the greatest privilege and is responsible for supervising execution of all the other components.1,2
Interrupts can be synchronous, meaning that they are triggered by the CPU itself as a direct response to something the currently-executing instruction did, or asynchronous, meaning that they happen at an unpredictable time because of an external event, like data arriving on the network port. Some people reserve the term "interrupt" for asynchronous interrupts, and call synchronous interrupts "traps", "faults", or "exceptions" instead, but those words all have other meanings so I'm going to stick with "synchronous interrupt".
Now, most modern operating systems have a notion of processes. At its most basic, this is a mechanism whereby the computer can run more than one program at the same time, but it is also a key aspect of how operating systems configure memory protection, which is a feature of most (but, alas, still not all) modern CPUs. It goes along with virtual memory, which is the ability to alter the mapping between memory addresses and actual locations in RAM. Memory protection allows the operating system to give each process its own private chunk of RAM, that only it can access. It also allows the operating system (acting on behalf of some process) to designate regions of RAM as read-only, executable, shared among a group of cooperating processes, etc. There will also be a chunk of memory that is only accessible by the kernel.3
As long as each process accesses memory only in the ways that the CPU is configured to allow, memory protection is invisible. When a process breaks the rules, the CPU will generate a synchronous interrupt, asking the kernel to sort things out. It regularly happens that the process didn't really break the rules, only the kernel needs to do some work before the process can be allowed to continue. For instance, if a page of a process's memory needs to be "evicted" to the swap file in order to free up space in RAM for something else, the kernel will mark that page inaccessible. The next time the process tries to use it, the CPU will generate a memory-protection interrupt; the kernel will retrieve the page from swap, put it back where it was, mark it accessible again, and resume execution.
But suppose that the process really did break the rules. It tried to access a page that has never had any RAM mapped to it, or it tried to execute a page that is marked as not containing machine code, or whatever. The family of operating systems generally known as "Unix" all use signals to deal with this situation.4 Signals are similar to interrupts, but they are generated by the kernel and fielded by processes, rather than being generated by the hardware and fielded by the kernel. Processes can define signal handlers in their own code, and tell the kernel where they are. Those signal handlers will then execute, interrupting the normal flow of control, when necessary. Signals all have a number and two names, one of which is a cryptic acronym and the other a slightly less cryptic phrase. The signal that's generated when a process breaks the memory-protection rules is (by convention) number 11, and its names are SIGSEGV and "Segmentation fault".5,6
An important difference between signals and interrupts is that there is a default behavior for every signal. If the operating system fails to define handlers for all interrupts, that is a bug in the OS, and the entire computer will crash when the CPU tries to invoke a missing handler. But processes are under no obligation to define signal handlers for all signals. If the kernel generates a signal for a process, and that signal has been left at its default behavior, the kernel will just go ahead and do whatever the default is and not bother the process. Most signals' default behaviors are either "do nothing" or "terminate this process and maybe also produce a core dump." SIGSEGV is one of the latter.
So, to recap, we have a process that broke the memory-protection rules. The CPU suspended the process and generated a synchronous interrupt. The kernel fielded that interrupt and generated a SIGSEGV signal for the process. Let's assume the process did not set up a signal handler for SIGSEGV, so the kernel carries out the default behavior, which is to terminate the process. This has all the same effects as the _exit system call: open files are closed, memory is deallocated, etc.
Up till this point nothing has printed out any messages that a human can see, and the shell (or, more generally, the parent process of the process that just got terminated) has not been involved at all. SIGSEGV goes to the process that broke the rules, not its parent. The next step in the sequence, though, is to notify the parent process that its child has been terminated. This can happen in several different ways, of which the simplest is when the parent is already waiting for this notification, using one of the wait system calls (wait, waitpid, wait4, etc). In that case, the kernel will just cause that system call to return, and supply the parent process with a code number called an exit status.7 The exit status informs the parent why the child process was terminated; in this case, it will learn that the child was terminated due to the default behavior of a SIGSEGV signal.
The parent process may then report the event to a human by printing a message; shell programs almost always do this. Your crsh doesn't include code to do that, but it happens anyway, because the C library routine system runs a full-featured shell, /bin/sh, "under the hood". crsh is the grandparent in this scenario; the parent-process notification is fielded by /bin/sh, which prints its usual message. Then /bin/sh itself exits, since it has nothing more to do, and the C library's implementation of system receives that exit notification. You can see that exit notification in your code, by inspecting the return value of system; but it won't tell you that the grandchild process died on a segfault, because that was consumed by the intermediate shell process.
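To recap the plumbing in one observable experiment: shells encode death-by-signal as an exit status of 128 plus the signal number, so a segfaulting child shows up as 139 (128 + 11). A quick sketch:

```shell
# Run a child shell that delivers SIGSEGV to itself.  The kernel
# applies the default action (terminate the child); the parent
# shell then reports exit status 128 + 11 = 139.
sh -c 'kill -s SEGV $$'
echo "exit status: $?"
```

This is also exactly the situation in which an interactive parent shell prints its "Segmentation fault" message after fielding the child-termination notification.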
Footnotes
Some operating systems don't implement device drivers as part of the kernel; however, all interrupt handlers still have to be part of the kernel, and so does the code that configures memory protection, because the hardware doesn't allow anything but the kernel to do these things.
There may be a program called a "hypervisor" or "virtual machine manager" that is even more privileged than the kernel, but for purposes of this answer it can be considered part of the hardware.
The kernel is a program, but it is not a process; it is more like a library. All processes execute parts of the kernel's code, from time to time, in addition to their own code. There may be a number of "kernel threads" that only execute kernel code, but they do not concern us here.
The one and only OS you are likely to have to deal with anymore that can't be considered an implementation of Unix is, of course, Windows. It does not use signals in this situation. (Indeed, it does not have signals; on Windows the <signal.h> interface is completely faked by the C library.) It uses something called "structured exception handling" instead.
Some memory-protection violations generate SIGBUS ("Bus error") instead of SIGSEGV. The line between the two is underspecified and varies from system to system. If you've written a program that defines a handler for SIGSEGV, it is probably a good idea to define the same handler for SIGBUS.
"Segmentation fault" was the name of the interrupt generated for memory-protection violations by one of the computers that ran the original Unix, probably the PDP-11. "Segmentation" is a type of memory protection, but nowadays the term "segmentation fault" refers generically to any sort of memory protection violation.
All the other ways the parent process might be notified of a child having terminated, end up with the parent calling wait and receiving an exit status. It's just that something else happens first.
| How does a Segmentation Fault work under-the-hood? |
1,377,261,966,000 |
I'm currently facing a problem on a Linux box where, as root, I have commands returning an error because the inotify watch limit has been reached.
# tail -f /var/log/messages
[...]
tail: cannot watch '/var/log/messages': No space left on device
# inotifywatch -v /var/log/messages
Establishing watches...
Failed to watch /var/log/messages; upper limit on inotify watches reached!
Please increase the amount of inotify watches allowed per user via '/proc/sys/fs/inotify/max_user_watches'.
I googled a bit and every solution I found is to increase the limit with:
sudo sysctl fs.inotify.max_user_watches=<some random high number>
But I was unable to find any information of the consequences of raising that value. I guess the default kernel value was set for a reason but it seems to be inadequate for particular usages. (e.g., when using Dropbox with a large number of folder, or software that monitors a lot of files)
So here are my questions:
Is it safe to raise that value and what would be the consequences of a too high value?
Is there a way to find out what are the currently set watches and which process set them to be able to determine if the reached limit is not caused by a faulty software?
|
Is it safe to raise that value and what would be the consequences of a too high value?
Yes, it's safe to raise that value and below are the possible costs [source]:
Each used inotify watch takes up 540 bytes (32-bit system), or 1 kB (double - on 64-bit) [sources: 1, 2]
This comes out of kernel memory, which is unswappable.
Assuming you set the max at 524288 and all were used (improbable), you'd be using approximately 256MB/512MB of 32-bit/64-bit kernel memory.
Note that your application will also use additional memory to keep track of the inotify handles, file/directory paths, etc. -- how much depends on its design.
To check the max number of inotify watches:
cat /proc/sys/fs/inotify/max_user_watches
To set max number of inotify watches
Temporarily:
Run sudo sysctl fs.inotify.max_user_watches= with your preferred value at the end.
Permanently (more detailed info):
put fs.inotify.max_user_watches=524288 into your sysctl settings. Depending on your system they might be in one of the following places:
Debian/RedHat: /etc/sysctl.conf
Arch: put a new file into /etc/sysctl.d/, e.g. /etc/sysctl.d/40-max-user-watches.conf
you may wish to reload the sysctl settings to avoid a reboot: sysctl -p (Debian/RedHat) or sysctl --system (Arch)
Check to see if the max number of inotify watches have been reached:
Use tail with the -f (follow) option on any old file, e.g. tail -f /var/log/dmesg:
- If all is well, it will show the last 10 lines and pause; abort with Ctrl-C
- If you are out of watches, it will fail with this somewhat cryptic error:
tail: cannot watch '/var/log/dmesg': No space left on device
To see what's using up inotify watches
find /proc/*/fd -lname anon_inode:inotify |
cut -d/ -f3 |
xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' |
uniq -c |
sort -nr
The first column indicates the number of inotify fds (not the number of watches though) and the second shows the PID of that process [sources: 1, 2].
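That pipeline counts inotify instances (file descriptors). On kernels new enough to expose one "inotify wd:" line per watch in /proc/PID/fdinfo/FD (roughly 2.6.31 onward; an assumption worth verifying on your system), you can also total the watches themselves, at least for processes you're allowed to inspect:

```shell
# Sum the "inotify wd:" lines across all readable fdinfo entries
# and compare against the configured per-user maximum.
max=$(cat /proc/sys/fs/inotify/max_user_watches)
used=$(cat /proc/[0-9]*/fdinfo/* 2>/dev/null | grep -c '^inotify')
echo "inotify watches in use: $used of $max"
```

Run as an unprivileged user this only sees your own processes; run it as root to cover everything.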
| Kernel inotify watch limit reached |
1,377,261,966,000 |
The Linux proc(5) man page tells me that /proc/$pid/mem “can be used to access the pages of a process's memory”. But a straightforward attempt to use it only gives me
$ cat /proc/$$/mem /proc/self/mem
cat: /proc/3065/mem: No such process
cat: /proc/self/mem: Input/output error
Why isn't cat able to print its own memory (/proc/self/mem)? And what is this strange “no such process” error when I try to print the shell's memory (/proc/$$/mem, obviously the process exists)? How can I read from /proc/$pid/mem, then?
|
/proc/$pid/maps
/proc/$pid/mem shows the contents of $pid's memory mapped the same way as in the process, i.e., the byte at offset x in the pseudo-file is the same as the byte at address x in the process. If an address is unmapped in the process, reading from the corresponding offset in the file returns EIO (Input/output error). For example, since the first page in a process is never mapped (so that dereferencing a NULL pointer fails cleanly rather than unintentionally accessing actual memory), reading the first byte of /proc/$pid/mem always yields an I/O error.
The way to find out what parts of the process memory are mapped is to read /proc/$pid/maps. This file contains one line per mapped region, looking like this:
08048000-08054000 r-xp 00000000 08:01 828061 /bin/cat
08c9b000-08cbc000 rw-p 00000000 00:00 0 [heap]
The first two numbers are the boundaries of the region (addresses of the first byte and the byte after last, in hexadecimal). The next column contains the permissions, then there is some information about the file (offset, device, inode and name) if this is a file mapping. See the proc(5) man page or Understanding Linux /proc/id/maps for more information.
Here's a proof-of-concept script that dumps the contents of its own memory.
#! /usr/bin/env python
import re
maps_file = open("/proc/self/maps", 'r')
mem_file = open("/proc/self/mem", 'rb', 0)
output_file = open("self.dump", 'wb')
for line in maps_file.readlines():  # for each mapped region
    m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line)
    if m.group(3) == 'r':  # if this is a readable region
        start = int(m.group(1), 16)
        end = int(m.group(2), 16)
        mem_file.seek(start)  # seek to region start
        chunk = mem_file.read(end - start)  # read region contents
        output_file.write(chunk)  # write contents to the dump file
maps_file.close()
mem_file.close()
output_file.close()
/proc/$pid/mem
[The following is for historical interest. It does not apply to current kernels.]
Since version 3.3 of the kernel, you can access /proc/$pid/mem normally as long as you only access it at mapped offsets and you have permission to trace the process (the same permissions as ptrace for read-only access). But in older kernels, there were some additional complications.
If you try to read from the mem pseudo-file of another process, it doesn't work: you get an ESRCH (No such process) error.
The permissions on /proc/$pid/mem (r--------) are more liberal than what should be the case. For example, you shouldn't be able to read a setuid process's memory. Furthermore, trying to read a process's memory while the process is modifying it could give the reader an inconsistent view of the memory, and worse, there were race conditions that could be exploited in older versions of the Linux kernel (according to this lkml thread, though I don't know the details). So additional checks are needed:
The process that wants to read from /proc/$pid/mem must attach to the process using ptrace with the PTRACE_ATTACH flag. This is what debuggers do when they start debugging a process; it's also what strace does to a process's system calls. Once the reader has finished reading from /proc/$pid/mem, it should detach by calling ptrace with the PTRACE_DETACH flag.
The observed process must not be running. Normally calling ptrace(PTRACE_ATTACH, …) will stop the target process (it sends a STOP signal), but there is a race condition (signal delivery is asynchronous), so the tracer should call wait (as documented in ptrace(2)).
A process running as root can read any process's memory, without needing to call ptrace, but the observed process must be stopped, or the read will still return ESRCH.
In the Linux kernel source, the code providing per-process entries in /proc is in fs/proc/base.c, and the function to read from /proc/$pid/mem is mem_read. The additional check is performed by check_mem_permission.
Here's some sample C code to attach to a process and read a chunk of its mem file (error checking omitted):
sprintf(mem_file_name, "/proc/%d/mem", pid);
mem_fd = open(mem_file_name, O_RDONLY);
ptrace(PTRACE_ATTACH, pid, NULL, NULL);
waitpid(pid, NULL, 0);
lseek(mem_fd, offset, SEEK_SET);
read(mem_fd, buf, sysconf(_SC_PAGE_SIZE));
ptrace(PTRACE_DETACH, pid, NULL, NULL);
I've already posted a proof-of-concept script for dumping /proc/$pid/mem on another thread.
| How do I read from /proc/$pid/mem under Linux? |
1,377,261,966,000 |
How can I pick which kernel GRUB 2 should load by default? I recently installed a Linux real-time kernel and now it loads by default. I'd like to load the regular one by default.
So far I only managed to pick the default OS... and for some reason the /boot/grub.cfg already assumes that I want to load the real-time kernel and put it into the generic Linux menu entry (in my case Arch Linux).
|
I think most distributions have moved additional kernels into the advanced options submenu at this point, as TomTom found was the case with his Arch.
I didn't want to alter my top level menu structure in order to select a previous kernel as the default. I found the answer here.
To summarize:
Find the $menuentry_id_option for the submenu:
$ grep submenu /boot/grub/grub.cfg
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
Find the $menuentry_id_option for the menu entry for the kernel you want to use:
$ grep gnulinux /boot/grub/grub.cfg
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
Comment out the current GRUB_DEFAULT line in /etc/default/grub and replace it with the submenu's $menuentry_id_option from step one and the selected kernel's $menuentry_id_option from step two, separated by >.
In my case the modified GRUB_DEFAULT is:
#GRUB_DEFAULT=0
GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc"
Update grub to make the changes. For Debian this is done like so:
$ sudo update-grub
Done.
Now when you boot, the advanced menu should have an asterisk and you should boot into the selected kernel. You can confirm this with uname.
$ uname -a
Linux NAME 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.0-0 (2018-09-13) x86_64 GNU/Linux
Changing this back to the most recent kernel is as simple as commenting out the new line and uncommenting #GRUB_DEFAULT=0:
GRUB_DEFAULT=0
#GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc"
then rerunning update-grub.
Specifying IDs for all the entries from the top level menu is mandatory. The format for setting the default boot entry can be found in the documentation.
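A related approach, if you'd rather not hard-code the default: with GRUB_DEFAULT=saved you can select an entry for a single boot using grub-reboot (an assumption about your setup: the grub-reboot tool is installed; the long ID string is the same submenu>entry path as above, shown truncated here):

```shell
# /etc/default/grub fragment:
#   GRUB_DEFAULT=saved
# then regenerate the config and select an entry for the next boot only:
#   sudo update-grub
#   sudo grub-reboot "gnulinux-advanced-38ea...>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea..."
```

This leaves the permanent default untouched, which is handy for testing a kernel you may want to back out of.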
| Set the default kernel in GRUB |
1,377,261,966,000 |
I've just installed kernel-3.11.0-1.fc20 for my Fedora 19 installation. During the rebooting progress, I saw the Linux logo with a Windows flag in it, what does it mean?
The Fedora 19 is installed in an ASUS TX300CA notebook, secure boot is off, CSM (BIOS Compatibility Support Module) mode is on.
|
A couple of years ago, Linus Torvalds was discussing Linux version
numbers and said, "I think I will call it 3.11 Linux for Workgroups."
It turns out he wasn't joking. With a release candidate of Linux 3.11
now available, Torvalds has actually named the new version of the
kernel "Linux for Workgroups." He even gave it a Windows-themed boot
icon featuring Linux's mascot penguin, Tux, holding a flag emblazoned
with an old Windows logo. The name "Linux for Workgroups" follows such
whimsical past Linux version names as "Pink Farting Weasel," "Killer
Bat of Doom," "Erotic Pickled Herring," and "Jeff Thinks I Should
Change This, But To What?"
From the news:
20 years after Windows 3.11, Linus unveils “Linux for Workgroups”
| What does the Windows flag in the Linux logo of kernel 3.11 mean? |
1,377,261,966,000 |
I'm interested in the difference between Highmem and Lowmem:
Why is there such a differentiation?
What do we gain by doing so?
What features does each have?
|
On a 32-bit architecture, the address space range for addressing RAM is:
0x00000000 - 0xffffffff
or 4'294'967'295 (4 GB).
The linux kernel splits that up 3/1 (could also be 2/2, or 1/3 1) into user space (high memory) and kernel space (low memory) respectively.
The user space range:
0x00000000 - 0xbfffffff
Every newly spawned user process gets an address (range) inside this area. User processes are generally untrusted and therefore are forbidden to access the kernel space. Further, they are considered non-urgent; as a general rule, the kernel tries to defer the allocation of memory to those processes.
The kernel space range:
0xc0000000 - 0xffffffff
A kernel process gets its address (range) here. The kernel can directly access this 1 GB of addresses (well, not the full 1 GB, there are 128 MB reserved for high memory access).
Processes spawned in kernel space are trusted, urgent and assumed error-free; their memory requests are processed instantaneously.
Every kernel process can also access the user space range if it wishes to. And to achieve this, the kernel maps an address from the user space (the high memory) to its kernel space (the low memory), the 128 MB mentioned above are especially reserved for this.
1 Whether the split is 3/1, 2/2, or 1/3 is controlled by the CONFIG_VMSPLIT_... option; you can probably check under /boot/config* to see which option was selected for your kernel.
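A small sketch to check this on a running system (path assumption: distributions commonly install the build config as /boot/config-$(uname -r); 64-bit kernels have no CONFIG_VMSPLIT option at all):

```shell
# Report the VMSPLIT setting of the running kernel, if its build
# config is available.
cfg=/boot/config-"$(uname -r)"
if [ -r "$cfg" ]; then
    grep CONFIG_VMSPLIT "$cfg" \
        || echo "no CONFIG_VMSPLIT option (likely a 64-bit kernel)"
else
    echo "no readable kernel config at $cfg"
fi
```

Some distributions instead expose the config at /proc/config.gz, which you can search with zgrep.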
| What are high memory and low memory on Linux? |
1,377,261,966,000 |
Is Kernel space used when Kernel is executing on behalf of the user program i.e. System Call? Or is it the address space for all the Kernel threads (for example scheduler)?
If it is the first one, then does it mean that a normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case, how can the kernel use High Memory? To what virtual memory address will the pages from high memory be mapped, as 1GB of kernel space will be logically mapped?
|
Is Kernel space used when Kernel is executing on behalf of the user program i.e. System Call? Or is it the address space for all the Kernel threads (for example scheduler)?
Yes and yes.
Before we go any further, we should state this about memory.
Memory gets divided into two distinct areas:
The user space, which is a set of locations where normal user processes run (i.e everything other than the kernel). The role of the kernel is to keep applications running in this space from messing with each other, and with the machine.
The kernel space, which is the location where the code and data of the kernel is stored, and executes under.
Processes running under the user space have access only to a limited part of memory, whereas the kernel has access to all of the memory. Processes running in user space also don't have access to the kernel space. User space processes can only access a small part of the kernel via an interface exposed by the kernel - the system calls. If a process performs a system call, a software interrupt is sent to the kernel, which then dispatches the appropriate interrupt handler and continues its work after the handler has finished.
Kernel-space code runs in "kernel mode", which (on your typical desktop x86 computer) means code that executes under ring 0. Typically in the x86 architecture, there are 4 rings of protection: Ring 0 (kernel mode), Ring 1 (may be used by virtual machine hypervisors or drivers), Ring 2 (may be used by drivers, I am not so sure about that though), and Ring 3, which is what typical applications run under. It is the least privileged ring, and applications running on it have access to a subset of the processor's instructions. Ring 0 (kernel space) is the most privileged ring, and has access to all of the machine's instructions. For example, a "plain" application (like a browser) cannot use the x86 assembly instruction lgdt to load the global descriptor table, nor hlt to halt the processor.
If it is the first one, then does it mean that a normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case, how can the kernel use High Memory? To what virtual memory address will the pages from high memory be mapped, as 1GB of kernel space will be logically mapped?
For an answer to this, please refer to the excellent answer by wag to What are high memory and low memory on Linux?.
| What is difference between User space and Kernel space? |
1,377,261,966,000 |
What benefit could I see by compiling a Linux kernel myself? Is there some efficiency you could create by customizing it to your hardware?
|
In my mind, the only benefit you really get from compiling your own linux kernel is:
You learn how to compile your own linux kernel.
It's not something you need to do for more speed / memory / xxx whatever. It is a valuable thing to do if that's the stage you feel you are at in your development. If you want to have a deeper understanding of what this whole "open source" thing is about, about how and what the different parts of the kernel are, then you should give it a go. If you are just looking to speed up your boot time by 3 seconds, then... what's the point... go buy an ssd. If you are curious, if you want to learn, then compiling your own kernel is a great idea and you will likely get a lot out of it.
With that said, there are some specific reasons when it would be appropriate to compile your own kernel (as several people have pointed out in the other answers). Generally these arise out of a specific need you have for a specific outcome, for example:
I need to get the system to boot/run on hardware with limited resources
I need to test out a patch and provide feedback to the developers
I need to disable something that is causing a conflict
I need to develop the linux kernel
I need to enable support for my unsupported hardware
I need to improve performance of x because I am hitting the current limits of the system (and I know what I'm doing)
The issue lies in thinking that there's some intrinsic benefit to compiling your own kernel when everything is already working the way it should be, and I don't think that there is. Though you can spend countless hours disabling things you don't need and tweaking the things that are tweakable, the fact is the linux kernel is already pretty well tuned (by your distribution) for most user situations.
| What is the benefit of compiling your own linux kernel? |
1,377,261,966,000 |
I have seen on many blogs, using this command to enable IP forwarding while using many network security/sniffing tools on linux
echo 1 > /proc/sys/net/ipv4/ip_forward
Can anyone explain me in layman terms, what essentially does this command do? Does it turn your system into router?
|
"IP forwarding" is a synonym for "routing." It is called "kernel IP forwarding" because it is a feature of the Linux kernel.
A router has multiple network interfaces. If traffic comes in on one interface that matches a subnet of another network interface, a router then forwards that traffic to the other network interface.
So, let's say you have two NICs, one (NIC 1) is at address 192.168.2.1/24, and the other (NIC 2) is 192.168.3.1/24. If forwarding is enabled, and a packet comes in on NIC 1 with a "destination address" of 192.168.3.8, the router will resend that packet out of the NIC 2.
It's common for routers functioning as gateways to the Internet to have a default route whereby any traffic that doesn't match any NICs will go through the default route's NIC. So in the above example, if you have an internet connection on NIC 2, you'd set NIC 2 as your default route, and then any traffic coming in from NIC 1 that isn't destined for something on 192.168.2.0/24 will go through NIC 2. Hopefully there are other routers past NIC 2 that can route it further (in the case of the Internet, the next hop would be your ISP's router, then their upstream provider's router, and so on).
Enabling ip_forward tells your Linux system to do this. For it to be meaningful, you need two network interfaces (any 2 or more of wired NIC cards, Wifi cards or chipsets, PPP links over a 56k modem or serial, etc.).
When doing routing, security is important and that's where Linux's packet filter, iptables, gets involved. So you will need an iptables configuration consistent with your needs.
Note that enabling forwarding with iptables disabled and/or without taking firewalling and security into account could leave you open to vulnerabilities if one of the NICs is facing the Internet or a subnet you don't have control over.
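Putting it together as commands (hedged: eth0 and the MASQUERADE rule are illustrative assumptions for a gateway whose Internet-facing interface is eth0, not something the kernel requires):

```shell
# Inspect the current state of the switch (0 = disabled, 1 = enabled):
cat /proc/sys/net/ipv4/ip_forward
# Root-only steps to enable forwarding and masquerade outbound traffic:
#   sysctl -w net.ipv4.ip_forward=1
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

To persist the sysctl across reboots, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf or a file under /etc/sysctl.d/.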
| What is kernel ip forwarding? |
1,377,261,966,000 |
I'm trying to install the most up-to-date NVIDIA driver in Debian Stretch. I've downloaded NVIDIA-Linux-x86_64-390.48.run from here, but when I try to do
sudo sh ./NVIDIA-Linux-x86_64-390.48.run
as suggested, an error message appears.
ERROR: An NVIDIA kernel module 'nvidia-drm' appears to already be loaded in your kernel. This may be because it is in use (for example, by an X server, a CUDA program, or
the NVIDIA Persistence Daemon), but this may also happen if your kernel was configured without support for module unloading. Please be sure to exit any programs
that may be using the GPU(s) before attempting to upgrade your driver. If no GPU-based programs are running, you know that your kernel supports module unloading,
and you still receive this message, then an error may have occured that has corrupted an NVIDIA kernel module's usage count, for which the simplest remedy is to
reboot your computer.
When I try to find out who is using nvidia-drm (or nvidia_drm), I see nothing.
~$ sudo lsof | grep nvidia-drm
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
~$ sudo lsof -e /run/user/1000/gvfs | grep nvidia-drm
~$
And when I try to remove it, it says it's being used.
~$ sudo modprobe -r nvidia-drm
modprobe: FATAL: Module nvidia_drm is in use.
~$
I have rebooted and started in text-only mode (by pressing Ctrl+Alt+F2 before giving username/password), but I got the same error.
Besides it, how do I "know that my kernel supports module unloading"?
I'm getting a few warnings on boot up related to nvidia, no idea if they're related, though:
Apr 30 00:46:15 debian-9 kernel: nvidia: loading out-of-tree module taints kernel.
Apr 30 00:46:15 debian-9 kernel: nvidia: module license 'NVIDIA' taints kernel.
Apr 30 00:46:15 debian-9 kernel: Disabling lock debugging due to kernel taint
Apr 30 00:46:15 debian-9 kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 375.82 Wed Jul 19 21:16:49 PDT 2017 (using threaded interrupts)
|
I imagine you want to stop the display manager which is what I'd suspect would be using the Nvidia drivers.
After changing to a text console (pressing Ctrl+Alt+F2) and logging in as root, use the following command to disable the graphical target, which is what keeps the display manager running:
# systemctl isolate multi-user.target
At this point, I'd expect you'd be able to unload the Nvidia drivers using modprobe -r (or rmmod directly):
# modprobe -r nvidia-drm
Once you've managed to replace/upgrade it and you're ready to start the graphical environment again, you can use this command:
# systemctl start graphical.target
| How to unload kernel module 'nvidia-drm'? |
1,377,261,966,000 |
Is it possible to cause a kernel panic with a single command line?
What would be the most straightforward such command for a sudoing user and what would it be for a regular user, if any?
Scenarios that suggest downloading something as a part of the command do not count.
|
FreeBSD:
sysctl debug.kdb.panic=1
Linux (more info in the kernel documentation):
echo c > /proc/sysrq-trigger
| How to cause kernel panic with a single command? |
1,377,261,966,000 |
When I do a lspci -k on my Kubuntu with a 3.2.0-29-generic kernel I can see something like this:
01:00.0 VGA compatible controller: NVIDIA Corporation G86 [Quadro NVS 290] (rev a1)
Subsystem: NVIDIA Corporation Device 0492
Kernel driver in use: nvidia
Kernel modules: nvidia_current, nouveau, nvidiafb
There is a kernel driver nvidia and kernel modules nvidia_current, nouveau, nvidiafb.
Now I wondered what might be the difference between Kernel drivers and Kernel modules?
|
A kernel module is a bit of compiled code that can be inserted into the kernel at run-time, such as with insmod or modprobe.
A driver is a bit of code that runs in the kernel to talk to some hardware device. It "drives" the hardware. Most every bit of hardware in your computer has an associated driver.¹ A large part of a running kernel is driver code.²
A driver may be built statically into the kernel file on disk.³ A driver may also be built as a kernel module so that it can be dynamically loaded later. (And then maybe unloaded.)
Standard practice is to build drivers as kernel modules where possible, rather than link them statically to the kernel, since that gives more flexibility. There are good reasons not to, however:
Sometimes a given driver is absolutely necessary to help the system boot up. That doesn't happen as often as you might imagine, due to the initrd feature.
Statically built drivers may be exactly what you want in a system that is statically scoped, such as an embedded system. That is to say, if you know in advance exactly which drivers will always be needed and that this will never change, you have a good reason not to bother with dynamic kernel modules.
If you build your kernel statically and disable Linux's dynamic module loading feature, you prevent run-time modification of the kernel code. This provides additional security and stability at the expense of flexibility.
Not all kernel modules are drivers. For example, a relatively recent feature in the Linux kernel is that you can load a different process scheduler. Another example is that the more complex types of hardware often have multiple generic layers that sit between the low-level hardware driver and userland, such as the USB HID driver, which implements a particular element of the USB stack, independent of the underlying hardware.
Asides:
One exception to this broad statement is the CPU chip, which has no "driver" per se. Your computer may also contain hardware for which you have no driver.
The rest of the code in an OS kernel provides generic services like memory management, IPC, scheduling, etc. These services may primarily serve userland applications, as with the examples linked previously, or they may be internal services used by drivers or other intra-kernel infrastructure.
The one in /boot, loaded into RAM at boot time by the boot loader early in the boot process.
| What is the difference between kernel drivers and kernel modules? |
1,377,261,966,000 |
I am trying to figure out how a tty works1 (the workflow and responsibilities of each element). I have read several interesting articles about it, but there are still some blurry areas.
This is what I understand so far:
The emulated terminal makes different system calls to /dev/ptmx, the master part of the pseudo terminal.
The master part of the pseudo terminal allocates a file in /dev/pts/[0-N], corresponding to the obsolete serial port, and "attaches" a slave pseudo terminal to it.
The slave pseudo terminal keeps information such as session ID, foreground job, screen size.
Here are my questions:
Does ptmx have any purpose besides allocating the slave part? Does it provide some kind of "intelligence", or does the emulated terminal (xterm for instance) have all the intelligence of behaving like a terminal?
Why does xterm have to interact with the master part, as it only forwards the stdout and stdin of the slave part? Why can't it directly read and write from the pts file?
Is a session ID always attached to one pts file and vice versa? Could I execute ps and find two session IDs for the same /dev/pts/X?
What other information does the pts store? Does xterm update all fields by itself, or does the ptm add some "intelligence" to it?
1. I base my understanding on the TTY demystified by Linus Åkesson, and the Linux Kernel by Andries Brouwer posts, as on several other questions on these sites
|
Terminal emulators
The master side replaces the line (the pair of TX/RX wires) that goes to the terminal.
The terminal displays the characters that it receives on one of the wires (some of those are control characters and make it do things like move the cursor, change colour...) and sends on another wire the characters corresponding to the keys you type.
Terminal emulators like xterm are not different except that instead of sending and receiving characters on wires, they read and write characters on their file descriptor to the master side. Once they've spawned the slave terminal, and started your shell on that, they no longer touch that. In addition to emulating the pair of wires, xterm can also change some of the line discipline properties via that file descriptor to the master side. For instance, they can update the size attributes so a SIGWINCH is sent to the applications that interact with the slave pty to notify them of a changed size.
Other than that, there is little intelligence in the terminal/terminal emulator.
What you write to a terminal device (like the pty slave) is what you mean to be displayed there, what you read from it is what you have typed there, so it does not make sense for the terminal emulator to read or write to that. They are the ones at the other end.
The tty line discipline
A lot of the intelligence is in the tty line discipline. The line discipline is a software module (residing in the driver, in the kernel) pushed on top of a serial/pty device that sits between that device and the line/wire (the master side for a pty).
A serial line can have a terminal at the other end, but also a mouse or another computer for networking. You can attach a SLIP line discipline for instance to get a network interface on top of a serial device (or pty device), or you can have a tty line discipline. The tty line discipline is the default line discipline at least on Linux for serial and pty devices. On Linux, you can change the line discipline with ldattach.
You can see the effect of disabling the tty line discipline by issuing stty raw -echo (note that the bash prompt or other interactive applications like vi set the terminal in the exact mode they need, so you want to use a dumb application like cat to experiment with that).
Then, everything that is written to the slave terminal device makes it immediately to the master side for xterm to read, and every character written by xterm to the master side is immediately available for reading from the slave device.
The line discipline is where the terminal device internal line editor is implemented. For instance with stty icanon echo (as is the default), when you type a, xterm writes a to the master, then the line discipline echoes it back (makes an a available for reading by xterm for display), but does not make anything available for reading on the slave side. Then if you type backspace, xterm sends a ^? or ^H character, the line discipline (as that ^? or ^H corresponds to the erase line discipline setting) sends back on the master a ^H, space and ^H for xterm to erase the a you've just typed on its screen and still doesn't send anything to the application reading from the slave side, it just updates its internal line editor buffer to remove that a you've typed before.
Then when you press Enter, xterm sends ^M (CR), which the line discipline converts on input to a ^J (LF), and sends what you've entered so far for reading on the slave side (an application reading on /dev/pts/x will receive what you've typed including the LF, but not the a since you've deleted it), while on the master side, it sends a CR and LF to move the cursor to the next line and the start of the screen.
The line discipline is also responsible for sending the SIGINT signal to the foreground process group of the terminal when it receives a ^C character on the master side etc.
Many interactive terminal applications disable most of the features of that line discipline to implement it themselves. But in any case, beware that the terminal (xterm) has little involvement in that (except displaying what it's told to display).
And there can be only one session per process and per terminal device. A session can have a controlling terminal attached to it but does not have to (all sessions start without a terminal until they open one). xterm, in the process that it forks to execute your shell, will typically create a new session (and therefore detach from the terminal you launched xterm from, if any), open the new /dev/pts/x it has spawned, thereby attaching that terminal device to the new session. It will then execute your shell in that process, so your shell will become the session leader. Your shell or any interactive shell in that session will typically juggle with process groups and tcsetpgrp(), to set the foreground and background jobs for that terminal.
As to what information is stored by a terminal device with a tty discipline (serial or pty), that's typically what the stty command displays and modifies. All the discipline configuration: terminal screen size, local, input output flags, settings for special characters (like ^C, ^Z...), input and output speed (not relevant for ptys). That corresponds to the tcgetattr()/tcsetattr() functions which on Linux map to the TCGETS/TCSETS ioctls, and TIOCGWINSZ/TIOCSWINSZ for the screen size. You may argue that the current foreground process group is another information stored in the terminal device (tcsetpgrp()/tcgetpgrp(), TIOC{G,S}PGRP ioctls), or the current input or output buffer.
Note that that screen size information stored in the terminal device may not reflect reality. The terminal emulator will typically set it (via the same ioctl on the master side) when its window is resized, but it can get out of sync if an application calls the ioctl on the slave side or when the resize is not transmitted (in case of an ssh connection which implies another pty spawned by sshd if ssh ignores the SIGWINCH for instance). Some terminals can also be queried for their size via escape sequences, so an application can query it that way, and update the line discipline with that information.
For more details, you can have a look at the termios and tty_ioctl man pages on Debian for instance.
To play with other line disciplines:
Emulate a mouse with a pseudo-terminal:
socat pty,link=mouse fifo:fifo
sudo inputattach -msc mouse # sets the MOUSE line discipline and specifies protocol
xinput list # see the new mouse there
exec 3<> fifo
printf '\207\12\0' >&3 # moves the cursor 10 pixels to the right
Above, the master side of the pty is terminated by socat onto a named pipe (fifo). We connect that fifo to a process (the shell) that writes 0x87 0x0a 0x00 which in the mouse systems protocol means no button pressed, delta(x,y) = (10,0). Here, we (the shell) are not emulating a terminal, but a mouse, the 3 bytes we send are not to be read (potentially transformed) by an application from the terminal device (mouse above which is a symlink made by socat to some /dev/pts/x device), but are to be interpreted as a mouse input event.
Create a SLIP interface:
# on hostA
socat tcp-listen:12345,reuseaddr pty,link=interface
# after connection from hostB:
sudo ldattach SLIP interface
ifconfig -a # see the new interface there
sudo ifconfig sl0 192.168.123.1/24
# on hostB
socat -v -x pty,link=interface tcp:hostA:12345
sudo ldattach SLIP interface
sudo ifconfig sl0 192.168.123.2/24
ping 192.168.123.1 # see the packets on socat output
Above, the serial wire is emulated by socat as a TCP socket in-between hostA and hostB. The SLIP line discipline interprets those bytes exchanged over that virtual line as SLIP encapsulated IP packets for delivery on the sl0 interface.
| What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)? |
1,377,261,966,000 |
What happens when I write cat /proc/cpuinfo. Is that a named pipe (or something else) to the OS which reads the CPU info on the fly and generate that text each time I call it?
|
Whenever you read a file under /proc, this invokes some code in the kernel which computes the text to read as the file content. The fact that the content is generated on the fly explains why almost all files have their time reported as now and their size reported as 0 — here you should read 0 as “don't know”. Unlike usual filesystems, the filesystem which is mounted on /proc, which is called procfs, doesn't load data from a disk or other storage media (like FAT, ext2, zfs, …) or over the network (like NFS, Samba, …) and doesn't call user code (unlike FUSE).
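You can observe this on any Linux system; a quick sketch:

```shell
# procfs files report size 0 and a current timestamp because the content
# is generated at read time rather than stored anywhere
stat -c 'size=%s mtime=%y' /proc/cpuinfo
head -2 /proc/cpuinfo    # yet reading it still produces plenty of text
```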
Procfs is present in most non-BSD unices. It started its life in AT&T's Bell Labs in UNIX 8th edition as a way to report information about processes (and ps is often a pretty-printer for information read through /proc). Most procfs implementations have a file or directory called /proc/123 to report information about the process with PID 123. Linux extends the proc filesystem with many more entries that report the state of the system, including your example /proc/cpuinfo.
In the past, Linux's /proc acquired various files that provide information about drivers, but this use is now deprecated in favor of /sys, and /proc now evolves slowly. Entries like /proc/bus and /proc/fs/ext4 remain where they are for backward compatibility, but newer similar interfaces are created under /sys. In this answer, I'll focus on Linux.
Your first and second entry points for documentation about /proc on Linux are:
the proc(5) man page;
The /proc filesystem in the kernel documentation.
Your third entry point, when the documentation doesn't cover it, is reading the source. You can download the source on your machine, but this is a huge program, and LXR, the Linux cross-reference, is a big help. (There are many variants of LXR; the one running on lxr.linux.no is the nicest by far but unfortunately the site is often down.) A little knowledge of C is required, but you don't need to be a programmer to track down a mysterious value.
The core handling of /proc entries is in the fs/proc directory. Any driver can register entries in /proc (though as indicated above this is now deprecated in favor of /sys), so if you don't find what you're looking for in fs/proc, look everywhere else. Drivers call functions declared in include/linux/proc_fs.h. Kernel versions up to 3.9 provide the functions create_proc_entry and some wrappers (especially create_proc_read_entry), and kernel versions 3.10 and above provide instead only proc_create and proc_create_data (and a few more).
Taking /proc/cpuinfo as an example, a search for "cpuinfo" leads you to the call to proc_create("cpuinfo", …) in fs/proc/cpuinfo.c. You can see that the code is pretty much boilerplate code: since most files under /proc just dump some text data, there are helper functions to do that. There is merely a seq_operations structure, and the real meat is in the cpuinfo_op data structure, which is architecture-dependent, usually defined in arch/<architecture>/kernel/setup.c (or sometimes a different file). Taking x86 as an example, we're led to arch/x86/kernel/cpu/proc.c. There the main function is show_cpuinfo, which prints out the desired file content; the rest of the infrastructure is there to feed the data to the reading process at the speed it requests it. You can see the data being assembled on the fly from data in various variables in the kernel, including a few numbers computed on the fly such as the CPU frequency.
A big part of /proc is the per-process information in /proc/<PID>. These entries are registered in fs/proc/base.c, in the tgid_base_stuff array; some functions registered here are defined in other files. Let's look at a few examples of how these entries are generated:
cmdline is generated by proc_pid_cmdline in the same file. It locates the data in the process and prints it out.
clear_refs, unlike the entries we've seen so far, is writable but not readable. Therefore the proc_clear_refs_operations structure defines a clear_refs_write function but no read function.
cwd is a symbolic link (a slightly magical one), declared by proc_cwd_link, which looks up the process's current directory and returns it as the link content.
fd is a subdirectory. The operations on the directory itself are defined in the proc_fd_operations data structure (they're boilerplate except for the function that enumerates the entries, proc_readfd, which enumerates the process's open files) while operations on the entries are in proc_fd_inode_operations.
Another important area of /proc is /proc/sys, which is a direct interface to sysctl. Reading from an entry in this hierarchy returns the value of the corresponding sysctl value, and writing sets the sysctl value. The entry points for sysctl are in fs/proc/proc_sysctl.c. Sysctls have their own registration system with register_sysctl and friends.
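The mapping between the two interfaces is mechanical: dots in a sysctl name become slashes under /proc/sys. For example:

```shell
# Reading a sysctl through its /proc/sys file...
cat /proc/sys/kernel/ostype
# ...is equivalent to (if procps is installed):
#   sysctl -n kernel.ostype
# and writing to the file sets the value (root only), e.g.:
#   echo 1 > /proc/sys/net/ipv4/ip_forward
```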
| What happens when I run the command cat /proc/cpuinfo? |
1,377,261,966,000 |
I can't find any good information on the rt and lowlatency Linux kernels.
I am wondering why anybody would not want to use a lowlatency kernel.
Also, if anyone can tell what the specific differences are, that would be great too.
|
The different configurations, “generic”, “lowlatency” (as configured in Ubuntu), and RT (“real-time”), are all about balancing throughput versus latency. Generic kernels favour throughput over latency, the others favour latency over throughput. Thus users who need throughput more than they need low latency wouldn’t choose a low latency kernel.
Compared to the generic configuration, the low-latency kernel changes the following settings:
IRQs are threaded by default, meaning that more IRQs (still not all IRQs) can be pre-empted, and they can also be prioritised and have their CPU affinity controlled;
pre-emption is enabled throughout the kernel (CONFIG_PREEMPT instead of CONFIG_PREEMPT_VOLUNTARY);
the latency debugging tools are enabled, so that the user can determine what kernel operations are blocking progress;
the timer frequency is set to 1000 Hz instead of 250 Hz.
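To see which of these settings a given kernel was built with, you can inspect its build configuration (a sketch; the config file location varies by distribution, and /proc/config.gz only exists if that option was enabled):

```shell
# Preemption model and timer frequency of the installed kernel, if the
# distribution ships its build config under /boot
grep -E 'CONFIG_PREEMPT(_VOLUNTARY|_NONE)?=|CONFIG_HZ=' \
    "/boot/config-$(uname -r)" 2>/dev/null || true

# Some kernels expose the same information at /proc/config.gz:
#   zcat /proc/config.gz | grep -E 'CONFIG_PREEMPT|CONFIG_HZ='
```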
RT kernels add a number of patches to the mainline kernel, and a few more configuration tweaks. The purpose of most of those patches is to allow more opportunities for pre-emption, by removing or splitting up locks, and to reduce the amount of time the kernel spends handling uninterruptible tasks (notably, by improving the logging mechanisms and using them less). The goal of all this is to allow the kernel to meet deadlines, i.e. ensure that, when it is required to handle something, it isn’t busy doing something else; this isn’t the same as high throughput or low latency, but fixing latency issues helps.
The generic kernels, as configured by default in most distributions, are designed to be a “sensible” compromise: they try to ensure that no single task can monopolise the system for too long, and that tasks can switch reasonably frequently, but without compromising throughput — because the more time the kernel spends considering whether to switch tasks (inside or outside the kernel), or handling interrupts, the less time the system as a whole can spend “working”. That compromise isn’t good enough for latency-sensitive workloads such as real-time audio or video processing: for those, low-latency kernels provide lower latencies at the expense of some throughput. And for real-time requirements, the real-time kernels remove as many low-latency-blockers as possible, at the expense of more throughput.
Main-stream distributions of Linux are mostly installed on servers, where traditionally latency hasn’t been considered all that important (although if you do percentile performance analysis, and care about top percentile performance, you might disagree), so the default kernels are quite conservative. Desktop users should probably use the low-latency kernels, as suggested by the kernel’s own documentation. In fact, the more low-latency kernels are used, the more feedback there will be on their relevance, which helps get generally-applicable improvements into the default kernel configurations; the same goes for the RT kernels (many of the RT patches are intended, at some point, for the mainstream kernel).
This presentation on the topic provides quite a lot of background.
Since version 5.12 of the Linux kernel, “dynamic preemption” can be enabled; this allows the default preemption model to be overridden on the kernel command-line, using the preempt= parameter. This currently supports none (server), voluntary (desktop), and full (low-latency desktop).
| Why would anyone choose not to use the lowlatency kernel? |
1,377,261,966,000 |
AFAIK dmesg shows information about kernel and kernel modules, and /var/log/messages also shows information produced by kernel and modules.
So what's the difference? Does /var/log/messages ⊂ output of dmesg?
More Info that may be helpful:
- There is a kernel ring buffer, which I think is the very and only place to store kernel log data.
- Article "Kernel logging: APIs and implementation" on IBM DeveloperWorks described APIs and the bird-view picture.
|
dmesg prints the contents of the ring buffer. This information is also sent in real time to syslogd or klogd, when they are running, and ends up in /var/log/messages; dmesg is most useful for capturing boot-time messages from before syslogd and/or klogd started, so that they will be properly logged.
| What's the difference of dmesg output and /var/log/messages? |
1,377,261,966,000 |
I was under the impression that the maximum length of a single argument was not the problem here so much as the total size of the overall argument array plus the size of the environment, which is limited to ARG_MAX. Thus I thought that something like the following would succeed:
env_size=$(cat /proc/$$/environ | wc -c)
(( arg_size = $(getconf ARG_MAX) - $env_size - 100 ))
/bin/echo $(tr -dc [:alnum:] </dev/urandom | head -c $arg_size) >/dev/null
With the - 100 being more than enough to account for the difference between the size of the environment in the shell and the echo process. Instead I got the error:
bash: /bin/echo: Argument list too long
After playing around for a while, I found that the maximum was a full hex order of magnitude smaller:
/bin/echo \
$(tr -dc [:alnum:] </dev/urandom | head -c $(($(getconf ARG_MAX)/16-1))) \
>/dev/null
When the minus one is removed, the error returns. Seemingly the maximum for a single argument is actually ARG_MAX/16 and the -1 accounts for the null byte placed at the end of the string in the argument array.
Another issue is that when the argument is repeated, the total size of the argument array can be closer to ARG_MAX, but still not quite there:
args=( $(tr -dc [:alnum:] </dev/urandom | head -c $(($(getconf ARG_MAX)/16-1))) )
for x in {1..14}; do
args+=( ${args[0]} )
done
/bin/echo "${args[@]}" "${args[0]:6534}" >/dev/null
Using "${args[0]:6533}" here makes the last argument 1 byte longer and gives the Argument list too long error. This difference is unlikely to be accounted for by the size of the environment given:
$ cat /proc/$$/environ | wc -c
1045
Questions:
Is this correct behaviour, or is there a bug somewhere?
If not, is this behaviour documented anywhere? Is there another parameter which defines the maximum for a single argument?
Is this behaviour limited to Linux (or even particular versions of such)?
What accounts for the additional ~5KB discrepancy between the actual maximum size of the argument array plus the approximate size of the environment and ARG_MAX?
Additional info:
uname -a
Linux graeme-rock 3.13-1-amd64 #1 SMP Debian 3.13.5-1 (2014-03-04) x86_64 GNU/Linux
|
Answers
Definitely not a bug.
The parameter which defines the maximum size for one argument is MAX_ARG_STRLEN. There is no documentation for this parameter other than the comments in binfmts.h:
/*
* These are the maximum length and maximum number of strings passed to the
* execve() system call. MAX_ARG_STRLEN is essentially random but serves to
* prevent the kernel from being unduly impacted by misaddressed pointers.
* MAX_ARG_STRINGS is chosen to fit in a signed 32-bit integer.
*/
#define MAX_ARG_STRLEN (PAGE_SIZE * 32)
#define MAX_ARG_STRINGS 0x7FFFFFFF
As is shown, Linux also has a (very large) limit on the number of arguments to a command.
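Since PAGE_SIZE is 4 KiB on most architectures, MAX_ARG_STRLEN usually works out to 131072 bytes (128 KiB), which matches the ARG_MAX/16 - 1 observation in the question when getconf ARG_MAX reports 2 MiB. A quick sanity check from the shell:

```shell
# MAX_ARG_STRLEN = PAGE_SIZE * 32; 131072 bytes with the common 4 KiB pages
page_size=$(getconf PAGE_SIZE)
max_single_arg=$(( page_size * 32 ))
echo "$max_single_arg"
```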
A limit on the size of a single argument (which differs from the overall limit on arguments plus environment) does appear to be specific to Linux. This article gives a detailed comparison of ARG_MAX and equivalents on Unix like systems. MAX_ARG_STRLEN is discussed for Linux, but there is no mention of any equivalent on any other systems.
The above article also states that MAX_ARG_STRLEN was introduced in Linux 2.6.23, along with a number of other changes relating to command argument maximums (discussed below). The log/diff for the commit can be found here.
It is still not clear what accounts for the additional discrepancy between the result of getconf ARG_MAX and the actual maximum possible size of arguments plus environment. Stephane Chazelas's related answer suggests that part of the space is accounted for by pointers to each of the argument/environment strings. However, my own investigation suggests that these pointers are not created early in the execve system call when it may still return an E2BIG error to the calling process (although pointers to each argv string are certainly created later).
Also, the strings are contiguous in memory as far as I can see, so there are no memory gaps due to alignment here, although alignment is very likely to be a factor in whatever does use up the extra memory. Understanding what uses the extra space requires a more detailed knowledge of how the kernel allocates memory (which is useful knowledge to have, so I will investigate and update later).
ARG_MAX Confusion
Since Linux 2.6.23 (as a result of this commit), there have been changes to the way that command argument maximums are handled, which makes Linux differ from other Unix-like systems. In addition to adding MAX_ARG_STRLEN and MAX_ARG_STRINGS, the result of getconf ARG_MAX now depends on the stack size and may be different from ARG_MAX in limits.h.
Normally the result of getconf ARG_MAX will be 1/4 of the stack size. Consider the following in bash using ulimit to get the stack size:
$ echo $(( $(ulimit -s)*1024 / 4 )) # ulimit output in KiB
2097152
$ getconf ARG_MAX
2097152
However, the above behaviour was changed slightly by this commit (added in Linux 2.6.25-rc4~121).
ARG_MAX in limits.h now serves as a hard lower bound on the result of getconf ARG_MAX. If the stack size is set such that 1/4 of the stack size is less than ARG_MAX in limits.h, then the limits.h value will be used:
$ grep ARG_MAX /usr/include/linux/limits.h
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
$ ulimit -s 256
$ echo $(( $(ulimit -s)*1024 / 4 ))
65536
$ getconf ARG_MAX
131072
Note also that if the stack size is set lower than the minimum possible ARG_MAX, then the size of the stack (RLIMIT_STACK) becomes the upper limit of argument/environment size before E2BIG is returned (although getconf ARG_MAX will still show the value in limits.h).
A final thing to note is that if the kernel is built without CONFIG_MMU (support for memory management hardware), then the checking of ARG_MAX is disabled, so the limit does not apply. Although MAX_ARG_STRLEN and MAX_ARG_STRINGS still apply.
Further Reading
Related answer by Stephane Chazelas - https://unix.stackexchange.com/a/110301/48083
A detailed page covering most of the above. Includes a table of ARG_MAX (and equivalent) values on other Unix-like systems - http://www.in-ulm.de/~mascheck/various/argmax/
Seemingly the introduction of MAX_ARG_STRLEN caused a bug with Automake, which was embedding shell scripts into Makefiles using sh -c - http://www.mail-archive.com/[email protected]/msg05522.html
| What defines the maximum size for a command single argument? |
1,377,261,966,000 |
While browsing through the Kernel Makefiles, I found these terms. So I would like to know what is the difference between vmlinux, vmlinuz, vmlinux.bin, zimage & bzimage?
|
vmlinux
This is the Linux kernel in a statically linked executable file format. Generally, you don't have to worry about this file; it's just an intermediate step in the boot procedure.
The raw vmlinux file may be useful for debugging purposes.
vmlinux.bin
The same as vmlinux, but in a bootable raw binary file format. All symbols and relocation information is discarded. Generated from vmlinux by objcopy -O binary vmlinux vmlinux.bin.
vmlinuz
The vmlinux file usually gets compressed with zlib. Since 2.6.30 LZMA and bzip2 are also available. By adding further boot and decompression capabilities to vmlinuz, the image can be used to boot a system with the vmlinux kernel. The compression of vmlinux can occur with zImage or bzImage.
The function decompress_kernel() handles the decompression of vmlinuz at bootup, a message indicates this:
Decompressing Linux... done
Booting the kernel.
zImage (make zImage)
This is the old format for small kernels (compressed, below 512KB). At boot, this image gets loaded low in memory (the first 640KB of the RAM).
bzImage (make bzImage)
The big zImage (this has nothing to do with bzip2), was created while the kernel grew and handles bigger images (compressed, over 512KB). The image gets loaded high in memory (above 1MB RAM). As today's kernels are way over 512KB, this is usually the preferred way.
An inspection on Ubuntu 10.10 shows:
ls -lh /boot/vmlinuz-$(uname -r)
-rw-r--r-- 1 root root 4.1M 2010-11-24 12:21 /boot/vmlinuz-2.6.35-23-generic
file /boot/vmlinuz-$(uname -r)
/boot/vmlinuz-2.6.35-23-generic: Linux kernel x86 boot executable bzImage, version 2.6.35-23-generic (buildd@rosea, RO-rootFS, root_dev 0x6801, swap_dev 0x4, Normal VGA)
| What is the difference between the following kernel Makefile terms: vmlinux, vmlinuz, vmlinux.bin, zimage & bzimage? |
1,377,261,966,000 |
In Linux, a finished execution of a command such as cp or dd doesn't mean that the data has been written to the device. One has to, for example, call sync, or invoke the "Safely Remove" or "Eject" function on the drive.
What's the philosophy behind such an approach? Why isn't the data written at once? Is there no danger that the write will fail due to an I/O error?
|
What's the philosophy behind such an approach?
Efficiency (better usage of the disk characteristics) and performance (allows the application to continue immediately after a write).
Why isn't the data written at once?
The main advantage is that the OS is free to reorder and merge contiguous write operations to improve their bandwidth usage (fewer operations and fewer seeks). Hard disks perform better when a small number of large operations are requested, while applications tend to need a large number of small operations instead. Another clear optimization is that the OS can also remove all but the last write when the same block is written multiple times in a short period of time, or even remove some writes altogether if the affected file has been removed in the meantime.
These asynchronous writes are done after the write system call has returned. This is the second and most user-visible advantage. Asynchronous writes speed up applications, as they are free to continue their work without waiting for the data to actually be on disk. The same kind of buffering/caching is also implemented for read operations, where recently or often read blocks are retained in memory instead of being read again from the disk.
Is there no danger that the write will fail due to an IO error?
Not necessarily. That depends on the file system used and the redundancy in place. An I/O error might be harmless if the data can be saved elsewhere. Modern file systems like ZFS self-heal bad disk blocks. Note also that I/O errors do not crash modern OSes. If they happen during data access, they are simply reported to the affected application. If they happen during structural metadata access and put the file system at risk, it might be remounted read-only or made inaccessible.
There is also a slight data-loss risk in case of an OS crash, a power outage, or a hardware failure. This is the reason why applications that must be 100% sure the data is on disk (e.g. databases/financial apps) perform less efficient but more secure synchronous writes. To mitigate the performance impact, many applications still use asynchronous writes but eventually sync them when the user explicitly saves a file (e.g. vim, word processors).
On the other hand, the very large majority of users and applications neither need nor care about the safety that synchronous writes provide. If there is a crash or power outage, the only risk is often to lose, at worst, the last 30 seconds of data.
Unless there is a financial transaction involved, or something similar implying a cost much larger than 30 seconds of their time, the huge gain in performance (which is not an illusion but very real) that asynchronous writes allow largely outweighs the risk.
Finally, synchronous writes are not enough to protect the data written anyway. Should your application really need to be sure its data cannot be lost whatever happens, data replication on multiple disks and in multiple geographical locations needs to be put in place to resist disasters like fire, flooding, etc.
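The cost difference between the two modes can be felt with a quick dd experiment; the file names and sizes below are arbitrary choices:

```shell
# Asynchronous (default) writes: dd returns once the data is in the page cache.
dd if=/dev/zero of=/tmp/async.img bs=4k count=1000 2>&1 | tail -n1
# Synchronous writes: each block must reach the device before the next one.
dd if=/dev/zero of=/tmp/sync.img bs=4k count=1000 oflag=dsync 2>&1 | tail -n1
sync                                   # flush whatever is still cached
rm -f /tmp/async.img /tmp/sync.img
```

On rotating disks, the dsync run is typically orders of magnitude slower than the buffered one.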
| What's the philosophy behind delaying writing data to disk? |
1,377,261,966,000 |
We are installing SAP HANA in a RAID machine. As part of the installation step, it is mentioned that,
To disable the usage of transparent hugepages set the kernel settings
at runtime with echo never > /sys/kernel/mm/transparent_hugepage/enabled
So instead of runtime, if I wanted to make this a permanent change, should I add the above line inside /proc/vmstat file?
|
To make options such as this permanent you'll typically add them to the file /etc/sysctl.conf. You can see a full list of the options available using this command:
$ sysctl -a
Example
$ sudo sysctl -a | head -5
kernel.sched_child_runs_first = 0
kernel.sched_min_granularity_ns = 6000000
kernel.sched_latency_ns = 18000000
kernel.sched_wakeup_granularity_ns = 3000000
kernel.sched_shares_ratelimit = 750000
You can look for hugepage in the output like so:
$ sudo sysctl -a | grep hugepage
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.hugepages_treat_as_movable = 0
vm.nr_overcommit_hugepages = 0
It's not there?
However looking through the output I did not see transparent_hugepage. Googling a bit more I did come across this Oracle page which discusses this very topic. The page is titled: Configuring HugePages for Oracle on Linux (x86-64).
Specifically on that page they mention how to disable the hugepage feature.
excerpt
The preferred method to disable Transparent HugePages is to add "transparent_hugepage=never" to the kernel boot line in the "/etc/grub.conf" file.
title Oracle Linux Server (2.6.39-400.24.1.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.39-400.24.1.el6uek.x86_64 ro root=/dev/mapper/vg_ol6112-lv_root rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=uk
LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 rd_NO_DM rd_LVM_LV=vg_ol6112/lv_swap rd_LVM_LV=vg_ol6112/lv_root rhgb quiet numa=off
transparent_hugepage=never
initrd /initramfs-2.6.39-400.24.1.el6uek.x86_64.img
The server must be rebooted for this to take effect.
Alternatively you can add the command to your /etc/rc.local file.
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
I think I would go with the 2nd option, since the first will be at risk of getting unset when you upgrade from one kernel to the next.
You can confirm that it worked with the following command after rebooting:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
| disable transparent hugepages |
1,377,261,966,000 |
I'd like to write a statement to dmesg. How can I do this?
|
Write to /dev/kmsg (not /proc/kmsg as suggested by @Nils). See linux/kernel/printk/printk.c devkmsg_writev for the kernel-side implementation and systemd/src/journal/journald-kmsg.c server_forward_kmsg for an example of usage.
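For example (requires root; the message text is arbitrary, and the <6> prefix is an optional priority marker meaning KERN_INFO):

```shell
echo '<6>hello from userspace' | sudo tee /dev/kmsg >/dev/null
sudo dmesg | tail -n1    # the message now appears in the ring buffer
```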
| How can I write to dmesg from command line? |
1,377,261,966,000 |
After a recent upgrade to Fedora 15, I'm finding that a number of tools are failing with errors along the lines of:
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling
It's not just tail that's reporting problems with inotify, either. Is there any way to interrogate the kernel to find out what process or processes are consuming the inotify resources? The current inotify-related sysctl settings look like this:
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.inotify.max_queued_events = 16384
|
It seems that if a process creates an inotify instance via inotify_init(), the file representing that file descriptor in the /proc filesystem is a symlink to the (non-existent) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you the list of processes (their representation in /proc), sorted by the number of inotify instances they use.
$ for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
Finding the culprits
Via the comments below @markkcowan mentioned this:
$ find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname {})/../cmdline; echo ""' \; 2>/dev/null
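The two approaches can be combined into one loop that prints an instance count per process next to its command line; a sketch, assuming you have permission to read the relevant /proc entries:

```shell
# Count inotify instances per process, highest consumers first.
for d in /proc/[0-9]*/fd; do
    n=$(find "$d" -lname 'anon_inode:inotify' 2>/dev/null | wc -l)
    if [ "$n" -gt 0 ]; then
        printf '%5d %s\n' "$n" "$(tr '\0' ' ' < "${d%/fd}/cmdline")"
    fi
done | sort -rn
```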
| Who's consuming my inotify resources? |
1,377,261,966,000 |
I often saw the words "kernel ring buffer", "user level", "log level" and some other words appear together. e.g.
/var/log/dmesg Contains kernel ring buffer information.
/var/log/kern.log Contains only the kernel's messages of any loglevel
/var/log/user.log Contains information about all user level logs
Are they all about logs? How are they related and different?
By "level", I would imagine a hierarchy of multiple levels?
Is "user level" related to "user space"?
Are they related to runlevel or protection ring in some way?
|
Yes, all of this has to do with logging. No, none of it has to do with runlevel or "protection ring".
The kernel keeps its logs in a ring buffer. The main reason for this is so that the logs from the system startup get saved until the syslog daemon gets a chance to start up and collect them. Otherwise there would be no record of any logs prior to the startup of the syslog daemon. The contents of that ring buffer can be seen at any time using the dmesg command, and its contents are also saved to /var/log/dmesg just as the syslog daemon is starting up.
All logs that do not come from the kernel are sent as they are generated to the syslog daemon so they are not kept in any buffers. The kernel logs are also picked up by the syslog daemon as they are generated but they also continue to be saved (unnecessarily, arguably) to the ring buffer.
The log levels can be seen documented in the syslog(3) manpage and are as follows:
LOG_EMERG: system is unusable
LOG_ALERT: action must be taken immediately
LOG_CRIT: critical conditions
LOG_ERR: error conditions
LOG_WARNING: warning conditions
LOG_NOTICE: normal, but significant, condition
LOG_INFO: informational message
LOG_DEBUG: debug-level message
Each level is designed to be less "important" than the previous one. A log file that records logs at one level will also record logs at all of the more important levels too.
The difference between /var/log/kern.log and /var/log/mail.log (for example) is not to do with the level but with the facility, or category. The categories are also documented on the manpage.
| What are the concepts of "kernel ring buffer", "user level", "log level"? |
1,377,261,966,000 |
Let's say I work for a large services organisation outside the US/UK. We use UNIX and Linux servers extensively.
Reading through this article, I learned that it would be easy to insert a backdoor into a C compiler; then any code compiled with that compiler would also contain a backdoor. Now, given recent leaks regarding the NSA/GCHQ's mandate to put backdoors/weaknesses in all encryption methods, hardware and software, the compiler is a critical point of failure. Potentially all standard UNIX/Linux distributions could be compromised. We cannot afford to have our systems, our data, and our customers' data compromised by rogue governments.
Given this information, I would like to build a trusted compiler from scratch, then I have a secure base to build on so I can build the Operating System and applications from source code using that compiler.
Question
What is the correct (and secure way) to go about compiling a compiler from source code (a seemingly chicken-egg scenario) then compiling a trusted Unix/Linux distribution from scratch?
You can assume I or others have the ability to read and understand source code for security flaws, so source code will be vetted first before compiling. What I am really after is a working guide to produce this compiler from scratch securely and can be used to compile the kernel, other parts of the OS and applications.
The security stack must start at the base level if we are to have any confidence in the operating system or applications running on that stack. Yes I understand there may be hardware backdoors which may insert some microcode into the compiler as it's being built. Not much we can do about that for the moment except maybe use chips not designed in the US. Let's get this layer sorted for a start and assume I could build it on an old computer potentially before any backdoors were inserted.
As Bruce Schneier says: "To the engineers, I say this: we built the internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it."
Extra links:
http://nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all&_r=0
http://theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying
|
AFAIK the only way to be completely sure of security would be to write a compiler in assembly language (or to modify the disk directly yourself). Only then can you ensure that your compiler isn't inserting a backdoor - this works because you're actually eliminating the compiler completely.
From there, you may use your from-scratch compiler to bootstrap e.g. the GNU toolchain. Then you could use your custom toolchain to compile a Linux From Scratch system.
Note that to make things easier on yourself, you could have a second intermediary compiler, written in C (or whatever other language). So you would write compiler A in assembly, then rewrite that compiler in C/C++/Python/Brainfuck/whatever to get compiler B, which you would compile using compiler A. Then you would use compiler B to compile gcc and friends.
| How to compile the C compiler from scratch, then compile Unix/Linux from scratch |
1,377,261,966,000 |
I know what a kernel panic is, but I've also seen the term "kernel oops". I'd always thought they were the same, but maybe not. So:
What is a kernel oops, and how is it different from a kernel panic?
|
An "oops" is a Linux kernel problem bad enough that it may affect system reliability.
Some "oops"es are bad enough that the kernel decides to stop running immediately, lest there be data loss or other damage. These are called kernel panics.
The latter term is primordial, going back to the very earliest versions of Linux's Unix forebears, which also print a "panic" message on the console when they happen. The original AT&T Unix kernel function that handles such conditions is called panic(). You can trace it back through the public source code releases of AT&T Unix to its very first releases:
The OpenSolaris version of panic() was released by Sun in 2005. It is fairly elaborate, and its header comments explain a lot about what happens in a panic situation.
The Unix V4 implementation of panic() was released in 1973. It basically just prints the core state of the kernel to the console and stops the processor.
That function is substantially unchanged in Unix V3 according to Amit Singh, who famously dissected an older version of Mac OS X and explained it. That first link takes you to a lovely article explaining macOS's approach to the implementation of panic(), which starts off with a relevant historical discussion.
The "unix-jun72" project to resurrect Unix V1 from scanned source code printouts shows a very early PDP-11 assembly version of this function, written sometime before June 1972, before Unix was fully rewritten in C. By this point, its implementation is whittled down to a 6-instruction routine that does little more than restart the PDP-11.
| What's the difference between a kernel oops and a kernel panic? |
1,377,261,966,000 |
I interrupted tcpdump with Ctrl+C and got this total summary:
579204 packets captured
579346 packets received by filter
142 packets dropped by kernel
What are the "packets dropped by kernel"? Why does that happen?
|
From tcpdump(1) (tcpdump's man page):
packets ‘‘dropped by kernel’’
(this is the number of packets that were dropped,
due to a lack of buffer space,
by the packet capture mechanism in the OS on which tcpdump is running,
if the OS reports that information to applications;
if not, it will be reported as 0).
A bit of explanation:
The tcpdump program captures raw packets
passing through a network interface.
The packets have to be parsed and filtered
according to rules specified by you in the command line,
and that takes some time,
so incoming packets have to be buffered (queued) for processing.
Sometimes there are too many packets, and so they are saved to a buffer.
But they are saved faster than processed, so eventually the buffer runs out of space, and so the kernel drops all further packets
until there is some free space in the buffer.
You can increase the buffer size with the -B (--buffer-size) option like this:
tcpdump -B 4096 ....
or
tcpdump --buffer-size=4096 ...
Note that the size is specified in kibibytes,
so the lines above set the buffer size to 4 MiB.
| Why would the kernel drop packets? |
1,377,261,966,000 |
This question is motivated by my shock when I discovered that Mac OS X kernel uses 750MB of RAM.
I have been using Linux for 20 years, and I always "knew" that the kernel RAM usage is dwarfed by X (is it true? has it ever been true?).
So, after some googling, I tried slabtop which told me:
Active / Total Size (% used) : 68112.73K / 72009.73K (94.6%)
Does this mean that my kernel is using ~72MB of RAM now?
(Given that top reports Xorg's RSS as 17M, the kernel now dwarfs X, not the other way around).
What is the "normal" kernel RAM usage (range) for a laptop?
Why does MacOS use an order of magnitude more RAM than Linux?
PS. No answer here addressed the last question, so please see related questions:
Is it a problem if kernel_task is routinely above 130MB on mid 2007 white MacBook?
kernel_task using way too much memory
What is included under kernel_task in Activity Monitor?
|
"Kernel" is a bit of a misnomer. The Linux kernel consists of several processes/threads plus the modules (lsmod), so to get a complete picture you'd need to look at the whole ball and not just a single component.
Incidentally, mine shows in slabtop:
Active / Total Size (% used) : 173428.30K / 204497.61K (84.8%)
The man page for slabtop also had this to say:
The slabtop statistic header is tracking how many bytes of slabs are being used and is not a measure of physical memory. The 'Slab' field in the /proc/meminfo file is tracking information about used slab physical memory.
Dropping caches
Dropping my caches as @derobert suggested in the comments under your question does the following for me:
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$
Active / Total Size (% used) : 61858.78K / 90524.77K (68.3%)
Sending a 3 does the following: free pagecache, dentries and inodes. I discuss this more in this U&L Q&A titled: Are there any ways or tools to dump the memory cache and buffer?". So 110MB of my space was being used by just maintaining the info regarding pagecache, dentries and inodes.
Additional Information
If you're interested I found this blog post that discusses slabtop in a bit more details. It's titled: Linux command of the day: slabtop.
The Slab Cache is discussed in more detail here on Wikipedia, titled: Slab allocation.
So how much RAM is my Kernel using?
This picture is a bit foggier to me, but here are the things that I "think" we know.
Slab
We can get a snapshot of the Slab usage using this technique. Essentially we can pull this information out of /proc/meminfo.
$ grep Slab /proc/meminfo
Slab: 100728 kB
Modules
Also, we can get a size value for kernel modules (unclear whether this is their size on disk or in RAM) by pulling these values from /proc/modules:
$ awk '{print $1 " " $2 }' /proc/modules | head -5
cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758
Slabinfo
Much of the details about the SLAB are accessible in this proc structure, /proc/slabinfo:
$ less /proc/slabinfo | head -5
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_ffff8801f2b30000 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0
fuse_request 100 125 632 25 4 : tunables 0 0 0 : slabdata 5 5 0
fuse_inode 21 21 768 21 4 : tunables 0 0 0 : slabdata 1 1 0
Dmesg
When your system boots there is a line that reports memory usage of the Linux kernel just after it's loaded.
$ dmesg |grep Memory:
[ 0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)
References
Where is the memory going? Memory usage in the 2.6 kernel
| How much RAM does the kernel use? |
1,377,261,966,000 |
Linux uses a virtual memory system where all of the addresses are virtual addresses and not physical addresses. These virtual addresses are converted into physical addresses by the processor.
To make this translation easier, virtual and physical memory are divided into pages. Each of these pages is given a unique number; the page frame number.
Some page sizes can be 2 KB, 4 KB, etc. But how is this page size number determined? Is it influenced by the size of the architecture? For example, a 32-bit bus will have 4 GB address space.
|
You can find out a system's default page size by querying its configuration via the getconf command:
$ getconf PAGE_SIZE
4096
or
$ getconf PAGESIZE
4096
NOTE: The above units are typically in bytes, so the 4096 equates to 4096 bytes or 4kB.
This is hardwired in the Linux kernel's source here:
Example
$ more /usr/src/kernels/3.13.9-100.fc19.x86_64/include/asm-generic/page.h
...
...
/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT 12
#ifdef __ASSEMBLY__
#define PAGE_SIZE (1 << PAGE_SHIFT)
#else
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#endif
#define PAGE_MASK (~(PAGE_SIZE-1))
How does shifting give you 4096?
When you shift bits to the left, each shift is a binary multiplication by 2. So in effect, shifting 1 left by PAGE_SHIFT bits (1 << PAGE_SHIFT) computes 2^12 = 4096.
$ echo "2^12" | bc
4096
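Besides getconf, the running kernel reports page sizes in a few other Linux-specific places (the huge page size shown depends on the architecture, e.g. 2048 kB on x86_64):

```shell
getconf PAGESIZE                                  # base page size in bytes
grep Hugepagesize /proc/meminfo                   # huge page size
awk '/KernelPageSize/ {print $1, $2, $3; exit}' /proc/self/smaps
```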
| how is page size determined in virtual address space? |
1,377,261,966,000 |
Quite often in the course of troubleshooting and tuning things I find myself thinking about the following Linux kernel settings:
net.core.netdev_max_backlog
net.ipv4.tcp_max_syn_backlog
net.core.somaxconn
Other than fs.file-max, net.ipv4.ip_local_port_range, net.core.rmem_max, net.core.wmem_max, net.ipv4.tcp_rmem, and net.ipv4.tcp_wmem, they seem to be the important knobs to mess with when you are tuning a box for high levels of concurrency.
My question: How can I check to see how many items are in each of those queues ? Usually people just set them super high, but I would like to log those queue sizes to help predict future failure and catch issues before they manifest in a user noticeable way.
|
I too have wondered this and was motivated by your question!
I've collected how close I could come to each of the queues you listed with some information related to each. I welcome comments/feedback, any improvement to monitoring makes things easier to manage!
net.core.somaxconn
net.ipv4.tcp_max_syn_backlog
net.core.netdev_max_backlog
$ netstat -an | grep -c SYN_RECV
Will show the current global count of connections in the queue, you can break this up per port and put this in exec statements in snmpd.conf if you wanted to poll it from a monitoring application.
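The same count, plus the current limits, can be read straight from /proc, which is cheaper to poll than netstat; in /proc/net/tcp the hex state 03 in the fourth column is SYN_RECV:

```shell
cat /proc/sys/net/core/somaxconn            # accept-queue cap
cat /proc/sys/net/ipv4/tcp_max_syn_backlog  # SYN-queue cap
cat /proc/sys/net/core/netdev_max_backlog   # input-queue cap
# Sockets currently in SYN_RECV:
awk '$4 == "03" {n++} END {print n+0}' /proc/net/tcp
```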
From:
netstat -s
These will show you how often you are seeing requests from the queue:
146533724 packets directly received from backlog
TCPBacklogDrop: 1029
3805 packets collapsed in receive queue due to low socket buffer
fs.file-max
From:
http://linux.die.net/man/5/proc
$ cat /proc/sys/fs/file-nr
2720 0 197774
This (read-only) file gives the number of files presently opened. It
contains three numbers: The number of allocated file handles, the
number of free file handles and the maximum number of file handles.
net.ipv4.ip_local_port_range
If you can build an exclusion list of services (netstat -an | grep LISTEN) then you can deduce how many connections are being used for ephemeral activity:
netstat -an | egrep -v "MYIP.(PORTS|IN|LISTEN)" | wc -l
Should also monitor (from SNMP):
TCP-MIB::tcpCurrEstab.0
It may also be interesting to collect stats about all the states seen in this tree(established/time_wait/fin_wait/etc):
TCP-MIB::tcpConnState.*
net.core.rmem_max
net.core.wmem_max
You'd have to dtrace/strace your system for setsockopt requests. I don't think stats for these requests are tracked otherwise. This isn't really a value that changes from my understanding. The application you've deployed will probably ask for a standard amount. I think you could 'profile' your application with strace and configure this value accordingly. (discuss?)
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
To track how close you are to the limit you would have to look at the average and max from the tx_queue and rx_queue fields from (on a regular basis):
# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:0FB1 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262030037 1 ffff810759630d80 3000 0 0 2 -1
1: 00000000:A133 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262029925 1 ffff81076d1958c0 3000 0 0 2 -1
To track errors related to this:
# netstat -s
40 packets pruned from receive queue because of socket buffer overrun
Should also be monitoring the global 'buffer' pool (via SNMP):
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Memory Buffers
HOST-RESOURCES-MIB::hrStorageSize.1 = INTEGER: 74172456
HOST-RESOURCES-MIB::hrStorageUsed.1 = INTEGER: 51629704
| how to check rx ring, max_backlog, and max_syn_backlog size |
1,377,261,966,000 |
So, I thought I had a good understanding of this, but just ran a test (in response to a conversation where I disagreed with someone) and found that my understanding is flawed...
In as much detail as possible what exactly happens when I execute a file in my shell? What I mean is, if I type: ./somefile some arguments into my shell and press return (and somefile exists in the cwd, and I have read+execute permissions on somefile) then what happens under the hood?
I thought the answer was:
The shell make a syscall to exec, passing the path to somefile
The kernel examines somefile and looks at the magic number of the file to determine if it is a format the processor can handle
If the magic number indicates that the file is in a format the processor can execute, then
a new process is created (with an entry in the process table)
somefile is read/mapped to memory. A stack is created and execution jumps to the entry point of the code of somefile, with ARGV initialized to an array of the parameters (a char**, ["some","arguments"])
If the magic number is a shebang then exec() spawns a new process as above, but the executable used is the interpreter referenced by the shebang (e.g. /bin/bash or /bin/perl) and somefile is passed to STDIN
If the file doesn't have a valid magic number, then an error like "invalid file (bad magic number): Exec format error" occurs
However someone told me that if the file is plain text, then the shell tries to execute the commands (as if I had typed bash somefile). I didn't believe this, but I just tried it, and it was correct. So I clearly have some misconceptions about what actually happens here, and would like to understand the mechanics.
What exactly happens when I execute a file in my shell? (in as much detail is reasonable...)
|
The definitive answer to "how programs get run" on Linux is the pair of articles on LWN.net titled, surprisingly enough, How programs get run and How programs get run: ELF binaries. The first article addresses scripts briefly. (Strictly speaking the definitive answer is in the source code, but these articles are easier to read and provide links to the source code.)
A little experimentation shows that you pretty much got it right, and that the execution of a file containing a simple list of commands, without a shebang, needs to be handled by the shell. The execve(2) manpage contains source code for a test program, execve; we'll use that to see what happens without a shell. First, write a testscript, testscr1, containing
#!/bin/sh
pstree
and another one, testscr2, containing only
pstree
Make them both executable, and verify that they both run from a shell:
chmod u+x testscr[12]
./testscr1 | less
./testscr2 | less
Now try again, using execve (assuming you built it in the current directory):
./execve ./testscr1
./execve ./testscr2
testscr1 still runs, but testscr2 produces
execve: Exec format error
This shows that the shell handles testscr2 differently. It doesn't process the script itself though, it still uses /bin/sh to do that; this can be verified by piping testscr2 to less:
./testscr2 | less -ppstree
On my system, I get
|-gnome-terminal--+-4*[zsh]
| |-zsh-+-less
| | `-sh---pstree
As you can see, there's the shell I was using, zsh, which started less, and a second shell, plain sh (dash on my system), to run the script, which ran pstree. In zsh this is handled by zexecve in Src/exec.c: the shell uses execve(2) to try to run the command, and if that fails, it reads the file to see if it has a shebang, processing it accordingly (which the kernel will also have done), and if that fails it tries to run the file with sh, as long as it didn't read any zero byte from the file:
for (t0 = 0; t0 != ct; t0++)
if (!execvebuf[t0])
break;
if (t0 == ct) {
argv[-1] = "sh";
winch_unblock();
execve("/bin/sh", argv - 1, newenvp);
}
bash has the same behaviour, implemented in execute_cmd.c with a helpful comment (as pointed out by taliezin):
Execute a simple command that is hopefully defined in a disk file
somewhere.
fork ()
connect pipes
look up the command
do redirections
execve ()
If the execve failed, see if the file has executable mode set.
If so, and it isn't a directory, then execute its contents as
a shell script.
POSIX defines a set of functions, known as the exec(3) functions, which wrap execve(2) and provide this functionality too; see muru's answer for details. On Linux at least these functions are implemented by the C library, not by the kernel.
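The fallback can also be reproduced without compiling the execve helper, using Python's os.execv as a stand-in for a raw execve(2) call (the file path and python3 availability are assumptions here):

```shell
printf 'echo hi from script\n' > /tmp/noshebang   # no shebang, no ELF magic
chmod +x /tmp/noshebang
/tmp/noshebang          # works: the shell catches ENOEXEC and re-runs it via sh
python3 - <<'EOF'
import errno, os
try:
    os.execv("/tmp/noshebang", ["/tmp/noshebang"])  # raw execve, no fallback
except OSError as e:
    print("execv failed:", errno.errorcode[e.errno])  # ENOEXEC
EOF
rm -f /tmp/noshebang
```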
| What exactly happens when I execute a file in my shell? |
1,377,261,966,000 |
I was just wondering why the Linux NFS server is implemented in the kernel as opposed to a userspace application?
I know a userspace NFS daemon exists, but it's not the standard method for providing NFS server services.
I would think that running the NFS server as a userspace application would be the preferred approach, as it can provide added security having a daemon run in userspace instead of the kernel. It also would fit with the common Linux principle of doing one thing and doing it well (and that daemons shouldn't be a job for the kernel).
In fact, the only benefit I can think of for running in the kernel would be a performance boost from avoiding context switching (and that is a debatable reason).
So is there any documented reason why it is implemented the way it is? I tried googling around but couldn't find anything.
There seems to be a lot of confusion, please note I am not asking about mounting filesystems, I am asking about providing the server side of a network filesystem. There is a very distinct difference. Mounting a filesystem locally requires support for the filesystem in the kernel, providing it does not (eg samba or unfs3).
|
unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature.
Although it serves different protocols, Samba is an example of a successful
file server that operates in userspace.
I haven't seen a recent performance comparison.
Some other issues:
Ordinary applications look files up by pathname, but nfsd needs to be able to
look them up by filehandle. This is tricky and requires support from the
filesystem (and not all filesystems can support it). In the past it was not
possible to do this from userspace, but more recent kernels have added
name_to_handle_at(2) and open_by_handle_at(2) system calls.
I seem to recall blocking file-locking calls being a problem; I'm not sure
how userspace servers handle them these days. (Do you tie up a server thread
waiting on the lock, or do you poll?)
Newer file system semantics (change attributes, delegations, share locks)
may be implemented
more easily in kernel first (in theory--they mostly haven't been yet).
You don't want to have to check permissions, quotas, etc., by hand--instead
you want to change your uid and rely on the common kernel vfs code to do
that. And Linux has a system call (setfsuid(2)) that should do that. For
reasons I forget, I think that's proved more complicated to use in servers
than it should be.
In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE--but again extensions to the FUSE interfaces may be required for newer features, and there may be performance issues.
Short version: good question!
| Why is Linux NFS server implemented in the kernel as opposed to userspace? |
1,377,261,966,000 |
I am attempting to install the VMWare player in Fedora 19. I am running into the problem that multiple users have had where VMware player cannot find the kernel headers. I have installed the kernel-headers and kernel-devel packages through yum and the file that appears in /usr/src/kernels is:
3.12.8-200.fc19.x86_64
However, when I do uname -r my Fedora kernel version is:
3.9.5-301.fc19.x86_64
which is a different version. This seems to mean that when I point VMware player at the path of the kernels I get this error:
C header files matching your running kernel were not found.
Refer to your distribution's documentation for installation instructions.
How can I install the correct Kernel and where should I be pointing VMware if its not /usr/src/kernels/<my-kernel> ?
|
You can install the correct kernel header files like so:
$ sudo yum install "kernel-devel-uname-r == $(uname -r)"
Example
This command always requests the headers matching the running kernel; if the repositories no longer carry that version, it fails like this:
$ sudo yum install "kernel-devel-uname-r == $(uname -r)"
Loaded plugins: auto-update-debuginfo, changelog, langpacks, refresh-packagekit
No package kernel-devel-uname-r == 3.12.6-200.fc19.x86_64 available.
Error: Nothing to do
Or you can search for them like this:
$ yum search "kernel-headers-uname-r == $(uname -r)" --disableexcludes=all
Loaded plugins: auto-update-debuginfo, changelog, langpacks, refresh-packagekit
Warning: No matches found for: kernel-headers-uname-r == 3.12.6-200.fc19.x86_64
No matches found
However I've noticed this issue as well, where specific versions of headers are not present in the repositories. You might have to reach into Koji to find a particular version of a build.
Information for build kernel-3.12.6-200.fc19
That page includes all the assets for that particular version of the Kernel.
| yum installs kernel-devel different from my kernel version |
1,377,261,966,000 |
I read once that one advantage of a microkernel architecture is that you can stop/start essential services like networking and filesystems, without needing to restart the whole system. But considering that Linux kernel nowadays (was it always the case?) offers the option to use modules to achieve the same effect, what are the (remaining) advantages of a microkernel?
|
Microkernels require less code to be run in the innermost, most trusted mode than monolithic kernels. This has many aspects, such as:
Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux, through modules.
Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it. A buggy filesystem or device driver can crash a Linux system. Linux doesn't have any way to mitigate these problems other than coding practices and testing.
Microkernels have a smaller trusted computing base. So even a malicious device driver or filesystem cannot take control of the whole system (for example a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk).
A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel.
Unix GUIs are provided via X window, which is userland code (except for (part of) the video device driver). Many modern unices allow ordinary users to load filesystem drivers through FUSE. Some of the Linux network packet filtering can be done in userland. However, device drivers, schedulers, memory managers, and most networking protocols are still kernel-only.
A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate. Twenty years later, one could say that Linux is very very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go.
Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes.
| How does Linux kernel compare to microkernel architectures? |
1,377,261,966,000 |
I just know that Interrupt is a hardware signal assertion caused in a processor pin. But I would like to know how Linux OS handles it.
What all are the things that happen when an interrupt occurs?
|
Here's a high-level view of the low-level processing. I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail.
When an interrupt occurs, the processor checks whether interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one.
Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the interrupt handler. When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).
The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.
The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided into two parts:
The “top half” is the interrupt handler. It does the minimum necessary, typically communicate with the hardware and set a flag somewhere in kernel memory.
The “bottom half” does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled.
As usual on this topic, for more information, read Linux Device Drivers; chapter 10 is about interrupts.
| How is an Interrupt handled in Linux? |
1,377,261,966,000 |
My question is with regards to booting a Linux system from a separate /boot partition. If most configuration files are located on a separate / partition, how does the kernel correctly mount it at boot time?
Any elaboration on this would be great. I feel as though I am missing something basic. I am mostly concerned with the process and order of operations.
Thanks!
EDIT: I think what I needed to ask was more along the lines of the device file that is used in the root kernel parameter. For instance, say I give my root param as root=/dev/sda2. How does the kernel know what /dev/sda2 refers to?
|
Linux initially boots with a ramdisk (called an initrd, for "INITial RamDisk") as /. This disk has just enough on it to be able to find the real root partition (including any driver and filesystem modules required). It mounts the root partition onto a temporary mount point on the initrd, then invokes pivot_root(8) to swap the root and temporary mount points, leaving the initrd in a position to be umounted and the actual root filesystem on /.
| How does a kernel mount the root partition? |
1,377,261,966,000 |
I'm running a headless server installation of arch linux. The high rate of kernel upgrades caused me some maintainance headache and I therefore wish to switch to the lts kernel.
I already installed the linux-lts and linux-lts-headers packages. Now, I got both kernels installed but I'm a bit clueless how to continue from here. The docs explain:
[...] you will need to update your bootloader's configuration file to use the LTS kernel and ram disk: vmlinuz-linux-lts and initramfs-linux-lts.img.
I already located them in the boot section:
0 ✓ root@host ~ $ ll /boot/
total 85M
4,0K drwxr-xr-x 4 root root 4,0K 21. Mai 13:46 ./
4,0K drwxr-xr-x 17 root root 4,0K 4. Apr 15:08 ../
4,0K drwxr-xr-x 6 root root 4,0K 4. Apr 14:50 grub/
27M -rw-r--r-- 1 root root 27M 20. Mai 17:01 initramfs-linux-fallback.img
12M -rw-r--r-- 1 root root 12M 20. Mai 17:01 initramfs-linux.img
27M -rw-r--r-- 1 root root 27M 21. Mai 13:46 initramfs-linux-lts-fallback.img
12M -rw-r--r-- 1 root root 12M 21. Mai 13:46 initramfs-linux-lts.img
16K drwx------ 2 root root 16K 4. Apr 14:47 lost+found/
4,3M -rw-r--r-- 1 root root 4,3M 11. Mai 22:23 vmlinuz-linux
4,2M -rw-r--r-- 1 root root 4,2M 19. Mai 21:05 vmlinuz-linux-lts
Now, I already found entries pointing to the non-lts kernel in the grub.cfg but the header tells me not to edit this file. It points me to the utility grub-mkconfig instead but I can not figure out how to use this tool to tell grub which kernel and ramdisk to use.
How to switch archlinux with grub to the lts kernel? What else do I have to be cautious about when switching the kernel?
|
Okay, after joe pointed me the right direction in comments, this is how I did it:
basically, just install it: pacman -S linux-lts
(optional) check if kernel, ramdisk and fallback are available in ls -lsha /boot
remove the standard kernel pacman -R linux
update the grub config grub-mkconfig -o /boot/grub/grub.cfg
reboot
Note, for syslinux you'll need to edit the syslinux config file in /boot/syslinux/syslinux.cfg accordingly, just point everything to the -lts kernel.
| How to switch arch linux to lts kernel? |
1,329,874,965,000 |
Say I have process 1 and process 2. Both have a file descriptor corresponding to the integer 4.
In each process, however, file descriptor 4 points to a totally different file in the kernel's Open File Table:
How is that possible? Isn't a file descriptor supposed to be the index to a record in the Open File Table?
|
The file descriptor, i.e. the 4 in your example, is the index into the process-specific file descriptor table, not the open file table. The file descriptor entry itself contains an index to an entry in the kernel's global open file table, as well as file descriptor flags.
| How can same fd in different processes point to the same file? |
1,329,874,965,000 |
From reading the man pages on the read() and write() calls it appears that these calls get interrupted by signals regardless of whether they have to block or not.
In particular, assume
a process establishes a handler for some signal.
a device is opened (say, a terminal) with the O_NONBLOCK not set (i.e. operating in blocking mode)
the process then makes a read() system call to read from the device and as a result executes a kernel control path in kernel-space.
while the process is executing its read() in kernel-space, the signal for which the handler was installed earlier is delivered to that process and its signal handler is invoked.
Reading the man pages and the appropriate sections in SUSv3 'System Interfaces volume (XSH)', one finds that:
i. If a read() is interrupted by a signal before it reads any data (i.e. it had to block because no data was available), it returns -1 with errno set to [EINTR].
ii. If a read() is interrupted by a signal after it has successfully read some data (i.e. it was possible to start servicing the request immediately), it returns the number of bytes read.
Question A):
Am I correct to assume that in either case (block/no block) the delivery and handling of the signal is not entirely transparent to the read()?
Case i. seems understandable since the blocking read() would normally place the process in the TASK_INTERRUPTIBLE state so that when a signal is delivered, the kernel places the process into TASK_RUNNING state.
However when the read() doesn't need to block (case ii.) and is processing the request in kernel-space, I would have thought that the arrival of a signal and its handling would be transparent much like the arrival and proper handling of a HW interrupt would be. In particular I would have assumed that upon delivery of the signal, the process would be temporarily placed into user mode to execute its signal handler from which it would return eventually to finish off processing the interrupted read() (in kernel-space) so that the read() runs its course to completion after which the process returns back to the point just after the call to read() (in user-space), with all of the available bytes read as a result.
But ii. seems to imply that the read() is interrupted: data was available immediately, yet it returns only some of the data (instead of all of it).
This brings me to my second (and final) question:
Question B):
If my assumption under A) is correct, why does the read() get interrupted, even though it does not need to block because there is data available to satisfy the request immediately?
In other words, why is the read() not resumed after executing the signal handler, eventually resulting in all of the available data (which was available after all) to be returned?
|
Summary: you're correct that receiving a signal is not transparent, neither in case i (interrupted without having read anything) nor in case ii (interrupted after a partial read). To do otherwise in case i would require making fundamental changes both to the architecture of the operating system and the architecture of applications.
The OS implementation view
Consider what happens if a system call is interrupted by a signal. The signal handler will execute user-mode code. But the syscall handler is kernel code and does not trust any user-mode code. So let's explore the choices for the syscall handler:
Terminate the system call; report how much was done to the user code. It's up to the application code to restart the system call in some way, if desired. That's how unix works.
Save the state of the system call, and allow the user code to resume the call. This is problematic for several reasons:
While the user code is running, something could happen to invalidate the saved state. For example, if reading from a file, the file might be truncated. So the kernel code would need a lot of logic to handle these cases.
The saved state can't be allowed to keep any lock, because there's no guarantee that the user code will ever resume the syscall, and then the lock would be held forever.
The kernel must expose new interfaces to resume or cancel ongoing syscalls, in addition to the normal interface to start a syscall. This is a lot of complication for a rare case.
The saved state would need to use resources (memory, at least); those resources would need to be allocated and held by the kernel but be counted against the process's allotment. This isn't insurmountable, but it is a complication.
Note that the signal handler might make system calls that themselves get interrupted; so you can't just have a static resource allotment that covers all possible syscalls.
And what if the resources cannot be allocated? Then the syscall would have to fail anyway. Which means the application would need to have code to handle this case, so this design would not simplify the application code.
Remain in progress (but suspended), create a new thread for the signal handler. This, again, is problematic:
Early unix implementations had a single thread per process.
The signal handler would risk stepping on the syscall's toes. This is an issue anyway, but in the current unix design, it's contained.
Resources would need to be allocated for the new thread; see above.
The main difference with an interrupt is that the interrupt code is trusted, and highly constrained. It's usually not allowed to allocate resources, or run forever, or take locks and not release them, or do any other kind of nasty things; since the interrupt handler is written by the OS implementer himself, he knows that it won't do anything bad. On the other hand, application code can do anything.
The application design view
When an application is interrupted in the middle of a system call, should the syscall continue to completion? Not always. For example, consider a program like a shell that's reading a line from the terminal, and the user presses Ctrl+C, triggering SIGINT. The read must not complete, that's what the signal is all about. Note that this example shows that the read syscall must be interruptible even if no byte has been read yet.
So there must be a way for the application to tell the kernel to cancel the system call. Under the unix design, that happens automatically: the signal makes the syscall return. Other designs would require a way for the application to resume or cancel the syscall at its leisure.
The read system call is the way it is because it's the primitive that makes sense, given the general design of the operating system. What it means is, roughly, “read as much as you can, up to a limit (the buffer size), but stop if something else happens”. To actually read a full buffer involves running read in a loop until as many bytes as possible have been read; this is a higher-level function, fread(3). Unlike read(2), which is a system call, fread is a library function, implemented in user space on top of read. It's suitable for an application that reads from a file or dies trying; it's not suitable for a command line interpreter or for a networked program that must throttle connections cleanly, nor for a networked program that has concurrent connections and doesn't use threads.
The example of read in a loop is provided in Robert Love's Linux System Programming:
ssize_t ret;

while (len != 0 && (ret = read (fd, buf, len)) != 0) {
    if (ret == -1) {
        if (errno == EINTR)
            continue;
        perror ("read");
        break;
    }
    len -= ret;
    buf += ret;
}
It takes care of case i and case ii and few more.
| Interruption of system calls when a signal is caught |
1,329,874,965,000 |
The Linux kernel swaps out most pages from memory when I run an application that uses most of the 16GB of physical memory. After the application finishes, every action (typing commands, switching workspaces, opening a new web page, etc.) takes very long to complete because the relevant pages first need to be read back in from swap.
Is there a way to tell the Linux kernel to copy pages from swap back into physical memory without manually touching (and waiting for) each application? I run lots of applications so the wait is always painful.
I often use swapoff -a && swapon -a to make the system responsive again, but this clears the pages from swap, so they need to be written again the next time I run the script.
Is there a kernel interface, perhaps using sysfs, to instruct the kernel to read all pages from swap?
Edit: I am indeed looking for a way to make all of swap swapcached. (Thanks derobert!)
[P.S.
serverfault.com/questions/153946/… and serverfault.com/questions/100448/… are related topics but do not address the question of how to get the Linux kernel to copy pages from swap back into memory without clearing swap.]
|
Based on the memdump program originally found here, I've created a script to selectively read specified applications back into memory, called remember:
#!/bin/bash
declare -A Q
for i in "$@"; do
    E=$(readlink /proc/$i/exe);
    if [ -z "$E" ]; then
        #echo skipped $i;
        continue;
    fi
    if echo $E | grep -qF memdump; then
        #echo skipped $i >&2;
        continue;
    fi
    if [ -n "${Q[${E}]}" ]; then
        #echo already $i >&2;
        continue;
    fi
    echo "$i $E" >&2
    memdump $i 2> /dev/null
    Q[$E]=$i
done | pv -c -i 2 > /dev/null
Usage: something like
# ./remember $(< /mnt/cgroup/tasks )
1 /sbin/init
882 /bin/bash
1301 /usr/bin/hexchat
...
2.21GiB 0:00:02 [ 1.1GiB/s] [ <=> ]
...
6838 /sbin/agetty
11.6GiB 0:00:10 [1.16GiB/s] [ <=> ]
...
23.7GiB 0:00:38 [ 637MiB/s] [ <=> ]
#
It quickly skips over non-swapped memory (gigabytes per second) and slows down when swap is needed.
| Making Linux read swap back into memory |
1,329,874,965,000 |
I have a laptop with a multi-gesture touchpad. My touchpad has never worked in any Linux distro, such as Ubuntu, Fedora, openSUSE, Linux Mint, Knoppix, Puppy, Slitaz and many more. I have tried lots of things but nothing worked. I have been struggling with the Synaptics drivers for over a year, but it doesn't work either.
Then somewhere I read about the i8042.nomux kernel option. So I booted Ubuntu with following options:
i8042.nomux=1 i8042.reset
This made my touchpad work on all variants of Ubuntu and its derivatives like Linux Mint.
I am eager to know about these options. If I knew what it does exactly, I would be able to use my touchpad in all linux distros, as this option only works with Ubuntu.
|
This is an arcane option, only necessary on some rare devices (one of which you have). The only documentation is one line in the kernel parameters list.
The i8042 controller controls PS/2 keyboards and mice in PCs. It seems that on your laptop, both the keyboard and the touchpad are connected through that chip.
From what I understand from the option name and a brief skim of the source code (don't rely on this to write an i8042 driver!), some i8042 chips are capable of multiplexing data coming from multiple pointing devices. The traditional PS/2 interface only provides for one keyboard and one mouse; modern laptops often have two or more of a touchpad, a trackstick and an external PS/2 plug. Some controllers follow the active PS/2 multiplexing specification, which permits up to 4 devices; the data sent by each device carries an indication of which device it comes from.
The Linux driver tries to find out whether the i8042 controller supports multiplexing, but sometimes guesses wrongly. With the i8042.nomux=1 parameter, the driver does not try to detect whether the controller supports multiplexing and assumes that it doesn't. With the i8042.reset parameter, the driver resets the controller when starting, which may be useful to disable multiplexing mode if the controller does support it but in a buggy way.
| What does the 'i8042.nomux=1' kernel option do during booting of Ubuntu? |
1,329,874,965,000 |
I have noticed the following option in the kernel: CONFIG_DEVTMPFS
Device Drivers -> Generic Driver Options -> Maintain devtmpfs to mount at /dev
And I see that it is enabled by default in the Debian distribution kernel 3.2.0-4-amd64
I am trying to understand what difference this option makes. Without this option, /dev is mounted as tmpfs; with this option, it is mounted as devtmpfs. Other than that, I don't see any difference.
The help did not clarify it for me either:
This creates a tmpfs/ramfs filesystem instance early at bootup. In this filesystem, the kernel driver core maintains device nodes with their default names and permissions for all registered devices with an assigned major/minor number.
It provides a fully functional /dev directory, where usually udev runs on top, managing permissions and adding meaningful symlinks.
In very limited environments, it may provide a sufficient functional /dev without any further help. It also allows simple rescue systems, and reliably handles dynamic major/minor numbers.
Could somebody please explain the difference between using CONFIG_DEVTMPFS vs the standard /dev?
|
devtmpfs is a file system with automated device nodes populated by the kernel. This means you don't have to have udev running nor to create a static /dev layout with additional, unneeded and not present device nodes. Instead the kernel populates the appropriate information based on the known devices.
On the other hand, the standard /dev handling requires either udev (an additional daemon) running, or device nodes statically created on /dev.
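For reference, the related kernel configuration options look like this; the second one additionally has the kernel mount the filesystem on /dev by itself, before init runs:

```
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
```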
| using devtmpfs for /dev |
1,329,874,965,000 |
I want to create a USB-to-USB data transfer system in Linux (preferably Ubuntu). For this I want to use no external hardware or switch (except this cable). It's going to be like mounting a USB drive to a system, but in this scenario one of the Linux systems is going to be mounted on the other. How can I create this?
Are there any kernel modules available, given my experience with kernel programming is very basic?
|
Yes this is possible, but not by cutting two USB cables with USB-A connectors (the kind that normally goes into the USB port on your motherboard) and cross-connecting the data wires. If you connect the USB power lines on such a self-made cable, you are likely to end up frying your on-board USB handling chip. Don't try this at home!
On most computer boards the chips handling USB are host-only. Not only that, but they also handle a lot of the low-level communication, to speed things up and reduce the load on the CPU. It is not as if you could program your computer to drive the pins on the USB port as if it were a non-host. The devices capable, at the chip level, of switching between acting as a host and connecting to a host are few, as this requires a much more expensive chip¹. This is e.g. why intelligent devices like my smart-phone, GPS and ebook reader, although they all run Linux or something similar, do not allow me to use ssh to communicate when connected via a normal USB cable.
Those devices go into some dumb mode when connected, where the host (my desktop system) can use their storage as a USB disc. After disconnecting, the device uses the same interface as a host to get at the data (although no cable connection is required; this happens internally). With that kind of device, even if Linux runs on both, there is no communication between the systems, i.e. between the Linuxes, regardless of the normal micro or mini USB cable connecting them to my desktop.
Between two desktop PCs the above is normally impossible, as you would need a USB-A to USB-A cable, which is not common (it would not work with the normal chips driving the connections anyway).
Any solution doing USB to USB with two USB-A connectors that I have seen is based on a cable with some electronics in between (much like a USB → Serial plugged into a Serial → USB cable, but all in one piece). These normally require drivers to do the transfer, although you might be able to run UUCP or something else over such a cable, like you would over a "normal" serial port. This probably requires inetd and proper configuration to log in on the other computer as well.
¹ The only device I have that is software changeable in this way is a Arduino board with exactly such a special chip. Just this chip made the board twice as expensive as a normal Arduino board.
| Is USB-to-USB data transfer between two Linux OSes possible? |
1,329,874,965,000 |
This question is two-fold:
First, how do you manually detach a driver from a USB device and attach a different one? For example, I have a device that when connected automatically uses the usb-storage driver.
usbview output
Vendor Id: xxxx
Product Id: xxxx
...
Number of Interfaces: 2
Interface Number: 0
Name: usb-storage
Number of Endpoints: 2
...
Interface Number: 1
Name: (none)
Number of Endpoints: 2
...
I do not want to use the usb-storage driver, so in my application I use the libusb library to detach the usb-storage driver and then I claim the interface. I then can send data to and from the applications running on my USB device and on my host Linux system.
How do you detach a driver manually outside of an application?
Second, how do I automatically assign the driver to attach on device plugin? I currently have a udev rule setup to set the device permissions automatically:
SUBSYSTEM=="usb", ATTR{idVendor}=="xxxx", MODE="0666"
Can I use udev rules to assign drivers to specific interfaces on the USB device? For example, if I wanted the usbnet module to be used on automatically on interface 0 instead of usb-storage, is that possible in udev?
|
For the first part of the question, I've looked and couldn't find a better way to detach a USB driver than what you're already doing with libusb.
As for the second part of the question, udev can react to driver loading, but not force a specific driver to be assigned to a device.
Every driver in the Linux kernel is responsible for one or more devices. The driver itself chooses which devices it supports. It does this programmatically, i.e. by checking the device's vendor and product ID, or, if those aren't available (e.g. an old device), by performing some auto-detection heuristics and sanity checks. Once the driver is confident it has found a device it supports, it attaches itself to it. In short, you often can't force a particular driver to use a particular device. Sometimes, however, a device driver is generous with what it accepts, and a device it doesn't know about can still work. Your mileage will vary! In the past, I've had to manually add weird PCI device/vendor IDs to drivers that should support them, with mixed success and a few amusing kernel crashes.
Now, in the case of modules, there's an extra step. The module loader is woken up by the kernel when a new device is detected. It's passed a ‘modalias’ string, which identifies the device and looks something like this for USB devices:
usb:v046DpC221d0170dc00dsc00dp00ic03isc00ip00
This string contains the device class (usb) and class-specific information (vendor/device/serial number, device class, etc). Each kernel driver contains a line such as:
MODULE_ALIAS("usb:...")
which must match the modalias (wildcards are used to match multiple devices). If the modalias matches one that the driver supports, this driver is loaded (or notified of the new device, if it's there already).
You can see the supported devices (by modalias) and their associated modules with
less /lib/modules/`uname -r`/modules.alias
If you grep for the usb-storage device driver, you'll see it has some specific devices it supports by vendor and device ID, and will also attempt to support any device with the right (storage) class, no matter the vendor/device.
You can influence this using userspace mechanisms on your OS (/etc/modprobe.d/ on Debian and friends). You can blacklist modules, or you can specify modules to be loaded by modalias, just like the modules.alias file (and using the same syntax). depmod -a will then regenerate the module loader's patterns.
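As an illustration only (the vendor/product/class pattern below is hypothetical, as is the file name), such an override in /etc/modprobe.d/ might look like:

```
# /etc/modprobe.d/local-usbnet.conf  (hypothetical IDs)
# Load usbnet when this modalias pattern shows up, and keep
# usb-storage from being loaded automatically:
alias usb:v1234pABCD*ic0A* usbnet
blacklist usb-storage
```

As noted below, this only influences which modules get loaded; it cannot make a driver claim a device it does not support.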
However, even though you can lead this particular horse to water, but you can't make him drink. If the driver has no support for your device, it should ignore it.
This is the theory in the general case.
In practice, and in the case of USB, I see your device appears to have two interfaces, of which storage is one. The kernel will attach to the storage interface of the overall device. If the other interface has the right class, the usbnet driver could attach to it. Yes, you can have multiple drivers attached to the same physical device, because a USB device exports multiple interfaces (e.g. my Logitech G15 keyboard exports two because it has a keyboard device and an LCD screen, each of which is handled by a separate driver).
The fact that the second interface of your USB device isn't detected is indicative of lack of support in the kernel. Whatever the case, you can list the device interfaces/endpoints in excruciating detail using lsusb -v | less, then scroll down to your particular device (you can limit the output by vendor:device ID or USB path if you're so inclined).
Please note: I'm oversimplifying a bit here with respect to the logical structure of USB devices. Blame the USB consortium. :)
| How to assign USB driver to device |
1,329,874,965,000 |
First background. I am developing a driver for Logitech game-panel devices. It's a keyboard with a screen on it. The driver is working nicely but by default the device is handled by HID. In order to prevent HID taking over the device before my driver, I can blacklist it in hid-core.c. This works but is not the best solution as I am working with several people and we all have to keep patching our HID module which is becoming a chore, especially as it often involves rebuilding initramfs and such.
I did some research on this problem and I found this mailing list post, which eventually took me to this article on LWN. This describes a mechanism for binding devices to specific drivers at runtime. This seems like exactly what I need.
So, I tried it. I was able to unbind the keyboard from HID. This worked and as expected I could no longer type on it. But when I tried to bind it to our driver I get "error: no such device" and the operation fails.
So my question is: How do I use kernel bind/unbind operations to replicate what happens when you blacklist a HID device in hid-core and supply your own driver? - that is - to replace the need to patch hid-core.c all the time?
The source of our driver is here: https://github.com/ali1234/lg4l
|
Ok, turns out the answer was staring me in the face.
Firstly, whether using our custom driver, or using the generic one that normally takes over the device, it's still all ultimately controlled by HID, and not USB.
Previously I tried to unbind it from HID, which is not the way to go. HID has sub-drivers; the one that takes over devices that have no specialized driver is called generic-usb. This is what I needed to unbind from, before binding to hid-g19. Also, I needed to use the HID address which looks like "0003:046d:c229.0036" and not the USB address which looks like "1-1.1:1.1".
So before rebinding I would see this on dmesg:
generic-usb 0003:046D:C229.0036: input,hiddev0,hidraw4: USB HID v1.11 Keypad [Logitech G19 Gaming Keyboard] on usb-0000:00:13.2-3.2/input1
Then I do:
echo -n "0003:046D:C229.0036" > /sys/bus/hid/drivers/generic-usb/unbind
echo -n "0003:046D:C229.0036" > /sys/bus/hid/drivers/hid-g19/bind
And then I see on dmesg:
hid-g19 0003:046D:C229.0036: input,hiddev0,hidraw4: USB HID v1.11 Keypad [Logitech G19 Gaming Keyboard] on usb-0000:00:13.2-3.2/input1
So like I said, staring me in the face, because the two key pieces of information are the first two things on the line when the device binds...
| How to use Linux kernel driver bind/unbind interface for USB-HID devices? |
1,329,874,965,000 |
"Everything is a file" in the UNIX World.
The sentence above is famous. When I run echo "hello programmer" >> /dev/tty1, I can see the given string on teletype 1, and so on.
What and where is the file for each socket? Suppose my friend connects to my PC and his IP is h.h.h.h — how can I access the corresponding file? Is that possible?
|
man 7 unix:
The AF_UNIX (also known as AF_LOCAL) socket family is used to communicate between processes on the same machine efficiently. Traditionally, UNIX domain sockets can be either unnamed, or bound to a file system pathname (marked as being of type socket). Linux also supports an abstract namespace which is independent of the file system.
I.e. not every socket can be seen as a file (in the sense of "no file without a file name").
But there are files with lists of sockets (e.g. /proc/net/tcp); not exactly what "everything is a file" means, though.
| Is there a file for each socket? |
1,329,874,965,000 |
Briefly, I have a physical address inside kernel (9,932,111,872 or 0x250000000), which is apparently aligned to 4KiB (page size). When I use the kernel __va() function to get the kernel virtual address, I got something like 0xf570660f (different on each boot), which is not aligned to 4KiB.
I'm on a 64-bit system so there's no HIGHMEM, and I thought that, due to the linear memory model, the virtual address of a 4KiB-aligned physical address should also be 4KiB-aligned. Did I miss something? Shouldn't the virtual address simply be phys_addr + PAGE_OFFSET? Or is this an effect of sparsemem? But even then it should be 4KiB-aligned, shouldn't it?
Here are more details:
I'm working in an x86 64-bit QEMU VM. I'm trying to use a PMEM device in DEV-DAX mode as normal memory. I can get its physical start address (0x250000000), which has been confirmed to be right. Then I need to translate it to a kernel virtual address so that I can use it as I need. Here's some code:
static long nvpc_map_whole_dev(struct dax_device *dax_dev, void **kaddr, pfn_t *pfn)
{
    // get the device
    struct dev_dax_nvpc *dax_nvpc = (struct dev_dax_nvpc *)dax_get_private(dax_dev);

    // get the virtual address and the pfn_t
    *kaddr = __va(dax_nvpc->phys_start);
    *pfn = phys_to_pfn_t(dax_nvpc->phys_start, PFN_MAP);

    pr_info("[NVPC DEBUG]: paddr %#llx kaddr %p pfn %lu\n", dax_nvpc->phys_start, *kaddr, pfn_t_to_pfn(*pfn));
    pr_info("[NVPC DEBUG]: kaddr-paddr %#llx\n", __pa(*kaddr));
    return PHYS_PFN(dax_nvpc->size);
}
And here's the result I got:
In that output, the paddr (dax_nvpc->phys_start) and the pfn are both right. But the kaddr (virtual address) is confusing to me. And when I translate the kaddr back to a physical address (the next output line), the result turns out to be right.
What's more, I can do any operation on the memory from kaddr to kaddr + dax_nvpc->size, there's no page fault.
Could anyone tell me why the virtual address is not 4KiB-aligned? Am I missing something? Further, can I do something to make sure that the virtual address is also aligned to a page?
|
The reason is the %p in printk. When I changed %p to %#llx for a quick check, the output of the kernel address became 4KiB-aligned as expected.
The reason can be found here: Kernel Doc: printk-formats pointer-types. %p inside printk will print a hashed pointer address to prevent leaking kernel information. That's why the pointer seems weird. If you want to check the real virtual address just use %px, or add no_hash_pointers to the boot parameter. Refer to the link for more usage.
| Why is the virtual address not 4KiB-aligned when its physical address is aligned to 4KiB? |
1,329,874,965,000 |
I have been learning some scheduling concepts. Currently my understanding so far is as below.
There are real time processes and non real time processes.
Non-real-time processes can have nice values for their priority in the range -20 to +19. A higher positive value indicates that the process has a lower priority.
Real-time processes will have a niceness value listed as "-", as explained in this answer here. This is mainly because real-time processes have higher priorities than non-real-time processes, so niceness values do not apply to them.
Now, I can use chrt to see the real time attributes of a process.
For a real time process, the chrt gives output as,
chrt -p 5
pid 5's current scheduling policy: SCHED_FIFO
pid 5's current scheduling priority: 99
As we can see for process 5, the priority is 99 which is the highest. Also, the scheduling policy is SCHED_FIFO
Now, for a non real time process, the chrt gives output as,
chrt -p 22383
pid 22383's current scheduling policy: SCHED_OTHER
pid 22383's current scheduling priority: 0
As we can see for process 22383, the priority is 0 and the scheduling policy is SCHED_OTHER.
Questions
Is it possible for me to make any process as real time process?
Is it possible for me to set some other scheduling algorithm other
than SCHED_OTHER for a non real time process?
From here, I also see that I could modify the attribute for a
running process as,
chrt -p prio pid
Also, I see chrt -m gives me the list of scheduling algorithms. The command gives me the output as,
SCHED_OTHER min/max priority : 0/0
SCHED_FIFO min/max priority : 1/99
SCHED_RR min/max priority : 1/99
SCHED_BATCH min/max priority : 0/0
SCHED_IDLE min/max priority : 0/0
Now, as suggested above, if I set chrt -p 55 22383 which algorithm will be used?
|
Question 1
It is possible for a non-root user to run a process with real-time priority as well. This can be configured in the /etc/security/limits.conf file. I see the below contents in that file.
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
If we check the item section, we see the entry below, which allows granting users a maximum real-time priority.
# - rtprio - max realtime priority
Question 2 and Question 3
To set scheduling policy to SCHED_FIFO, enter:
chrt -f -p [1..99] {pid}
To set scheduling policy to SCHED_RR, enter:
chrt -r -p [1..99] {pid}
So to answer question 3, we should verify the scheduling algorithms available and the priorities using the chrt -m command and then use any scheduling algorithm that suits our need. To set different priorities, we could use the commands as above.
| Real time processes scheduling in Linux |
1,329,874,965,000 |
I've read in many places that Linux creates a kernel thread for each user thread in a Java VM. (I see the term "kernel thread" used in two different ways:
a thread created to do core OS work and
a thread the OS is aware of and schedules to perform user work.
I am talking about the latter type.)
Is a kernel thread the same as a kernel process, since Linux processes support shared memory spaces between parent and child, or is it truly a different entity?
|
There is absolutely no difference between a thread and a process on Linux. If you look at clone(2) you will see a set of flags that determine what is shared, and what is not shared, between the threads.
Classic processes are just threads that share nothing; you can share what components you want under Linux.
This is not the case on other OS implementations, where there are much more substantial differences.
| Are Linux kernel threads really kernel processes? |
1,329,874,965,000 |
As far as I know, there are 4 main types of network interfaces in Linux: tun, tap, bridge and physical.
When I'm doing sysadmin work on machines running KVM, I usually come across tap, bridge and physical interfaces on the same machine without being able to tell them apart. I can't see any significant differences in the ifconfig output, nor in the ip output.
How can I know if an interface is a tun, tap, bridge, or physical?
note: I don't claim that there are no other types of network interfaces in Linux, but I know only these 4.
|
I don't think there's an easy way to distinguish them. Poking around in /sys/class/net I found the following distinctions:
Physical devices have a /sys/class/net/eth0/device symlink
Bridges have a /sys/class/net/br0/bridge directory
TUN and TAP devices have a /sys/class/net/tap0/tun_flags file
Bridges and loopback interfaces have 00:00:00:00:00:00 in /sys/class/net/lo/address
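Those checks can be combined into a small classification script. This is only a sketch built from the heuristics listed above, so anything else (VLANs, bonds, etc.) will show up as "other":

```shell
#!/bin/sh
# Classify interfaces using the /sys/class/net hints described above.
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    if   [ -e "$iface/tun_flags" ]; then kind="tun/tap"
    elif [ -d "$iface/bridge" ];    then kind="bridge"
    elif [ -e "$iface/device" ];    then kind="physical"
    elif [ "$(cat "$iface/address")" = "00:00:00:00:00:00" ]; then kind="bridge or loopback"
    else kind="other"
    fi
    printf '%-12s %s\n' "$name" "$kind"
done
```

On a typical machine this prints at least "lo" (all-zero address) plus your physical NICs.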
| How to know if a network interface is tap, tun, bridge or physical? |
1,329,874,965,000 |
On Ubuntu 15.10, when I want to format an external 4 TB disk connected via USB3 (on a StarTech USB/eSATA hard disk dock) with the NTFS file system, I get a lot of I/O errors, and the format fails.
I tried GParted v 0.19, and GParted on the latest live CD gparted-live-0.23.0-1-i586.iso, with the same problem.
After that, and using GParted on Ubuntu 15.10 and the same USB3 connection, I tried to format as ext4, without problems. It's really strange.
Because I don't know whether the mkfs.ext4 tool used by GParted tests the disk the way mkntfs does, I first supposed that the problem was with the new disk itself. So I tried running e2fsck -c on this HDD. On Ubuntu 15.10, e2fsck -c freezes at 0.45%, and I don't know why.
So, using another version of Ubuntu (15.04) on the same PC, I tried connecting the same 4 TB disk using the eSATA connection of the StarTech HDD dock. This time, e2fsck -c ran correctly.
After a few hours, here is the result:
sudo e2fsck -c /dev/sdc1
e2fsck 1.42.12 (29-Aug-2014)
ColdCase: recovering journal
Checking for bad blocks (read-only test): done
ColdCase: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ColdCase: ***** FILE SYSTEM WAS MODIFIED *****
ColdCase: 11/244195328 files (0.0% non-contiguous), 15377150/976754176 blocks
I'm not an expert in badblock outputs, but it seems there is no bad block at all on this disk?
So, if the problem is not the hard drive, maybe the problem can be linked to mkntfs used over USB3? What other tests can I try?
Some information about the USB dock:
➜ ~ lsusb
...
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
...
➜ ~ sudo lsusb -v -d 174c:55aa
[sudo] password for reyman:
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 3.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 9
idVendor 0x174c ASMedia Technology Inc.
idProduct 0x55aa ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
bcdDevice 1.00
iManufacturer 2 asmedia
iProduct 3 ASM1053E
iSerial 1 123456789012
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 121
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 36mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 80 Bulk-Only
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 1
bNumEndpoints 4
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 98
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Data-in pipe (0x03)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Data-out pipe (0x04)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Status pipe (0x02)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x04 EP 4 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 0
Command pipe (0x01)
Binary Object Store Descriptor:
bLength 5
bDescriptorType 15
wTotalLength 22
bNumDeviceCaps 2
USB 2.0 Extension Device Capability:
bLength 7
bDescriptorType 16
bDevCapabilityType 2
bmAttributes 0x00000002
Link Power Management (LPM) Supported
SuperSpeed USB Device Capability:
bLength 10
bDescriptorType 16
bDevCapabilityType 3
bmAttributes 0x00
wSpeedsSupported 0x000e
Device can operate at Full Speed (12Mbps)
Device can operate at High Speed (480Mbps)
Device can operate at SuperSpeed (5Gbps)
bFunctionalitySupport 1
Lowest fully-functional device speed is Full Speed (12Mbps)
bU1DevExitLat 10 micro seconds
bU2DevExitLat 2047 micro seconds
Device Status: 0x0001
Self Powered
Information about the disk in question: /dev/sdd
➜ ~ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: ACD5760B-2898-435E-82C6-CB3119758C9B
Device Start End Sectors Size Type
/dev/sdd1 2048 7814035455 7814033408 3.7T Linux filesystem
dmesg returns a lot of errors about the disk; see this extract:
[ 68.856381] scsi host6: uas_eh_bus_reset_handler start
[ 68.968376] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd
[ 68.989825] scsi host6: uas_eh_bus_reset_handler success
[ 99.881608] sd 6:0:0:0: [sdd] tag#12 uas_eh_abort_handler 0 uas-tag 13 inflight: CMD OUT
[ 99.881618] sd 6:0:0:0: [sdd] tag#12 CDB: Write(16) 8a 00 00 00 00 00 e8 c4 08 00 00 00 00 08 00 00
[ 99.881856] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT
[ 99.881861] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 f0 00 00 00 10 00 00
[ 99.881967] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT
[ 99.881972] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 00 00 00 00 f0 00 00
[ 99.882085] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT
[ 99.882090] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 01 07 10 00 00 00 f0 00 00
[ 99.882171] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT
[ 99.882175] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 01 06 20 00 00 00 f0 00 00
[ 99.882255] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD OUT
[ 99.882258] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 01 05 30 00 00 00 f0 00 00
[ 99.882338] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT
[ 99.882342] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 cd 01 04 40 00 00 00 f0 00 00
[ 99.882419] sd 6:0:0:0: [sdd] tag#11 uas_eh_abort_handler 0 uas-tag 12 inflight: CMD OUT
[ 99.882423] sd 6:0:0:0: [sdd] tag#11 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 00 00 00 00 f0 00 00
[ 99.882480] sd 6:0:0:0: [sdd] tag#10 uas_eh_abort_handler 0 uas-tag 11 inflight: CMD OUT
[ 99.882483] sd 6:0:0:0: [sdd] tag#10 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 f0 00 00 00 f0 00 00
[ 99.882530] sd 6:0:0:0: [sdd] tag#9 uas_eh_abort_handler 0 uas-tag 10 inflight: CMD OUT
[ 99.882532] sd 6:0:0:0: [sdd] tag#9 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fa e0 00 00 00 f0 00 00
[ 99.882593] sd 6:0:0:0: [sdd] tag#8 uas_eh_abort_handler 0 uas-tag 9 inflight: CMD
[ 99.882596] sd 6:0:0:0: [sdd] tag#8 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fb d0 00 00 00 f0 00 00
[ 99.882667] scsi host6: uas_eh_bus_reset_handler start
[ 99.994700] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd
[ 100.015613] scsi host6: uas_eh_bus_reset_handler success
[ 135.962175] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT
[ 135.962185] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 f0 00 00 00 10 00 00
[ 135.962428] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT
[ 135.962436] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 00 00 00 00 f0 00 00
[ 135.962567] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT
[ 135.962576] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 40 77 10 00 00 00 f0 00 00
[ 135.962682] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT
[ 135.962690] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 40 76 20 00 00 00 f0 00 00
[ 135.962822] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD
[ 135.962830] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 40 75 30 00 00 00 f0 00 00
[ 160.904916] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT
[ 160.904926] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 00 00 29 08 00 00 00 08 00 00
[ 160.905068] scsi host6: uas_eh_bus_reset_handler start
I found from this forum post that there are known problems with UAS and Linux kernels. The issue seems to be well known around the internet; USB3 + Linux appears problematic in many cases, but mostly for old kernels. Any ideas to resolve this problem on a more recent kernel?
My kernel is:
➜ ~ uname -r
4.2.0-16-generic
Hmm, it seems UAS is broken for different USB3 chips of ASMedia Technology Inc.; you can see more information here.
I suppose this is my problem, but how can I resolve it now, and which chip is used for the USB3 implementation in the StarTech dock?
|
I ran into this issue today on a 4.8.0 kernel.
According to this forum post, it can be circumvented by disabling UAS for the device with a usb-storage quirk (replace 357d:7788 with your device's vendor:product ID as shown by lsusb — 174c:55aa for the dock in the question):
$ echo options usb-storage quirks=357d:7788:u | sudo tee /etc/modprobe.d/blacklist_uas_357d.conf
$ sudo update-initramfs -u
and rebooting.
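For what it's worth, the same quirk can also be applied from the kernel command line instead of a modprobe.d file; the flag letter u tells usb-storage to ignore UAS for that device. Again, the vendor:product ID below is the one from the question's lsusb output, not a universal value:

```
# e.g. appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# followed by update-grub and a reboot
usb-storage.quirks=174c:55aa:u
```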
| Connection problem with USB3 external storage on Linux (UAS driver problem) |
1,329,874,965,000 |
Do I have to run the perf userspace tool as the system administrator (root), or can I run it (or at least some subcommands) as an ordinary user?
|
What you can do with perf without being root depends on the kernel.perf_event_paranoid sysctl setting.
kernel.perf_event_paranoid = 2: you can only take user-space measurements of your own processes (no kernel profiling). The perf utility is still useful for analysing existing records with perf report, perf timechart or perf trace.
kernel.perf_event_paranoid = 1: you can trace a command with perf stat or perf record, and get kernel profiling data.
kernel.perf_event_paranoid = 0: you can trace a command with perf stat or perf record, and get CPU event data.
kernel.perf_event_paranoid = -1: you get raw access to kernel tracepoints (specifically, you can mmap the file created by perf_event_open, I don't know what the implications are).
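You can check the current setting without root; changing it requires root:

```shell
# Read the current level (an integer, typically between -1 and 2)
cat /proc/sys/kernel/perf_event_paranoid
# Equivalently:
sysctl kernel.perf_event_paranoid
# To relax it (as root), e.g. to allow kernel profiling:
#   sysctl -w kernel.perf_event_paranoid=1
```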
| Do I need root (admin) permissions to run userspace 'perf' tool? (perf events are enabled in Linux kernel) |
1,329,874,965,000 |
I would like to know how Message Queues are implemented in the Linux Kernel.
|
The Linux kernel (2.6) implements two message queues:
(rather 'message lists', as the implementation uses a linked list and does not strictly follow the FIFO principle)
System V IPC messages
The message queue from System V.
A process can invoke msgsnd() to send a message. It needs to pass the IPC identifier of the receiving message queue, the size of the message, and a message structure including the message type and text.
On the other side, a process invokes msgrcv() to receive a message, passing the IPC identifier of the message queue, a buffer where the message should be stored, the size, and a value t.
t selects which message is returned from the queue: a positive value returns the first message whose type equals t, a negative value returns the first message with the lowest type less than or equal to |t|, and zero returns the first message in the queue.
Those functions are defined in include/linux/msg.h and implemented in ipc/msg.c
There are limitations upon the size of a message (max), the total number of messages (mni) and the total size of all messages in the queue (mnb):
$ sysctl kernel.msg{max,mni,mnb}
kernel.msgmax = 8192
kernel.msgmni = 1655
kernel.msgmnb = 16384
The output above is from a Ubuntu 10.10 system, the defaults are defined in msg.h.
More incredibly old System V message queue stuff explained here.
POSIX Message Queue
The POSIX standard defines a message queue mechanism based on System V IPC's message queue, extending it by some functionalities:
Simple file-based interface to the application
Support for message priorities
Support for asynchronous notification
Timeouts for blocking operations
See ipc/mqueue.c
Example
util-linux provides some programs for analyzing and modifying message queues and the POSIX specification gives some C examples:
Create a message queue with ipcmk; generally you would do this by calling C functions like ftok() and msgget():
$ ipcmk -Q
Let's see what happened, using ipcs or cat /proc/sysvipc/msg:
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 0 0
Now fill the queue with some messages:
$ cat <<EOF >msg_send.c
#include <string.h>
#include <sys/msg.h>

int main() {
    int msqid = 65536;
    struct message {
        long type;
        char text[20];
    } msg;

    msg.type = 1;
    strcpy(msg.text, "This is message 1");
    msgsnd(msqid, (void *) &msg, sizeof(msg.text), IPC_NOWAIT);
    strcpy(msg.text, "This is message 2");
    msgsnd(msqid, (void *) &msg, sizeof(msg.text), IPC_NOWAIT);
    return 0;
}
EOF
Again, you generally do not hardcode the msqid in the code.
$ gcc -o msg_send msg_send.c
$ ./msg_send
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 40 2
And the other side, which will be receiving the messages:
$ cat <<EOF >msg_recv.c
#include <stdio.h>
#include <sys/msg.h>

int main() {
    int msqid = 65536;
    struct message {
        long type;
        char text[20];
    } msg;
    long msgtyp = 0;

    msgrcv(msqid, (void *) &msg, sizeof(msg.text), msgtyp, MSG_NOERROR | IPC_NOWAIT);
    printf("%s \n", msg.text);
    return 0;
}
EOF
See what happens:
$ gcc -o msg_recv msg_recv.c
$ ./msg_recv
This is message 1
$ ./msg_recv
This is message 2
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x33ec1686 65536 user 644 0 0
After two receives, the queue is empty again.
Remove it afterwards by specifying the key (-Q) or msqid (-q):
$ ipcrm -q 65536
| How is a message queue implemented in the Linux kernel? |
1,329,874,965,000 |
We all know that Linus Torvalds created Git because of issues with Bitkeeper. What is not known (at least to me) is how issues/tickets/bugs were tracked up until then. I searched but didn't find anything interesting. The only discussion I was able to find on the subject was this one, where Linus shared concerns about using Bugzilla.
Speculation: the easiest way for people to track bugs in the initial phase would have been to put tickets in a branch of their own, but I'm fairly sure that wouldn't have scaled for long, with noise overtaking the good bugs.
I've seen and used Bugzilla and unless you know the right 'keywords' at times you would be stumped. NOTE: I'm specifically interested in the early years (1991-1995) as to how they used to track issues.
I did look at two threads, "Kernel SCM saga", and "Trivia: When did git self-host?" but none of these made mention about bug-tracking of the kernel in the early days.
I searched around and wasn't able to get of any FOSS bug-tracking software which was there in 1991-1992. Bugzilla, Request-tracker, and others came much later, so they appear to be out.
Key questions
How did Linus, the subsystem maintainers, and users report and track bugs in those days?
Did they use some bug-tracking software, make a branch of bugs and manually commit questions and discussions on each bug therein (which would be expensive and painful), or just use e-mail?
Much later, Bugzilla came along (first release 1998) and that seems to be the primary way to report bugs afterwards.
Looking forward to have a clearer picture of how things were done in the older days.
|
In the beginning, if you had something to contribute (a patch or a bug report), you mailed it to Linus. This evolved into mailing it to the list (which was [email protected] before kernel.org was created).
There was no version control. From time to time, Linus put a tarball on the FTP server. This was the equivalent of a "tag". The available tools at the beginning were RCS and CVS, and Linus hates those, so everybody just mailed patches. (There is an explanation from Linus about why he didn't want to use CVS.)
There were other pre-Bitkeeper proprietary version control systems, but the decentralized, volunteer-based development of Linux made it impossible to use them. A random person who just found a bug will never send a patch if it has to go through a proprietary version control system with licenses starting in the thousands of dollars.
Bitkeeper got around both of those problems: it wasn't centralized like CVS, and while it was not Free Software, kernel contributors were allowed to use it without paying. That made it good enough for a while.
Even with today's git-based development, the mailing lists are still where the action is. When you want to contribute something, you'll prepare it with git of course, but you'll have to discuss it on the relevant mailing list before it gets merged. Bugzilla is there to look "professional" and soak up half-baked bug reports from people who don't really want to get involved.
To see some of the old bug-reporting instructions, get the historical Linux repository. (This is a git repository containing all the versions from before git existed; mostly it contains one commit per release since it was reconstructed from the tarballs). Files of interest include README, MAINTAINERS, and REPORTING-BUGS.
One of the interesting things you can find there is this from the Linux-0.99.12 README:
- if you have problems that seem to be due to kernel bugs, please mail
them to me ([email protected]), and possibly to any other
relevant mailing-list or to the newsgroup. The mailing-lists are
useful especially for SCSI and NETworking problems, as I can't test
either of those personally anyway.
| How did the Linux Kernel project track bugs in the Early Days? |
1,329,874,965,000 |
man getrusage 2 says
ru_maxrss (since Linux 2.6.32)
This is the maximum resident set size used (in kilobytes). For RUSAGE_CHILDREN, this is the resident set size of the largest
child, not the maximum resident set size of the process tree.
So what does this number mean exactly?
|
A process's resident set size is the amount of memory that belongs to it and is currently present (resident) in RAM (real RAM, not swapped or otherwise not-resident).
For instance, if a process allocates a chunk of memory (say 100 MB) and uses it actively (reads/writes to it), its resident set size will be about 100 MB (plus overhead: the code segment, etc.). If the process then stops using (but doesn't release) that memory for a while, the OS could opt to page parts of it out to swap, to make room for other processes (or cache). The resident set size would then decrease by the amount the kernel swapped out. If the process wakes up and starts re-using that memory, the kernel re-loads the data from swap, and the resident set size goes up again.
The ru_maxrss field of struct rusage is the "high water mark" for the resident set size. It indicates the peak RAM use for this process (when using RUSAGE_SELF).
You can limit a process's resident set size to avoid having a single application "eat up" all the RAM on your system and forcing other applications to swap (or fail entirely with out-of-memory conditions).
| getrusage system call: what is "maximum resident set size" |
1,329,874,965,000 |
I'm trying to detect what filesystems a kernel can support. Ideally in a little list of their names but I'll take anything you've got.
Note that I don't mean the current filesystems in use, just ones that the current kernel could, theoretically support directly (obviously, fuse could support infinite numbers more).
|
Can I list the filesystems a running kernel can support?
Well, /proc/filesystems is a misleading answer, since it reflects only those filesystems that have already been brought into use; the kernel can usually support many more:
ls /lib/modules/$(uname -r)/kernel/fs
Another source is /proc/config.gz, which might be absent in your distro (and I always wonder «why?!» when it is), but a snapshot of the config used to build the kernel can typically be found in the boot directory, alongside the kernel and initrd images.
| Can I list the filesystems a running kernel can support? |
1,329,874,965,000 |
I've been reading up about how pipes are implemented in the Linux kernel and wanted to validate my understanding. If I'm incorrect, the answer with the correct explanation will be selected.
Linux has a VFS called pipefs that is mounted in the kernel (not in user space)
pipefs has a single superblock and is mounted at its own root (pipe:), alongside /
pipefs cannot be viewed directly unlike most file systems
The entry to pipefs is via the pipe(2) syscall
The pipe(2) syscall used by shells for piping with the | operator (or manually from any other process) creates a new file in pipefs which behaves pretty much like a normal file
The process on the left-hand side of the pipe operator has its stdout redirected to the temporary file created in pipefs
The process on the right-hand side of the pipe operator has its stdin set to the file on pipefs
pipefs is stored in memory and through some kernel magic, shouldn't be paged
Is this explanation of how pipes (e.g. ls -la | less) function pretty much correct?
One thing I don't understand is how something like bash would set a process' stdin or stdout to the file descriptor returned by pipe(2). I haven't been able to find anything about that yet.
|
Your analysis so far is generally correct. The way a shell might set the stdin of a process to a pipe descriptor could be (pseudocode):
pipe(p)       // create a new pipe; p[0] is the read end, p[1] the write end
fork()        // spawn a child process
close(p[1])   // in the child: close the unused write end
dup2(p[0], 0) // duplicate the read end on top of fd 0 (stdin)
close(p[0])   // close the now-redundant original descriptor
exec()        // run a new process with the new descriptors in place
| How pipes work in Linux |
1,329,874,965,000 |
Until recently I thought the load average (as shown for example in top) was a moving average on the n last values of the number of process in state "runnable" or "running". And n would have been defined by the "length" of the moving average: since the algorithm to compute load average seems to trigger every 5 sec, n would have been 12 for the 1min load average, 12x5 for the 5 min load average and 12x15 for the 15 min load average.
But then I read this article: http://www.linuxjournal.com/article/9001. The article is quite old but the same algorithm is implemented today in the Linux kernel. The load average is not a moving average but an algorithm for which I don't know a name. Anyway I made a comparison between the Linux kernel algorithm and a moving average for an imaginary periodic load:
[graph comparing the kernel's load-average algorithm with a true moving average for a periodic load]
There is a huge difference.
Finally my questions are:
Why was this implementation chosen over a true moving average, which would have a clear meaning to anyone?
Why does everyone speak of a "1 min load average" when much more than the last minute is taken into account by the algorithm? (Mathematically, every measurement since boot; in practice, allowing for round-off error, still a lot of measurements.)
|
This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. The algorithm used instead has the advantage that all the kernel needs to keep is the result of the previous calculation.
Keep in mind the algorithm was a bit closer to the truth back when computer speeds and corresponding clock cycles were measured in tens of MHz instead of GHz; there's a lot more time for discrepancies to creep in these days.
| Why isn't a straightforward 1/5/15 minute moving average used in Linux load calculation? |
1,329,874,965,000 |
I'm running an Ubuntu 12.04 derivative (amd64) and I've been having really strange issues recently. Out of the blue, seemingly, X will freeze completely for a while (1-3 minutes?) and then the system will reboot. This system is overclocked, but very stable as verified in Windows, which leads me to believe I'm having a kernel panic or an issue with one of my modules. Even in Linux, I can run LINPACK and won't see a crash despite putting ridiculous load on the CPU. Crashes seem to happen at random times, even when the machine is sitting idle.
How can I debug what's crashing the system?
On a hunch that it might be the proprietary NVIDIA driver, I reverted all the way down to the stable version of the driver, version 304 and I still experience the crash.
Can anyone walk me through a good debugging procedure for after a crash? I'd be more than happy to boot into a thumb drive and post all of my post-crash configuration files, I'm just not sure what they would be. How can I find out what's crashing my system?
Here are a bunch of logs, the usual culprits.
.xsession-errors: http://pastebin.com/EEDtVkVm
/var/log/Xorg.0.log: http://pastebin.com/ftsG5VAn
/var/log/kern.log: http://pastebin.com/Hsy7jcHZ
/var/log/syslog: http://pastebin.com/9Fkp3FMz
I can't even seem to find a record of the crash at all.
Triggering the crash is not so simple; it seems to happen when the GPU is trying to draw multiple things at once. If I put on a YouTube video in full screen and let it repeat for a while, or scroll through a ton of GIFs and a Skype notification pops up, sometimes it'll crash. Totally scratching my head on this one.
The CPU is overclocked to 4.8GHz, but it's completely stable and has survived huge LINPACK runs and 9 hours of Prime95 yesterday without a single crash.
Update
I've installed kdump, crash, and linux-crashdump, as well as the kernel debug symbols for my kernel version 3.2.0-35. When I run apport-unpack on the crashed kernel file and then crash on the VmCore crash dump, here's what I see:
KERNEL: /usr/lib/debug/boot/vmlinux-3.2.0-35-generic
DUMPFILE: Downloads/crash/VmCore
CPUS: 8
DATE: Thu Jan 10 16:05:55 2013
UPTIME: 00:26:04
LOAD AVERAGE: 2.20, 0.84, 0.49
TASKS: 614
NODENAME: mightymoose
RELEASE: 3.2.0-35-generic
VERSION: #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012
MACHINE: x86_64 (3499 Mhz)
MEMORY: 8 GB
PANIC: "[ 1561.519960] Kernel panic - not syncing: Fatal Machine check"
PID: 0
COMMAND: "swapper/5"
TASK: ffff880211251700 (1 of 8) [THREAD_INFO: ffff880211260000]
CPU: 5
STATE: TASK_RUNNING (PANIC)
When I run log from the crash utility, I see this at the bottom of the log:
[ 1561.519943] [Hardware Error]: CPU 4: Machine Check Exception: 5 Bank 3: be00000000800400
[ 1561.519946] [Hardware Error]: RIP !INEXACT! 33:<00007fe99ae93e54>
[ 1561.519948] [Hardware Error]: TSC 539b174dead ADDR 3fe98d264ebd MISC 1
[ 1561.519950] [Hardware Error]: PROCESSOR 0:206a7 TIME 1357862746 SOCKET 0 APIC 1 microcode 28
[ 1561.519951] [Hardware Error]: Run the above through 'mcelog --ascii'
[ 1561.519953] [Hardware Error]: CPU 0: Machine Check Exception: 4 Bank 3: be00000000800400
[ 1561.519955] [Hardware Error]: TSC 539b174de9d ADDR 3fe98d264ebd MISC 1
[ 1561.519957] [Hardware Error]: PROCESSOR 0:206a7 TIME 1357862746 SOCKET 0 APIC 0 microcode 28
[ 1561.519958] [Hardware Error]: Run the above through 'mcelog --ascii'
[ 1561.519959] [Hardware Error]: Machine check: Processor context corrupt
[ 1561.519960] Kernel panic - not syncing: Fatal Machine check
[ 1561.519962] Pid: 0, comm: swapper/5 Tainted: P M C O 3.2.0-35-generic #55-Ubuntu
[ 1561.519963] Call Trace:
[ 1561.519964] <#MC> [<ffffffff81644340>] panic+0x91/0x1a4
[ 1561.519971] [<ffffffff8102abeb>] mce_panic.part.14+0x18b/0x1c0
[ 1561.519973] [<ffffffff8102ac80>] mce_panic+0x60/0xb0
[ 1561.519975] [<ffffffff8102aec4>] mce_reign+0x1f4/0x200
[ 1561.519977] [<ffffffff8102b175>] mce_end+0xf5/0x100
[ 1561.519979] [<ffffffff8102b92c>] do_machine_check+0x3fc/0x600
[ 1561.519982] [<ffffffff8136d48f>] ? intel_idle+0xbf/0x150
[ 1561.519984] [<ffffffff8165d78c>] machine_check+0x1c/0x30
[ 1561.519986] [<ffffffff8136d48f>] ? intel_idle+0xbf/0x150
[ 1561.519987] <<EOE>> [<ffffffff81509697>] ? menu_select+0xe7/0x2c0
[ 1561.519991] [<ffffffff815082d1>] cpuidle_idle_call+0xc1/0x280
[ 1561.519994] [<ffffffff8101322a>] cpu_idle+0xca/0x120
[ 1561.519996] [<ffffffff8163aa9a>] start_secondary+0xd9/0xdb
bt outputs the backtrace:
PID: 0 TASK: ffff880211251700 CPU: 5 COMMAND: "swapper/5"
#0 [ffff88021ed4aba0] machine_kexec at ffffffff8103947a
#1 [ffff88021ed4ac10] crash_kexec at ffffffff810b52c8
#2 [ffff88021ed4ace0] panic at ffffffff81644347
#3 [ffff88021ed4ad60] mce_panic.part.14 at ffffffff8102abeb
#4 [ffff88021ed4adb0] mce_panic at ffffffff8102ac80
#5 [ffff88021ed4ade0] mce_reign at ffffffff8102aec4
#6 [ffff88021ed4ae40] mce_end at ffffffff8102b175
#7 [ffff88021ed4ae70] do_machine_check at ffffffff8102b92c
#8 [ffff88021ed4af50] machine_check at ffffffff8165d78c
[exception RIP: intel_idle+191]
RIP: ffffffff8136d48f RSP: ffff880211261e38 RFLAGS: 00000046
RAX: 0000000000000020 RBX: 0000000000000008 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffff880211261fd8 RDI: ffffffff81c12f00
RBP: ffff880211261e98 R8: 00000000fffffffc R9: 0000000000000f9f
R10: 0000000000001e95 R11: 0000000000000000 R12: 0000000000000003
R13: ffff88021ed5ac70 R14: 0000000000000020 R15: 12d818fb42cfe42b
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <MCE exception stack> ---
#9 [ffff880211261e38] intel_idle at ffffffff8136d48f
#10 [ffff880211261ea0] cpuidle_idle_call at ffffffff815082d1
#11 [ffff880211261f00] cpu_idle at ffffffff8101322a
Any ideas?
|
I have two suggestions to start.
The first you're not going to like. No matter how stable you think your overclocked system is, it would be my first suspect. And any developer you report the problem to will say the same thing. Your stable test workload isn't necessarily using the same instructions, stressing the memory subsystem as much, whatever. Stop overclocking. If you want people to believe the problem's not overclocking, then make it happen when not overclocking so you can get a clean bug report. This will make a huge difference in how much effort other people will invest in solving this problem. Having bug-free software is a point of pride, but reports from people with particularly questionable hardware setups are frustrating time-sinks that probably don't involve a real bug at all.
The second is to get the oops data, which, as you've noticed,
doesn't go to any of the places you've mentioned.
If the crash happens only while running X11, I think local console is pretty much out (it's a pain anyway), so you need to do this over a serial console, over the network, or by saving to a local disk
(which is trickier than it may sound, because you don't want an untrustworthy kernel to corrupt your filesystem).
Here are some ways to do this:
use netdump to save to a server over the network. I haven't done this in years, so I'm not sure this software is still around and working with modern kernels, but it's easy enough that it's worth a shot.
boot using a serial console (archived version, current version); you'll need a serial port free on both machines (whether an old-school one or a USB serial adapter) and a null modem cable; you'd configure the other machine to save the output.
kdump seems to be what the cool kids use nowadays, and seems quite flexible, although it wouldn't be my preference because it looks complex to set up. In short, it involves booting a different kernel that can do anything and inspect the former kernel's memory contents, but you have to essentially build the whole process and I don't see a lot of canned options out there.
Update: There are some nice distro things, actually;
on Ubuntu, linux-crashdump
(archived version, current version).
Once you get the debug info, there's a tool called ksymoops (archived version, current version (with ads))
that you can use to turn the addresses into symbol names
and start getting an idea how your kernel crashed.
And if the symbolized dump doesn't mean anything to you,
at least this is something helpful to report here
or perhaps on your Linux distribution's mailing list / bug tracker.
From crash on your crashdump,
you can try typing log and bt to get a bit more information
(things logged during the panic and a stack back trace).
Your Fatal Machine check seems to be coming from here, though. From skimming the code, your processor has reported a Machine Check Exception – a hardware problem.
Again, my first bet would be due to overclocking. It seems like there might be a more specific message in the log output which could tell you more.
Also from that code, it looks like if you boot with the mce=3 kernel parameter, it will stop crashing... but I wouldn't really recommend this except as a diagnostic step. If the Linux kernel thinks this error is worth crashing over, it's probably right.
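If you do want to try that diagnostic step on Ubuntu, the parameter goes on the kernel command line; a config sketch (your existing GRUB_CMDLINE_LINUX_DEFAULT value may differ, and you'd run sudo update-grub and reboot afterwards):

```
# /etc/default/grub -- append mce=3 to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mce=3"
```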
| Determining cause of Linux kernel panic |
1,329,874,965,000 |
I have a device that needs a block of memory that is reserved solely for it, without the OS intervening. Is there any way to tell BIOS or the OS that a block of memory is reserved, and it must not use it?
I am using this device on an openSUSE machine.
|
What you're asking for is called DMA. You need to write a driver to reserve this memory.
Yes, I realize you said you didn't want the OS to intervene, and a driver becomes part of the OS, but in absence of a driver's reservation, the kernel believes all memory belongs to it. (Unless you tell the kernel to ignore the memory block, per Aaron's answer, that is.)
Chapter 15 (PDF) of "Linux Device Drivers, 3/e" by Corbet, Rubini and Kroah-Hartman covers DMA and related topics.
If you want an HTML version of this, I found the second-edition version of the chapter elsewhere online. Beware that the 2nd edition is over a decade old now, having come out when kernel 2.4 was new. There's been a lot of work on the memory management subsystem of the kernel since those days, so it may not apply very well any more.
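For completeness, the "tell the kernel to ignore the memory block" route from Aaron's answer is usually done with the memmap= boot parameter; a hedged sketch (the address and size are placeholders for whatever your device needs):

```
# Kernel command line: reserve 16 MiB of physical memory at 0x20000000.
# The "$" marks the region as reserved; in some GRUB configs it must be
# escaped as \$ so the shell-like parser doesn't eat it.
memmap=16M$0x20000000
```

The kernel then never touches that range, and your driver can map it for the device.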
| How can I reserve a block of memory from the Linux kernel? |
1,329,874,965,000 |
In books, I typically read references to the Linux Source Tree at /usr/src/linux with the usual set of subdirectories (arch, block, crypto, ...).
I was expecting this tree to contain the binary files making up the kernel. In my system (Ubuntu 10.04)...
for the different kernels I have (using automated software downloads, not manually installed), I find in this location instead two sub-directories for each kernel as follows:
/usr/src/linux-headers-2.6.32-22
/usr/src/linux-headers-2.6.32-22-generic
In the sub directories I expected binary files, among others. However, I checked a fair amount of the tree, and the last sub-directory from here seems to always have a Makefile (when reading it, it sounds typically more like a configuration file then an install file), plus occasionally a few isolated other files (mostly Kconfig).
My question may be naive, but I'm a bit confused. Is (2) what I should expect to see in the Kernel Source Tree; and why do I have the explicit reference to 'headers'? I needed to install linux-generic-headers a while back for some other software and am unsure if this might be related. I realize there is good reason for the makefiles (eg, to install modules in the /driver sub-directory), but (pretty much) only makefiles?
|
Distribution kernel-header packages contain, as their name implies, only kernel header files (plus the necessary plumbing) that are required to build software like kernel modules.
You shouldn't expect to find binary files at all in a kernel source directory, except for build output. (If you configure and build a kernel yourself, the kernel source directory will also contain the compiled objects, modules, the built kernel itself and a few other binary bits and pieces that make it work.)
KConfig files are a description of the kernel configuration options (and their dependencies) that are available for a given directory/module.
Apart from that, it's all (mostly) C source code, header files and Makefiles. There are a few helper scripts here and there, and assembly source too.
Header packages (what you installed) only contain the header part of the above (and not all of that - only the "exported" headers), and some of the build infrastructure. So what you are seeing is expected. Header packages do not contain C source code (except for some stubs and build infrastructure code). The whole point of having this type of package is to save space (and bandwidth) - the whole Linux kernel source tree is rather large, and completely unnecessary if you don't intend to compile the kernel yourself. The header packages are built and shipped by distributions to provide just the right things necessary to build modules, but no more. (They certainly do not contain the compiled kernel.)
Addressing your comment: header packages don't relocate anywhere. They are built for specific versions of the kernel, packaged in a specific directory, and that's that. It's just a set of files. (Note that the header packages don't necessarily have the same version as the current stable kernel binary packages - the header packages are generic, and can lag behind the actual kernel you're running. They should not, however, be from a kernel version that is more recent than the current installed (or target) kernel.)
Installed kernel binaries are usually installed in the /boot directory, along with bootloader binaries and configuration files. (This is sometimes an independent filesystem, not mounted by default.) The exact name of the files depends on the kernel and distribution. (So does the bootloader.)
Installed kernel modules reside in sub-directories of:
/lib/modules/`uname -r`/
So for instance on my system, they are currently in
/lib/modules/3.1.4-gentoo/
Full kernel source code: On Ubuntu, if you want the full kernel sources to build a kernel yourself, you should install following the instructions here.
You could also download a source tarball from kernel.org and unpack it somewhere (do not overwrite Ubuntu-installed files if you use this tarball; keep your personal stuff separate from the files managed by the package manager).
/usr/src/linux is a traditional place to put kernel sources, but nothing prevents you from putting kernel sources elsewhere. This path is also often just a symbolic link to a directory. e.g. I have this on my machine:
$ ls -l /usr/src/linux
lrwxrwxrwx 1 root root 18 Dec 7 17:03 /usr/src/linux -> linux-3.1.4-gentoo
The symlink is there to simplify building applications that depend on the kernel source. You link that path to your running (or target) kernel so that you don't have to specify exact version or path information when you build a module out-of-tree. Helps a bunch for source-based distributions at least.
| What does a kernel source tree contain? Is this related to Linux kernel headers? |
1,329,874,965,000 |
What do the terms "in-tree" and "out-of-tree" exactly mean? Also, does "source tree" specifically refer to the official kernel released from / maintained at kernel.org or is it a more general term which can refer to any (modified) Linux kernel source?
|
"source tree" is not a term specific to kernel source development, so it has to be a more general term and its meaning with regards to kernel source is context dependent.
I have not come across "in-tree" and "out-of-tree" outside of Linux kernel source development, and then only for working with modules. All modules start out as "out-of-tree" developments that can be compiled against the context of a source tree. Once a module is accepted for inclusion, it becomes an in-tree module. I have not come across an official definition of either term, though; maybe that was never necessary, as it was clear to those working with modules what was meant.
E.g. while Reiserfs module was still an out-of-tree module I did the RPM package generation for SuSE, once it became in-tree there was no longer need for that.
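For a concrete picture, a minimal out-of-tree module build typically uses a kbuild-style Makefile like this sketch (hello.c is a hypothetical module source; the -C path assumes the build files for the running kernel are installed):

```make
# Kbuild-style Makefile for an out-of-tree module built from hello.c
obj-m += hello.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

The same hello.o line is all that would remain if the module moved in-tree; the wrapper targets exist only because the source lives outside the kernel tree.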
| Linux kernel: meaning of source-tree, in-tree and out-of-tree |
1,329,874,965,000 |
Not much to put here in the body.
|
Processes need to have a parent (PPID). The kernel, despite not being a real process, nevertheless hand-crafts a few real processes (at least init) and gives itself process ID 0. Depending on the OS it might or might not be displayed as a process in ps output, but it is always displayed as a PPID:
eg on Linux:
$ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 09:09 ? 00:00:00 /sbin/init
root 2 0 0 09:09 ? 00:00:00 [kthreadd]
root 3 2 0 09:09 ? 00:00:00 [ksoftirqd/0]
...
on Solaris:
$ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Oct 19 ? 0:01 sched
root 5 0 0 Oct 19 ? 11:20 zpool-rpool1
root 1 0 0 Oct 19 ? 0:13 /sbin/init
root 2 0 0 Oct 19 ? 0:07 pageout
root 3 0 1 Oct 19 ? 117:10 fsflush
root 341 1 0 Oct 19 ? 0:15 /usr/lib/hal/hald --daemon=yes
root 9 1 0 Oct 19 ? 0:59 /lib/svc/bin/svc.startd
...
Note also that pid 0 (and -1 and other negative values, for that matter) has different meanings depending on which function uses it, such as kill, fork and waitpid.
Finally, while the init process is traditionally given pid #1, this is no more the case when OS level virtualization is used like Solaris zones, as there can be more than one init running:
$ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 4733 3949 0 11:07:25 ? 0:26 /lib/svc/bin/svc.configd
root 4731 3949 0 11:07:24 ? 0:06 /lib/svc/bin/svc.startd
root 3949 3949 0 11:07:14 ? 0:00 zsched
daemon 4856 3949 0 11:07:46 ? 0:00 /lib/crypto/kcfd
root 4573 3949 0 11:07:23 ? 0:00 /usr/sbin/init
netcfg 4790 3949 0 11:07:34 ? 0:00 /lib/inet/netcfgd
root 4868 3949 0 11:07:48 ? 0:00 /usr/lib/pfexecd
root 4897 3949 0 11:07:51 ? 0:00 /usr/lib/utmpd
netadm 4980 3949 0 11:07:54 ? 0:01 /lib/inet/nwamd
| If computers start counting at 0, why does the init process have a pid of 1? |
1,329,874,965,000 |
The output of dmesg shows the number of seconds (with a fractional part) since system start.
[ 10.470000] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 14.610000] device eth0 entered promiscuous mode
[ 18.750000] cfg80211: Calling CRDA for country: DE
[ 18.750000] cfg80211: Regulatory domain changed to country: DE
Q: How can these timestamps be converted into a readable date/time format?
My dmesg:
root@OpenWrt:/tmp# dmesg -h
dmesg: invalid option -- h
BusyBox v1.19.4 (2013-03-14 11:28:31 UTC) multi-call binary.
Usage: dmesg [-c] [-n LEVEL] [-s SIZE]
Print or control the kernel ring buffer
-c Clear ring buffer after printing
-n LEVEL Set console logging level
-s SIZE Buffer size
Installing util-linux won't be possible, because there is not much space available:
root@OpenWrt:~# df -h
Filesystem Size Used Available Use% Mounted on
rootfs 1.1M 956.0K 132.0K 88% /
/dev/root 2.0M 2.0M 0 100% /rom
tmpfs 14.3M 688.0K 13.6M 5% /tmp
tmpfs 512.0K 0 512.0K 0% /dev
/dev/mtdblock3 1.1M 956.0K 132.0K 88% /overlay
overlayfs:/overlay 1.1M 956.0K 132.0K 88% /
root@OpenWrt:/tmp# which awk perl sed bash sh shell tcsh
/usr/bin/awk
/bin/sed
/bin/sh
root@OpenWrt:~# date -h
date: invalid option -- h
BusyBox v1.19.4 (2013-03-14 11:28:31 UTC) multi-call binary.
Usage: date [OPTIONS] [+FMT] [TIME]
Display time (using +FMT), or set time
[-s,--set] TIME Set time to TIME
-u,--utc Work in UTC (don't convert to local time)
-R,--rfc-2822 Output RFC-2822 compliant date string
-I[SPEC] Output ISO-8601 compliant date string
SPEC='date' (default) for date only,
'hours', 'minutes', or 'seconds' for date and
time to the indicated precision
-r,--reference FILE Display last modification time of FILE
-d,--date TIME Display TIME, not 'now'
-D FMT Use FMT for -d TIME conversion
-k Set Kernel timezone from localtime and exit
|
I think that what you're looking for is -T as documented in man dmesg:
-T, --ctime
Print human readable timestamps. The timestamp could be inaccurate!
The time source used for the logs is not updated after system
SUSPEND/RESUME.
So, for example:
[ 518.511925] usb 2-1.1: new low-speed USB device number 7 using ehci-pci
[ 518.615735] usb 2-1.1: New USB device found, idVendor=1c4f, idProduct=0002
[ 518.615742] usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 518.615747] usb 2-1.1: Product: USB Keykoard
Becomes:
[Mon Jan 27 16:22:42 2014] hid-generic 0003:1C4F:0002.0007: input,hidraw0: USB HID v1.10 Keyboard [USB USB Keykoard] on usb-0000:00:1d.0-1.1/input0
[Mon Jan 27 16:22:42 2014] input: USB USB Keykoard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1/2-1.1:1.1/input/input24
[Mon Jan 27 16:22:42 2014] hid-generic 0003:1C4F:0002.0008: input,hidraw1: USB HID v1.10 Device [USB USB Keykoard] on usb-0000:00:1d.0-1.1/input1
I found a cool trick here. The sed expression used there was wrong since it would fail when there was more than one ] in the dmesg line. I have modified it to work with all cases I found in my own dmesg output. So, this should work assuming your date behaves as expected:
base=$(cut -d '.' -f1 /proc/uptime);
seconds=$(date +%s);
dmesg | sed 's/\]//;s/\[//;s/\([^.]\)\.\([^ ]*\)\(.*\)/\1\n\3/' |
while read first; do
read second;
first=`date +"%d/%m/%Y %H:%M:%S" --date="@$(($seconds - $base + $first))"`;
printf "[%s] %s\n" "$first" "$second";
done
Output looks like:
[27/01/2014 16:14:45] usb 2-1.1: new low-speed USB device number 7 using ehci-pci
[27/01/2014 16:14:45] usb 2-1.1: New USB device found, idVendor=1c4f, idProduct=0002
[27/01/2014 16:14:45] usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[27/01/2014 16:14:45] usb 2-1.1: Product: USB Keykoard
| Human readable dmesg time stamps on OpenWRT |
1,329,874,965,000 |
Between Debian 5 and 6, the default suggested value for kernel.printk in /etc/sysctl.conf was changed from kernel.printk = 4 4 1 7 to kernel.printk = 3 4 1 3. I understand that the first value corresponds to what is going to the console. What are the next 3 values for?
Do the numerical values have the same meaning as the syslog log levels? Or do they have different definitions?
Am I missing some documentation in my searching, or is the kernel source the only place to figure this out?
|
Sysctl settings are documented in Documentation/sysctl/*.txt in the kernel source tree. On Debian, install linux-doc to have the documentation in /usr/share/doc/linux-doc-*/Documentation/ (most distributions have a similar package). From Documentation/sysctl/kernel.txt:
The four values in printk denote: console_loglevel,
default_message_loglevel, minimum_console_loglevel and
default_console_loglevel respectively.
These values influence printk() behavior when printing or
logging error messages. See man 2 syslog for more info on
the different loglevels.
console_loglevel: messages with a higher priority than
this will be printed to the console
default_message_loglevel: messages without an explicit priority
will be printed with this priority
minimum_console_loglevel: minimum (highest) value to which
console_loglevel can be set
default_console_loglevel: default value for console_loglevel
I don't find any clear prose explanation of what default_console_loglevel is used for. In the Linux kernel source, the kernel.printk sysctl sets console_printk. The default_console_loglevel field doesn't seem to be used anywhere.
| Description of kernel.printk values |
1,329,874,965,000 |
I recently went through Unpacking kernel-source rpm off-system (OpenSuse)?, and as it took > 10 h on my machine, imagine my surprise that after doing the process described there, I find no Module.symvers anywhere!
When I search for "generate Module.symvers", I get this:
NOTE: "modules_prepare" will not build Module.symvers even if
CONFIG_MODVERSIONS is set; therefore, a full kernel build needs to be
executed to make module versioning work.
(Linux Kernel Documentation :: kbuild : modules.txt)
... but I don't really get it - didn't the kernel get built in the previous step (described in the link given above)? I sure know it took > 10 h for CC to generate a whole bunch of *.o files, and LD to link them, so something must have been built. But if so, where then is Module.symvers?
In more explicit terms, exactly what command should I call to generate Module.symvers? I know that make prepare will not work - but what is the command that will?
|
The Module.symvers is (re)generated when you (re)compile modules. Run make modules, and you should get a Module.symvers file at the root of the kernel tree.
Note that if you only ran make and not make modules, you haven't built any modules yet. The symbols from the kernel itself (vmlinux or one of the architecture-dependent image formats) are in System.map.
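For reference, once generated, each line of Module.symvers records the versioning CRC of one exported symbol, the object providing it, and the export type; a sketch of the format (the CRC values here are made up):

```
0x8c8cbc31	printk		vmlinux	EXPORT_SYMBOL
0x1b7d4074	__kmalloc	vmlinux	EXPORT_SYMBOL
```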
| How to generate Module.symvers? |
1,329,874,965,000 |
I found a similar question but still it doesn't answer my questions
Do the virtual address spaces of all the processes have the same content in their "Kernel" parts?
First off, considering that user processes don't have access to this part (accessing it would cause a fault), why even include it in the user process's virtual address space?
Can you guys give me a real life scenario of this part being essential and useful?
Also, one more question: I always thought the kernel part of memory was dynamic, meaning it might grow, for example, when we use dynamic libraries in our programs. Is that true? If so, how can the OS determine how big the kernel's share of each process's virtual space is?
And when the kernel in physical memory grows or changes, does the same change happen in the kernel part of virtual memory for all processes? Is the mapping from this virtual kernel region to the real kernel one-to-one?
|
The kernel mapping exists primarily for the kernel’s purposes, not user processes’. From the CPU’s perspective, any physical memory address which isn’t mapped as a linear address might as well not exist. But the CPU does need to be able to call into the kernel: to service interrupts, to handle exceptions... It also needs to be able to call into the kernel when a user process issues a system call (there are various ways this can happen so I won’t go into details). On most if not all architectures, this happens without the opportunity to switch page tables — see for example SYSENTER. So at minimum, entry points into the kernel have to be mapped into the current address space at all times.
Kernel allocations are dynamic, but the address space isn’t. On 32-bit x86, various splits are available, such as the 3/1 GiB split shown in your diagram; on 64-bit x86, the top half of the address space is reserved for the kernel (see the memory map in the kernel documentation). That split can’t move. (Note that libraries are loaded into user space. Kernel modules are loaded into kernel space, but again that only changes the allocations, not the address space split.)
In user mode, there is a single mapping for the kernel, shared across all processes. When a kernel-side page mapping changes, that change is reflected everywhere.
When KPTI is enabled, the kernel has its own private mappings, which aren’t exposed when running user-space code; so with KPTI there are two mappings, and changes to the kernel-private one won’t be visible to user-space (which is the whole point of KPTI).
The kernel memory map always maps all the kernel (in kernel mode when running KPTI), but it’s not necessarily one-to-one — on 64-bit x86 for example it includes a full map of physical memory, so all kernel physical addresses are mapped at least twice.
| What's the use of having a kernel part in the virtual memory space of Linux processes? |
1,329,874,965,000 |
Host - Windows 7
Guest - CentOS
I am trying to install kernel-headers using yum, since during the installation of vmware-tools I get a message asking for the path to the kernel header files for the 3.10.0-229.7.2.el7.x86_64 kernel.
Running yum install kernel-headers returns Package kernel-headers-3.10.0-229.7.2.el7.x86_64 already installed and latest version. But the directory /usr/src/kernels is empty.
Are the kernel headers installed somewhere else? Or should I be asking yum to install something else?
Path provided to vmware-tools for kernel headers
Searching for a valid kernel header path...
The path "" is not a valid path to the 3.10.0-229.7.2.el7.x86_64 kernel headers.
Would you like to change it? [yes]
Providing the path /usr/include/linux gives the same response again but with "" replaced with the path provided.
|
The correct package to install all of the required dependencies for building kernel modules is kernel-devel (see the CentOS documentation for more information).
The headers are not installed in /usr/src/kernels, rather they're installed in a number of directories below /usr/include (the default location for C header files). You can list the contents of the kernel-headers package you installed using:
rpm -ql kernel-headers
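To get a populated /usr/src/kernels for module builds like vmware-tools, a common pattern is to install the devel package whose version exactly matches the running kernel. A sketch (assumes the standard RHEL/CentOS package naming scheme):

```shell
# Construct the devel package name matching the running kernel, so the
# headers in /usr/src/kernels line up with the modules being built.
pkg="kernel-devel-$(uname -r)"
echo "$pkg"
# then, on the CentOS guest:
# sudo yum install "$pkg"
```

If yum can't find that exact version (e.g. after a repo update), updating the kernel first and rebooting brings the two back in sync.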
| Empty kernel directory but kernel-headers are installed |
1,329,874,965,000 |
I've moved a server from one mainboard to another due to a disk controller failure.
Since then I've noticed that one of the cores constantly spends about 25% of its time servicing IRQs, but I haven't managed to find out which IRQ is responsible.
The kernel is Linux 2.6.18-194.3.1.el5 (CentOS). mpstat -P ALL shows:
18:20:33 CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
18:20:33 all 0,23 0,00 0,08 0,11 6,41 0,02 0,00 93,16 2149,29
18:20:33 0 0,25 0,00 0,12 0,07 0,01 0,05 0,00 99,49 127,08
18:20:33 1 0,14 0,00 0,03 0,04 0,00 0,00 0,00 99,78 0,00
18:20:33 2 0,23 0,00 0,02 0,03 0,00 0,00 0,00 99,72 0,02
18:20:33 3 0,28 0,00 0,15 0,28 25,63 0,03 0,00 73,64 2022,19
This is the /proc/interrupts
cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 245 0 0 7134094 IO-APIC-edge timer
8: 0 0 49 0 IO-APIC-edge rtc
9: 0 0 0 0 IO-APIC-level acpi
66: 67 0 0 0 IO-APIC-level ehci_hcd:usb2
74: 902214 0 0 0 PCI-MSI eth0
169: 0 0 79 0 IO-APIC-level ehci_hcd:usb1
177: 0 0 0 7170885 IO-APIC-level ata_piix, b4xxp
185: 0 0 0 59375 IO-APIC-level ata_piix
NMI: 0 0 0 0
LOC: 7104234 7104239 7104243 7104218
ERR: 0
MIS: 0
How can I identify which IRQ is causing the high CPU usage?
Edit:
Output from dmesg | grep -i b4xxp
wcb4xxp 0000:30:00.0: probe called for b4xx...
wcb4xxp 0000:30:00.0: Identified Wildcard B410P (controller rev 1) at 00012000, IRQ 177
wcb4xxp 0000:30:00.0: VPM 0/1 init: chip ver 33
wcb4xxp 0000:30:00.0: VPM 1/1 init: chip ver 33
wcb4xxp 0000:30:00.0: Hardware echo cancellation enabled.
wcb4xxp 0000:30:00.0: Port 1: TE mode
wcb4xxp 0000:30:00.0: Port 2: TE mode
wcb4xxp 0000:30:00.0: Port 3: TE mode
wcb4xxp 0000:30:00.0: Port 4: TE mode
wcb4xxp 0000:30:00.0: Did not do the highestorder stuff
wcb4xxp 0000:30:00.0: new card sync source: port 3
|
Well, since you're specifically asking how to know which IRQ is responsible for the number in mpstat, you can assume it's not the local interrupt timer (LOC), since those numbers are fairly equal, and yet mpstat shows some of those cpus at 0 %irq.
That leaves IRQ 0, which is the system timer, and which you can't do anything about, and IRQ 177, which is tied to your b4xxp driver.
My guess is that IRQ 177 would be your culprit.
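A quick way to confirm this is to snapshot /proc/interrupts twice and see which counter actually grows fastest (a sketch; it sums the per-CPU columns for each IRQ line):

```shell
# Sum the per-CPU counters for each numbered IRQ line in /proc/interrupts.
snap() { grep -E '^ *[0-9]+:' /proc/interrupts | awk '{c=0; for (i=2; i<=NF && $i ~ /^[0-9]+$/; i++) c+=$i; print $1, c}'; }
# Two snapshots one second apart, then print per-IRQ growth, highest first.
snap > /tmp/irq1; sleep 1; snap > /tmp/irq2
paste /tmp/irq1 /tmp/irq2 | awk '{print $4-$2, $1}' | sort -rn | head
```

On the system above, the top line should show IRQ 177 racking up counts far faster than the rest.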
If this is causing a problem, and you would like to change the behavior you see, try:
disabling the software that uses that card, and see if the interrupts decrease;
removing that card from the system and unloading the driver, and see if there's improvement;
moving that card to another slot and see if that helps;
checking for updated drivers or patches for the software.
If it's not a problem, and you were just curious, then carry on. :)
| How can I know which IRQ is responsible of high CPU usage |
1,329,874,965,000 |
I have an IR receiver that uses the imon driver and I would like to get it working with the kernel. Right now half of the keys on the remote (image) work, but all-important things like the numeric keys don't!
The weird thing is that the kernel keymap module (rc-imon-pad) seems to be correct, but it seems that it is not really used, since exactly the same keys work without that module.
It seems that the rc-imon-pad module always gets loaded when I load imon, and I suspect that the keycodes are cached, so it doesn't make a difference if I unload rc-imon-pad.
Now I am lost: if I do cat /dev/input/event5 or ir-keytable -t, there is data no matter what key I press, so the driver registers the buttons, but it seems that they are translated to the wrong keycodes.
My kernel is a stock Ubuntu kernel from Natty (Linux xbmc 2.6.37-11-generic #25-Ubuntu SMP Tue Dec 21 23:42:56 UTC 2010 x86_64 GNU/Linux)
|
I have the same remote and I have it sending correct keycodes to my 2.6.38-gentoo-r3 kernel. I did not compile the keycodes as a module, because they probably haven't had time to make it possible to select individual keymaps yet. It's all or nothing, and I don't like a gazillion useless modules cluttering my system. Instead I'm letting v4l-utils handle it with udev.
Couple of things I learned:
Check output of ir-keytable -r, it should list all the keycodes applicable to your remote.
Load the keytable manually: ir-keytable -c -w bleh/keymaps/imon_pad, after which ir-keytable -r should give you the table back
You might actually have a faulty receiver; you mention nothing about its history. I remember seeing at least one message on the lirc list where a guy said sending the unit back and getting a new one solved his issues.
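If you want to look at the raw events behind cat /dev/input/event5 yourself, each record is a fixed-size struct input_event (on 64-bit little-endian: a 16-byte timestamp, then a 16-bit type, a 16-bit code, and a 32-bit value). Here is a sketch decoding one synthetic record the same way you would decode the real device stream (the byte values are made up for illustration):

```shell
# Synthetic 24-byte input_event: zeroed timestamp, then type=1 (EV_KEY),
# code=28 (KEY_ENTER), value=1 (key press), little-endian byte order.
printf '\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001\000\034\000\001\000\000\000' > /tmp/ev.bin
# Read the two u16 fields at offset 16: prints the type and the code.
od -A n -t u2 -j 16 -N 4 /tmp/ev.bin
```

Against the real device (sudo od -A d -t u2 /dev/input/event5), comparing the code field per keypress against the keymap tells you whether the translation or the receiver is at fault.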
Let us know how it went.
| How to debug the input from an input-device (/dev/input/event*) |
1,329,874,965,000 |
I've started installing Debian testing on amd64 and I've come to a screen asking me to install a kernel. It gives me a choice between linux-image-3.16-2-amd64, linux-image-amd64, and none.
What is the difference between these options? Which do I choose?
|
linux-image-amd64 is a generic metapackage, which depends on the specific default kernel package. In your particular case, linux-image-amd64 probably depends on linux-image-3.16-2-amd64. In general it suffices to install the generic metapackage. You could alternatively install the specific linux-image-3.16-2-amd64 package, but in general it is better style to install the generic metapackage.
One specific advantage of installing the generic metapackage (and keeping it installed) is that it makes sure you always stay current on system upgrades. Otherwise, supposing you are upgrading from one Debian release to the next, or even from Debian stable to Debian testing, your kernel version will not automatically be upgraded, aside from minor Debian-specific upgrades for security reasons. However, if you have the generic metapackage installed, the latest kernel will be pulled in as a dependency.
Note however, that the kernel that was already installed, as a dependency of linux-image-amd64 or otherwise, will not be removed. So you will need to periodically prune the list of obsolete kernel packages, including header packages.
| Which Debian kernel should I install? |
1,329,874,965,000 |
While troubleshooting a problem with my ethernet card, I've found that the driver I'm currently using may have some issues with old kernel versions. What command can I use to check the kernel version I am currently running ?
|
You can execute:
uname -r
It will display something like
3.13.0-62-generic
Found on https://askubuntu.com/questions/359574/how-do-i-find-out-the-kernel-version-i-am-running (see that Q&A to learn other commands you could use)
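Since the question mentions driver issues with old kernels, here is a sketch of a script guard comparing the running kernel against a minimum version (3.10 here is an arbitrary example), using sort -V for a proper version-aware comparison of the numeric x.y.z prefix:

```shell
# Compare the running kernel release against a required minimum version.
min=3.10
cur=$(uname -r | cut -d- -f1)   # strip the distro suffix, keep x.y.z
if printf '%s\n%s\n' "$min" "$cur" | sort -V -C; then
    echo "kernel $cur >= $min"
else
    echo "kernel $cur is older than $min"
fi
```

sort -V -C exits 0 when its input is already in version order, i.e. when $min does not exceed $cur.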
| How do I check the running kernel version? |
1,329,874,965,000 |
It's said that compiling GNU tools and the Linux kernel with the -O3 gcc optimization option will produce weird and funky bugs. Is it true? Has anyone tried it, or is it just a hoax?
|
It's used in Gentoo, and I didn't notice anything unusual.
| Compiling GNU/Linux with -O3 optimization |
1,356,020,314,000 |
dmesg shows lots of messages from serial8250:
$ dmesg | grep -i serial
[ 0.884481] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 6.584431] systemd[1]: Created slice system-serial\x2dgetty.slice.
[633232.317222] serial8250: too much work for irq4
[633232.453355] serial8250: too much work for irq4
[633248.378343] serial8250: too much work for irq4
...
I have not seen this message before. What does it generally mean? Should I be worried?
(From my research, it is not distribution specific, but in case it is relevant, I see the messages on an EC2 instance running Ubuntu 16.04.)
|
There is nothing wrong with your kernel or device drivers. The problem is with your machine hardware. The problem is that it is impossible hardware.
This is an error in several virtualization platforms (including at least Xen, QEMU, and VirtualBox) that has been plaguing people for at least a decade. The problem is that the UART hardware emulated by various brands of virtual machine behaves impossibly, sending characters at an impossibly fast line speed. To the kernel, this is indistinguishable from faulty real UART hardware that is continually raising an interrupt for an empty output buffer/full input buffer. (Such faulty real hardware exists, and you will find embedded Linux people discussing this problem here and there.) The kernel pushes the data out/pulls the data in, and the UART immediately raises an interrupt saying that it is ready for more.
H. Peter Anvin provided a patch to fix QEMU in 2008. You'll need to ask Amazon when EC2 is going to catch up.
Further reading
Alan Cox (2008-01-12). Re: [PATCH] serial: remove "too much work for irq" printk. Linux Kernel Mailing List.
H. Peter Anvin (2008-02-07). Re: 2.6.24 says "serial8250: too much work for irq4" a lot.. Linux Kernel Mailing List.
Casey Dahlin (2009-05-15). 'serial8250: too much work for irq4' message when viewing serial console on SMP full-virtualized xen domU. 501026. Red Hat Bugzilla.
Sibiao Luo (2013-07-21). guest kernel will print many "serial8250: too much work for irq3" when using kvm with isa-serial. 986761. Red Hat Bugzilla.
schinkelm (2008-12-16). serial port in linux guest gives "serial8250: too much work for irq4". 2752. VirtualBox bugs.
Marc PF (2015-09-05). EC2 instance becomes unresponsive. AWS Developer Forums.
| Understanding "serial8250: too much work for irq4" kernel message |
1,356,020,314,000 |
Last Friday I upgraded my Ubuntu server to 11.10, which now runs with a 3.0.0-12-server kernel. Since then the overall performance has dropped dramatically. Before the upgrade the system load was about 0.3, but currently it is at 22-30 on an 8 core CPU system with 16GB of RAM (10GB free, no swap used).
I was going to blame the BTRFS file system driver and the underlying MD array, because [md1_raid1] and [btrfs-transacti] consumed a lot of resources. But all the [kworker/*:*] threads consume a lot more.
sar has outputted something similar to this constantly since Friday:
11:25:01 CPU %user %nice %system %iowait %steal %idle
11:35:01 all 1,55 0,00 70,98 8,99 0,00 18,48
11:45:01 all 1,51 0,00 68,29 10,67 0,00 19,53
11:55:01 all 1,40 0,00 65,52 13,53 0,00 19,55
12:05:01 all 0,95 0,00 66,23 10,73 0,00 22,10
And iostat confirms a very poor write rate:
sda 129,26 3059,12 614,31 258226022 51855269
sdb 98,78 24,28 3495,05 2049471 295023077
md1 191,96 202,63 611,95 17104003 51656068
md0 0,01 0,02 0,00 1980 109
The question is: How can I track down why the kworker threads consume so many resources (and which one)? Or better: Is this a known issue with the 3.0 kernel, and can I tweak it with kernel parameters?
Edit:
I updated the Kernel to the brand new version 3.1 as recommended by the BTRFS developers. But unfortunately this didn't change anything.
|
I found this thread on lkml that answers your question a little. (It seems even Linus himself was puzzled as to how to find out the origin of those threads.)
Basically, there are two ways of doing this:
For this method you will need ftrace compiled into your kernel, with debugfs mounted:
mount -t debugfs nodev /sys/kernel/debug
Then enable the workqueue tracepoint and capture its output:
$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
(wait a few seconds, then interrupt the cat)
More information on the function tracer facilities of Linux is available in the ftrace.txt documentation.
This will output what the threads are all doing, and is useful for tracing multiple small jobs.
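Once you have out.txt, you can tally which work function is queued most often; the workqueue_queue_work event includes a function=<name> field. A sketch, run here on a hypothetical three-line trace excerpt (the addresses and function names are made up):

```shell
# Fake trace excerpt in the workqueue_queue_work event format.
cat > /tmp/out.txt <<'EOF'
kworker/0:1-123 [000] 1.000: workqueue_queue_work: work struct=ffff8800 function=vmstat_update workqueue=ffff8801 req_cpu=0 cpu=0
kworker/0:1-123 [000] 1.001: workqueue_queue_work: work struct=ffff8802 function=flush_to_ldisc workqueue=ffff8801 req_cpu=0 cpu=0
kworker/0:1-123 [000] 1.002: workqueue_queue_work: work struct=ffff8800 function=vmstat_update workqueue=ffff8801 req_cpu=0 cpu=0
EOF
# Count occurrences of each queued work function, most frequent first.
grep -o 'function=[^ ]*' /tmp/out.txt | sort | uniq -c | sort -rn
```

The function at the top of the list is what is keeping your kworkers busy.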
cat /proc/THE_OFFENDING_KWORKER/stack
This will output the stack of a single thread doing a lot of work. It may allow you to find out what caused this specific thread to hog the CPU (for example). THE_OFFENDING_KWORKER is the pid of the kworker in the process list.
| Why is kworker consuming so many resources on Linux 3.0.0-12-server? |
1,356,020,314,000 |
I have been working on embedded OSes like uCOS and ThreadX. While I have written apps on Linux, I'm now planning to start learning Linux kernel development. I have a few questions regarding the environment.
Which is the best distro with easy-to-use tools for kernel development? (So far I have used RHEL and Fedora. While I am comfortable with these, it also looks like Ubuntu has built-in scripts for easy kernel compilation, like make-kpkg, etc.)
Can you describe the best setup for kernel debugging? While debugging other embedded OSes, I have used a serial port to dump progress, JTAG, etc. What kind of setup do the Linux kernel devs use? (Is my testbed PC with a serial port enough for my needs? If yes, how do I configure the kernel to dump to the serial port?) I'm planning to redirect kernel messages to a serial console which will be read on my laptop.
What tool is best for debugging and tracing kernel code? As mentioned earlier, is a serial console the only way? Or does any IDE/JTAG-style interface exist for the PC?
|
My personal flavor for Linux Kernel development is Debian. Now for your points:
As you probably guessed, Ubuntu doesn't bring anything new to kernel development afaik, apart from what's already available in Debian; for example, make-kpkg is a Debian feature, not an Ubuntu one. Here are some links to get you started on common Linux kernel development tasks in Debian:
Chapter 4 - Common kernel-related tasks of Debian Linux Kernel Handbook
Chapter 10 - Debian and the kernel of The Debian GNU/Linux FAQ
The easiest way to do kernel debugging is using QEMU and GDB. Some links to get you started:
http://files.meetup.com/1590495/debugging-with-qemu.pdf
http://www.cs.rochester.edu/~sandhya/csc256/assignments/qemu_linux.html
Though you should be aware that this method is not viable for certain scenarios, like debugging specific hardware issues, for which you would be better off using physical serial debugging and real hardware. For this you can use KGDB (it works over ethernet too). KDB is also a good choice. Oh, and by the way, both KGDB and KDB have been merged into the Linux kernel. More on those two here.
Another cool method, which works marvelously for non-hardware-related issues, is using the User-mode Linux kernel. Running the kernel in user mode as any other process allows you to debug it just like any other program (examples). More on User-mode Linux here. UML has been part of the Linux kernel since 2.6.0, so you can build any official kernel version above that into UML mode by following these steps.
See item 2. Unfortunately there is no ultimate best method here, since each tool/method has its pros and cons.
| Kernel Hacking Environment |
1,356,020,314,000 |
Why does RHEL (and its derivatives) use such an old kernel? It uses 2.6.32-xxx, which seems old to me. How do they support newer hardware with that kernel? As far as I know these kind of distributions do run on fairly modern hardware.
|
Because Red Hat Enterprise Linux is foremost about stability, and is a long-lived distribution (some 10 years guaranteed). RHEL users don't want anything to change unless absolutely necessary. Note also that while the base version of the kernel is old, RHEL's kernel contains lots of backported features and bug fixes, so it isn't really that old.
| Why does Red Hat Linux use such an old kernel? |
1,356,020,314,000 |
I am in the EU zone, +1, or +2 when DST is on. Today is Sunday, 31.3.2024. This Sunday morning, we changed to DST, at 2→3 (CET → CEST).
I have Linux servers in separate networks reporting monthly energy consumption at 23:58 of the last day of the month. They have worked perfectly for many years. Networks are up to 8 hours travel apart!
Each server has an RTC and syncs NTP regularly. There are many safeguards if anything is suspicious, and three core servers in each network constantly check against each other whether everything, including the time, is OK. Yes, I am paranoid. I tolerate no (0) bugs. My systems work perfectly stable for many years. Worked, sorry.
Last night, at 23:58 on March 30th, all of my Raspberry Pi servers decided that March 30th is followed by the 1st of the next month! I verified in 7 (seven) different and independent ways, per network, that everything really did occur on March 30th within the minute 23:58! Confirmation includes 9 separate devices ignoring DST, plus an external US and an EU mail provider.
At 23:58 of each day, my servers do a bash test that can apparently go wrong many ways:
(( $(date -d tomorrow +"%-d") == 1 )) && ZadnjiDanMjeseca=1 || ZadnjiDanMjeseca=""
I do not see any way for this test to go wrong! I hope I am wrong. At this moment, on 31.3.2024, everything works perfectly fine, and zdump is correct! (The following example may appear misformatted; there should be four distinct rows.)
hwclock; date; date -d tomorrow +"%-d"
2024-03-31 19:22:23.311664+02:00
ned, 31.03.2024. 19:22:23 CEST
1
The only way I can explain this issue is: the Linux distribution I use, Raspbian, somehow has two separate kernel functions calculating time, and one of them, sadly, missed the fact that this is a leap year! Oops!
But, if so, why should it be limited to this Raspberry Pi OS version and kernel? Hell, Microsoft failed to make Excel calculate leap years correctly!
Over this weekend, I have seen several institutions (banks, our national tax office...) in my country having date related issues and closing the shop, too, so it seems not to be limited to Raspbian.
This goes against me as a programmer, but, I can find no alternative explanation. Hopefully, I am wrong...
Before posting, I made a switch from 23:58 to 00:03 on the 1st, and where I must log data at 23:58 of the last day of the month, I test by adding 300 seconds, not by asking for +1 day. These solutions should work.
So it is not buried in the comments: as I never use such calculations, I made a basic error in expecting that "tomorrow" actually means tomorrow! It doesn't make any sense to me; it means +1 day. So, I had expected these two commands to produce the same output:
> date -d 'next tue'
uto, 2.04.2024. 00:00:00 CEST
> date -d 'tomorrow'
uto, 2.04.2024. 10:24:55 CEST
instead of having to type:
date -d "tomorrow 0"
|
Well, I get:
# date -s '2024-03-30 23:58'; date -d tomorrow
Sat Mar 30 23:58:00 EET 2024
Mon Apr 1 00:58:00 EEST 2024
which isn't really surprising since 24 hours after 2024-03-30 23:58 it is 2024-04-01 00:58. Just that the latter is in daylight savings time.
The manual says:
Relative items adjust a date (or the current date if none) forward or backward. The effects of relative items accumulate.
[...]
More precise units are [...], ‘day’ worth 24 hours,
[...]
The string ‘tomorrow’ is worth one day in the future (equivalent to ‘day’),
[...]
When a relative item causes the resulting date to cross a boundary where the clocks were adjusted, typically for daylight saving time, the resulting date and time are adjusted accordingly.
The way to avoid issues like that is to ask for a time that's not close to midnight.
E.g. noon-ish usually falls on the right day:
# date -s '2024-03-30 23:58'; date -d '+12 hours'
Sat Mar 30 23:58:00 EET 2024
Sun Mar 31 12:58:00 EEST 2024
And this also seems to work:
# date -s '2024-03-30 23:58'; date -d '12:00 tomorrow'
Sat Mar 30 23:58:00 EET 2024
Sun Mar 31 12:00:00 EEST 2024
| How could March 30th 2024 be followed by the 1st? |
1,356,020,314,000 |
Regarding the "Spectre" security vulnerability, "Retpoline" was introduced as a solution to mitigate the risk. However, I've read a post that mentioned:
If you build the kernel without CONFIG_RETPOLINE, you can't build modules with retpoline and then expect them to load — because the thunk symbols aren't exported.
If you build the kernel with the retpoline though, you can successfully load modules which aren't built with retpoline. (Source)
Is there an easy and common/generic/unified way to check whether a kernel is "Retpoline"-enabled or not? I want to do this so that my installer can install the proper build of the kernel module.
|
If you’re using mainline kernels, or most major distributions’ kernels, the best way to check for full retpoline support (i.e. the kernel was configured with CONFIG_RETPOLINE, and was built with a retpoline-capable compiler) is to look for “Full generic retpoline” in /sys/devices/system/cpu/vulnerabilities/spectre_v2. On my system:
$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
Mitigation: Full generic retpoline, IBPB, IBRS_FW
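For an installer, that check can be scripted directly (a sketch; it assumes a kernel new enough to expose the spectre_v2 sysfs file, and falls through gracefully otherwise):

```shell
# Decide which module build to install based on the kernel's retpoline state.
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ] && grep -q 'Full generic retpoline' "$f"; then
    echo "retpoline kernel"
else
    echo "no retpoline (or sysfs file missing)"
fi
```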
If you want more comprehensive tests, to detect retpolines on kernels without the spectre_v2 systree file, check out how spectre-meltdown-checker goes about things.
| How to check if Linux kernel is "Retpoline" enabled or not? |
1,356,020,314,000 |
I implemented my own Serial-ATA Host Bus Adapter (HBA) in VHDL and programmed it onto an FPGA. An FPGA is a chip which can be programmed with any digital circuit. It's also equipped with serial transceivers to generate high-speed signals for SATA or PCIe.
This SATA controller supports SATA 6 Gb/s line rates and uses ATA-8 DMA-IN/OUT commands to transfer data in up to 32 MiB chunks to and from the device. The design is proven to work at maximum speed (e.g. Samsung SSD 840 Pro -> over 550 MiB/s).
After some tests with several SSD and HDD devices, I bought a new Seagate 6 TB Archive HDD (ST6000AS0002). This HDD reaches up to 190 MiB/s read performance, but only 30 to 40 MiB/s write performance!
So I dug deeper and measured the transmitted frames (yes, that's possible with an FPGA design). As far as I can tell, the Seagate HDD is ready to receive the first 32 MiB of a transfer in one piece. This transfer happens at the maximum line speed of 580 MiB/s. After that, the HDD stalls the remaining bytes for over 800 ms! Then the HDD is ready to receive the next 32 MiB and stalls again for 800 ms. All in all, a 1 GiB transfer needs over 30 seconds, which equals roughly 35 MiB/s.
I assume that this HDD has a 32 MiB write cache, which is flushed in between the burst cycles. Data transfers of less than 32 MiB don't show this behavior.
My controller uses DMA-IN and DMA-OUT commands to transfer data. I'm not using the QUEUED-DMA-IN and QUEUED-DMA-OUT commands, which are used by NCQ-capable AHCI controllers. Implementing AHCI and NCQ on an FPGA platform is very complex and not needed by my application layer.
I would like to reproduce this scenario on my Linux PC, but the Linux AHCI driver has NCQ enabled by default. I need to disable NCQ, so I found this website describing how to disable NCQ, but it doesn't work.
The Linux PC still reaches 190 MiB/s write performance.
> dd if=/dev/zero of=/dev/sdb bs=32M count=32
1073741824 bytes (1.1 GB) copied, 5.46148 s, 197 MB/s
I think there is a fault in the article above: reducing the NCQ queue depth to 1 does not disable NCQ. It just allows the OS to use only one queue. It can still use QUEUED-DMA-** commands for the transfer. I need to really disable NCQ so the driver issues DMA-IN/OUT commands to the device.
So here are my questions:
How can I disable NCQ?
If NCQ queue depth = 1, is Linux's AHCI driver using QUEUED-DMA-** or DMA-** commands?
How can I check if NCQ is disabled, given that changing /sys/block/sdX/device/queue_depth is not reported in dmesg?
|
Thanks to @frostschutz, I could measure the write performance in Linux without the NCQ feature. The kernel boot parameter libata.force=noncq disables NCQ completely.
Regarding my Seagate 6TB write performance problem, there was no change in speed. Linux still reaches 180 MiB/s.
But then I had another idea:
The Linux driver does not use transfers of 32 MiB chunks. The kernel buffer is much smaller, especially if NCQ with 32 queues is enabled (32 queues * 32 MiB => 1 GiB AHCI buffer).
So I tested my SATA controller with 256 KiB transfers and voilà, it's possible to reach 185 MiB/s.
So I guess the Seagate ST6000AS0002 firmware is not capable of handling big ATA burst transfers. The ATA standard allows up to 65,536 logical blocks per command, which equals 32 MiB with 512-byte blocks.
SMR - Shingled Magnetic Recording
Another possibility for the bad write performance could be the shingled magnetic recording technique, which is used by Seagate in these archive devices. Obviously, I triggered a rare effect with my FPGA implementation.
| How to (really) disable NCQ in Linux |
1,356,020,314,000 |
While going through the output of dmesg, I saw a list of values which I am not able to understand properly.
Memory: 2047804k/2086248k available (3179k kernel code, 37232k reserved, 1935k data, 436k init, 1176944k highmem)
virtual kernel memory layout:
fixmap : 0xffc57000 - 0xfffff000 (3744 kB)
pkmap : 0xff800000 - 0xffa00000 (2048 kB)
vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
lowmem : 0xc0000000 - 0xf77fe000 ( 887 MB)
.init : 0xc0906000 - 0xc0973000 ( 436 kB)
.data : 0xc071ae6a - 0xc08feb78 (1935 kB)
.text : 0xc0400000 - 0xc071ae6a (3179 kB)
From the values I understand that I have 2 GB of RAM (physical memory). But the rest of the values seem like magic numbers to me.
I would like to know about each one (fixmap, pkmap, etc.) in brief (if I have more doubts, I will post each one as a separate question).
Could someone explain them to me?
|
First off, a 32-bit system has 2^32 (4'294'967'296) linear addresses, from 0x00000000 to 0xffffffff, with which to access physical locations in the RAM.
The kernel divides these addresses into user and kernel space.
User space (high memory) can be accessed by the user and, if necessary, also by the kernel.
The address range in hex and dec notation:
0x00000000 - 0xbfffffff
0 - 3'221'225'471
Kernel space (low memory) can only be accessed by the kernel.
The address range in hex and dec notation:
0xc0000000 - 0xffffffff
3'221'225'472 - 4'294'967'295
Like this:
0x00000000 0xc0000000 0xffffffff
| | |
+------------------------+----------+
| User | Kernel |
| space | space |
+------------------------+----------+
Thus, the memory layout you saw in dmesg corresponds to the mapping of linear addresses in kernel space.
First, the .text, .data and .init sequences which provide the initialization of the kernel's own page tables (translate linear into physical addresses).
.text : 0xc0400000 - 0xc071ae6a (3179 kB)
The range where the kernel code resides.
.data : 0xc071ae6a - 0xc08feb78 (1935 kB)
The range where the kernel data segments reside.
.init : 0xc0906000 - 0xc0973000 ( 436 kB)
The range where the kernel's initial page tables reside.
(and another 128 kB for some dynamic data structures.)
This minimal address space is just large enough to install the kernel in the RAM and to initialize its core data structures.
Their used size is shown inside the parenthesis, take for example the kernel code:
0xc071ae6a - 0xc0400000 = 31AE6A
In decimal notation, that's 3'255'914 (3179 kB).
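The same arithmetic can be checked directly with shell arithmetic expansion, which accepts 0x-prefixed hex:

```shell
# Recompute the kernel code size from the .text range in the dmesg output.
start=0xc0400000
end=0xc071ae6a
printf '%d bytes = %d kB\n' $((end - start)) $(( (end - start) / 1024 ))
```

This prints 3255914 bytes = 3179 kB, matching the value in parentheses.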
Second, the usage of kernel space after initialization
lowmem : 0xc0000000 - 0xf77fe000 ( 887 MB)
The lowmem range can be used by the kernel to directly access physical addresses.
This is not the full 1 GB, because the kernel always requires at least 128 MB of linear addresses to implement noncontiguous memory allocation and fix-mapped linear addresses.
vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
Virtual memory allocation can allocate page frames based on a noncontiguous scheme. The main advantage of this schema is to avoid external fragmentation, this is used for swap areas, kernel modules or allocation of buffers to some I/O devices.
pkmap : 0xff800000 - 0xffa00000 (2048 kB)
The permanent kernel mapping allows the kernel to establish long-lasting mappings of high-memory page frames into the kernel address space. When a HIGHMEM page is mapped using kmap(), virtual addresses are assigned from here.
fixmap : 0xffc57000 - 0xfffff000 (3744 kB)
These are fix-mapped linear addresses which can refer to any physical address in the RAM, not just the last 1 GB like the lowmem addresses. Fix-mapped linear addresses are a bit more efficient than their lowmem and pkmap colleagues.
There are dedicated page table descriptors assigned for fixed mapping, and mappings of HIGHMEM pages using kmap_atomic are allocated from here.
If you want to dive deeper into the rabbit hole:
Understanding the Linux Kernel
| What does the Virtual kernel Memory Layout in dmesg imply? |
1,356,020,314,000 |
I sent my computer to the manufacturer for diagnosis and help for a video output issue it was having. They updated the BIOS. Since then I've been getting
[Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0x20 (or later)
I didn't have any microcode or ucode packages installed before and I didn't used to get this message.
I've contacted the manufacturer and they've responded "don't remember your ticket number but doubt we updated the BIOS", so they're not being very helpful.
It boots and works, but is TSC_DEADLINE important or useful?
The only thing I can find about it is this: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/?id=73b866d89bf7c9a895d5445faad03fa3d56c8af8
But that seems to only apply to VirtualBox, and in any case I'm already running kernel 4.14 so I would think if that commit were going to fix my issue it already would have.
ryan@pocketwee:~$ uname -a
Linux pocketwee 4.14.0-1-amd64 #1 SMP Debian 4.14.2-1 (2017-11-30) x86_64 GNU/Linux
|
The sudden appearance of this message is rather odd; it suggests your updated firmware is no longer upgrading your CPU’s microcode, whereas the previous firmware you had, did. Weird... (Another possible scenario is that your CPU originally didn’t support TSC deadline at all, and your firmware is now upgrading its microcode to a version which declares support for TSC deadline, but has errata rendering it useless.)
In any case, TSC deadline support is nice to have, but not vital. The kernel has an elaborate framework for timekeeping and timed event handling; TSC deadline is one possible implementation of event handling, but not the only one. On CPUs which support it, it is nice to have though, because it’s very efficient.
To upgrade your microcode and hopefully re-enable TSC deadline support, you can install the microcode update packages from Debian’s contrib and non-free repositories. To do so, edit your /etc/apt/sources.list to ensure that your Debian repository definitions include main, contrib and non-free; then run
sudo apt update
followed by
sudo apt install intel-microcode
(for Intel CPUs) or
sudo apt install amd64-microcode
(for AMD CPUs). Once that’s done, reboot, and your microcode should be updated. If TSC deadline support is re-enabled, you won’t see the error message at boot, and you’ll see tsc_deadline_timer in the flags lines of /proc/cpuinfo.
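After the reboot, the flag check can be done in one line (the flag name is as it appears in /proc/cpuinfo on x86; on a CPU without the feature, or with the errata-disabled microcode, the first branch simply fails):

```shell
# Report whether the CPU currently advertises the TSC-deadline timer.
if grep -qw tsc_deadline_timer /proc/cpuinfo; then
    echo "TSC deadline timer available"
else
    echo "TSC deadline timer not advertised"
fi
```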
The Debian wiki has more information on microcode updates.
| TSC_DEADLINE disabled due to Errata |
1,356,020,314,000 |
What's the benefit of compiling kernel modules into the kernel (instead of as loadable modules)?
|
It depends. If you have a small amount of memory, using modules may improve resume times, since drivers are not all reloaded every time (I found the difference significant with 2 GiB of RAM but not with 4 GiB, on traditional hard drives). This was especially true when, due to a bug in the battery module (whether compiled in or loaded as a module), startup took very long (several minutes). Even without that bug, on Gentoo I managed to cut the boot time reported by systemd-analyze from 33 s to 18 s just by switching from a statically compiled kernel to modules - 'surprisingly', the kernel's share dropped from 9 s to 1.5 s.
Also, when you don't know in advance what hardware you are going to use, modules are clearly beneficial.
PS. You can compile even vital drivers as modules as long as you include them in the initrd. For example, at installation time distros put the driver for the / filesystem, hard-drive drivers, etc. into the initrd.
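The built-in vs. module choice is recorded in the kernel's .config: `=y` means compiled into the kernel image, `=m` means built as a loadable module. A small sketch for classifying an option (the option name and config path below are illustrative examples, not taken from the answer above):

```shell
# Classify a kernel config option as builtin (=y), module (=m), or unset.
# $1: a line from the kernel .config, e.g. "CONFIG_EXT4_FS=y"
config_state() {
    case "$1" in
        *=y) echo builtin ;;
        *=m) echo module ;;
        *)   echo unset ;;
    esac
}

# On a live system (config path varies by distribution):
#   config_state "$(grep '^CONFIG_EXT4_FS=' /boot/config-$(uname -r))"
```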
| Benefit of kernel module compiled inside kernel? |
1,356,020,314,000 |
What is the difference between a "Non-preemptive", "Preemptive" and "Selective Preemptive" Kernel?
Hope someone can shed some light into this.
|
On a preemptive kernel, a process running in kernel mode can be replaced by another process while in the middle of a kernel function.
This only applies to processes running in kernel mode; a CPU executing a process in user mode is considered "idle" in this respect. If a user-mode process wants to request a service from the kernel, it has to raise an exception (such as a system call) which the kernel then handles.
As an example:
Process A is executing an exception handler when process B is woken by an IRQ request; the kernel replaces process A with B (a forced process switch). Process A is left unfinished, and the scheduler decides afterwards whether process A gets CPU time again.
On a non-preemptive kernel, process A would simply have kept the processor until it finished or voluntarily allowed other processes to run (a planned process switch).
Today's Linux-based operating systems generally do not include a fully preemptive kernel; there are still critical sections which have to run without interruption. So you could call this a "selectively preemptive" kernel.
Apart from that, there are approaches to make the Linux kernel (nearly) fully preemptive.
Real Time Linux Wiki
LWN article
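To see which preemption model a particular kernel was built with, you can inspect its build config. This is a sketch under the assumption that the usual CONFIG_PREEMPT* options are present; the config file path varies by distribution:

```shell
# Report the kernel preemption model from CONFIG_PREEMPT* config lines.
# $1: the relevant lines from the kernel .config, newline-separated.
preempt_model() {
    case "$1" in
        *CONFIG_PREEMPT_RT=y*)        echo "fully preemptive (PREEMPT_RT)" ;;
        *CONFIG_PREEMPT=y*)           echo "preemptive (low-latency desktop)" ;;
        *CONFIG_PREEMPT_VOLUNTARY=y*) echo "voluntary preemption" ;;
        *CONFIG_PREEMPT_NONE=y*)      echo "non-preemptive (server)" ;;
        *)                            echo "unknown" ;;
    esac
}

# On a live system (config path is an assumption; distros vary):
#   preempt_model "$(grep PREEMPT /boot/config-$(uname -r))"
```

Note the PREEMPT_RT check comes first, since an RT config typically also sets CONFIG_PREEMPT=y.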
| What is the difference between Non-preemptive, Preemptive and Selective Preemptive Kernel? |
1,356,020,314,000 |
I am compiling my own 3.14 kernel. I fear I may have left out some important networking feature to get DNS working.
I can't resolve domain names. I can ping my DNS server.
I can resolve using that DNS on other machines so I know it's not the server.
~ # cat /etc/resolv.conf
nameserver 192.168.13.5
~ # nslookup google.com
Server: 192.168.13.5
Address 1: 192.168.13.5
nslookup: can't resolve 'google.com'
~ # ping -c 1 google.com
ping: bad address 'google.com'
~ # ping -c 1 192.168.13.5
PING 192.168.13.5 (192.168.13.5): 56 data bytes
64 bytes from 192.168.13.5: seq=0 ttl=128 time=0.382 ms
--- 192.168.13.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.382/0.382/0.382 ms
Any ideas what I left out? Here is my config: http://pastebin.com/vt4vGTgJ
EDIT:
If it's not the kernel, what could I be missing? I am using busybox, statically linked; there are no shared libraries on this system.
|
The problem was with busybox. I switched to a precompiled version and the issue went away. I need to look into its compilation options. Thanks for your help.
https://gist.github.com/vsergeev/2391575:
There are known issues with DNS functionality in statically linked glibc programs (like busybox in this case), because the NSS libraries (libnss) must be loaded dynamically. Building a uClibc toolchain and linking busybox against it resolves this.
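One way to confirm this diagnosis on your own image is to check how the busybox binary was linked, for example by parsing the output of `file(1)`. A hedged sketch (the binary path in the comment is just an example):

```shell
# Decide from `file` output whether a binary is statically linked.
# $1: the output of `file /path/to/binary`
is_static() {
    case "$1" in
        *"statically linked"*)  echo static ;;
        *"dynamically linked"*) echo dynamic ;;
        *)                      echo unknown ;;
    esac
}

# On a live system:
#   is_static "$(file /bin/busybox)"
```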
| Busybox ping IP works, but hostname nslookup fails with "bad address" |