I'm building an embedded Linux system, based on Gentoo. Using udev, all tty devices are probed with a PROGRAM stanza to determine if they are a modem. Right now the system boots up with 64 /dev/tty*. When udev probes the tty devices, the system runs out of memory. How can I reduce the number of produced tty devices to 4? Is this an OS setting or a kernel setting?
I'm not sure exactly how the device nodes are created (i.e. the exact sequence of events that leads to their creation), but I'm pretty sure the kernel creates the underlying devices for the 63 /dev/ttyN devices (plus /dev/tty) internally, and udev does the work of making them available inside /dev (except for /dev/tty and /dev/tty1, which are created by /etc/init.d/udev-mount with mknod). I don't think you can limit the number of kernel devices via configuration.

Here is a workaround if you want to limit the number of devices that appear in your /dev, though. Create a /etc/udev/rules.d/99-my-tty-rules.rules file and put something like the following in it:

    KERNEL=="tty[2-9][0-9]", RUN="/bin/rm /dev/%k", OPTIONS+="ignore_device"

This will get rid of tty device files numbered 20 and above. Notes:

- Using rm in there looks really strange, but I can't find a way to not create the node in the first place.
- Playing with these entries a bit too enthusiastically can lead to interesting problems - use with caution.
Change the number of generated /dev/tty devices
I already asked in #debian-next and on the forum, but thought I'd also ask here before switching back to Fedora. Suspend does not work. When I use systemctl suspend, the workstation goes into suspend for about 1 second and then comes back. I have no idea what could cause that. I tried to disable some of those devices in /proc/acpi/wakeup (XHCn); it didn't help. I updated all firmware from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/, which also didn't help. IIRC with Fedora 39 Xfce it worked, but I wasn't able to test it properly because I was only able to flash a live image to a USB stick, where I experienced the same behavior. The BIOS is up to date, and it works like a charm on Windows. So, before I switch back to Fedora, if anyone has an idea how to get this working, any help/information/idea is very appreciated. I like Debian, but this is unfortunately a killer for me :/

dmesg: https://pastebin.com/VidzhNgB

Edit: I installed Fedora for testing. I was wrong; I have the same problem there. So I am wondering what's wrong now. It was working like a charm, I can remember that. But... the only thing I changed are those RGB case fans. I doubt those can interrupt suspend?
This is not a real solution, but the only way to get this fixed was to replace the mainboard with one from ASRock. Now everything works as expected. Very sad that there was no other way.
Suspend doesn't work in Debian / powers up after a second
I have built a custom kernel which I want to install. I am afraid that the next upgrade will overwrite my custom kernel. Do I have to put the current kernel on hold, or is this not necessary? I cannot find a hint in the many descriptions of how to build a custom kernel. The current kernel is:

    # uname -r
    4.4.0-59-generic

The custom kernel debs are:

    linux-headers-4.4.35-custom_4.4.35-custom-1_i386.deb
    linux-image-4.4.35-custom_4.4.35-custom-1_i386.deb
    linux-image-4.4.35-custom-dbg_4.4.35-custom-1_i386.deb
    linux-libc-dev_4.4.35-custom-1_i386.deb
The package manager will not overwrite your kernel. In fact, the package manager never overwrites any kernel; it just adds the new version in parallel to the existing versions on the system. Depending on the distribution, the package manager may initiate a reconfiguration of the boot loader upon installing a new or removing an existing kernel, but that's really distribution specific (Ubuntu does exactly that). In that case, after installing a new version of the kernel, it will be the default on startup, so you will have to manually change the boot loader config to make your custom version the default choice.
Hold Kernel for Custom Kernel
I already got the keycode from the kernel, but acpi_listen won't recognize it in Arch Linux:

    # /usr/lib/udev/keymap -i /dev/input/by-path/platform-thinkpad_acpi-event
    Press ESC to finish, or Control-C if this device is not your primary keyboard
    scan code: 0x1A   key code: micmute

Now I try to map the key by:

    # /usr/lib/udev/keymap /dev/input/by-path/platform-thinkpad_acpi-event 0x01a micmute
    setting scanode 0x1A to key code 248

But acpi_listen still gets no output here. How can I make acpi_listen recognize it?

UPDATE 2: The evdev driver doesn't seem to recognize this. I heard someone saying that xorg won't route key event numbers that go beyond the limit. It has to be solved as an acpi event, but I don't know how.

UPDATE: Seems to be complicated:

    $ xmodmap -e 'keycode 248 = XF86MicMute NoSymbol XF86MicMute'
    xmodmap: commandline:1: bad keysym name 'XF86MicMute' in keysym list
    xmodmap: commandline:1: bad keysym name 'XF86MicMute' in keysym list
    xmodmap: 2 errors encountered, aborting.
The problem is that key code micmute is out of range, as explained in this bug report. So you need to remap scan code 0x1A to some other key code you are not using that is in range. If this workaround using prog2 doesn't work you have to pick some other key code. You can look in /usr/include/linux/input.h to see which key codes are defined and then look at your keymap to see what key codes are in use. Remember to pick a key code < 247.
Setting up kernel keyboard mapping
For security reasons I have to boot Linux from U-Boot with all output hidden (silently) until a password is entered. I've configured U-Boot to do this correctly using the CONFIG_AUTOBOOT_KEYED macro and can successfully boot silently. The issue I am having is that when U-Boot boots the Linux kernel with silent mode enabled, it passes console= as part of the bootargs to the Linux kernel. This is fine for silent booting, but I can't seem to find a way to re-enable the console again after bootup. I've also tried to boot normally and append loglevel=0 to the kernel bootargs, which works for silent bootup, but again I cannot re-enable the console. I've tried dmesg -n 4 and klogd -c 4 to set the kernel loglevel back to KERN_WARNING (4), without luck. These commands work properly when I boot the kernel normally. The best guide I've found on the matter is "Silencing the boot process" on blackfin.uclinux.org. Ideally I'd like to use U-Boot's silent mode where it passes console= as part of the bootargs but still take input on the console and re-enable output when the password is entered.
If anyone else runs into this issue: I never found a good fix. I ended up hacking both u-boot and the Linux kernel serial driver, basically checking whether the password had been entered. If it had, I allowed the code to run normally; if it hadn't, I just returned from the functions so that nothing was actually printed on the console. For the kernel, I edited the receive_chars() function to look for the password (input) and transmit_chars() to mask output. I had u-boot pass the password in as part of the bootargs. If it was null, then the password was already entered and we ignored the special code. If it had a value, then we grabbed input chars via receive_chars() and compared them to the stored string from bootargs. In u-boot I just used CONFIG_AUTOBOOT_KEYED and the associated default macros for the password entry. I then changed common/cmd_bootm.c to not call fixup_silent_linux() (which sets the console= value), and let the kernel deal with it as stated above. Hopefully this helps someone else.
Silent booting Linux from u-boot
Are there any other interfaces, e.g. the /proc filesystem?
The Linux kernel syscall API is the primary API (though hidden under libc, and rarely used directly by programmers), and most standard IPC mechanisms are heavily biased toward the "everything is a file" approach, which eliminates them here as they ultimately require read/write (and more) calls.

However, on most platforms (if you exclude all the system calls needed to get you there) there is a way: the vDSO. This is a mechanism where the kernel maps one (or more) slightly magic pages into each process (usually in the form of an ELF .so). You can see this as linux-vdso.so or similar with ldd or in /proc/PID/maps. This is effectively memory-mapped IPC between the kernel and a user process (albeit one-way in its current implementation). It's used to speed up syscalls in general and was originally implemented (linux-gate.so) to address x86 performance issues, but it may also contain kernel data and access functions. Calls like getcpu() and gettimeofday() may use these rather than making an actual syscall and a kernel context switch. The availability of these optimised calls is detected and enabled by the glibc startup code (subject to platform availability).

Current implementations contain a (read-only) page of shared kernel variables known as the "VVAR" page which can be read directly. You can check this by inspecting the output of strace -e trace=clock_gettime date to see if your date command makes any clock_gettime() syscalls; with a working vDSO it will not (the time will be read from the VVAR page by a function in the vDSO page, see arch/x86/vdso/vclock_gettime.c).

There's a useful technical summary here: http://blog.tinola.com/?e=5 , a more detailed tutorial: http://www.linuxjournal.com/content/creating-vdso-colonels-other-chicken , and the man page: http://man7.org/linux/man-pages/man7/vdso.7.html
Are system calls the only way to interact with the Linux kernel from user land?
RHEL 7.2 memory use, per free -m:

                  total        used        free      shared  buff/cache   available
    Mem:         386564       77941       57186         687      251435      306557
    Swap:         13383        2936       16381

We see that used swap is 2936M, so we want to decrease it to the minimum with the following:

    echo 1 > /proc/sys/vm/swappiness
    sysctl -w vm.swappiness=1
    echo "vm.swappiness = 1" >> /etc/sysctl.conf

After 10 minutes we check again, but the OS still uses the swap (free -m):

                  total        used        free      shared  buff/cache   available
    Mem:         386564       77941       57186         687      251435      306557
    Swap:         13389        2930       16381

Why did the actions we took not take effect immediately? Do we need to restart the OS in order to get used swap down to 0?

For example, we run vmstat:

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd     free     buff    cache     si   so   bi   bo  in  cs us sy id wa st
     3  0  85740 20255872  2238248 183126400     0    0    7  162   0   0  7  1 92  0  0

We decrease vm.swappiness to 1 and run vmstat again after 10 minutes:

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd     free     buff    cache     si   so   bi   bo  in  cs us sy id wa st
     3  0  85740 20255872  2238248 183126400     0    0    7  162   0   0  7  1 92  0  0
As you've been told before (see Why does swappiness not work?), changing swappiness only affects future decisions made by the kernel when it needs to free memory. Reducing it won't cause the kernel to reload everything that's been swapped out. Your vmstat output shows that swap isn't being actively used (si and so are 0), i.e. your current workloads really don't need the pages which have been swapped out. There's no point in trying to micro-manage the kernel's use of swap the way you're attempting to. Depending on your workload, decide whether you need to favour the page cache or not, adjust swappiness accordingly, then leave the system to run. If you really want to clear swap, disable it and re-enable it:

    swapoff -a && swapon -a
Why does RHEL use swap even when vm.swappiness = 1?
I have experienced an issue nearly identical to one described in the askubuntu community. Like the user who posted that issue, my system features a Kingston NVMe disk, and as with that user, my issue was resolved by adding the following kernel option in the grub menu: nvme_core.default_ps_max_latency_us=0. The user's stated resolution begins as follows:

    The problem was of a SSD features, the Autonomous Power State Transitions (APST) was causing the freezes. To mitigate it, until they will release the fix, include the line nvme_core.default_ps_max_latency_us=0 in the GRUB_CMDLINE_LINUX_DEFAULT options.

Although helpful, this comment leaves several questions open, including the following:

- What and where is the specific flaw causing the problem?
- What does the workaround change to prevent the presentation of the flaw?
- What functionality or other desired effect is lost due to such a workaround?
- And especially, what is required to be fixed - the kernel, the storage-media firmware, the system firmware (i.e. UEFI/BIOS), or some other component - to provide a proper resolution?

Any comments are helpful in attempting to resolve all or part of this confusion.
The code comment within drivers/nvme/host/core.c in the Linux kernel source seems to explain it best:

    /*
     * APST (Autonomous Power State Transition) lets us program a table of power
     * state transitions that the controller will perform automatically.
     *
     * Depending on module params, one of the two supported techniques will be used:
     *
     * - If the parameters provide explicit timeouts and tolerances, they will be
     *   used to build a table with up to 2 non-operational states to transition to.
     *   The default parameter values were selected based on the values used by
     *   Microsoft's and Intel's NVMe drivers. Yet, since we don't implement dynamic
     *   regeneration of the APST table in the event of switching between external
     *   and battery power, the timeouts and tolerances reflect a compromise
     *   between values used by Microsoft for AC and battery scenarios.
     *
     * - If not, we'll configure the table with a simple heuristic: we are willing
     *   to spend at most 2% of the time transitioning between power states.
     *   Therefore, when running in any given state, we will enter the next
     *   lower-power non-operational state after waiting 50 * (enlat + exlat)
     *   microseconds, as long as that state's exit latency is under the requested
     *   maximum latency.
     *
     * We will not autonomously enter any non-operational state for which the total
     * latency exceeds ps_max_latency_us.
     *
     * Users can set ps_max_latency_us to zero to turn off APST.
     */
    static int nvme_configure_apst(struct nvme_ctrl *ctrl)

So, APST is a feature that allows the NVMe controller (within the NVMe SSD) to switch between power management states autonomously, following configurable rules. The NVMe controller specifies how many microseconds it needs to enter and exit each power-save state; the kernel uses this information to configure the state transition rules within the NVMe controller.

What and where is the specific flaw causing the problem?
It looks like this particular Kingston NVMe SSD is either way too optimistic in its wake-up time estimates, or fails to wake up at all (without fully resetting the controller) after entering a deep enough power-saving state. When given permission to use APST, it apparently goes into some power-saving state and then fails to return to an operational state within the specified time, which makes the kernel unhappy.

What does the workaround change to prevent the presentation of the flaw?

It tells the kernel that the maximum allowed time for waking up from APST power management states is exactly 0 microseconds, which causes the APST feature to be disabled.

What functionality or other desired effect is lost due to such a workaround?

If the NVMe controller's autonomous power management feature cannot be used, the controller will only be allowed to enter power-saving states when specifically requested by the kernel. This means the power savings most likely won't be as great as with APST in use.

And especially, what is required to be fixed - the kernel, the storage-media firmware, the system firmware (i.e. UEFI/BIOS), or some other component - for users to experience a proper resolution?

The optimal fix would be for Kingston to provide an NVMe disk firmware update that either makes the APST power management work correctly, or at minimum makes the drive not promise something it cannot deliver, i.e. not announce APST modes with overly optimistic transition times, and/or not announce at all any APST modes that will cause the controller to fail if used. If it turns out the problem can be avoided by e.g. programming APST to avoid the deepest power-saving state completely, it might be possible to create a more specific kernel-level workaround. Many device drivers in the Linux kernel have "quirk tables" specifying workarounds for specific hardware models.
In the case of NVMe, you can find one in drivers/nvme/host/pci.c within the Linux kernel source:

    static const struct pci_device_id nvme_id_table[] = {
        { PCI_VDEVICE(INTEL, 0x0953),   /* Intel 750/P3500/P3600/P3700 */
            .driver_data = NVME_QUIRK_STRIPE_SIZE |
                    NVME_QUIRK_DEALLOCATE_ZEROES, },
        { PCI_VDEVICE(INTEL, 0x0a53),   /* Intel P3520 */
            .driver_data = NVME_QUIRK_STRIPE_SIZE |
                    NVME_QUIRK_DEALLOCATE_ZEROES, },
        { PCI_VDEVICE(INTEL, 0x0a54),   /* Intel P4500/P4600 */
            .driver_data = NVME_QUIRK_STRIPE_SIZE |
                    NVME_QUIRK_DEALLOCATE_ZEROES |
                    NVME_QUIRK_IGNORE_DEV_SUBNQN, },
        { PCI_VDEVICE(INTEL, 0x0a55),   /* Dell Express Flash P4600 */
            .driver_data = NVME_QUIRK_STRIPE_SIZE |
                    NVME_QUIRK_DEALLOCATE_ZEROES, },
        { PCI_VDEVICE(INTEL, 0xf1a5),   /* Intel 600P/P3100 */
            .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
                    NVME_QUIRK_MEDIUM_PRIO_SQ |
                    NVME_QUIRK_NO_TEMP_THRESH_CHANGE |
                    NVME_QUIRK_DISABLE_WRITE_ZEROES, },
        { PCI_VDEVICE(INTEL, 0xf1a6),   /* Intel 760p/Pro 7600p */
            .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
        { PCI_VDEVICE(INTEL, 0x5845),   /* Qemu emulated controller */
            .driver_data = NVME_QUIRK_IDENTIFY_CNS |
                    NVME_QUIRK_DISABLE_WRITE_ZEROES |
                    NVME_QUIRK_BOGUS_NID, },
        { PCI_VDEVICE(REDHAT, 0x0010),  /* Qemu emulated controller */
            .driver_data = NVME_QUIRK_BOGUS_NID, },
        [...]

Here the various NVME_QUIRK_ settings trigger various pieces of workaround code within the driver. Note that there already exists a quirk setting named NVME_QUIRK_NO_DEEPEST_PS which prevents state transitions to the deepest power management state.
If the APST problem of your Kingston NVMe turns out to have the same workaround as already implemented for the Intel 600P/P3100 and ADATA SX8200PNP, then all it would take is writing a new quirk table entry like this (replacing the things within <angle brackets> with appropriate values, which you can get with lspci -nn):

    { PCI_DEVICE(<PCI vendor ID>, <PCI product ID of the SSD>),   /* <specify make/model of SSD here> */
        .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },

and recompiling the kernel with this modification. Obviously, someone who actually has this exact SSD model is needed to test this.

If you happen to be familiar with C programming basics and how to compile custom kernels, this could be your chance to get your name on the long list of Linux kernel contributors! If you are interested, you should probably read kernelnewbies.org for more details. Kernel programming is not always deeply intricate: there are lots of simple parts that just need a person with the right kind of hardware and some basic programming knowledge. I've submitted a few minor patches just like this.

If setting NVME_QUIRK_NO_DEEPEST_PS turns out not to fix the problem, then implementing a new quirk might be needed. That could be more complicated, and might require some experimentation, or ideally information from Kingston, to find out what exactly needs to be done to avoid this problem, and perhaps discussion with the Linux NVMe driver maintainer on the best way to implement it.
Clarifying NVMe APST problems for Linux
I was reading some sites and came across the term LTR next to a kernel version number. Can somebody explain what it means?
LTR stands for Long Term Release. This is also known as an LTSR, short for Long Term Support Release. These releases are supported for a longer time and are meant to be used in production environments, where stability is preferred over new features. In terms of the kernel you are reading about, the LTR cycle is about 3 years. This means that if you are a user who needs stability and you download an LTR kernel, it will be supported upstream for the next 3 years. The definitive source for Linux kernels is the Linux Kernel Archives.
What does LTR kernel mean?
I don't have any swap partition/file on my machine, and only 2GB of RAM. Sometimes it happens that the memory gets saturated by some process (Xorg+browser+compiler+...) and the system hangs indefinitely, and the only way to restart it (other than hard reset) is with SysRq. I understood that the Out Of Memory Killer won't help me because when the memory is completely full, kernel cannot allocate the OOM Killer itself. Is there any way to preload the OOM Killer, so that it can actually work when memory is completely full? Or is it possible to tweak the kernel so that OOM Killer gets activated when my ram is full at ${TOTAL_RAM} - 10MB?
I'm fairly sure that the kernel reserves some memory for itself, i.e. for launching the OOM killer. (What use would an OOM killer be if it failed to load due to lack of memory?)
Preloading the OOM Killer
As far as I understand, the kernel is not a process, but rather a set of handlers that can be invoked from the runtime of another process (or by the kernel itself via a timer or something similar?). If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read), how does the kernel identify that the context should be switched? In order to achieve this, it would seem another process would need to run. Does the kernel spawn a process that takes care of this by intermittently checking for processes in this state? Or does the process that invokes the long-running synchronous handler let the kernel know that it should switch contexts until the handler is complete (e.g. the disk read completes)?
"The kernel is not a process." This is pure terminology. (Terminology is important.) The kernel is not a process because by definition processes exist in userland. But the kernel does have threads.

"If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read)." If a userland process executes a machine instruction which references an unmapped memory page, then:

1. The processor generates a trap and transitions to ring 0/supervisor mode. (This happens in hardware.)
2. The trap handler is part of the kernel. Assuming that the memory page must indeed be paged in from disk, it puts the process into the state of uninterruptible sleep (this means it saves the process CPU state in the process table and modifies the status field in the process entry in the table of processes), finds a victim memory page, initiates the I/O to page out the victim and page in the requested page, and invokes the scheduler (another part of the kernel) to switch userland context to another process which is ready to run.
3. Eventually, the I/O completes. This generates an interrupt. In response to the interrupt, the processor invokes a handler and transitions to ring 0/supervisor mode. (This happens in hardware.)
4. The interrupt handler is part of the kernel. It clears the waiting-for-I/O state of the process which was waiting for the memory page and marks it ready to run. It then invokes the scheduler to switch userland context to a process which is ready to run.

In general, the kernel runs:

- In response to a hardware trap or interrupt; this includes timer interrupts.
- In response to an explicit system call from a user process.

Most of the time, the processor is in ring 3/user mode and executes instructions from some userland process.
It transitions to ring 0/supervisor mode (where the kernel lives) when a userland process makes a syscall (for example, because it wants to do some input/output operation), or when the hardware generates a trap (invalid memory access, division by zero, and so on), or when an interrupt request is received from the hardware (I/O completion, timer interrupt, mouse move, packet arrived on the network interface, etc.).

To answer the question in the title, "how does the kernel scheduler know how to pre-empt a process": the kernel handles timer interrupts. If, when a timer interrupt arrives, the scheduler notices that the currently running userland process has exhausted its quantum, then the process is put at the end of the run queue and another process is resumed. (In general, the scheduler takes care to ensure that all userland processes which are ready to run receive processor time fairly.)
How does the kernel scheduler know how to pre-empt a process?
Is there a way to increase the limit of 20 multicast groups that you can join on a given socket? Is there some system setting that I am missing or is there some hard limit which cannot be exceeded?
Well, in case someone is searching for this, the following parameter exists: /proc/sys/net/ipv4/igmp_max_memberships. Currently my install says 20. In the sources, I also see:

    bits/in.h:#define IP_MAX_MEMBERSHIPS 20

I think bumping up the system parameter may be enough; otherwise you will have to patch the header as well.

EDIT: it looks like bumping up the system parameter does the trick.
Is there a way to increase the 20 multicast group limit per socket?
Each process has 2 memory areas: user space (high memory) and kernel space (low memory). In the kernel space, are the first 896 MB used for mapping kernel code (not the full 1 GB)? This means, when a user-space application performs a system call or anything related to the kernel, the kernel will refer to kernel space for the system call to execute, is that right? The reserved 128 MB in kernel space (for high memory (user space) access), does it hold all the references to the user-space memory area? So a kernel process can access any user space by referring to this area, is this true? That's why this area is called highmem in kernel space, isn't it?
"High memory" and "low memory" do not apply to the virtual address space of processes; they are about physical memory instead. In the process' virtual address space, the user space occupies the first 3GB, and the kernel space the fourth GB, of this linear address space. The first 896MB of the kernel space (not only kernel code, but its data also) is "directly" mapped to the first 896MB of physical memory. It is "direct" in the sense that there is always an offset of 0xc0000000 between any linear address of this 896MB part of the virtual kernel space and its corresponding address in physical memory (note however that the MMU is enabled and that page table entries are actually used for this). The last 128MB of the virtual kernel space is where pieces of the physical "high memory" (> 896MB) are mapped: thus it can only map up to 128MB of "high memory" at a time. Reference: "Understanding the Linux Kernel", third edition - sections "8.1.3. Memory Zones" and "8.1.6. Kernel Mappings of High-Memory Page Frames".
High memory (user space) and highmem (kernel space)
I recently ran into trouble when installing Debian on a netbook. There were three major hardware/kernel issues that I didn't feel like fixing, and since every single forum/wiki writeup reported the exact same problems with very few easy solutions, I decided to try something new. So I just tried every distro I could get my hands on, hoping to find one that would work more or less perfectly out of the box. To my surprise, the latest version of openSUSE works flawlessly and runs about as fast as Debian. Unfortunately, many packages that are standard in Debian-based systems are not even installed by default in openSUSE. The repository of available software is depressingly small. Is there any way I can get around this? Build a Debian-based system on top of the out-of-the-box kernel setup from a totally unrelated distro?
You can't directly install packages from one distribution onto another distribution. Usually driver issues don't depend on the distribution; they depend on the kernel and driver versions. Try to find a more recent kernel or drivers for Debian. You can also install Debian in a chroot (either on a separate partition or with debootstrap) and run it off the SuSE kernel. I've written a schroot and debootstrap guide for another purpose, but once you have a debootstrap binary on your SuSE installation and the schroot package installed, similar instructions should get you a running Debian or Ubuntu chroot.
Can I modify an .rpm-based system to use .deb files, apt-get and Debian/Ubuntu repositories?
After upgrading to kernel 6.8.0, VMware's vmmon and vmnet couldn't compile, giving the following error:

    ...
    /tmp/vmware-host-modules-w17.5.1/vmmon-only/common/task.c:548:1: warning: no previous prototype for ‘TaskGetFlatWriteableDataSegment’ [-Wmissing-prototypes]
      548 | TaskGetFlatWriteableDataSegment(void)
          | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /tmp/vmware-host-modules-w17.5.1/vmmon-only/common/task.o: warning: objtool: .text: unexpected end of section
      CC [M]  /tmp/vmware-host-modules-w17.5.1/vmmon-only/common/vmx86.o
    In file included from /tmp/vmware-host-modules-w17.5.1/vmmon-only/common/vmx86.c:52:
    ./arch/x86/include/asm/timex.h: In function ‘random_get_entropy’:
    ./arch/x86/include/asm/timex.h:12:24: error: implicit declaration of function ‘random_get_entropy_fallback’; did you mean ‘random_get_entropy’? [-Werror=implicit-function-declaration]
       12 |         return random_get_entropy_fallback();
          |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~
          |                random_get_entropy
    ...
Mkubecek's plain patch on the community forum for VMware 17.5.1 couldn't compile either. On his website, at Kernel – 6.8 Released – OK with latest NVIDIA and Patched VMware, there's a link titled modules: fix build with -Wmissing-prototypes to patch VMware 17.5.1, which fixes that compilation error, among others. Hit the green [ <> Code ] button, download the zip file, and unzip it somewhere on the local disk, e.g. /tmp/:

    # unzip vmware-host-modules-workstation-17.5.1.zip
    # cd vmware-host-modules-workstation-17.5.1/
    # make
    # make install
    # /etc/init.d/vmware restart
    Stopping VMware services:
       VMware Authentication Daemon                                done
       Virtual machine monitor                                     done
    Starting VMware services:
       Virtual machine monitor                                     done
       Virtual machine communication interface                     done
       VM communication interface socket family                    done
       Virtual ethernet                                            done
       VMware Authentication Daemon                                done
       Shared Memory Available                                     done

Start VMware:

    # vmware
VMware vmmon & vmnet 17.5.1 and Linux kernel 6.8.0 won't compile
The sock struct defined in sock.h has two attributes that seem very similar:

- sk_wmem_alloc, defined as "transmit queue bytes committed"
- sk_wmem_queued, defined as "persistent queue size"

To me, sk_wmem_alloc is the amount of memory currently allocated for the send queue. But then, what is sk_wmem_queued?

References: According to this StackOverflow answer, "wmem_queued: the amount of memory used by the socket send buffer queued in the transmit queue and either not yet sent out or not yet acknowledged."

The ss man page also gives definitions, which don't really enlighten me (I don't understand what the IP layer has to do with this):

- wmem_alloc: the memory used for sending packets (which have been sent to layer 3)
- wmem_queued: the memory allocated for sending packets (which have not been sent to layer 3)

Someone already asked a similar question on the LKML, but got no answer.

The sock_diag(7) man page also has its own definitions for these attributes:

- SK_MEMINFO_WMEM_ALLOC: the amount of data in the send queue.
- SK_MEMINFO_WMEM_QUEUED: the amount of data queued by TCP, but not yet sent.

All these definitions are different, and none of them clearly explains how the _alloc and _queued variants differ.
I emailed Eric Dumazet, who contributes to the Linux network stack, and here is the answer:

    sk_wmem_alloc tracks the number of bytes for skb queued after transport stack: qdisc layer and NIC TX ring buffers.

    If you have 1 MB of data sitting in TCP write queue, not yet sent (cwnd limit), sk_wmem_queue will be about 1MB, but sk_wmem_alloc will be about 0

A very good document for understanding these three types of queues (socket buffer, qdisc queue and device queue) is this (rather long) article. In a nutshell, the socket starts by pushing the packets directly onto the qdisc queue, which forwards them to the device queue. When the qdisc queue is full, the socket starts buffering the data in its own write queue.

    the network stack places packets directly into the queueing discipline or else pushes back on the upper layers (eg socket buffer) if the queue is full

So basically: sk_wmem_queued is the memory used by the socket buffer (sock.sk_write_queue), while sk_wmem_alloc is the memory used by the packets in the qdisc and device queues.
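As an aside (not from Eric's email), you can observe the send-queue side of this from user space: Linux's SIOCOUTQ ioctl reports the number of bytes in a TCP socket's send queue that have not yet been acknowledged (see man 7 tcp). A minimal sketch; the constant 0x5411 is the x86 Linux value:

```python
import fcntl
import socket
import struct

SIOCOUTQ = 0x5411  # Linux: bytes in the send queue not yet ACKed


def outq(sock):
    """Return the send-queue occupancy of a socket, in bytes."""
    res = fcntl.ioctl(sock.fileno(), SIOCOUTQ, b"\x00\x00\x00\x00")
    return struct.unpack("i", res)[0]


# set up a loopback TCP connection to poke at
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

empty = outq(cli)      # nothing sent yet, so the queue is empty
cli.send(b"x" * 1000)
queued = outq(cli)     # >= 0; drains quickly on loopback once the peer ACKs
```

On a real connection with a slow receiver, the queued value stays high while data waits for ACKs; ss -tm exposes the same per-socket counters.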
What is the difference between sock->sk_wmem_alloc and sock->sk_wmem_queued
1,440,716,501,000
The concepts of IO scheduler and CPU scheduler confuse me. Below is my understanding:

- Linux uses the CFS scheduler + nice values by default to schedule processes.
- Each process has an IO queue.
- There is an IO scheduler kernel thread.
- The IO scheduler is at the block level, not the file level.
- The IO scheduler is a module of the file system.

Questions:

What is the relationship between the IO scheduler and the CPU scheduler? Conceptually, it seems to me that the CPU scheduler is superior to the IO scheduler: CPU scheduling happens first, and the IO scheduler is a thread itself, subject to CPU scheduling. A contrived scenario looks like this:

    Step 1:   CPU scheduler picks a process P1 to execute
    Step 2:   P1 puts IO requests in its own IO queue
    Step 3+:  CPU scheduler picks other threads to run
              (assuming no process has IO other than P1)
    ...       (after a while)
    Step n:   CPU scheduler picks the IO scheduler thread to run
    Step n+1: IO scheduler thread 'notices' P1 has IO requests queued up
              and issues those requests to disk

Do my understanding and the scenario make sense?
Let's start with the IO scheduler. There's an IO scheduler per block device. Its job is to schedule (order) the requests that pile up in the device queue. There are three different algorithms currently shipped in the Linux kernel: deadline, noop and cfq. cfq is the default, and according to its doc:

    The CFQ I/O scheduler tries to distribute bandwidth equally among all processes in the system. It should provide a fair and low latency working environment, suitable for both desktop and server systems.

You can configure which scheduler governs which device via the scheduler file corresponding to your block device under /sys/ (you can issue the following command to find it: find /sys | grep queue/scheduler). What that short description doesn't say is that cfq is the only scheduler that looks at the ioprio of a process. ioprio is a setting that you can assign to a process, and the algorithm will take it into account when choosing one request before another. ioprio can be set via the ionice utility.

Then, there's the task scheduler. Its job is to allocate the CPUs amongst the processes that are ready to run. It takes into account things like the priority, the class and the niceness of a given process, as well as how long that process has run, and other heuristics.

Now, to your questions:

    What is the relationship between the IO scheduler and the CPU scheduler?

Not much, besides the name. They schedule different shared resources. The first one orders the requests going to the disks, and the second one schedules the 'requests' (you can view a process as requesting CPU time to be able to run) to the CPU.

    CPU scheduling happens first. The IO scheduler is a thread itself and subject to CPU scheduling.

It doesn't happen like that: the IO scheduler algorithm is run by whichever process is queuing a request. A good way to see this is to look at crashes that have elv_add_request() in their path. For example:

    [<c027fac4>] error_code+0x74/0x7c
    [<c019ed65>] elv_next_request+0x6b/0x116
    [<e08335db>] scsi_request_fn+0x5e/0x26d [scsi_mod]
    [<c019ee6a>] elv_insert+0x5a/0x134
    [<c019efc1>] __elv_add_request+0x7d/0x82
    [<c019f0ab>] elv_add_request+0x16/0x1d
    [<e0e8d2ed>] pkt_generic_packet+0x107/0x133 [pktcdvd]
    [<e0e8d772>] pkt_get_disc_info+0x42/0x7b [pktcdvd]
    [<e0e8eae3>] pkt_open+0xbf/0xc56 [pktcdvd]
    [<c0168078>] do_open+0x7e/0x246
    [<c01683df>] blkdev_open+0x28/0x51
    [<c014a057>] __dentry_open+0xb5/0x160
    [<c014a183>] nameidata_to_filp+0x27/0x37
    [<c014a1c6>] do_filp_open+0x33/0x3b
    [<c014a211>] do_sys_open+0x43/0xc7
    [<c014a2cd>] sys_open+0x1c/0x1e
    [<c0102b82>] sysenter_past_esp+0x5f/0x85

Notice how the process enters the kernel calling open(), and this ends up involving the elevator (elv) algorithm.
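To illustrate the /sys interface mentioned above: each queue/scheduler file lists the available algorithms with the active one in brackets, e.g. "noop deadline [cfq]". A tiny helper to parse that format (the file path in the comment is an example; yours may differ):

```python
def active_scheduler(contents):
    """Return the bracketed (active) entry of a queue/scheduler file."""
    for word in contents.split():
        if word.startswith("[") and word.endswith("]"):
            return word[1:-1]
    raise ValueError("no active scheduler marked")


# e.g.: with open("/sys/block/sda/queue/scheduler") as f:
#           print(active_scheduler(f.read()))
print(active_scheduler("noop deadline [cfq]"))  # cfq
```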
Relationship between IO scheduler and cpu/process scheduler?
1,440,716,501,000
I am configuring the Linux kernel version 3.9.4. I am being asked questions about RCU (seen below). Specifically, what are each of these, and what are the advantages and disadvantages of enabling or disabling them?

    Consider userspace as in RCU extended quiescent state (RCU_USER_QS) [N/y/?]
    Tree-based hierarchical RCU fanout value (RCU_FANOUT) [64]
    Disable tree-based hierarchical RCU auto-balancing (RCU_FANOUT_EXACT) [N/y/?]
    Accelerate last non-dyntick-idle CPU's grace periods (RCU_FAST_NO_HZ) [Y/n/?]
    Offload RCU callback processing from boot-selected CPUs (RCU_NOCB_CPU) [N/y/?]
There are some details about these options over on the LTTng Project site. RCU stands for read-copy-update. RCUs are data structures in the kernel which allow the same data to be replicated across the cores of a multi-core CPU, while guaranteeing that the copies are kept in sync.

Excerpt:

    liburcu is a LGPLv2.1 userspace RCU (read-copy-update) library. This data synchronization library provides read-side access which scales linearly with the number of cores. It does so by allowing multiple copies of a given data structure to live at the same time, and by monitoring the data structure accesses to detect grace periods after which memory reclamation is possible.

Resources

There is a good reference to what RCUs are and how they work over on lwn.net, titled: What is RCU, Fundamentally?. There's also this resource, with the same title as the lwn.net one but different content. There is also the Wikipedia entry on the RCU topic. Finally, there's the Linux kernel documentation, available here: rcu.txt.

So what are these options?

RCU_USER_QS

    This option sets hooks on kernel / userspace boundaries and puts RCU in extended quiescent state when the CPU runs in userspace. It means that when a CPU runs in userspace, it is excluded from the global RCU state machine and thus doesn't try to keep the timer tick on for RCU.

    Unless you want to hack and help the development of the full dynticks mode, you shouldn't enable this option. It also adds unnecessary overhead. If unsure say N

RCU_FANOUT

    This option controls the fanout of hierarchical implementations of RCU, allowing RCU to work efficiently on machines with large numbers of CPUs. This value must be at least the fourth root of NR_CPUS, which allows NR_CPUS to be insanely large. The default value of RCU_FANOUT should be used for production systems, but if you are stress-testing the RCU implementation itself, small RCU_FANOUT values allow you to test large-system code paths on small(er) systems.

    Select a specific number if testing RCU itself. Take the default if unsure.

RCU_FANOUT_EXACT

    This option forces use of the exact RCU_FANOUT value specified, regardless of imbalances in the hierarchy. This is useful for testing RCU itself, and might one day be useful on systems with strong NUMA behavior.

    Without RCU_FANOUT_EXACT, the code will balance the hierarchy. Say N if unsure.

RCU_FAST_NO_HZ

    This option permits CPUs to enter dynticks-idle state even if they have RCU callbacks queued, and prevents RCU from waking these CPUs up more than roughly once every four jiffies (by default; you can adjust this using the rcutree.rcu_idle_gp_delay parameter), thus improving energy efficiency. On the other hand, this option increases the duration of RCU grace periods, for example, slowing down synchronize_rcu().

    Say Y if energy efficiency is critically important, and you don't care about increased grace-period durations. Say N if you are unsure.

RCU_NOCB_CPU

    Use this option to reduce OS jitter for aggressive HPC or real-time workloads. It can also be used to offload RCU callback invocation to energy-efficient CPUs in battery-powered asymmetric multiprocessors.

    This option offloads callback invocation from the set of CPUs specified at boot time by the rcu_nocbs parameter. For each such CPU, a kthread ("rcuox/N") will be created to invoke callbacks, where the "N" is the CPU being offloaded, and where the "x" is "b" for RCU-bh, "p" for RCU-preempt, and "s" for RCU-sched. Nothing prevents this kthread from running on the specified CPUs, but (1) the kthreads may be preempted between each callback, and (2) affinity or cgroups can be used to force the kthreads to run on whatever set of CPUs is desired.

    Say Y here if you want to help to debug reduced OS jitter. Say N here if you are unsure.

So do you need it? I would say that if you don't know what a particular option does when compiling the kernel, then it's probably a safe bet that you can live without it. So I'd say no to those questions.

Also, when doing this type of work, I usually get the config file for the kernel I'm using with my distro and do a comparison to see if I'm missing any features. This is probably your best resource in terms of learning what all the features are about. For example, in Fedora there are sample configs included that you can refer to. Take a look at this page for more details: Building a custom kernel.
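That comparison step can be scripted. Here's a rough sketch (my own, not a distro tool) that parses two .config files and reports options whose values differ, treating an absent option or a "# CONFIG_FOO is not set" line as "n":

```python
def parse_config(text):
    """Map CONFIG_* options to their values; 'is not set' becomes 'n'."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# CONFIG_") and line.endswith(" is not set"):
            opts[line[2:-len(" is not set")]] = "n"
        elif line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            opts[key] = value
    return opts


def diff_configs(a, b):
    """Options that differ between two .config texts."""
    pa, pb = parse_config(a), parse_config(b)
    return {k: (pa.get(k, "n"), pb.get(k, "n"))
            for k in pa.keys() | pb.keys()
            if pa.get(k, "n") != pb.get(k, "n")}


mine = "CONFIG_RCU_FAST_NO_HZ=y\n# CONFIG_RCU_NOCB_CPU is not set\n"
distro = "CONFIG_RCU_FAST_NO_HZ=y\nCONFIG_RCU_NOCB_CPU=y\n"
print(diff_configs(mine, distro))  # {'CONFIG_RCU_NOCB_CPU': ('n', 'y')}
```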
Understanding RCU when Configuring the Linux Kernel
1,440,716,501,000
I am confused about how to break the cmd=3222823425 value into different parts, to figure out what this command actually means in the Linux kernel. I know some functions are making an ioctl call with the following parameters, but I want to know what these parameter values mean:

    fd=21, cmd=3222823425 and arg=3203118816

I have been looking into various forums, man pages and other links to figure out what it means when a cmd in an ioctl system call has a value of 3222823425. I have found that cmd is a command number which consists of type, number and data_type, and that the first two are 8-bit integers (0-255). So my question is: how do I decode these parameter values to find out what this call is trying to do?
An ioctl goes to a driver, so the most important thing to figure out what an ioctl is doing is which driver is handling it. What you've read about type, number and data_type is a convention that driver writers are supposed to use when choosing ioctl numbers. Although different drivers can use the same value to mean completely different things, it's best to avoid this, so that if an ioctl is accidentally sent to the wrong device, there is a good chance that it will return an error rather than cause some catastrophic event. A good description of the convention is in the book Linux Device Drivers (LDD), chapter 6. The data_type is in fact (since some time early in the 2.6.x series, IIRC) made of two parts, direction and size.

- type (8 bits) is a constant that must be consistent across the ioctls implemented in a driver and should be different from the ioctls of unrelated devices if possible. There is an out-of-date repository of type values in Documentation/ioctl/ioctl-number.txt.
- number (8 bits) should be different for all the ioctls in a driver.
- direction (2 bits) indicates the direction of data transfer (0=none, 1=write, 2=read, 3=both).
- size is the size of the data buffer, if the ioctl argument is a pointer to a data buffer.

The ioctl number should be

    direction << 30 | size << 16 | type << 8 | number

(If you're writing a driver, use the _IOC_* macros defined in asm-generic/ioctl.h.)

For your ioctl number 3222823425 = 0xc0186201, we get type=0x62 (known as "bit3 vme host bridge" in 1999), number=1, direction=3 (both write and read) and size=0x18=24, so the ioctl exchanges a 24-byte parameter. This ioctl value should be defined as _IOWR(0x62, 0x01, struct somestruct) or something equivalent like _IOWR('b', 1, struct somestruct), where struct somestruct is a 24-byte structure.

If you don't know what driver is processing the ioctl, you can search for a call like this in the kernel source to gather candidates. However, note that a simple text search often won't find the driver, because it's common for them to use a macro, e.g. #define FOOIO_TYPE 0x62 followed by #define FOOIO_SOMETHING _IOWR(FOOIO_TYPE, 1, struct foobar).

An ioctl call has two parameters in addition to the file descriptor that the ioctl acts on: the ioctl number cmd, and an argument arg. The argument can be an immediate value or a pointer to a buffer. Here, if the driver writer is following the convention, arg should be a pointer to a 24-byte buffer in the application's memory space.
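The decoding described above is easy to do in a few lines of code. This sketch assumes the generic bit layout (2-bit direction, 14-bit size, as on x86; a few architectures such as powerpc and sparc use different widths):

```python
def decode_ioctl(cmd):
    """Split an ioctl command number per the asm-generic/ioctl.h convention."""
    number = cmd & 0xFF
    ioc_type = (cmd >> 8) & 0xFF
    size = (cmd >> 16) & 0x3FFF
    direction = (cmd >> 30) & 0x3  # 0=none, 1=write, 2=read, 3=both
    return direction, size, ioc_type, number


direction, size, ioc_type, number = decode_ioctl(3222823425)
print(direction, size, hex(ioc_type), number)  # 3 24 0x62 1
```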
How to decode cmd = 3222823425 in ioctl in Linux 2.6.29
1,440,716,501,000
When we run the following dmesg on RHEL 7.2, we get errors about assuming drive cache:

    dmesg --level=err
    [    5.325381] sd 0:0:0:0: [sda] Assuming drive cache: write through
    [    5.325492] sd 0:0:5:0: [sde] Assuming drive cache: write through
    [    5.325637] sd 0:0:3:0: [sdc] Assuming drive cache: write through
    [    5.325667] sd 0:0:4:0: [sdd] Assuming drive cache: write through
    [    5.326309] sd 0:0:2:0: [sdb] Assuming drive cache: write through
    [   10.277944] piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!

Any idea what the meaning of this kernel error is? I have read the Red Hat post - https://access.redhat.com/solutions/42752 - but it isn't clear what the solution is.

Note: I should mention that the servers went through an unexpected reboot.
From the link you gave, “So these events can typically be ignored.” That’s the solution. The kernel prints the message to err on the side of caution; all it means is that the kernel tried to determine the drives’ caches’ characteristics, using an optional feature of the SCSI specification, but the drives don’t support it so the characteristics couldn’t be obtained. There’s nothing to do about it. (The only drives I’m aware of that provide the required information are SCSI or SAS drives.)
dmesg + Assuming drive cache: write through
1,440,716,501,000
Is there a way to measure elapsed time when running a program under gdb? Look at this:

    <------------bp---------------------------------->

Assume that we are debugging a file, and at some random place we set a breakpoint. Now in gdb we perform something, and then we let the program continue the execution by using the gdb command line (run). My question is here: I want to measure the elapsed time from the bp until the program either successfully ends or some error occurs. My suggestion is to use the .gdbinit file, and in that file call some C function to start the timer after the run command, and at the end of the execution also call a gettime() C function. So, my pseudo code is a bit like this (.gdbinit file):

    break *0x8048452    (random place)
    // start time
    run
    // get time
The easiest way to do this (if your gdb has python support):

    break *0xaddress
    run
    # target process is now stopped at breakpoint
    python import time
    python starttime=time.time()
    continue
    python print (time.time()-starttime)

If your gdb doesn't have python support but can run shell commands, then use this:

    shell echo set \$starttime=$(date +%s.%N) > ~/gdbtmp
    source ~/gdbtmp
    continue
    shell echo print $(date +%s.%N)-\$starttime > ~/gdbtmp
    source ~/gdbtmp

Alternatively, you could make the target process call gettimeofday or clock_gettime, but that is a lot more tedious. These functions return the time by writing to variables in the process's address space, which you'd probably have to allocate by calling something like malloc, and that may not work if your breakpoint stopped the program in the middle of a call to another malloc or free.

However, a slight problem with this solution is that the continue and print result lines need to be run right after each other, or else the timing will be inaccurate. We can solve this by putting the commands in a canned script through "define". If we run define checkTime, gdb will prompt us to enter a list of commands. Just enter any of the command lists above (python/shell), and then you can call the script by just using the command checkTime. Then, the timing will be accurate. Additionally, you can put define checkTime and then the list of commands in the .gdbinit file so that you don't have to manually redefine it every time you execute a new program.
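For instance, the python variant wrapped into a canned command could look like this (a sketch assuming a gdb built with Python support; enter it interactively after define checkTime, or put it in .gdbinit):

```
define checkTime
  python import time
  python starttime = time.time()
  continue
  python print(time.time() - starttime)
end
```

With the breakpoint set beforehand, a bare checkTime then resumes the program and prints the elapsed seconds once it stops.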
Elapsed time in gdb
1,440,716,501,000
There are two Linux kernel functions: get_ds() and get_fs() According to this article, I know ds is short for data segment. However, I cannot guess what "fs" is short for. Any explanations?
The FS comes from the additional segment register named FS on the 386 architecture (end of second paragraph). My guess is that after DS for Data Segment and ES for Extra Segment, Intel just went for the next characters in the alphabet (FS, GS). You can see the 386 registers on the wiki page, in the graphic on the right side.

From the Linux kernel source on my Linux Mint system (arch/x86/include/asm/uaccess.h):

    /*
     * The fs value determines whether argument validity checking should be
     * performed or not.  If get_fs() == USER_DS, checking is performed, with
     * get_fs() == KERNEL_DS, checking is bypassed.
     *
     * For historical reasons, these macros are grossly misnamed.
     */

    #define MAKE_MM_SEG(s)  ((mm_segment_t) { (s) })

    #define KERNEL_DS       MAKE_MM_SEG(-1UL)
    #define USER_DS         MAKE_MM_SEG(TASK_SIZE_MAX)

    #define get_ds()        (KERNEL_DS)
    #define get_fs()        (current_thread_info()->addr_limit)
    #define set_fs(x)       (current_thread_info()->addr_limit = (x))
What is "fs" short for in kernel function "get_fs()"?
1,440,716,501,000
I came across the following parameter in kernel settings (in sysctl): vm.min_free_kbytes This is the amount of free memory (RAM) that's always free no matter what. In my case, I have only 1 GiB of RAM, and this parameter was set to about 64MiB. I thought that's pretty high, so I lowered it to 8MiB so far. I don't know if I can lower it any further, or if lowering it to 8MiB can cause any troubles, so the question is what would happen if the amount of the free memory was too low? Can I safely lower the value to 1MiB?
It should be safe, but I can't guarantee it. From the kernel docs:

    min_free_kbytes:

    This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.

    Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads.

    Setting this too high will OOM your machine instantly.

Essentially, if you set it too low you'll have problems with memory allocation.
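To make the "proportional" part of the quoted docs concrete, here is a toy model (my own simplification; the kernel's real computation in mm/page_alloc.c applies further adjustments), assuming 4 KiB pages:

```python
PAGE_KB = 4  # assumption: 4 KiB pages


def zone_wmark_min_pages(min_free_kbytes, zone_pages, total_pages):
    """A zone's share of the global reserve, proportional to its size."""
    reserve_pages = min_free_kbytes // PAGE_KB
    return reserve_pages * zone_pages // total_pages


# the question's 8 MiB setting, machine with a single 1 GiB zone:
print(zone_wmark_min_pages(8192, 262144, 262144))  # 2048 pages = 8 MiB

# same setting split across a 256 MiB zone and a 768 MiB zone:
print(zone_wmark_min_pages(8192, 65536, 262144))   # 512 pages
print(zone_wmark_min_pages(8192, 196608, 262144))  # 1536 pages
```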
What would happen if the amount of free memory (vm.min_free_kbytes) was too low?
1,440,716,501,000
Here is an example that will explain better: I have selected the audio driver in the picture, and I would like to browse through its source. How do I get the path of the source files from here?
You can use grep -r CONFIG_SND_SOC_MXS_SGTL5000 from the root of the kernel source tree. Each of these config options just becomes a #define macro. Many of them don't belong to a single file but are instead checked in multiple source files; CONFIG_64BIT, for example, appears in around a thousand source code files.
How to relate a kernel config setting to the source files?
1,440,716,501,000
I am compiling my first kernel (3.5-rc1) from source through menuconfig. Certain configuration options are pre-set. Who or what determines whether they are pre-set? Does make menuconfig somehow detect my computer, its devices and characteristics, and generate them? Or do the default configs come with the source, pre-determined by whoever put the source out?
make menuconfig doesn't dynamically determine your environment and try to set the appropriate config; it uses your .config file and the default entries in the Kconfig files. So yes, the defaults come with the source and are specified in the Kconfig files, which also specify the help text, dependencies and other things. Have a look at a sample Kconfig file like net/Kconfig.

make localmodconfig, on the other hand, tries to create a custom-tailored kernel configuration for your system based on the loaded modules. It takes your current configuration (typically from your distribution) and will only enable the loaded modules.
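For reference, a hypothetical Kconfig entry showing where those pieces live (the option name is made up for illustration); the default line is what pre-sets the option when no .config value exists:

```
config EXAMPLE_DRIVER
        tristate "Example driver support"
        depends on USB
        default y
        help
          This help text is what menuconfig shows when you press '?'
          on the entry. The 'default' line supplies the pre-set value.
```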
How are Linux kernel compile config's determined?
1,440,716,501,000
Windows 10 supports Intel Speed Shift. Does the Linux kernel also support it? Speed-Shift-related info: https://www.anandtech.com/show/9751/examining-intel-skylake-speed-shift-more-responsive-processors
It should be supported since 3.19. Speed Shift is the marketing name for HWP (hardware-managed P-states), which the intel_pstate driver enables: https://elixir.bootlin.com/linux/v3.19/source/drivers/cpufreq/intel_pstate.c

    static void intel_pstate_hwp_enable(void)
    {
            hwp_active++;
            pr_info("intel_pstate HWP enabled\n");

            wrmsrl(MSR_PM_ENABLE, 0x1);
    }

The commit (v3.19-rc1) that added it: "intel_pstate: Add support for HWP" https://github.com/torvalds/linux/commit/2f86dc4cddcb21290ca099e1dce2a53533c86e0b#diff-d06e88b1dd6d576c23e3654d87258879
Does linux (kernel) support Intel speed shift?
1,440,716,501,000
A swap partition doesn't contain a structured filesystem. The kernel doesn't need that, because it stores memory pages on the partition marked as a swap area. Since there could be several memory pages in the swap area, how does the kernel locate each page when a process requests its page to be loaded into memory?

Let's explain more. Looking at the header of the swap partition from Devuan OS:

    #define SWAP_UUID_LENGTH 16
    #define SWAP_LABEL_LENGTH 16

    struct swap_header_v1_2 {
            char          bootbits[1024];    /* Space for disklabel etc. */
            unsigned int  version;
            unsigned int  last_page;
            unsigned int  nr_badpages;
            unsigned char uuid[SWAP_UUID_LENGTH];
            char          volume_name[SWAP_LABEL_LENGTH];
            unsigned int  padding[117];
            unsigned int  badpages[1];
    };

So when the mkswap command is executed for a partition, that's what gets placed on the partition: the swap header.

Now, let's have a scenario where "process A" has one of its memory pages swapped out, so there's one memory page in the swap area. Of course, there could be many memory pages in the swap area. "Process A" needs to access the memory page that was swapped. "Process A" tells the kernel: may I have my swapped memory page, please? The kernel says: sure, my dear friend. The kernel looks for "process A"'s memory page in the swap partition.

Since the swap partition isn't a sophisticated structure (not a filesystem), how would the kernel know how to locate that specific memory page of "process A" in the swap partition? Perhaps the kernel somewhere stores sector addresses for those swapped pages, so when a process asks for its memory page, the kernel knows where to look in the swap partition, reads the memory page from the partition, and loads it into memory.
Swap is only valid during a given boot, so all the tracking information is kept in memory. Swapping pages in and out is handled entirely by the kernel, and is transparent to processes. Basically, memory is split up into pages, tracked in page tables; these are structures defined by each CPU architecture. When a page is swapped out, the kernel marks it as invalid; thus, the next time anything tries to access the page, the CPU will fault, which will cause a handler in the kernel to be invoked; it’s this handler’s responsibility to restore the page’s contents. In Linux, there’s a swap_info structure which describes each swap device or file. Within that structure, a swap_map maps memory pages to blocks in the swap device or file. When a page is swapped out, the kernel stores the swap_info index and swap_map offset in the corresponding page table entry, which allows it to find the page on disk when necessary. (All supported architectures provide enough space for this in their page tables, but there are limits — e.g. the available space means that Linux can manage at most 64GiB of swap on x86.) You’ll find a much more detailed description of all this in the “Swap Management” chapter of Mel Gorman’s Understanding the Linux Virtual Memory Manager.
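The bookkeeping can be pictured as packing a (swap area, slot) pair into the invalidated page-table entry. A purely illustrative sketch (the field widths are made up; the real per-architecture encoding lives behind the swp_entry_t macros, see include/linux/swapops.h):

```python
TYPE_BITS = 5  # illustrative: which swap area (index into swap_info)


def make_swp_entry(swap_type, offset):
    """Pack a swap area index and a page slot offset into one integer."""
    return (offset << TYPE_BITS) | swap_type


def swp_type(entry):
    return entry & ((1 << TYPE_BITS) - 1)


def swp_offset(entry):
    return entry >> TYPE_BITS


# page swapped out to slot 12345 of swap device 1:
entry = make_swp_entry(1, 12345)
print(swp_type(entry), swp_offset(entry))  # 1 12345
```

On a fault, the kernel unpacks exactly this kind of pair to find which swap_map and which slot the page's contents live in.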
How does the kernel address swapped memory pages on swap partition or swap file?
1,440,716,501,000
I am using Arch Linux with a custom kernel stored as /boot/vmlinuz-linux1. Some features I would like to have don't work in it, but there is also a /boot/vmlinuz-linux kernel where those features work. How can I retrieve the .config kernel configuration file from the second vmlinuz file in order to compare it with the configuration of the first kernel in a text editor?
As far as I'm aware, extracting the .config configuration file from a kernel is possible only if it was compiled with the configuration option CONFIG_IKCONFIG (available in the configuration menu as the entry General setup > Kernel .config support). Here is the documentation of that configuration option:

    CONFIG_IKCONFIG:

    This option enables the complete Linux kernel ".config" file contents to be saved in the kernel. It provides documentation of which kernel options are used in a running kernel or in an on-disk kernel. This information can be extracted from the kernel image file with the script scripts/extract-ikconfig and used as input to rebuild the current kernel or to build another kernel. It can also be extracted from a running kernel by reading /proc/config.gz if enabled (below).

The last sentence refers to an additional configuration option, CONFIG_IKCONFIG_PROC, which gives you access to the configuration of the running kernel through a file in the proc pseudo-filesystem.

If your kernel has not been compiled with CONFIG_IKCONFIG, I don't think you can retrieve its configuration easily. Otherwise, it's as simple as

    zcat /proc/config.gz > .config

if CONFIG_IKCONFIG_PROC has been selected and you're currently running your /boot/vmlinuz-linux kernel, or

    scripts/extract-ikconfig /boot/vmlinuz-linux

The script extract-ikconfig is the one available along with the kernel sources, in the scripts folder.
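What extract-ikconfig does is conceptually simple: the config is stored gzip-compressed between the magic markers IKCFG_ST and IKCFG_ED, emitted by kernel/configs.c. A simplified sketch in Python; the real script is more involved because vmlinuz itself is usually compressed and has to be unpacked first:

```python
import gzip

MAGIC_START = b"IKCFG_ST"  # markers written by kernel/configs.c
MAGIC_END = b"IKCFG_ED"


def extract_ikconfig(image):
    """Pull the embedded gzipped .config out of an uncompressed kernel image."""
    start = image.find(MAGIC_START)
    end = image.find(MAGIC_END, start)
    if start < 0 or end < 0:
        raise ValueError("no embedded config (CONFIG_IKCONFIG not enabled?)")
    blob = image[start + len(MAGIC_START):end]
    return gzip.decompress(blob).decode()


# demo on a fake image built the same way:
fake = b"...code..." + MAGIC_START + gzip.compress(b"CONFIG_IKCONFIG=y\n") + MAGIC_END
print(extract_ikconfig(fake))  # CONFIG_IKCONFIG=y
```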
Can I get .config file from vmlinuz file?
1,440,716,501,000
I want to see all hardware supported by the kernel in use. For example, if I have version 3.8.x.x of the Linux kernel, how do I find out what hardware is supported by it? Tools like lspci, lshw, lscpu and dmidecode only check the hardware that is present at the moment, and trying to find this from the loaded modules with lsmod is not handy either. What I want is something that checks all hardware actually supported by the kernel in use, without taking into consideration whether I am using that hardware or not. For the moment I thought of approaches like:

- Reading /lib/modules/3.8.0-5-generic/kernel/drivers and parsing every file.
- Downloading the git source of the kernel and grepping it for information about this.
- Any other crazy and very long way of doing it.

Is there any simpler way of achieving this?
    What I want is something that checks all hardware actually supported by the kernel in use without taking into consideration if I am using that hardware or not.

If you have the .config file the kernel was built with, you can download the source and run make menuconfig, which will give you an idea of A) what hardware it is possible to configure a kernel for (but see NOTE), and B) what hardware your kernel is actually configured for. So to do this:

1. Download the source. Your distro may have a package, or you can get it from kernel.org; find your version with uname -r.
2. Find the .config used for your kernel. If you got the source via your distro, it will hopefully be included; you may also be able to find it somewhere in /boot. Even better: distro kernels are now often built with the /proc/config.gz feature. If that file exists, copy it out, decompress it with gunzip, rename the result .config and copy it into the top level of the kernel source tree.
3. Run make menuconfig from inside the top level of the source tree. You will need the ncurses development package installed (ncurses-dev or ncurses-devel) and you need to be root.

You can't do anything bad while using menuconfig beyond changing the contents of the .config file, which won't matter (just don't confuse yourself with it later).

NOTE: You can't actually see all the possible hardware configurations at the same time, since different options may appear in one place based on what has been selected somewhere else. Kernel configuration is a bit of a labyrinth. However, you will definitely see everything that is actually selected (M means it is a module, * means it is built in).
How to list all hardware supported by kernel
1,440,716,501,000
I am using embedded Linux. I have compiled the kernel without an initramfs, and the kernel boots fine, but it tells me the rcS file is not found. I have put it in /etc/init.d/rcS, and my rcS file looks like this:

    #!/bin/sh
    echo "Hello world"

After the file system is mounted by the kernel, it prints "Hello world". Can anyone explain to me why this file is required, and how I could start the startup scripts in a particular order? I am using a Raspberry Pi with BusyBox and it works fine, but I got stuck at startup.
/etc/init.d/rcS allows you to run additional programs at boot time. Its typical use is to mount additional filesystems (only the root filesystem is mounted at that point) and launch some daemons. Usually rcS is a shell script, which can easily be customized on the fly. Typical distributions make rcS a simple script that executes further scripts in /etc/rcS.d (the exact location is distribution-dependent); this allows each daemon to be packaged with its own init script. The file /etc/rc.local is also executed by rcS if present; it is intended for commands written by the system administrator.

With the traditional SysVinit implementation of init, /etc/init.d/rcS is listed in /etc/inittab (the sysinit setting). With BusyBox, you can also supply an inittab (if the feature is compiled in), but there is a built-in default that makes it read /etc/init.d/rcS (among other things).
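To make the ordering part concrete, here is a minimal illustrative setup (BusyBox-style; the /etc/rcS.d name is a convention, adjust to taste). An inittab line such as ::sysinit:/etc/init.d/rcS makes init run rcS, and rcS then runs the S-prefixed scripts in lexical order, which is how startup order is controlled:

```sh
#!/bin/sh
# /etc/init.d/rcS (sketch): mount essentials, then run scripts in name order
mount -t proc proc /proc
mount -t sysfs sysfs /sys
for s in /etc/rcS.d/S??*; do
    [ -x "$s" ] && "$s" start
done
```

Numbering the scripts S01network, S20sshd, and so on fixes the order they start in.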
Why is rcS required after file system is mounted by the kernel?
1,440,716,501,000
I downloaded the sources for my kernel, applied a patch and rebuilt it, and now I have a kernel module that, when I try to insmod it, complains "Unknown symbol in module", with dmesg giving the error "disagrees about the version of symbol ...". Without having to hunt down the source for this module and rebuild it against my kernel, is it possible to somehow force the kernel to accept the module? I realise this would be dangerous, but I'll take the risk if it's possible.
insmod isn't the best tool to load modules - use modprobe instead, it's smarter. In modprobe's man page, you'll find that it has a --force option which might load a module with conflicting version information. As you said, this is dangerous and should essentially never be used. You pick up the pieces if your system blows up.
possible to load kernel module that "disagrees about version of symbol"
1,440,716,501,000
We have a server, moon, running RHEL 7.5. The swap consumption on this server is very strange. We configured /proc/sys/vm/swappiness to 1 and restarted the server, but we can see that the server is using 15G of swap when 44G of memory is available. How can that be?

    [root@moon01 network-scripts]# more /proc/sys/vm/swappiness
    1

    [root@moon01 network-scripts]# free -g
                  total        used        free      shared  buff/cache   available
    Mem:            125          80          38           0           6          44
    Swap:            15          15           0

From my understanding, swap usage should only increase when available RAM is down to the last few gigabytes, but that isn't the situation here.
Even with swappiness=1, Linux will continue to use swap if available. Your user-space programs do not need to exhaust free RAM for the kernel to start swapping.

I first discovered this because I was having problems on an Ubuntu Linux desktop. In answers and comments to my question, someone pointed out that disk caching is the probable cause. "Free" space in memory is very rarely empty. The kernel will quietly use it for caches, including disk caches, safe in the knowledge that it can abandon the cache whenever applications require more memory.

I'm hunting for the reference in the kernel docs, but there's a good description somewhere of the way the majority of programs have a lot of memory (including code) which is only used during startup and then never again. So, particularly on a server, you will have a lot of "junk" sitting in memory, stealing space from useful things like disk caches. Linux knows this and will choose to swap out the junk rather than abandon pages from a disk cache.

Overall, this has the effect of gradually increasing swap usage and slowly growing the size of the caches. It does this even though "free" memory remains relatively high. In short, this is expected behaviour, and there's no simple way to turn it off.
swap is high in spite of swappiness=1 configuration
1,440,716,501,000
I have a system that is becoming unresponsive for anywhere from a few seconds to a couple minutes. The only messages I see in the logs are like this: Sep 16 18:07:33 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:07:50 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:07:58 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:08:08 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:08:17 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:08:57 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:09:04 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:09:11 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:09:25 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:09:58 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:10:05 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:10:12 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:10:24 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:10:31 server kernel: igb 0000:01:00.3: exceed max 2 second Sep 16 18:10:38 server kernel: igb 0000:01:00.3: exceed max 2 second I'm not sure where to start troubleshooting this. Could these messages be related to the system becoming unresponsive?
I've been seeing this problem (together with even more NIC Link is Down and NIC Link is Up messages) since updating from Devuan beowulf (with Kernel 4.19) to Chimera (with Kernel 5.10), on 00:14.0 Ethernet controller: Intel Corporation Ethernet Connection I354 (rev 03) on a Supermicro A1SRi-2558F board. The network interface it mostly happened on is connected to a FRITZ!Box 6660 Cable router with FRITZ!OS: 07.29 (the machine with the Intel NIC that runs Devuan acts as a second router/firewall behind the provider-controller FritzBox). The issue usually happened when there was some load, like when running a speedtest, but also (less frequently) with less load, like video conferences. What appears to fix the issues (both "exceed max 2 second" and the Link going down for a few seconds) is to disable EEE (some energy saving thing) on the NIC, with: ethtool --set-eee eth1 eee off In case this answer is too late for the original poster, I hope it's at least helpful for others who find this via a search engine (only to read a comment telling them to google the issue.. I haven't found this particular solution anywhere else).
kernel: igb exceed max 2 second (system is unresponsive)
1,440,716,501,000
I updated the kernel from 3.10.0-514.26.2.el7.x86_64 to 3.10.0-693.11.6.el7.x86_64 I noticed all the kernel modules in 3.10.0-693.11.6.el7.x86_64 are now appended with ".xz" (sample output below) /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/soc/intel/skylake/snd-soc-skl-ipc.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/soc/intel/skylake/snd-soc-skl.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/soc/snd-soc-core.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/soundcore.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/synth/emux/snd-emux-synth.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/synth/snd-util-mem.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/usb/6fire/snd-usb-6fire.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/usb/bcd2000/snd-bcd2000.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/usb/caiaq/snd-usb-caiaq.ko.xz /usr/lib/modules/3.10.0-693.11.6.el7.x86_64/kernel/sound/usb/hiface/snd-usb-hiface.ko.xz But just the previous version, everything was still just standard *.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/synth/emux/snd-emux-synth.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/synth/snd-util-mem.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/6fire/snd-usb-6fire.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/caiaq/snd-usb-caiaq.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/misc/snd-ua101.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/snd-usb-audio.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/snd-usbmidi-lib.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/usx2y/snd-usb-us122l.ko /usr/lib/modules/3.10.0-229.7.2.el7.x86_64/kernel/sound/usb/usx2y/snd-usb-usx2y.ko When I actually try to decompress the ko.xz, it looks like they are misnamed and not actually compressed tar -xJf ip_gre.ko.xz tar: This 
does not look like a tar archive tar: Skipping to next header tar: Exiting with failure status due to previous errors xz -l shows the file as "compressed" xz -l ip_gre_default.ko.xz Strms Blocks Compressed Uncompressed Ratio Check Filename 1 1 8,924 B 32.2 KiB 0.271 CRC64 ip_gre_default.ko.xz Does this mean modprobe will automatically handle compressed ko's? It looks more like a build problem than anything else.
This is fine, modules can be compressed using either gzip or xz. Compression is enabled using the MODULE_COMPRESS kernel build option, with MODULE_COMPRESS_GZIP or MODULE_COMPRESS_XZ to select the compression tool.
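Because a .ko.xz is a raw single-file xz stream rather than a tarball, unxz/xz -d is the right way to unpack one by hand; tar -xJf expects a tar archive inside the stream, hence the error in the question. A sketch on a scratch file (a real module behaves the same way):

```shell
# Demonstrate that xz compresses single files, not archives.
tmp=$(mktemp -d)
echo "pretend this is a kernel module" > "$tmp/demo.ko"
xz -k "$tmp/demo.ko"               # produces demo.ko.xz, keeps the original
xz -l "$tmp/demo.ko.xz"            # list stream info, as in the question
xz -dc "$tmp/demo.ko.xz" > "$tmp/demo.ko.out"   # decompress to stdout
cmp "$tmp/demo.ko" "$tmp/demo.ko.out" && echo "round-trip OK"
rm -r "$tmp"
```

modprobe handles the decompression transparently when the kernel was built with module compression enabled, so you normally never need to unpack these by hand.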
CentOS7 latest kernel moved from "kernel.ko" to "kernel.ko.xz"
1,440,716,501,000
I am studying for a Computer Security exam, and I am struggling to understand the following sample question. 'Explain the difference between running in ring 0 on x86 and running as UID 0 in Linux. Give an example of something that each one enables, but the other does not.' My current understanding is that ring 0 on x86 is the most privileged OS level and that kernel code is run in ring 0. UID 0 is the linux superuser that can essentially run anything. With my current understanding of these concepts, I don't understand how to answer this question. Please Note, this is NOT a homework question and is NOT something I will be graded upon, it is study material only.
Your understanding is correct. “Ring 0” is the x86 term for the kernel mode of the processor. “Running in ring 0” means “kernel code”. In terms of security, everything that can be done by a process (under any UID) can be done by the kernel. Some things are very inconvenient to do from kernel code, for example opening a file, but they are possible. Conversely, under normal circumstances, if you can run code under UID 0, then you can run kernel code, by loading a kernel module. Thus there is no security barrier between UID 0 and kernel level under a typical configuration. However code running in a process is still bound by the limitations of the processor's user mode: every access to a peripheral (including disks, network, etc.) still has to go via a system call. It is possible to configure a machine to have a UID 0 that isn't all powerful, for example: Disable the loading of kernel modules. Use a security framework such as SELinux to take away privileges from a process: UID 0 does not necessarily trump those, for example it's possible to make a guest account with UID 0 but essentially no privileges with the right SELinux policy. UID 0 in a user namespace only has the permissions of the namespace creator.
Linux Permissions UID 0 vs Ring 0
1,442,202,278,000
I am compiling the Linux kernel for my specific hardware and I only select the drivers/options which I really need. This is in contrast to a typical distribution kernel, where they compile almost everything, to be compatible with as many hardware configurations as possible. I imagine, for my kernel, I am only using 1% of the total kernel code (order of magnitude estimate). Is there any way to find out which files from the kernel source I have actually used when I build my kernel? This is not an academical question. Suppose I have compiled my kernel 3.18.1. Now a security update comes along, and 3.18.2 is released. I have learned in my other question how to find which files have change between releases. If I knew whether any of the files I am using have changed, I would recompile my kernel to the new version. If, on the other hand, the changes only affect files which I am not using anyway, I can keep my current kernel version.
Compiling my comment as an answer: Run the following command in one shell. You can make a shell script of it or daemonize it with the -d option. inotifywait -m -r -e open --format '%f' /kernel_sources/directory/in_use/ -o /home/user/kernel_build.log In another shell, execute make The log file /home/user/kernel_build.log will have a list of the files that have been opened (read operation) during the build process.
Find out which kernel source files were used when kernel was compiled
1,442,202,278,000
I've heard that FUSE-based filesystems are notoriously slow because they are implemented in a userspace program. What is it about userspace that is slower than the kernel?
Code executes at the same speed whether it's in the kernel or in user land, but there are things that the kernel code can do directly while user land code has to jump through hoops. In particular, kernel code can map application memory directly, so it can directly copy the file contents between the application memory and the internal buffers from or to which the hardware copies. User code has to either make an extra copy via a pipe or socket, or make a more complex memory sharing operation. Furthermore each file operation has to go through the kernel — the only way for a process to interact with anything is via a system call. If the file operation is performed entirely inside the kernel, there's only one user/kernel transition and one kernel/user transition to perform, which is pretty fast. If the file operation is performed by another process, there has to be a context switch between processes, which requires a far more expensive operation in the MMU. The speed performance is still negligible against most hardware access times, but it can be observed when the hardware isn't the bottleneck, especially as many hardware operations can be performed asynchronously while the main processor is doing something else, whereas context switches and data copies between processes keep the CPU busy.
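A rough way to feel the per-syscall cost the answer describes, without any FUSE filesystem involved, is to move the same amount of data with many tiny syscalls versus a few large ones (a sketch; absolute timings vary by machine, but the ratio is usually dramatic):

```shell
# Copy 64 KiB byte-by-byte: 65536 read() + 65536 write() syscalls.
time dd if=/dev/zero of=/dev/null bs=1 count=65536 2>/dev/null
# Copy the same 64 KiB in one block: one read() + one write().
time dd if=/dev/zero of=/dev/null bs=64k count=1 2>/dev/null
```

The data moved is identical; the difference is almost entirely user/kernel transition overhead, which is the same class of cost a FUSE filesystem adds (plus a context switch to the FUSE daemon) on every file operation.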
Why is userspace slower than the kernel?
1,442,202,278,000
Installing Debian I have to choose between the kernels linux-image-3.2.0.4-amd64 linux-image-amd64 What's the difference?
According to the Debian wiki, the package linux-image-amd64 is a metapackage, meaning it does not contain a kernel itself but represents a set of packages. In fact, installing this package is the same as installing the latest kernel available for the amd64 architecture. If you install linux-image-3.2.0.4-amd64 and this package is the only one available for your system, then both packages will represent the same thing. Debian uses metapackages to avoid the need to know the exact version of each package you want to install.
Linux-image-3.2.0.4-amd64 vs linux-image-amd64 [duplicate]
1,442,202,278,000
I monitor value from /proc/meminfo file, namely the MemTotal: number. It changes if a ram module breaks, roughly by size of the memory module - this is obvious. I know the definition for the field from kernel documentation: MemTotal: Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code) The dmesg also lists kernel data. What other particular actions make the MemTotal number change if I omit hardware failure of the memory module? This happens on both physical & virtual systems. I monitor hundreds of physical, thousands of virtual systems. Although the change is rather rare, it does happen.
I was not comfortable with assuming a bug in the kernel or a module, so I dug further and found out... that MemTotal can regularly change, downwards or upwards. It is not a constant, and this value is definitely modified by kernel code in many places, under various circumstances. E.g. the virtio_balloon kmod can decrease MemTotal as well as increase it back again. Then of course, mm/memory_hotplug.c exports [add|remove]_memory, both of which are used by a lot of drivers too.
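If you want your monitoring to catch such changes with a timestamp, periodically sampling MemTotal is enough (a minimal sketch; the one-second interval and three iterations are just for demonstration — in practice you would run this from cron or a long-lived loop):

```shell
# Log MemTotal with a timestamp so balloon/hotplug changes are visible.
for i in 1 2 3; do
    printf '%s %s kB\n' "$(date +%T)" \
        "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
    sleep 1
done
```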
Why does MemTotal in /proc/meminfo change?
1,442,202,278,000
I'm experimenting with different kernel configuration files and wanted to keep a log of the ones I used. Here is the situation: There is a configuration file called my_config which I want to use as a template. I do make menuconfig, load my_config, make NO changes and save as .config. When I do diff .config my_config, there are differences in the files. Why would there be differences between the old file and the new file?
Why would there be differences? Because you loaded my_config into menuconfig, then saved it as .config; of course the files differ. If you saved it twice, once with each name, then they would be the same. If you mean they are more different than you think they should be, keep in mind there is not a 1:1 correspondence between things you select in menuconfig and changes that appear in the config file. Also, if my_config was the product of an earlier version of the kernel source, make menuconfig will notice this and convert the file to reflect the newer source version. This means even if you change nothing, just loading it and saving it will result in substantial changes to the text of the file. However, the actual configuration should be essentially the same (generally the changes are the addition of new options with appropriate default values).
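When you want to know whether two configs differ in substance rather than in layout, the kernel tree's scripts/diffconfig does this properly; a rough stand-in with standard tools is to compare only the sorted CONFIG_* lines, so comment churn and reordering drop out of the diff (scratch files stand in for a real my_config/.config pair here):

```shell
tmp=$(mktemp -d)
printf 'CONFIG_A=y\n# CONFIG_B is not set\nCONFIG_C=m\n' > "$tmp/my_config"
printf '# comment moved around\nCONFIG_C=m\nCONFIG_A=y\n' > "$tmp/dot_config"
# Note: '# CONFIG_X is not set' lines are dropped by this crude filter,
# treating disabled options like absent ones; scripts/diffconfig handles
# that distinction properly.
grep '^CONFIG_' "$tmp/my_config"  | sort > "$tmp/a"
grep '^CONFIG_' "$tmp/dot_config" | sort > "$tmp/b"
diff "$tmp/a" "$tmp/b" && echo "same effective options"
rm -r "$tmp"
```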
Saving a kernel config file through menuconfig results with different options?
1,442,202,278,000
I'm working on building a kernel module for an input device, and I noticed that in the module source, there's a couple calls to input_get_keycode(data->input_dev, scancode, &keycode); When I was compiling I was getting errors that there's no function with that prototype. Looking into the input/input.c source code, this is the definition of input_get_keycode: int input_get_keycode(struct input_dev *dev, struct input_keymap_entry *ke) I tried to look online, and I found a couple obscure references to changing the kernel to be able to deal with large keymaps better, and apparently this function was changed to better handle that. Looking at an older source from input/input.c, the input_get_keycode function was defined as int input_get_keycode(struct input_dev *dev, unsigned int scancode, unsigned int *keycode) My question is, when was this changed. Is there notes on the change? I'm building the ubuntu natty kernel from git which is from my understanding from the 2.6.37-rc3 branch. Is this a ubuntu specific change? Or is this a change in the mainline kernel. I also have the maverick source from git which has the old style (3 input) function.
If you are working on a kernel module, I very much recommend that you get a git tree. Obviously Linus's tree is mandatory - I also get the stable trees. Since you are working on Ubuntu, I'd check to see if they have a tree with their changes you can pull from. Using the git tree, I was able to checkout master and run git blame drivers/input/input.c to see that the function signature for input_get_keycode was last changed in commit 8613e4c2. Running git show 8613e4c2 gives me the commit message for that change (the notes that you wanted) as well as the patch that implements the change. I can see that the change was made on 2010-09-09. Starting up gitk (a graphical git viewer) and telling it to go to that commit I can see that the commit precedes v2.6.37-rc1, telling me it was merged into that release. Following the branch up to when Linus merged it, I can see it was merged on 2010-10-26 in commit 3a99c631. This is all mainline without looking at Ubuntu patches, so it looks like the change has nothing to do with Ubuntu.
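The same git-archaeology workflow can be rehearsed on a throwaway repository to see what each command gives you (the file name, signatures, and commit messages here are made up for the demo):

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name you
echo 'int input_get_keycode(int a, int b, int c);' > input.h
git add input.h
git commit -qm 'old 3-arg signature'
echo 'int input_get_keycode(int a, void *ke);' > input.h
git commit -qam 'switch to keymap-entry signature'
git blame -L 1,1 input.h                      # which commit last touched line 1
git log -1 --format='%h %ad %s' --date=short -- input.h   # when and why
```

On a real kernel tree the same two commands against drivers/input/input.c lead you straight to the commit (and its message) that changed the prototype.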
Changes to input_get_keycode function in linux kernel (input/input.c)
1,442,202,278,000
I'm trying to boot Linux from U-boot on an embedded ARM board using a filesystem on a remote machine served via NFS. It appears that the ethernet connection is not coming up correctly, which results in a failure to mount the NFS share. However, I know that the ethernet hardware works, because U-boot loads the kernel via TFTP. How can I debug this? I can try tweaking the kernel, but that means recompiling the kernel for every iteration, which is slow. Is there a way that I can make the kernel run without being able to mount an external filesystem?
You can compile an initrd image into the kernel (General Setup -> Initial RAM filesystem and RAM disk (initramfs/initrd) support -> Initramfs source file(s)). You specify the files in a special format like (my init for x86): dir /bin 0755 0 0 file /bin/busybox /bin/busybox 0755 0 0 file /bin/lvm /sbin/lvm.static 0755 0 0 dir /dev 0755 0 0 dir /dev/fb 0755 0 0 dir /dev/misc 0755 0 0 dir /dev/vc 0755 0 0 nod /dev/console 0600 0 0 c 5 1 nod /dev/null 0600 0 0 c 1 3 nod /dev/snapshot 0600 0 0 c 10 231 nod /dev/tty1 0600 0 0 c 4 0 dir /etc 0755 0 0 dir /etc/splash 0755 0 0 dir /etc/splash/natural_gentoo 0755 0 0 dir /etc/splash/natural_gentoo/images 0755 0 0 file /etc/splash/natural_gentoo/images/silent-1680x1050.jpg /etc/splash/natural_gentoo/images/silent-1680x1050.jpg 0644 0 0 file /etc/splash/natural_gentoo/images/verbose-1680x1050.jpg /etc/splash/natural_gentoo/images/verbose-1680x1050.jpg 0644 0 0 file /etc/splash/natural_gentoo/1680x1050.cfg /etc/splash/natural_gentoo/1680x1050.cfg 0644 0 0 slink /etc/splash/tuxonice /etc/splash/natural_gentoo 0755 0 0 file /etc/splash/luxisri.ttf /etc/splash/luxisri.ttf 0644 0 0 dir /lib64 0755 0 0 dir /lib64/splash 0755 0 0 dir /lib64/splash/proc 0755 0 0 dir /lib64/splash/sys 0755 0 0 dir /proc 0755 0 0 dir /mnt 0755 0 0 dir /root 0770 0 0 dir /sbin 0755 0 0 file /sbin/fbcondecor_helper /sbin/fbcondecor_helper 0755 0 0 slink /sbin/splash_helper /sbin/fbcondecor_helper 0755 0 0 file /sbin/tuxoniceui_fbsplash /sbin/tuxoniceui_fbsplash 0755 0 0 file /sbin/tuxoniceui_text /sbin/tuxoniceui_text 0755 0 0 dir /sys 0755 0 0 file /init /usr/src/init 0755 0 0 I haven't used it on ARM but it should work. /init is the file where you can put your startup commands. The rest are the various files needed (like busybox etc.).
Debugging ethernet before NFS boot
1,442,202,278,000
Asked this on superuser, got no response: Can anyone tell me of the status/state of WLM (Workload Management) kernel scheduler systems in Linux? Alternatively, any user-space process goal-based load management programs? This is a good start, but I'm not aware if these proposals are implemented? http://www.computer.org/plugins/dl/pdf/proceedings/icac/2004/2114/00/21140314.pdf http://ckrm.sourceforge.net/downloads/ckrm-linuxtag04-paper.pdf AIX has inclusive WLM, anything comparable for Linux?
Not very sure, but the closest I can think of is cgroups: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. For more information, see one of: Arch Wiki page for cgroups Wikipedia cgroups page. RedHat cgroups page.
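A quick, read-only way to see what cgroup support looks like on a given box (no root needed; the fallback messages cover systems or containers where the hierarchy isn't mounted):

```shell
# Which cgroup filesystems are mounted (v1 controllers and/or v2 unified)?
grep cgroup /proc/mounts || echo "no cgroup filesystem mounted"
# On a cgroup-v2 system this lists the controllers available at the root:
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null \
    || echo "cgroup v2 unified hierarchy not present here"
```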
Are there any Workload Management subsystems for Linux?
1,442,202,278,000
If I compile latest kernel from kernel.org, make a deb package and install it on my Debian system, shall I worry about libc (and any other libraries?) and kernel being out-of-sync? I vaguely understand that kernel developers try hard to not break API/ABI exposed to userspace, but I guess once in a while breaks do happen, at least for some legitimate reasons? If so, is there a place that documents the mapping of working libc version vs kernel version?
Linus is tougher on userspace breakage than pretty much any of his other policies. Breaks are way rarer than "once in a while", essentially unheard-of. Just don't worry about it at all.
How/Shall I keep libc and kernel in sync?
1,442,202,278,000
I would like to ask a question about the output of sar -q. I would appreciate it if someone could help me understand runq-sz. I have a system with 8 CPU threads running RHEL 7.2. [ywatanabe@host2 ~]$ cat /proc/cpuinfo | grep processor | wc -l 8 Below is the sar -q result from my system, but runq-sz seems low compared to ldavg-1. runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked 05:10:01 PM 0 361 0.29 1.68 2.14 0 05:11:01 PM 0 363 1.18 1.61 2.08 2 05:12:01 PM 0 363 7.03 3.15 2.58 1 05:13:01 PM 0 365 8.12 4.15 2.96 1 05:14:01 PM 3 371 7.40 4.64 3.20 1 05:15:01 PM 2 370 7.57 5.26 3.51 1 05:16:01 PM 0 366 8.42 5.90 3.84 1 05:17:01 PM 0 365 8.78 6.45 4.16 1 05:18:01 PM 0 363 7.05 6.40 4.28 2 05:19:02 PM 1 364 8.05 6.74 4.53 0 05:20:01 PM 0 367 7.96 6.96 4.74 1 05:21:01 PM 0 367 7.86 7.11 4.93 1 05:22:01 PM 1 366 7.84 7.31 5.14 0 From man sar, I was thinking that runq-sz represents the number of tasks inside the run queue whose state is TASK_RUNNING, which corresponds to the R state in ps. runq-sz Run queue length (number of tasks waiting for run time). What does runq-sz actually represent?
This man page has a more detailed explanation of this property: runq-sz The number of kernel threads in memory that are waiting for a CPU to run. Typically, this value should be less than 2. Consistently higher values mean that the system might be CPU-bound. Interpreting results As is the case with many "indicators" you have to use them in combination with one another to interpret if there's a performance issue or not. This particular indicator indicates if your system is starved for CPU time. Whereas the load1,5,15 indicate processes that are in the run queue, but are being forced to wait for time to run. The load1,5,15 variety tells you the general trend of the system and if it's got a lot of processes waiting (ramping up load) vs. trending down. But processes can wait for a variety of things with load1,5,15, typically it's I/O that's blocking when you see high load1,5,15 times. With runq-sz, you're waiting for time on a CPU. References How to Check Queue Activity (sar -q)
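The instantaneous value sar samples is also visible directly in /proc/stat, which makes a handy cross-check (procs_running counts tasks in the R state, including the reading process itself, while procs_blocked matches the blocked column):

```shell
# One-shot read of the kernel's runnable/blocked task counts.
awk '/^procs_running/ {print "runnable now:", $2}
     /^procs_blocked/ {print "blocked on I/O:", $2}' /proc/stat
```

Because this is a point-in-time snapshot (as is sar's once-a-minute sample), short bursts of runnable tasks can be invisible here while still pushing the load averages up — which is consistent with the low runq-sz versus high ldavg-1 in the question.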
How is runq-sz counted in sar?
1,442,202,278,000
I am trying to debug a kernel running on QEMU with GDB. The kernel has been compiled with these options: CONFIG_DEBUG_INFO=y CONFIG_GDB_SCRIPTS=y I launch the kernel in qemu with the following command: qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage In a separate terminal, I launch GDB from the same path and issue these commands in sequence: gdb ./vmlinux (gdb) target remote localhost:1234 (gdb) hbreak start_kernel (gdb) c I did not provide a rootfs, as I am not interested in a full working system as of now, just the kernel. I also tried combinations of hbreak/break. The kernel just boots and reaches a kernel panic as rootfs cannot be found... expected. I want it to stop at start_kernel and then step through the code. observation: if I set an immediate breakpoint, it works and stops, but not on start_kernel / startup_64 / main Is it possible that qemu is not calling all these functions, or is it being masked in some way? Kernel: 4.13.4 GDB: GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1 GCC: gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4 system: ubuntu 14.04 LTS NOTE: This exact same procedure worked with kernel 3.2.93, but does not work with 4.13.4, so I guess some more configurations are needed. I could not find resources online which enabled this debug procedure for kernel 4.0 and up, so as of now I am continuing with 3.2, any and all inputs on this are welcome.
I ran into the same problem and found the solution from the linux kernel newbies mailing list. You should disable KASLR in your kernel command line with nokaslr option, or disable kernel option "Randomize the kernel memory sections" inside "Processor type and features" when you build your kernel image.
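For kernels built with CONFIG_RANDOMIZE_BASE, the fix looks like this in practice (an invocation sketch based on the question's setup; the console= setting and the exact paths are assumptions):

```sh
qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage \
    -append "nokaslr console=ttyS0"
# then, in a second terminal:
#   gdb ./vmlinux
#   (gdb) target remote localhost:1234
#   (gdb) hbreak start_kernel
#   (gdb) continue
```

With nokaslr the kernel loads at the address the vmlinux symbols were linked for, so the breakpoint on start_kernel is actually placed where the code runs.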
Hardware breakpoint in GDB +QEMU missing start_kernel
1,442,202,278,000
I want to enable Google BBR on my VPS, but I don't know whether this feature is integrated into the Linux kernel or not. How can I check it?
The command below is used to find the available TCP congestion control algorithms: 1. cat /proc/sys/net/ipv4/tcp_available_congestion_control bic reno cubic 2. This command is used to find which TCP congestion control is configured on your Linux system: sysctl net.ipv4.tcp_congestion_control 3. The command below is used to change to the desired algorithm from the available list: sysctl -w net.ipv4.tcp_congestion_control=bic
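A small check along those lines, specifically for bbr (a sketch; it degrades gracefully on systems where these sysctls aren't exposed, e.g. some containers):

```shell
avail=$(cat /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null)
cur=$(cat /proc/sys/net/ipv4/tcp_congestion_control 2>/dev/null)
echo "available: ${avail:-unknown}"
echo "current:   ${cur:-unknown}"
case " $avail " in
    *" bbr "*) echo "bbr is available" ;;
    *)         echo "bbr not listed; try loading it: modprobe tcp_bbr" ;;
esac
```

If the tcp_bbr module loads successfully, bbr appears in the available list and can then be selected with sysctl -w net.ipv4.tcp_congestion_control=bbr.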
How to check what congestion algorithm supported on my linux kernel?
1,442,202,278,000
How are signals handled in the kernel? What happens internally if I send a kill signal to a kernel thread/process? Does a crash in a kernel process always mean a kernel panic? If not, will it generate a core dump?
When a thread is running code in kernel mode, signals are queued, i.e. the kernel remembers that a signal has been sent but doesn't act on it. When a kernel thread is waiting for an event, the wait may be interrupted by the signal — it's up to the author of the kernel code. For example, the Linux kernel API has pairs of functions like wait_event and wait_event_interruptible; only the “interruptible” function will return immediately if the thread receives a signal. The reason kernel code isn't interrupted by signals is that it can put kernel memory or hardware devices in an inconsistent state. Therefore the code is always given a chance to clean things up. Linux's kernel threads (i.e. threads of the kernel, listed with no corresponding executable in process lists) can't receive signals at all. More precisely, any signal delivered to a kernel thread is ignored. A crash in kernel code may or may not cause a panic, depending on which part of the code caused the crash. Linux, for example, tries to recover from crashes in driver code, but whether this is possible depends on what went wrong. Crashes in kernel code may or may not generate a dump depending on the kernel and on the system configuration; for example Linux has a kernel crash dump mechanism.
How signals are handled in kernel
1,442,202,278,000
My Raspberry Pi (which is 10,000 km away from me right now) works as follows: It is running Raspbian (July 2016's version) The SD card contains /boot An encrypted hard disk drive (using LUKS cryptsetup) contains / When the Pi boots, I can unlock the HDD remotely using dropbear over SSH. It asks for the HDD's password and then the boot sequence continues normally. For more info about how I did all of this, read http://blog.romainpellerin.eu/raspberry-pi-the-ultimate-guide.html. TL;DR here is a shortened version: apt-get install busybox cryptsetup rsync echo "initramfs initramfs.gz 0x00f00000" >> /boot/config.txt sed -e "s|root=/dev/mmcblk0p2|root=/dev/mapper/hddcrypt cryptdevice=/dev/sda1:hddcrypt|" -i /boot/cmdline.txt sed -e "s|/dev/mmcblk0p2|/dev/mapper/hddcrypt|" -i /etc/fstab echo -e "hddcrypt\t/dev/sda1\tnone\tluks" >> /etc/crypttab cryptsetup --verify-passphrase -c aes-xts-plain64 -s 512 -h sha256 luksFormat /dev/sda1 mkinitramfs -o /boot/initramfs.gz $(uname -r) aptitude install dropbear // Configuring the SSH access here... mkinitramfs -o /boot/initramfs.gz $(uname -r) update-initramfs -u Problem Up until yesterday, everything was working fine. I could reboot it and unlock the HDD over SSH. However, yesterday I did aptitude update && aptitude upgrade. As far as I know, this does not upgrade the kernel. Anyway, I rebooted it. Now, I'm stuck at the unlocking step. Even though I type the right password, it immediately says Can't change directory to <something/a kernel version> and Cannot initialize device-mapper. Is dm_mod kernel module loaded? and keeps asking again for the password. I cannot tell you what kernel it is running as I set up a while ago and do not use it that much. Sorry for the lack of details, I do not have a physical access to my Raspberry and I turned it off yesterday, thus I am telling from what I remember. Supposition I am pretty sure I could fix it by tweaking /boot/initramfs.gz but I do not know how. Can you help me please? 
Thank you very much.
I do not know what gave you the impression that aptitude upgrade would leave your kernel untouched, it simply doesn't. I had the same trouble after a kernel update on my encrypted pi. The problem is that your initramfs needs to be rebuilt. Here is how you do that on an external machine. First, plug the SD card with the encrypted Raspbian on it into your external computer and mount everything like so: cryptsetup -v luksOpen /dev/mmcblk0p2 thunderdome mount /dev/mapper/thunderdome /mnt mount /dev/mmcblk0p1 /mnt/boot mount -o bind /dev /mnt/dev mount -t sysfs none /mnt/sys mount -t proc none /mnt/proc Install qemu to emulate raspberry pi binaries: apt-get install qemu qemu-user-static binfmt-support According to this gist, it is better to remove all lines from /mnt/etc/ld.so.preload before proceeding, this is what the sed commands do in the following: # comment out ld.so.preload sed -i 's/^/#/g' /mnt/etc/ld.so.preload # copy qemu binary cp /usr/bin/qemu-arm-static /mnt/usr/bin/ # chroot to raspbian and rebuild initramfs chroot /mnt /bin/bash mkinitramfs -o /boot/initramfs.gz [NEW RASPBIAN KERNEL VERSION] exit # undo damage sed -i 's/^#//g' /mnt/etc/ld.so.preload umount /mnt/{dev,sys,proc,boot} You can find the new raspbian kernel version by checking out /lib/modules, inside the chroot. After doing that, my raspberry pi booted just fine again.
initramfs, LUKS and dm_mod can't boot after upgrade
1,442,202,278,000
Are system calls like fork() and exit() stored in some kind of function pointer table, just like the Interrupt Descriptor Table? Where does my OS go when I call fork() or exit()? I guess this image explains it, but I would like an explanation from a person who really knows what's happening; I don't want knowledge based on my own assumptions.
There's a fantastic pair of articles on LWN that describe how syscalls work on Linux: "Anatomy of a system call", part 1 and part 2.
Is there any Syscall table just like Interrupt Table?
1,442,202,278,000
I've created a basic linux kernel module, which does the following: static __init int init(void) { printk(KERN_DEBUG "Banana"); return 0; } And of course: module_init(init); Strangely, I cannot find the string "Banana" after I insert the module via insmod banana_module.ko The command dmesg -k | grep Banana doesn't return anything. I can find it however when I remove the module and insert it again. Then I find two bananas, the one from before and one from the current insertion. Is this due to flushing issues? I find this behavior a bit strange and couldn't find a similar problem on the internet. Btw, this happens on both my virtual machine on my desktop and on my laptop (without a VM). So, why doesn't the kernel like bananas?
I've figured out what the problem was: I did not specify the endline character \n at the end of my kernel message. If you leave it out, it behaves like described above. The reason is, that kernel messages are rather seen as records that are only printed out when completed. For more information see this article about printk problems
module prints to kernel log with delay
1,442,202,278,000
While I know that a lot of packet processing (CRC calculations, packet segmentation handling, etc.) can be offloaded to the NIC, does each packet still cause an interrupt to the CPU? Is there a difference if the NIC is in promiscuous mode?
Normally, the NIC will only interrupt the CPU if it needs to send the received packet to the system. In non-promiscuous mode, this would only be for packets addressed to its MAC address, the broadcast address ff:ff:ff:ff:ff:ff, or a multicast address to which it has been subscribed. It also does validation before sending the packet to the CPU: the normal Ethernet CRC check, and IP/TCP/UDP checksums if the NIC has that capability and the driver has enabled this offloading. Some NICs have a limited number of multicast subscription addresses; if this is exceeded, it will send all multicast packets to the CPU, and the OS has to discard the ones it doesn't care about.
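You can watch this from user space: /proc/interrupts shows per-CPU counts per IRQ line, and a NIC's line only advances when it actually interrupted a CPU. A sketch (interface naming varies across systems, so the grep pattern here is a guess; sampling the file twice and subtracting gives the interrupt rate):

```shell
# Header row: one column per CPU.
head -1 /proc/interrupts
# Lines belonging to common ethernet interface names, if any.
grep -E 'eth|enp|eno' /proc/interrupts || echo "(no ethernet lines found)"
```

Note that modern drivers using NAPI and interrupt coalescing handle many packets per interrupt under load, so the counter grows far more slowly than the packet count.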
Does each network packet cause an interrupt to CPU?
1,442,202,278,000
After building a new OpenBSD kernel, the install target of the kernel Makefile does the following: rm -f /obsd ln /bsd /obsd cp bsd /nbsd mv /nbsd /bsd I understand that the first two lines remove the old backup kernel /obsd and create a hard link /obsd pointing to the currently running kernel /bsd. In particular, the running kernel is not moved at all. This makes sense to me. However, what is the purpose of moving the newly built kernel ./bsd first to /nbsd and then renaming it to /bsd? Why not replace the third and fourth line by the apparently simpler cp bsd /bsd? If this should matter: the default partitioning scheme of OpenBSD places the kernel build tree in a different filesystem (disklabel) than the root filesystem.
A makefile recipe will stop executing if any command in it returns a failure status (unless the command is preceded by a -). The recipe you cited will ensure that /bsd only gets replaced if the cp bsd /nbsd command succeeds. The cp could fail if the partition were full or out of inodes.
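The idiom generalizes beyond kernel installs: copy to a temporary name first, then rename, because mv within one filesystem is an atomic rename(2), so the destination is never left half-written. A rough sketch of the same pattern on scratch files (filenames invented for the demo):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

echo "old kernel" > bsd_installed   # stands in for /bsd
echo "new kernel" > bsd_built       # stands in for the freshly built ./bsd

# Copy first; if this fails (full partition, no inodes), the
# installed file is untouched.
cp bsd_built bsd_installed.new

# mv on the same filesystem is an atomic rename(2): readers see
# either the old file or the new one, never a partial copy.
mv bsd_installed.new bsd_installed

cat bsd_installed
```

A plain `cp bsd /bsd` would truncate `/bsd` before writing, so a failure halfway through would leave you with no bootable kernel at all.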
Installing a new OpenBSD kernel "safely": why does `make install` go through these extra hoops?
1,442,202,278,000
It is said that the kernel is responsible for the transport, internet and network access layers of network data. Then, the network data is passed on to the appropriate process based on the port number. How do security programs like firewalls, IPSes and IDSes have access to network data that does not belong to them, when they are just user-level programs and not part of the kernel? How about a proxy server? How come the network data has to pass through the firewall first before reaching the appropriate process?
Generally, those security programs contain two parts: one running in kernel space and one running in user space. The user-space part is only an interface to interact with the kernel-space part. For example, iptables consists of: netfilter, a set of hooks into the networking code in the kernel. It also includes mechanisms for passing packets to user-space programs. ip_tables, a kernel module that uses netfilter to inspect and filter network packets, set up rules, and so on. iptables, a user-space tool for setting up rules in the ip_tables module. Netfilter and ip_tables run in kernel space while iptables runs in user space.
How security programs like firewall, IPS and IDS have access to network data?
1,442,202,278,000
I am trying to compile OpenOnload from Solarflare for my nic on a server that I'm building. It is saying something about not having a kernel build. root@server:/usr/src/openonload-201310-u2# ./scripts/onload_install onload_install: Building OpenOnload. mmakebuildtree: No kernel build at '/lib/modules/3.2.0-4-amd64/build' onload_build: FAILED: mmakebuildtree --driver -d x86_64_linux-3.2.0-4-amd64 onload_install: ERROR: Build failed. Not installing. ` What is it talking about when it's saying there is supposed to be kernel build at /lib/modules/3.2.0-4-amd64/build? How would I get that file? I'm using Debian 7 "Wheezy".
It's talking about the kernel development headers, which are needed for compiling certain applications. On Debian-based distributions, you can install them with this command: sudo apt-get install linux-headers-`uname -r` If the build then complains about missing tools, you may also need the following: sudo apt-get install build-essential That will install tools like make which might not be installed by default.
What is the "kernel build", and where do I get it?
1,442,202,278,000
What's the prescribed way to set Linux kernel runtime parameters? I've seen sometimes that people will set these in files such as /etc/rc.local. Is this really the right way to do this?
You can use sysctl to set some of the kernel parameters, specifically the ones under /proc/sys. These can be set in the file /etc/sysctl.conf or added to a single file (the preferred method on some distros such as Fedora) in the directory /etc/sysctl.d. On distros that have this directory, it's meant for customizations. excerpt from sysctl's man page sysctl - configure kernel parameters at runtime Example You can get a partial list of what kernel parameters are currently set using this command: $ sudo sysctl -a | head -5 abi.vsyscall32 = 1 debug.exception-trace = 1 debug.kprobes-optimization = 1 dev.cdrom.autoclose = 1 dev.cdrom.autoeject = 0 Making a change /etc/sysctl.conf Simply add rules to the file sysctl.conf. # sysctl.conf sample # kernel.domainname = example.com ; this one has a space which will be written to the sysctl! kernel.modprobe = /sbin/mod probe You can also use the sysctl command line to set a value at runtime without editing the file directly. $ sysctl -w kernel.domainname="example.com" After making any changes to /etc/sysctl.conf, be sure to make them active. $ sysctl -p /etc/sysctl.d To add an override, simply put it in a file in the /etc/sysctl.d directory, named similarly to the files that are already present there, e.g. 99-myparam.conf. $ ls -l /etc/sysctl.d total 40 -rw-r--r-- 1 root root 77 Jul 16 2012 10-console-messages.conf -rw-r--r-- 1 root root 490 Jul 16 2012 10-ipv6-privacy.conf -rw-r--r-- 1 root root 726 Jul 16 2012 10-kernel-hardening.conf -rw-r--r-- 1 root root 1184 Jul 16 2012 10-magic-sysrq.conf -rw-r--r-- 1 root root 509 Jul 16 2012 10-network-security.conf ... $ more 10-console-messages.conf # the following stops low-level messages on console kernel.printk = 4 4 1 7 Where the name of the parameter is on the left, and its corresponding value is on the right. See sysctl's man page for more details.
What's the right way to set Linux kernel runtime parameters?
1,442,202,278,000
Is it possible to limit access to the root account from within the kernel? I mean that if I press a button then the system detects the button being pressed and then allows access to the root account at boot. To me this could be useful to prevent a hacking attack through a bug in one of my programs that has an open port and may be vulnerable to attacks! If I can limit the attacker's movement, maybe I can restrict the information they can access and avoid compromising sensitive data. Is there a special way of doing this? Permissions like how Android manages access, a boot parameter, or another kernel? Is it possible?
Anything is usually possible with Linux/Unix, but most of the time a break-in isn't coming through the front door by logging into the root account. The more typical vector of attack is that a process running with elevated privileges is targeted, a weakness in the application's functionality is found, and then exploited. The more general approach is to not run your applications with such highly elevated privileges. You typically start the application up as root, but then fork another copy of the process as an unprivileged user. Additionally, a chroot environment is used to limit exposure of the underlying filesystem. If you go through the Server Security Reference from Red Hat, it covers much of what I've mentioned in more detail if you're interested.
Linux Kernel limit access to root with a button?
1,442,202,278,000
I am trying to get some logging in place and am trying to troubleshoot it and this question became relevant. I use rsyslog config files to redirect some of my logging. (Will use iptables logging since I am working with it but please assume the general case) Under my rsyslog.d config I have :msg, startswith, "iptables: " /var/log/iptables.log :msg, startswith, "iptables denied: " /var/log/iptables.log & ~ I would assume if this was working that it would no longer log to kern.log. Since it is no longer going to kern.log does the redirect also effect the kernel ring buffer?
The & only applies to the preceding selector, so you will need one for each of those :msg lines. http://www.rsyslog.com/doc/rsyslog_conf_actions.html
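Applied to the rules in the question, each selector gets its own discard line, roughly like this (a sketch; note the second selector is redundant anyway, since any message starting with "iptables denied: " also starts with "iptables: "):

```
:msg, startswith, "iptables: " /var/log/iptables.log
& ~
:msg, startswith, "iptables denied: " /var/log/iptables.log
& ~
```

With the discard attached to each selector, matching messages stop propagating to kern.log.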
If I redirect logs using rsyslog, will dmesg be affected?
1,442,202,278,000
I'm running a 64bit kernel, already have CONFIG_IA32_EMULATION set, so do I still need CONFIG_IA32_AOUT enabled? From the help in menuconfig, I don't quite get it.
Short answer: If your system is a normal desktop/laptop and you don't run any really archaic software, you should be safe to disable CONFIG_IA32_AOUT. Keep CONFIG_IA32_EMULATION, as chances are that some of your binaries are still 32-bit. Explanation: There are two issues involved here: executable file formats and executing 32-bit code on a 64-bit system. You can read about file formats on wikipedia and have a look at their comparison, but the most important information for you is that ELF is the current standard and a.out is its predecessor. It is very unlikely that you'll find any recent program in the form of an a.out binary (don't mistake the file format for the default output name that compilers give to binaries - the latter typically is still a.out for historical reasons, in spite of the binaries being in ELF format). If you have a 64-bit system, chances are that some of your programs are still 32-bit. This is much more probable than coming across an a.out binary. To make it clear: binaries in both ELF and a.out format can be both 32- and 64-bit. These distinctions are separate (as you can see from the comparison).
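If you want to check which format a given binary actually is, you can look at its magic number: ELF files start with the four bytes 0x7f 'E' 'L' 'F', whereas a.out files begin with small octal magic words (0407, 0410, 0413). A quick sketch:

```shell
# Dump the first four bytes of a binary; for an ELF file the
# -c (character) format shows the DEL byte as octal 177, so you
# get something like:  177   E   L   F
od -An -c -N4 /bin/sh
```

(`file /bin/sh` reports the same thing more verbosely, including whether the ELF binary is 32- or 64-bit.)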
What does CONFIG_IA32_AOUT do actually?
1,442,202,278,000
I am using Gentoo, and I need to load an extra firmware to get my USB Wifi adapter work. I found an EXTRA_FIRMWARE_DIR kernel option, but I do not understand if it is used during compile time only or if it is effective after the new kernel is used. My WiFi adapter chip is Atheros, and according to this page, I have to put the firmware to the right place. On Ubuntu, I found the /lib/firmware directory as it is indicated in that page, but I cannot find that directory on Gentoo.
Take a look at this: http://www.kernel.org/doc/menuconfig/drivers-base-Kconfig.html In particular: EXTRA_FIRMWARE "allows firmware to be built into the kernel, for the cases where the user either cannot or doesn't want to provide it from userspace at runtime" EXTRA_FIRMWARE_DIR "controls the directory in which the kernel build system looks for the firmware files listed in the EXTRA_FIRMWARE option. The default is the firmware/ directory in the kernel source tree, but by changing this option you can point it elsewhere, such as the /lib/firmware/ directory or another separate directory containing firmware files". By the way, as far as getting your wireless card working, have you taken a look at these pages?: http://en.gentoo-wiki.com/wiki/TL-WN821N http://bugs.gentoo.org/278385
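Concretely, building firmware into the kernel image would look something like this in your .config (the firmware filename below is only an example for an Atheros ath9k_htc adapter — use whatever filename your driver asks for in dmesg):

```
CONFIG_EXTRA_FIRMWARE="ath9k_htc/htc_9271-1.4.0.fw"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"
```

If /lib/firmware doesn't exist on your Gentoo system yet, installing sys-kernel/linux-firmware (or simply creating the directory and copying the firmware file into it) gives both the kernel build and the runtime loader a place to find it.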
How to use the EXTRA_FIRMWARE_DIR kernel option?
1,442,202,278,000
I have 4GB RAM installed on my machine, and I'm considering using all of it (IE, installing PAE-enabled kernel). I heard there's a performance penalty for this, so I wanted to know about other's experiences. Should I proceed, or should I remain content with 3GB? [note] I will be running Linux 2.6.32.
If you have a 64-bit processor, an alternative would be to try a 64-bit kernel. According to this RedHat white paper, a typical server experiences around 1% performance hit, and other tasks suffered a performance hit of 0% - 10%. In addition to having more available memory, enabling PAE means you have an NX bit, which can increase security.
Is PAE worth it when I have 4GB RAM?
1,442,202,278,000
I removed kernel modules installed with rpm using yum remove kmodname. The *.ko was located under /lib/modules/$(uname -r)/extra/. I run depmod -a and I get depmod: ERROR: fstatat(4, kmodname.ko.xz): No such file or directory How can I force depmod to update its database?
Most likely you have symlinks, probably under one of your kernels' weak-modules, pointing at modules that have been deleted.
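find can locate those dangling links for you: with -xtype l, a symlink matches only when its target no longer exists. Demonstrated here on a scratch directory (against the real system you would run find /lib/modules -xtype l, delete what it reports, then rerun depmod -a):

```shell
#!/bin/sh
set -e
d=$(mktemp -d)
touch "$d/real.ko"
ln -s real.ko "$d/ok.ko"        # valid symlink, won't match
ln -s gone.ko "$d/broken.ko"    # target doesn't exist

# -xtype l follows the link first, so only broken links match
find "$d" -xtype l
```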
Why does depmod keep trying to load deleted modules
1,442,202,278,000
On one install, dnf managed to upgrade that kernel. On a newer machine (installed and upgraded today), it fails. Not sure why… Here is a full run: ; sudo dnf upgrade -y Last metadata expiration check: 5:42:13 ago on Wed 06 Mar 2019 10:56:30 GMT. Dependencies resolved. Problem 1: cannot install both kernel-3.10.0-957.5.1.el7.x86_64 and kernel-3.10.0-957.5.1.el7.x86_64 - cannot install the best update candidate for package kernel-3.10.0-957.5.1.el7.x86_64 - cannot install the best update candidate for package kernel-3.10.0-957.el7.x86_64 Problem 2: cannot install both kernel-devel-3.10.0-957.5.1.el7.x86_64 and kernel-devel-3.10.0-957.5.1.el7.x86_64 - cannot install the best update candidate for package kernel-devel-3.10.0-957.5.1.el7.x86_64 - cannot install the best update candidate for package kernel-devel-3.10.0-957.el7.x86_64 ================================================================================ Package Arch Version Repository Size ================================================================================ Reinstalling: kernel x86_64 3.10.0-957.5.1.el7 updates 48 M kernel-devel x86_64 3.10.0-957.5.1.el7 updates 17 M replacing kernel-devel.x86_64 3.10.0-957.5.1.el7 Transaction Summary ================================================================================ Total size: 65 M Downloading Packages: [SKIPPED] kernel-3.10.0-957.5.1.el7.x86_64.rpm: Already downloaded [SKIPPED] kernel-devel-3.10.0-957.5.1.el7.x86_64.rpm: Already downloaded Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Reinstalling : kernel-devel-3.10.0-957.5.1.el7.x86_64 1/4 Running scriptlet: kernel-devel-3.10.0-957.5.1.el7.x86_64 1/4 Reinstalling : kernel-3.10.0-957.5.1.el7.x86_64 2/4 Running scriptlet: kernel-3.10.0-957.5.1.el7.x86_64 2/4 Obsoleting : kernel-devel-3.10.0-957.5.1.el7.x86_64 3/4 Running scriptlet: kernel-3.10.0-957.5.1.el7.x86_64 4/4 Cleanup : kernel-3.10.0-957.5.1.el7.x86_64 4/4 Running scriptlet: kernel-3.10.0-957.5.1.el7.x86_64 4/4 Verifying : kernel-3.10.0-957.5.1.el7.x86_64 1/5 Verifying : kernel-3.10.0-957.5.1.el7.x86_64 2/5 Verifying : kernel-devel-3.10.0-957.5.1.el7.x86_64 3/5 Verifying : kernel-devel-3.10.0-957.el7.x86_64 4/5 Verifying : kernel-devel-3.10.0-957.5.1.el7.x86_64 5/5 The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. Traceback (most recent call last): File "/bin/dnf", line 58, in <module> main.user_main(sys.argv[1:], exit_code=True) File "/usr/lib/python2.7/site-packages/dnf/cli/main.py", line 179, in user_main errcode = main(args) File "/usr/lib/python2.7/site-packages/dnf/cli/main.py", line 64, in main return _main(base, args, cli_class, option_parser_class) File "/usr/lib/python2.7/site-packages/dnf/cli/main.py", line 99, in _main return cli_run(cli, base) File "/usr/lib/python2.7/site-packages/dnf/cli/main.py", line 123, in cli_run ret = resolving(cli, base) File "/usr/lib/python2.7/site-packages/dnf/cli/main.py", line 154, in resolving base.do_transaction(display=displays) File "/usr/lib/python2.7/site-packages/dnf/cli/cli.py", line 240, in do_transaction tid = super(BaseCli, self).do_transaction(display) File "/usr/lib/python2.7/site-packages/dnf/base.py", line 872, in do_transaction tid = self._run_transaction(cb=cb) File "/usr/lib/python2.7/site-packages/dnf/base.py", line 1021, in _run_transaction self._verify_transaction(cb.verify_tsi_package) File "/usr/lib/python2.7/site-packages/dnf/base.py", 
line 1059, in _verify_transaction self.history.end(rpmdbv, 0) File "/usr/lib/python2.7/site-packages/dnf/db/history.py", line 504, in end bool(return_code) File "/usr/lib64/python2.7/site-packages/libdnf/transaction.py", line 758, in endTransaction return _transaction.Swdb_endTransaction(self, dtEnd, rpmdbVersionEnd, state) RuntimeError: TransactionItem state is not set: kernel-devel-3.10.0-957.el7.x86_64 As per commenter's request: ; dnf repolist Extra Packages for Enterprise Linux 7 - x86_64 3.6 MB/s | 16 MB 00:04 CentOS-7 - Base 5.6 MB/s | 10 MB 00:01 CentOS-7 - Updates 4.1 MB/s | 5.2 MB 00:01 IUS Community Packages for Enterprise Linux 7 - 3.9 MB/s | 941 kB 00:00 slack 29 kB/s | 33 kB 00:01 CentOS-7 - Extras 1.2 MB/s | 339 kB 00:00 repo id repo name status base CentOS-7 - Base 10,019 *epel Extra Packages for Enterprise Linux 7 - x86_64 13,008 extras CentOS-7 - Extras 382 ius IUS Community Packages for Enterprise Linux 7 - x86_64 570 slack slack 47 updates CentOS-7 - Updates 1,457 and ; dnf repolist -v | grep "^Repo-filename" | awk '{print $2}' | sort ; ls /etc/yum.repos.d /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/epel.repo /etc/yum.repos.d/ius.repo /etc/yum.repos.d/slack.repo total 60K 4.0K CentOS-Base.repo 8.0K CentOS-Vault.repo 4.0K ius-archive.repo 4.0K CentOS-CR.repo 4.0K CentOS-fasttrack.repo 4.0K ius-dev.repo 4.0K CentOS-Debuginfo.repo 4.0K epel.repo 4.0K ius-testing.repo 4.0K CentOS-Media.repo 4.0K epel-testing.repo 4.0K slack.repo 4.0K CentOS-Sources.repo 4.0K ius.repo
As far as Sardathrion and I can tell, we're jointly hitting a dnf breakage in the version currently shipped in our respective distributions of EL7. Sardathrion gets a Python traceback while I get a basic_string::_S_construct null not valid, ignoring this repo (which I cannot place in the dnf code). In both cases, we see that dnf confuses itself with the cannot install both <kernel> and <same-kernel> message and does unexpected things. For my part, my symptoms go away when I downgrade dnf by issuing dnf downgrade --allowerasing dnf which lowers dnf from 4.0.9 to 2.7.5 on Scientific Linux 7.6. I see the same SRPMs in the CentOS vault, suggesting CentOS users should be able to do the same. Since I don't observe any such issue in Fedora 29 shipping dnf 4.1.0, our first line of follow-up should be with our distribution maintainers before we ping the libdnf maintainers. EDIT: TUV is aware of the issue where dnf offers to reinstall a stale kernel. It doesn't address my disabled sl repo and I don't know if it fixes Sardathrion's big traceback, either.
dnf upgrade kernel on Centos 7…
1,442,202,278,000
I am looking for the basic kernel drivers to enable SATA support. I have a Braswell (Intel SoC) setup and I would like to reduce the number of kernel drivers to a minimum. Does SATA support need the ATA drivers ? What about the SCSI drivers ? Or Device Mapper Support (from the RAID menu) ? It seems there is more than 10 different generic drivers needed to support SATA besides the manufacturer's driver. I am using the linux kernel 4.4 and I could not find much information in the Documentation. It seems that the ATA, SATA and SCSI menuconfig options are scattered across multiple sections. I guess the most important one is the libata driver, but it is unclear for me if they need the ATA or SCSI drivers Device Drivers ---> Serial ATA and Parallel ATA drivers (libata) ---> I searched the subject but didn't find a clear answer. I liked this answer about the historical perspective of ATA and SCSI and how they can talk to each other. Also, would there be any major difference when enabling SATA for another SoC, like an ARM SoC, beside the vendor specific driver ? An ideal answer would refer to the specific options in menuconfig ! Thanks !
Partial answer: The kernel layers are a bit complex, and I can't give you a complete picture. Today, nearly all storage devices use some kind of SCSI commands (which is why they show up as /dev/sdX instead of /dev/hdX), though those can be transported over different mechanisms (ATA packets, or USB, or others). So you need at least: The SATA driver for your particular hardware (possibly several modules, e.g. libahci) The generic ATA layer (possibly several modules, including libata) The generic SCSI layer, at least for the kind of storage devices you use (definitely several modules, including scsi_mod). I think the kernel should be able to figure out the minimal dependencies itself in menuconfig: If you first disable everything, and then enable only the bottom driver (hardware specific) and the top driver (SCSI disk, CONFIG_BLK_DEV_SD, module sd_mod) you'll likely end up with a pretty minimal workable configuration.
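Assuming a common AHCI controller (which Braswell's integrated SATA presents itself as), a minimal configuration would look roughly like this — these are the .config symbols behind the menuconfig entries mentioned above:

```
CONFIG_SCSI=y          # generic SCSI midlayer (scsi_mod)
CONFIG_BLK_DEV_SD=y    # SCSI disk support, gives you /dev/sdX (sd_mod)
CONFIG_ATA=y           # libata core ("Serial ATA and Parallel ATA drivers")
CONFIG_SATA_AHCI=y     # AHCI driver (ahci/libahci)
```

The Device Mapper / RAID options you mention are not required for plain SATA disks; they only matter if you use LVM, dm-crypt, or software RAID on top.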
SATA: what linux kernel drivers are needed for basic support?
1,442,202,278,000
I wrote the following simple Linux kernel module to test the param feature: #include<linux/module.h> int a = 5; module_param(a, int, S_IRUGO); int f1(void){ printk(KERN_ALERT "hello world\n"); printk(KERN_ALERT " value passed: %d \n", a); return 0; } void f2(void){ printk(KERN_ALERT "value of parameter a now is: %d \n", a); printk(KERN_ALERT "bye bye qworld\n"); } module_init(f1); module_exit(f2); MODULE_AUTHOR("l"); MODULE_LICENSE("GPL v2"); MODULE_DESCRIPTION("experimanting with parameters"); Now when I try to echo a value to it, I get the "Permission denied" error, as expected: [root@localhost param]# insmod p.ko [root@localhost param]# dmesg -c [ 7247.734491] hello world [ 7247.734498] value passed: 5 [root@localhost param]# echo 32 >/sys/module/ Display all 145 possibilities? (y or n) [root@localhost param]# echo 32 >/sys/module/p/parameters/a bash: /sys/module/p/parameters/a: Permission denied So far so good. However, I can write to the file a using vim. It does try to warn me with the following messages at the status line: "/sys/module/p/parameters/a" "/sys/module/p/parameters/a" E667: Fsync failed WARNING: Original file may be lost or damaged don't quit the editor until the file is successfully written! Press ENTER or type command to continue But I force the write with ! and get out of vim, and to my surprise the value of the parameter is re-written! [root@localhost param]# vim /sys/module/p/parameters/a [root@localhost param]# cat /sys/module/p/parameters/a 32 (Original value was 5 and I wrote 32 using vim). Not only that, the value of the parameter in the module is changed as well!!: [root@localhost param]# rmmod p.ko [root@localhost param]# dmesg -c [ 7616.109704] value of parameter a now is: 32 [ 7616.109709] bye bye qworld [root@localhost param]# What does this mean? Can READ ONLY permissions just be overruled by a userland application like vim? What is the use of permission bits then..?
The /sys (sysfs) filesystem is somewhat special; many operations are not possible, for example creating or removing a file. Changing the permissions and ownership of a file or setting an ACL is permitted; that allows the system administrator to allow certain users or groups to access certain kernel entry points. There is no special case that restricts a file that's initially read-only for everyone from being changed to being writable for some. That's what Vim does when it is thwarted in its initial attempt to save. The permissions are the only thing that prevent the file from being written. Thus, if they're changed, the file content changes, which for a module parameter changes the parameter value inside the module. Normally this doesn't have any security implication since only root can change the permissions on the file and root can change the value through /dev/kmem or by loading another module. It's something to keep in mind if root is restricted from loading modules or accessing physical memory directly by a security framework such as SELinux; the security framework needs to be configured to forbid problematic permission changes under /sys. If a user is given ownership of the file, they'll be able to change the permissions; to avoid this, if a specific user needs to have permission to read a parameter, don't chown the file to that user, but set an ACL (setfacl -m u:alice:r /sys/…).
Why am I able to write a module parameter with READ ONLY permissions?
1,442,202,278,000
I have an embedded device with this SD card: [root@(none) ~]# busybox fdisk -l Disk /dev/mmcblk0: 3965 MB, 3965190144 bytes 4 heads, 16 sectors/track, 121008 cylinders Units = cylinders of 64 * 512 = 32768 bytes Device Boot Start End Blocks Id System /dev/mmcblk0p1 305 8497 262144+ 83 Linux /dev/mmcblk0p2 8497 16689 262144+ 83 Linux /dev/mmcblk0p3 16689 60352 1397247 b Win95 FAT32 and these partitions: [root@(none) ~]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/root 253871 140291 113580 55% / none 16384 4 16380 0% /tmp none 127016 4 127012 0% /dev none 16 4 12 25% /var/lib none 16 0 16 0% /var/log none 128 16 112 13% /var/run /dev/mmcblk0p3 1394520 118036 1276484 8% /mnt/onboard I have a u-boot kernel image file, uImage, of ~2 Mb. What happens exactly if I do the following? dd if=uImage of=/dev/mmcblk0 bs=512 seek=2048 Why am I asking this? This command is strange for me because: the copied image is smalled than target partition it seems that the image is extracted on /dev/mmcblk0p1, that is the root partition. It starts at 305, while dd skips 2048 blocks EDIT: see Anthon's answer there's not a boot partition uImage is extracted; on the contrary I expected it will be used by u-boot as-is Background: the device is a Kobo Glo, and the command is executed by update scripts to update the kernel.
I am guessing here, as I have no Kobo Glo (I wish my Bookeen HD was reprogrammable). You seem to have a 2Gb SD memory internally (60352 cylinders of 32K each). The dd does skip 2048 blocks of 512 bytes (1048576 bytes), which is less than the 305 cylinder offset (9994240 bytes). In fact you would have to write more than 8Mb to reach the /dev/mmcblk0p1 partition that way. How the device boots depends on its firmware, but it is likely that there is some basic bootstrapping done via the first 1Mb on the SD memory, which in turn then calls the image written with dd. /dev/mmcblk0p1 is 256Mb ( (8497 - 305)*32768 ) and that seems to be mounted as / with maybe a backup of it on /dev/mmcblk0p2 or vice versa.
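The offsets can be checked with a little arithmetic, using the units from the fdisk output above:

```python
SECTOR = 512
CYLINDER = 64 * SECTOR           # fdisk: 4 heads * 16 sectors = 64 sectors of 512 bytes

dd_offset = 2048 * SECTOR        # where `seek=2048 bs=512` starts writing
p1_start = 305 * CYLINDER        # /dev/mmcblk0p1 starts at cylinder 305

print(dd_offset)   # 1048576  -> the image goes in 1 MiB into the raw device
print(p1_start)    # 9994240  -> the first partition starts ~9.5 MiB in

# So a ~2 MB uImage written at the 1 MiB mark ends well before
# /dev/mmcblk0p1 begins; it lives in the unpartitioned gap that
# the boot loader reads from, and no filesystem is touched.
assert dd_offset + 2 * 1024 * 1024 < p1_start
```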
What does this dd command do exactly?
1,442,202,278,000
I wanted to know how to add packages to the Linux kernel and then package it to a ISO or CD for friends. Thanks in advance and please don't point me to LFS - Linux From Scratch!
Most of the distros can be used as a base, and then customizations can be applied to this base and written to an ISO. Fedora Fedora offers what's called a "spin" or "respin". You can check them out here on the spins website: http://spins.fedoraproject.org/ It's pretty straight-forward to "roll your own" versions of Fedora, mixing in your own custom RPMs as well as customizing the UI. You can even use the tool revisor, which is a GUI for selecting the packages you want to bundle into your own custom .ISO. There's a pretty good tutorial here, titled: Create Your Own Fedora Distribution with Revisor. The primary page for revisor is here: http://revisor.fedoraunity.org/ screenshot of revisor     Ubuntu Ubuntu offers this howto on the community wiki, titled: LiveCDCustomizationFromScratch. For Ubuntu/Debian you also have a couple of other alternatives. remastersys relinux Of these 2, relinux seems to be the most promising in both ease of use and being able to create a fairly customized version of Ubuntu. References Relinux – An easy way to create a Linux distro relinux launchpad site
How can I create my own distro and include a custom kernel & packages? [closed]
1,442,202,278,000
My PC has two processors, and I know that each one runs at 1.86 GHz. I want to measure the clock pulse of my PC's processors manually, and my idea is just to compute the quotient between the number of assembler lines a program has and the time my computer spends executing it, so that I have the number of assembly instructions per unit of time processed by the CPU (this is what I understood a 'clock cycle' is). I thought to do it in the following way: I write a C program and I convert it into assembly code. I do: $gcc -S my_program.c , which tells the gcc compiler to do the whole compiling process except the last step: transforming my_program.c into a binary object. Thus, I have a file named my_program.s that contains the source of my C program translated into assembler code. I count the lines my program has (let's call this number N). I did: $ nl -l my_program.s | tail -n 1 and I obtained the following: 1000015 .section .note.GNU-stack,"",@progbits That is to say, the program has a million lines of code. I do: $gcc my_program.c so that I can execute it. I do: $time ./a.out ("a.out" is the name of the binary object of my_program.c) to obtain the time (let's call it T) spent running the program, and I obtain: real 0m0.059s user 0m0.000s sys 0m0.004s It is supposed that the time T I'm searching for is the first one in the list: the "real" one, because the other ones refer to other resources running in my system at the same moment I execute ./a.out. So I have N=1000015 lines and T=0.059 seconds. If I perform the N/T division I obtain that the frequency is near 17 MHz, which is obviously not correct. Then I thought that maybe the fact that there are other programs running on my computer and consuming hardware resources (without going any further, the operating system itself) makes the processor "split" its "processing power", slowing down the clock pulse, but I'm not sure.
But I thought that if this is right, I should also find the percentage of CPU resources (or memory) my program consumes, because then I could really aspire to obtain a well-approximated result for my real CPU speed. And this leads me to the issue of how to find out that 'resource consumption value' of my program. I thought about the $ top command, but it's immediately discarded due to the short time my program takes to execute (0.059 seconds); it's not possible to distinguish by simple sight any peak in memory usage during this little time. So what do you think about this? Or what do you recommend me to do? I know there are programs that do this work I am trying to do, but I prefer to do it using raw bash because I'm interested in doing it in the most "universal" way possible (it seems more reliable).
That won't work. The number of clock cycles each instruction takes to execute (they take quite a few, not just one) depends heavily on the exact mix of instructions that surround it, and varies by exact CPU model. You also have interrupts coming in, and the kernel and other tasks having their instructions executed mixed in with yours. On top of that, the frequency changes dynamically in response to load and temperature. Modern CPUs have model-specific registers that count the exact number of clock cycles. You can read this register, and using a high-resolution timer, read it again a fixed period later, and compare the two to find out what the (average) frequency was over that period.
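If all you want is the frequency as the kernel currently sees it (which, per the above, changes dynamically with load), it is far easier to read the kernel's own report than to infer it from instruction counts. A sketch (the "cpu MHz" field is present in /proc/cpuinfo on x86 Linux; other architectures may not expose it, hence the None fallback):

```python
def current_cpu_mhz():
    """Return the kernel-reported frequency of the first CPU in MHz, or None."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                # e.g. "cpu MHz         : 1862.000"
                if line.lower().startswith("cpu mhz"):
                    return float(line.split(":", 1)[1])
    except (OSError, ValueError):
        pass
    return None

print(current_cpu_mhz())
```

Run it a few times under load and idle and you can watch the value move around, which is exactly why dividing instruction counts by wall-clock time gives misleading numbers.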
How to measure the clock pulse of my computer manually?
1,442,202,278,000
udev is responsible for populating /dev. It adds and removes device nodes to /dev dynamically based on rules/configs/scripts under /lib/udev and /etc/udev/. If I have a CDROM device node /dev/sr0 I can add a symlink /dev/cdrom by adding a rule like: SUBSYSTEM=="block", KERNEL=="sr0", SYMLINK+="cdrom", GROUP="cdrom" I understand how symlinks are created in udev. But who (or "which rule") created /dev/sr0 (or another non symlink device node) in the first place?
The default device, based on the kernel name (sr0 in this case) is always created automatically as a real device file, so no rule is needed for that. Additional synonyms are then created by writing rules which specify symlinks to be added which point at the real file.
How does udev create /dev/sr*? (Or: Which rule does create /dev/sr*?)
1,442,202,278,000
So I've been at this for a while and have been poking around for an answer for a few days, and figure it's about time to ask for help. I am running Ubuntu 10.10 in VMWare Fusion, and have downloaded a copy of the 3.2 kernel and built it with all default settings. When I try to boot into the new kernel after a call to make install, I get the following message: [ 1.581916] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 1.582260] Pid: 1, comm: swapper/0 Not tainted 3.2.4 #1 [ 1.582444] Call Trace: [ 1.582552] [<ffffffff815e7447>] panic+0x91/0x1a7 [ 1.582666] [<ffffffff815e75c5>] ? printk+0x68/0x6b [ 1.582799] [<ffffffff81ad2152>] mount_block_root+0x1ea/0x29e [ 1.582929] [<ffffffff81ad225c>] mount_root+0x56/0x5a [ 1.583047] [<ffffffff81ad23d0>] prepare_namespace+0x170/0x1a9 [ 1.583178] [<ffffffff81ad16f7>] kernel_init+0x144/0x153 [ 1.583304] [<ffffffff815f45f4>] kernel_thread_helper+0x4/0x10 [ 1.583436] [<ffffffff81ad15b3>] ? parse_early_options+0x20/0x20 [ 1.583570] [<ffffffff815f45f0>] ? gs_change+0x13/0x13 Which used to appear on every reboot. I found that if I changed the VM's harddrive type, I could get GRUB to boot at least, but the message above comes up if I try to load the newly compiled kernel. The old kernel works as before. I have checked and I have compiled in support for ext4, which is the fs my root is running. I have also tried generating an initrd file with a call to "sudo update-initramfs -c -k 3.2.4", but to no avail. The compilation, I think, was pretty standard: make menuconfig make make modules_install make install update-grub reboot Were the general steps. In terms of options, I basically took the default on everything. 
In case it's pertinent, my fstab looks like this: proc /proc proc nodev,noexec,nosuid 0 0 #UUID=c75eddd9-f4fa-49be-927b-8c2da7074135 / ext4 errors=remount-ro 0 1 /dev/sda1 / ext4 defaults 0 1 #UUID=5bc6915e-fdfa-479a-885f-ea03cb14f9cd none swap sw 0 0 /dev/sda5 none swap sw 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 Where I've tried it with both UUID's and /dev/sd* notation. Any help or advice would be much appreciated, as it's gotten quite frustrating. Thank you.
You forgot to build your initrd that goes with the kernel. Run update-initramfs -c -k kernelversion and then update-grub to find it and add it to the grub menu.
Kernel Panic - not syncing: VFS: Unable to mount root fs after new kernel compile
1,442,202,278,000
Dear all, I was wondering where the trampoline code went. It is referenced here, and I could find some code in an earlier distro, but I can't find it in the 2.6.38 kernel. Can you explain to me the path of execution, if trampoline.S is not there anymore? Thanks.
When the x86_64 a.k.a. amd64 architecture was introduced in the Linux kernel tree, it was in a separate subtree from i386. So there was arch/i386/kernel/trampoline.S on one side and arch/x86_64/kernel/trampoline.S on the other side. The two architectures were merged in 2.6.24. This was done because there was a lot of code in common — after all, all x86-64 processors are x86 processors. At the time, ppc and ppc64 were already together, and it was decided to merge x86 and x86-64 as well, into a single x86 architecture. Some files are specific to one or the other subarchitectures, so the two versions remain alongside each other: arch/x86/kernel/trampoline_32.S moved from arch/i386/kernel/trampoline.S, and arch/x86/kernel/trampoline_64.S moved from arch/x86_64/kernel/trampoline.S.
Where did Trampoline.S go?
1,442,202,278,000
If I enable Hyper-Threading for my netbook which has an Intel Atom (1.6 GHz) will the kernel see two virtual 800 MHz processors?
No, it will create two virtual 1.6 GHz processors. (However, when not under load, they will clock down to a much lower speed, and then 800 MHz might be what you see.) Run cat /proc/cpuinfo for information about them.
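For a quick check, each Hyper-Threading sibling shows up as its own `processor` entry in /proc/cpuinfo, so counting those entries gives the number of logical CPUs the kernel sees:

```shell
# Count logical CPUs the kernel sees; on a single-core Atom with
# Hyper-Threading enabled this would print 2, each reported at the
# full clock speed in the "cpu MHz" field.
grep -c '^processor' /proc/cpuinfo
```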
Will enabling Hyper-Threading create two virtual half-speed processors?
1,442,202,278,000
Does the PREEMPT_RT patch (real-time kernel) have any benefit for regular desktop users?
I don't think so. The patch provides real-time scheduling, which is very important for some environments (planes, nuclear reactors, etc.) but overkill for a regular desktop. The current kernels, however, seem to be "real-time" and "preemptive" enough for regular desktop users[1]. It may be useful if you work with high-quality audio recording and playback, where even a small delay may dramatically reduce the quality. [1] Technically both are 0/1 features but I guess it is clear what I mean ;)
Does the Linux PreemptRT patch benefit desktop users?
1,442,202,278,000
Is there any patch for the Linux kernel to use different memory allocators, such as the ned allocator or the TLSF allocator?
The allocators you mention are userspace allocators, entirely different from kernel allocators. Perhaps some of the underlying concepts could be used in the kernel, but it would have to be implemented from scratch. The kernel already has 3 allocators: SLAB, SLUB and SLOB (and there was/is SLQB). SLUB in particular is designed to work well on multi-CPU systems. As always, if you have ideas on how to improve the kernel, your specific suggestions, preferably in the form of patches, are welcome on LKML :-)
Kernel memory allocator patch
1,624,558,154,000
After reading man user_namespaces, I'm not sure whether user (group) ID mappings to the parent namespace, set in /proc/[pid]/uid_map (/proc/[pid]/gid_map), apply to all processes in the namespace or only to the process [pid]. If the mappings apply to all processes, then there's a bit of a race condition over which process writes to one of the above files first, since they can only be written once. If the mappings only apply to the process [pid], then I find it weird that UID 0 may be mapped to different user IDs in the parent namespaces. Can someone explain? man user_namespaces: ... User and group ID mappings: uid_map and gid_map When a user namespace is created, it starts out without a mapping of user IDs (group IDs) to the parent user namespace. The /proc/[pid]/uid_map and /proc/[pid]/gid_map files (available since Linux 3.5) expose the mappings for user and group IDs inside the user namespace for the process pid. These files can be read to view the mappings in a user namespace and written to (once) to define the mappings. ...
The manpage says After the creation of a new user namespace, the uid_map file of one of the processes in the namespace may be written to once to define the mapping of user IDs in the new user namespace. An attempt to write more than once to a uid_map file in a user namespace fails with the error EPERM. Similar rules apply for gid_map files. It takes some measure of reading between the lines, but this is consistent with the fact that all processes in a user namespace share the same user and group mappings. This does mean there’s a bit of a race, but the privilege requirements mean that if a hostile process is privileged enough to hijack your user namespace, you’ve lost anyway. The race can be mitigated for all intents and purposes by handling EPERM in the process which expects to set the mappings up in a new user namespace: start over in a new user namespace.
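You can inspect the mapping shared by your current namespace directly (this assumes a mounted Linux /proc); in the initial user namespace it is the identity map over the full ID range:

```shell
# Show the UID mapping of the current process's user namespace.
# Every process in the same namespace sees the same three columns:
#   <ID inside the namespace> <ID in the parent namespace> <range length>
cat /proc/self/uid_map
```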
Do user (group) ID mappings in `/proc/[pid]/uid_map` (`/proc/[pid]/gid_map`) only apply for the process `[pid]` or globally for the whole namespace?
1,624,558,154,000
I am running Ubuntu 19.04. After installing VirtualBox and rebooting, I noticed these non-descriptive lines in /var/log/kern.log Sep 30 16:36:34 a kernel: [ 236.760271] test1 Sep 30 16:36:34 a kernel: [ 236.760273] test2 Sep 30 16:41:07 a kernel: [ 509.036723] test1 Sep 30 16:41:07 a kernel: [ 509.036726] test2 Sep 30 16:41:25 a kernel: [ 527.214838] test1 Sep 30 16:41:25 a kernel: [ 527.214840] test2 What could they be? Output of sudo grep -wri kern /etc/*syslog* /etc/rsyslog.d/20-ufw.conf:# normally containing kern.* messages (eg, /var/log/kern.log) /etc/rsyslog.d/50-default.conf:kern.* -/var/log/kern.log Thanks
This has been taken from https://wiki.chotaire.net/vbox-test-warning-messages. Kudos to them. If it helps you, please consider visiting them and letting them know it was helpful. The source code for Virtual Box 6.0.6 shows kernel print statements that have been accidentally left in the production release. If you're using rsyslog you can filter them out by adding two lines to /etc/rsyslog.conf - or create your own /etc/rsyslog.d/vbox.conf for easier administration (remember to comment there what you've done): :msg, contains, "test1" stop :msg, contains, "test2" stop This has been fixed by Oracle and was released in VirtualBox Version 6.0.8.
I noticed these odd lines in my /var/log/kern.log file, what are they?
1,624,558,154,000
In my previous question How does the kernel scheduler know how to pre-empt a process? I was given an answer to how pre-emption occurs. Now I am wondering, how does the kernel scheduler know that a timeslice has passed? I read up on the hardware timer solution which makes sense to me, but then I read that most current operating systems (e.g. Windows, Linux, etc.) do not use hardware timers, but rather software timers. How can software timers be used to pre-empt a process once it has taken up its timeslice (e.g. it did not pre-empt itself.) It seems like some hardware timer would be necessary?
It seems like some hardware timer would be necessary? Yes, the kernel relies on hardware to generate an interrupt at regular intervals. On PCs, this was historically the 8253/8254 programmable interval timer, or an emulation thereof, then the local APIC timer, then the HPET. Current Linux kernels can be built to run “tickless” when possible: the kernel will program timers to only fire when necessary, and if a given CPU is running a single process, that may well be “never”. In most cases, dynamic ticks are used, so the kernel sets timers up to fire at varying intervals depending on its requirements — fewer interrupts means fewer wake-ups, which means idle CPUs can be kept in low-power modes for longer periods, which saves energy.
How does the kernel scheduler know a timeslice has passed?
1,624,558,154,000
I have a development board which has an older version of Linux installed on it. The vendor supplies an image for the device with a heavily modified linux kernel, some loadable kernel modules, and some example software. I would like to install a newer version of the linux kernel on the device, but the vendor has no support for this, as their modified linux kernel is based off of an older kernel version. What I don't understand, is why start hacking away at the linux kernel, when you can make the kernel compatible with the device it is running on by writing drivers as kernel modules. It could be easily recompiled for any kernel version without problems. This way, if the vendor only supports a certain kernel version, you are "stuck" :( But there must be some reason I am missing, because I see many projects use this approach of grabbing some version of the kernel, and heavily modifying it to fit their board. What I would be interested in, is: Why modify the linux kernel instead of creating a kernel module? What can be done if I need to run a newer kernel, but I get no support from the vendor (Device drivers should work on newer versions of the kernel...)
This question has a lot of assumptions in it. Here are some reasons. The kernel interface is not stable, so a module for one version may not compile for a different version. The kernel may not expose a required facility. The kernel may expose a required facility, but not in a way that is acceptable, for example requiring the module to have a particular license. The people writing the code found it quicker to write the code this way. As to your options if you need a newer kernel: find someone else who has already ported the code port it yourself pay someone else to port it (it may not need money; beer, flattery and curiosity may work).
Why modify the linux kernel instead of creating a kernel module?
1,624,558,154,000
I have one kernel installed currently, 3.10.0-327.28.3. In my /boot directory, I have what looks like a lot of stuff that package-cleanup perhaps missed: -rw-r--r-- 1 root root 17M Aug 28 18:00 initramfs-3.10.0-327.10.1.el7.x86_64.img -rw-r--r-- 1 root root 17M Aug 28 18:00 initramfs-3.10.0-327.28.2.el7.x86_64.img -rw-r--r-- 1 root root 20M Aug 29 00:46 initramfs-3.10.0-327.28.3.el7.x86_64.img -rw-r--r-- 1 root root 17M Aug 28 17:00 initramfs-3.10.0-327.28.3.el7.x86_64kdump.img -rw-r--r-- 1 root root 17M Aug 28 18:01 initramfs-3.10.0-327.4.5.el7.x86_64.img Can I remove those 3 files safely? They look like they belong to older kernels.
Yes. Those files are from previous kernel installations. You may have upgraded the kernel, in which case the old kernels' files, along with their initramfs files, remain on the /boot partition. If you want to clean them up, you can remove them using a distribution-specific utility (yum on CentOS/RHEL, apt-get on Debian-based systems). Once you remove them, execute the following command to remove those kernels' entries from the grub.cfg file grub-mkconfig -o /boot/grub/grub.cfg If you have legacy grub installed, modify the /boot/grub/grub.conf file manually instead.
Is it safe to remove these kernel files in /boot? [duplicate]
1,624,558,154,000
For the millionth time or so I had to look in /usr/include/foo.h to find the members of a struct foo or foo_t or whatever. There's a man section for library and kernel calls, some of which include descriptions of data structures and others not. Is there a single place to look up the definitions of kernel and library data structures?
The GNU C library has a reference manual that includes documentation for all or most of the data structures in the standard library and extensions. This has a type index. Beware there's also a "GNU C Reference Manual", but it and the "GNU C Library Reference Manual" are two different things. You can also autogenerate documentation sufficient for browsing data structures with doxygen (note that it works much better with stuff that's actually annotated for it, but it can be crudely used this way). I tried this here on /usr/include and it took < 2 minutes (producing, n.b., ~800 MB of html). The steps were: Create a basic config file somewhere (anywhere), doxygen -g doxygen.conf. Edit the file and change the following settings: OUTPUT_DIRECTORY = /home/foo/whatever # documentation goes here OPTIMIZE_OUTPUT_FOR_C = YES EXTRACT_ALL = YES INPUT = /usr/include FILE_PATTERNS = *.h RECURSIVE = YES GENERATE_LATEX = NO Note that all those already exist in the config file, you need to search through and change/set the values as shown. Generate: doxygen doxygen.conf. Now open /home/foo/whatever/html/files.html. There is an index.html, but it is probably WTF'd up (again, doxygen is primarily intended for stuff that's purposefully annotated for it), so the file list is the most predictable entry point. There's also a copious "Data Structure Index", but for whatever reason, not everything you would think is indexed in it. E.g., there's a structstat.html you can reach by following the file list, asm-generic -> stat.h, but "struct stat" is not mentioned in the "Data Structures Index". Many standard C lib things follow this pattern: there's a macro/define/typedef in the predictable header (sys/stat.h) that pulls in something extern that ends up being in a platform/system specific header in, e.g. asm-generic.h. I'm sure you've noticed this before. The stat example is not so bad in so far as at least the final definition is still called struct stat and not struct _fooX_stat. 
So this takes some getting used to and is, in the end, not much better than tooling around with grep. It also has the dis(?)advantage that non-user fields are included (e.g., compare the struct stat as documented above to its description in man 2 stat). For the standard library (and GNU extensions) the reference manual is much better. However, WRT stuff that's not in that manual, it is slightly better than nothing. I'd recommend that if you do want to use it that way, it would be better to do individual directories independently rather than the whole shebang (clue: you can set RECURSIVE = NO). 800 MB of html is pretty unwieldy.
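The config edits listed in the steps above can be scripted with sed. The stand-in /tmp/doxygen.conf written here just carries the default values so the example is self-contained; on a real run you would generate it with `doxygen -g doxygen.conf` instead:

```shell
# Stand-in config with default values (normally produced by `doxygen -g`).
cat > /tmp/doxygen.conf <<'EOF'
OUTPUT_DIRECTORY      =
OPTIMIZE_OUTPUT_FOR_C = NO
EXTRACT_ALL           = NO
INPUT                 =
FILE_PATTERNS         =
RECURSIVE             = NO
GENERATE_LATEX        = YES
EOF
# Apply the settings from the edit step above (GNU sed in-place edit).
sed -i \
  -e 's|^\(OUTPUT_DIRECTORY *=\).*|\1 /home/foo/whatever|' \
  -e 's/^\(OPTIMIZE_OUTPUT_FOR_C *=\).*/\1 YES/' \
  -e 's/^\(EXTRACT_ALL *=\).*/\1 YES/' \
  -e 's|^\(INPUT *=\).*|\1 /usr/include|' \
  -e 's/^\(FILE_PATTERNS *=\).*/\1 *.h/' \
  -e 's/^\(RECURSIVE *=\).*/\1 YES/' \
  -e 's/^\(GENERATE_LATEX *=\).*/\1 NO/' \
  /tmp/doxygen.conf
grep -E 'RECURSIVE|GENERATE_LATEX' /tmp/doxygen.conf
```

After that, `doxygen /tmp/doxygen.conf` performs the generate step.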
Is there a man section or other doc repository for data structure definitions?
1,624,558,154,000
I'm trying to use Redis for production services and trying to avoid swapping, which is bad for performance. I have learned that swap is triggered by swap_tendency, which depends on swap_tendency = mapped_ratio/2 + swappiness + distress How can I get mapped_ratio/distress from /proc/meminfo for my monitoring script? Or is there any parameter that can tell me that the system is going to swap pages?
mapped_ratio mapped_ratio can be calculated like so: mapped ratio = (nr mapped * 100) / total memory; Source: https://www.cs.columbia.edu/~smb/classes/s06-4118/l19.pdf nr_mapped The value nr_mapped can be read from /proc/vmstat: $ grep nr_mapped /proc/vmstat nr_mapped 47640 distress According to this article, titled: Linux Memory - Implementation Notes “This is a measurement of how much difficulty the VM is having reclaiming pages. Each time the VM tries to reclaim memory, it scans 1/nth of the inactive lists in each zone in an effort to reclaim pages. Each time a pass over the list is made, if the number of inactive clean + free pages in that zone is not over the low water mark, n is decreased by one. Distress is measured as 100 >> n” 5 In my research, much of the documentation made it sound as though "distress" is a kernel counter, but it is not. Rather, it's a value, used as each memory zone is scanned, that is progressively increased as page frames of memory are scanned by the kernel in an attempt to reclaim them. Discussion of this is beyond the scope of this Q&A, but if you're curious, see the book "Understanding the Linux Kernel", Chapter 17: Page Frame Reclaiming. References Understanding Memory Linux Memory Allocation - PDF What Is the Linux Kernel Parameter vm.swappiness? 2.6 swapping behavior Understanding Virtual Memory In Red Hat Enterprise Linux 4 Understanding the Linux Kernel 3rd Ed.
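Putting the mapped_ratio pieces together, here is a rough sketch for a monitoring script. It assumes a Linux /proc filesystem, and it assumes "total memory" in the formula means total page frames, so MemTotal (kB) is converted to pages using the page size from getconf:

```shell
# mapped_ratio = (nr_mapped * 100) / total memory, with memory in page frames
nr_mapped=$(awk '/^nr_mapped/ {print $2}' /proc/vmstat)      # pages mapped into page tables
memtotal_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # total RAM in kB
page_kb=$(( $(getconf PAGESIZE) / 1024 ))                    # page size in kB (usually 4)
total_pages=$(( memtotal_kb / page_kb ))
echo "mapped_ratio = $(( nr_mapped * 100 / total_pages ))"
```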
When is swap triggered or how to calculate swap_tendency?
1,624,558,154,000
I'm using qemu for different kinds of tasks, and I would like to pick a filesystem that is both qemu-compatible and easy to mount under my host. I have already discarded both qcow and qcow2 because apparently they are not supported as filesystems by the Linux kernel; there is a little trick but it doesn't meet my needs. I basically need to write and read freely from/to this image file, not just take a look when the image is hotplugged to qemu. Could you suggest a way to create a qemu filesystem that will be usable under a GNU/Linux host like any other partition/hard disk?
Instead of using an image file (or in addition to an image file) you can use a block device (LVM or loop device) and pass this to the VM (which sees it as disk drive). You can mount it from the guest and from the host. But you should make sure this is not done simultaneously. The obvious disadvantage: This volume does not grow with the need. But you can extend the block device / loop device file later and adapt the filesystem to the new size. libvirt configuration This is not pure QEMU but if you use libvirt then you need entries like this: <disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/mapper/storage-user'/> <target dev='vdb' bus='virtio'/> <serial>KVM-user</serial> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </disk>
filesystem that works under qemu and I can mount on my host
1,624,558,154,000
From here: http://www.xenomai.org/index.php/RTnet:Installation_%26_Testing#Debugging_RTnet The Linux driver for the real-time network device was built into the kernel and blocks the hardware. When I execute rmmod 8139too it says the module does not exist in /proc/modules. Kernel is 2.6.38.8 (64 bit). What other information should I provide for the question? linux-y3pi:~ # uname -a Linux linux-y3pi 2.6.38.8-12-desktop #2 SMP PREEMPT Fri Jun 1 17:27:16 IST 2012 x86_64 x86_64 x86_64 GNU/Linux linux-y3pi:~ # ifconfig eth0 Link encap:Ethernet HWaddr 00:24:8C:D9:D6:2E inet addr:192.168.16.86 Bcast:192.168.16.255 Mask:255.255.255.0 inet6 addr: fe80::224:8cff:fed9:d62e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:414 errors:0 dropped:0 overruns:0 frame:0 TX packets:261 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:118971 (116.1 Kb) TX bytes:35156 (34.3 Kb) Interrupt:17 Base address:0x4000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:68 errors:0 dropped:0 overruns:0 frame:0 TX packets:68 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:4720 (4.6 Kb) TX bytes:4720 (4.6 Kb) linux-y3pi:~ # ethtool -i eth0 driver: r8169 version: 2.3LK-NAPI firmware-version: bus-info: 0000:01:00.0 linux-y3pi:~ # rmmod r8169 linux-y3pi:~ # ethtool eth0 Settings for eth0: Cannot get device settings: No such device Cannot get wake-on-lan settings: No such device Cannot get message level: No such device Cannot get link status: No such device No data available linux-y3pi:~ # lsmod|grep 8169 linux-y3pi:~ # lsmod|grep 8139 linux-y3pi:~ # .config from /usr/src/linux-2.6.38.8 CONFIG_R8169=m CONFIG_R8169_VLAN=y CONFIG_8139CP=m CONFIG_8139TOO=m #CONFIG_8139TOO_PIO is not set #CONFIG_8139TOO_TUNE_TWISTER is not set CONFIG_8139TOO_8129=y #CONFIG_8139_OLD_RX_RESET is not set
rmmod 8139too doesn't work because either: 8139 support is built into the kernel, and the driver can't be unloaded because it's not a module. On many systems, there's a /boot/config-2.6.38.8 file (or similar). You can grep it for something like ‘8139TOO’. If you see something like CONFIG_8139TOO=m, then the 8139too driver is compiled as a module. If it's CONFIG_8139TOO=y, then the driver is built into the kernel. If it says something along the lines of # CONFIG_8139TOO is not set, then the driver has not been compiled at all. Your ethernet card doesn't use the RTL8139 chip, so its driver isn't loaded. You must find your intended ethernet port's driver and unload that one instead. If you have lshw, say sudo lshw | less and look for eth0: the driver module will be listed. If you have systool, try sudo systool -c net -A uevent eth0 and look for the DRIVER= part. The right hand side should show the driver loaded to handle the device. dmesg | grep eth0 may also work, but it's not 100% reliable, especially if your system has been on for a while (if there's a /var/log/dmesg, you may want to grep eth0 /var/log/dmesg too).
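To illustrate the first check, here is the grep run against the .config lines quoted in the question (written to a temp file so the snippet is self-contained; on a real system you would grep /boot/config-$(uname -r) instead):

```shell
# Sample data: the relevant lines from the asker's .config
cat > /tmp/sample-kernel-config <<'EOF'
CONFIG_R8169=m
CONFIG_8139CP=m
CONFIG_8139TOO=m
EOF
# '=m' means the driver is a loadable module; '=y' would mean built-in.
grep '8139TOO' /tmp/sample-kernel-config
```

Here CONFIG_8139TOO=m shows 8139too was built as a module, so case 2 (wrong driver for the hardware) is the actual problem.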
How to know whether the Linux driver for the real-time network device was built into the kernel?
1,624,558,154,000
During a recent update I received this: Installing: kernel-default-2.6.37.6-0.11.1 [error] Installation of kernel-default-2.6.37.6-0.11.1 failed: (with --nodeps --force) Error: Subprocess failed. Error: RPM failed: installing package kernel-default-2.6.37.6-0.11.1.x86_64 needs 147MB on the / filesystem Abort, retry, ignore? [a/r/i] (a): i Installing: kernel-desktop-2.6.37.6-0.11.1 [error] Installation of kernel-desktop-2.6.37.6-0.11.1 failed: (with --nodeps --force) Error: Subprocess failed. Error: RPM failed: installing package kernel-desktop-2.6.37.6-0.11.1.x86_64 needs 148MB on the / filesystem Abort, retry, ignore? [a/r/i] (a): i Installing: kernel-source-2.6.37.6-0.11.1 [error] Installation of kernel-source-2.6.37.6-0.11.1 failed: (with --nodeps --force) Error: Subprocess failed. Error: RPM failed: installing package kernel-source-2.6.37.6-0.11.1.noarch needs 432MB on the / filesystem Which I am assuming means my / partition needs some room. So I checked the size/space: Filesystem Size Used Avail Use% Mounted on /dev/sda1 25G 24G 208M 100% / How did / grow to be so huge!? Is this a common occurrence and is there a quick trick to freeing up some space? I assume that there are things I'm not using in there and I've been able to update kernels easily for the past year -- so something is accumulating. I'd rather figure out what I free up (are old kernels kept?) instead of re-partitioning my whole drive to grow /.
Make a backup before making any of the following changes Do not proceed without either a backup or the willingness to lose all data. run du -sh /home to get the size used by the /home directory. If it's sufficiently large (>=4G), /home is a good candidate to have its own partition. Boot from either a livecd or SystemRescueCd Depending on your partition table type (GPT or MBR), use either gdisk, parted, or fdisk. Create a new partition Format using your preferred fstype e.g. mkfs.ext4 /dev/sda2 mkdir /mnt/os mkdir /mnt/home mount /dev/sda1 /mnt/os # mount your OS, now all on / mount /dev/sda2 /mnt/home # mount newly formatted partion cp -a /mnt/os/home/* /mnt/home/ # copy current /home data to new partition cd /mnt/os/home # remove old home data, leaving mountpoint rm -rf . Now you need to cd to /mnt/os/etc and edit fstab and add /dev/sda2 /home ext4 defaults 0 1 There's more than one way to do this. Depending on your experience and skill you could mount by UUID (preferred, but not necessary). One could do the same for other filesystems: if you've installed a lot of google tools, or eclipse, they get installed in /opt and it is also a good candidate to be in its own partition. If you get to the point where you have many partitions, you'll want to switch to GPT partitioning and/or LVM. If so, re-ask the question
Not enough space on / to install new kernel update
1,624,558,154,000
I had difficulties installing Cisco5.0 VPN on my Ubuntu 10.04 LTS. I asked for assistance in this question: link to previous question. The answer is that this Cisco program will run on only older versions of the kernel. I would like to use the VPN to connect through my university's network so that I can view academic journals under their subscription. One option could be to downgrade the kernel; how can this be done and what consequences would this have? The second question is whether I can install an alternative VPN client to connect? I have a ".pcf" file from the university to use with the Cisco client. Would this be compatible information to allow another client to connect? Is the connection independent of the software used?
I installed VPNC instead through the synaptic package manager. It was able to import the .pcf file for the cisco VPN. It was then able to connect properly.
Installing OpenVPN to replace Cisco VPN because Cisco will not work with the kernel I am on or downgrade instead?
1,624,558,154,000
While configuring the kernel for debugging, I found this option: CONFIG_X86_PTDUMP: Export kernel pagetable layout to userspace via debugfs Does this mean RAM page-table layouts? Any guides on how to use debugfs and view this layout?
Take a look at the following: Page table management Dumping kernel page tables
Dump Page table layout (KERNEL CONFIG)
1,624,558,154,000
How can I create an initrd image for a new (experimental) kernel without actually installing it? (Existing tools create the initrd based on the config and details of the installed kernel.) Say I compile a new kernel with experimental features turned on, and I have it in a separate partition. I would like to boot into this kernel; will the old initrd work for that? If I want to create a new initrd.img for the new kernel without actually installing the kernel, how can I do it? BTW, can someone clarify initramfs? Will it be useful for my scenario?
Creating an initrd doesn't have anything to do with installing a kernel. All you do is create a file structure for the initrd, copy the required files, write the init script and package all of that into a cpio archive. I used the instructions in the Gentoo Wiki to make my initrd. Some distributions provide tools to generate initrds, and for that you will have to name your distro. For example, Arch has mkinitcpio. initramfs is just another (newer) implementation of the initial ramdisk. I don't know for sure, but I think modern distributions all use initramfs. When you see "initrd", it may be a shorthand for "initial ramdisk", and thus it covers both initrd and initramfs.
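A bare-bones sketch of those steps (the paths and the trivial init script are illustrative, and a real initrd would also need a shell binary and the modules your root filesystem requires; the cpio invocation is guarded in case cpio isn't installed):

```shell
# 1. Create the file structure and a minimal init script.
mkdir -p /tmp/initrd-root/bin /tmp/initrd-root/proc
cat > /tmp/initrd-root/init <<'EOF'
#!/bin/sh
mount -t proc none /proc
exec /bin/sh
EOF
chmod +x /tmp/initrd-root/init

# 2. Package it as a gzipped newc-format cpio archive (the format the
#    kernel expects for an initramfs).
if command -v cpio >/dev/null; then
    ( cd /tmp/initrd-root && find . | cpio -o -H newc | gzip ) > /tmp/initrd.img
else
    echo "cpio not installed; skipping archive step"
fi
```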
Creating new initrd without installing kernel
1,624,558,154,000
I would like to study the flow of some Linux device drivers and some minimal kernel flow (threading, context switching and interrupt management). How can I debug the Linux kernel? What are the basic steps for doing that? Recently I successfully compiled and installed a new kernel (2.6.34.7) on my machine running the 2.6.29 kernel.
It depends on what you really need. The simple printk() function is probably going to be OK for the beginning. There is also the /proc interface you can use to get useful information from the kernel. If you need something more complicated, use KGDB (the kernel debugger).
Kernel debugging
1,624,558,154,000
I would like to know how much CPU / memory my current iptables rules consume. I have tried looking in ps and htop, but even with kernel threads displayed and did not see anything related to iptables. I am using the conntrack module with these module-specific settings: xt_recent.ip_pkt_list_tot=1 xt_recent.ip_list_tot=4096. I think 4096 is quite high. And then, in my iptables configuration, I am using two kinds of block lists: BLACKLIST and PORTSCAN. -A INPUT -i eth0 -p icmp -j ACCEPT -A INPUT -i eth0 -s 1.2.3.4/32 -j ACCEPT -A INPUT -i eth0 -m recent --rsource --name BLACKLIST --seconds 14400 --update -j DROP -A INPUT -i eth0 -p tcp -m tcp --dport 25 -j ACCEPT -A INPUT -i eth0 -m recent --rsource --name PORTSCAN --seconds 3600 --update -j DROP -A INPUT -i eth0 -p udp -m udp --dport 5060 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 5061 -j ACCEPT -A INPUT -i eth0 -p udp -m udp --dport 5062:5100 -j ACCEPT -A INPUT -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -i eth0 -m recent --rsource --name PORTSCAN --set -j DROP -A INPUT -i eth0 -j DROP -A INPUT -j DROP I am experiencing network problems on the server, where I suspect my iptables rules could play a role. For instance: My ssh sessions are being dropped quite often. Ping reports 0.2% packet loss when I am connecting on allowed ports, ie 5060 it takes noticeably longer when PORTSCAN has many items, as compared when it is empty What would be the best way to troubleshoot this issue? is there some optimization I could do to my iptables rules? How can I see how much of my CPU is being consumed by iptables ?
Linux kernel processes: Many kernel functions, iptables processing included, run at the kernel level as kworker tasks; they are visible in task managers like top. As mentioned in the comments, you can compute the CPU and memory usage by comparing the total resource usage with and without the iptables rules loaded. Note that ipset already consumes memory even if you do not use it in a rule. Kworker is a placeholder process for kernel worker threads, which perform most of the actual processing for the kernel, especially in cases where there are interrupts, timers, I/O, etc. These typically correspond to the vast majority of any "system" time allocated to running processes. It is not something that can be safely removed from the system in any way, and it is completely unrelated to desktop applications (except when those programs make system calls, which may require the kernel to do something). In other words, kworker means a Linux kernel process doing "work" (processing system calls). You can have several of them in your process list: kworker/0:1 is the one on your first CPU core, kworker/1:1 the one on your second, etc. All these kernel processes are started as children of the kthreadd process in kernel space. Parent process: The process ID of kthreadd is 2, and these kernel workers can be listed with: pstree 2 -l -p # or ps --ppid 2 -u # or ps --ppid 2 -o pid,user,%mem,command,time,etime,cpu,pcpu,nice,pcpu,vsz That last one can be used with a bash + cron script to watch for changes... alternatively, for a direct timed analysis, perf can be used (apt-get install linux-tools-common linux-tools-3.11.0-15-generic) # Record 10 seconds of backtraces on all your CPUs: sudo perf record -g -a sleep 10 # Analyse your recording: sudo perf report Navigate the call graph with ←, →, ↑, ↓ and Enter. Links: 1, 2, 3, 4, 5, 6.
How to monitor the performance of "iptables" kernel module?
1,624,558,154,000
I have been playing around with kernel programming for a while and want to create a simple data-acquiring interface with some custom hardware. For portability and reusability, I do the whole thing on my Raspberry Pi. The challenging part of the project is having a high-speed ADC (parallel) connected to GPIOs and having a kernel module that uses the hardware interrupt from the ADC to acquire each sample and store it inside a buffer which is then accessible via a character device. My current setup (that works) is as follows: I have a userspace C program that is controlling my hardware through SPI. If I send the required command, it starts acquiring analogue data and sends them to the ADC. Whenever the ADC finishes a conversion, it puts the corresponding signal 'low' on a GPIO and I get an interrupt inside the kernel module (bound to that GPIO). The ISR collects the value of 12 other GPIOs (it's a 12-bit ADC) and puts it into a buffer that is then accessed through /dev/mydevice. I have another separate userspace program that runs a never-ending while loop, reading from /dev/mydevice and in turn writing into 'out_data.dat' (a userspace file). With this crude setup (2 userspace programs and the kernel module loaded) I can write over 130 000 samples into my file per second (without missing anything). I now want to see how much faster I can make it; there are 2 things to consider: Is the setup I have outlined above the 'usual' way something like this would be done? I read everywhere that direct file I/O is not advised from the kernel, so I am not doing it. Surely though, it should be possible to write it into some "permanent" location during the ISR. This seems to me like a common problem: trying to get data from some hardware into the computer using interrupts. Without changing my setup above, is there any way to disable other interrupts to make it as smooth as possible? During the data acquisition I do not really need anything else, only some sort of way to stop it. 
Any other interrupts (wireless, monitor refresh etc...) can be disabled as data acquisition is only to be run for a few minutes. Afterwards, everything will resume and more demanding python code can be run to analyze and visualize the data (at least that's my simple view of it).
For the userspace data collection program, what is wrong with an infinite loop? As long as you are using the poll system call, it should be efficient: https://stackoverflow.com/questions/30035776/how-to-add-poll-function-to-the-kernel-module-code/44645336#44645336

Permanent data storage

I'm not sure what the best way to do it is. Why don't you just write to a file from userland on each poll? I suppose your concern is that if too much data arrives, data will be lost, is that so? But I doubt the limiting factor would be kernel-to-userland communication in that case; more likely it is the slowness of the permanent storage device, so doing it in the kernel won't make any difference, I think. In any case, the kernel-only approach has a high-profile question at: https://stackoverflow.com/questions/1184274/how-to-read-write-files-within-a-linux-kernel-module and I don't think you will get a better solution here.

Disable interrupts

Are you sure that it would make any difference, especially considering where the bottleneck is likely to be? I would expect that if your device is producing a large number of interrupts, those would dominate any other interrupts anyway. Is it worth risking messing up the state of other hardware? Do the specs of your hardware suggest that it could physically provide a much larger data bandwidth than what you currently achieve? I don't know how to do it myself, but if you want an answer, your best bet is to ask a separate question titled "How to disable all interrupts from a Linux kernel module?". LDD2 mentions the cli() function http://www.xml.com/ldd/chapter/book/ch09.html but it has since been deprecated: https://notes.shichao.io/lkd/ch7/#no-more-global-cli That text then suggests local_irq_disable and local_irq_save. I would also try to hack it up with whatever method you find to disable the interrupts, and see if it gets any more efficient, before looking further for a nicer method.
On an emulator, a quick:

    static int myinit(void)
    {
        unsigned long flags;

        pr_info("hello init\n");
        local_irq_save(flags);
        return 0;
    }

fails with:

    returned with disabled interrupts

apparently coming from v4.16 do_one_initcall, so there is specialized error handling for that! I then tried naively doing it from a worker thread:

    static int work_func(void *data)
    {
        unsigned long flags;

        local_irq_save(flags);
        return 0;
    }

    static int myinit(void)
    {
        kthread = kthread_create(work_func, NULL, "mykthread");
        wake_up_process(kthread);
        return 0;
    }

but even then I can't observe any effect, so the interrupts must be getting re-enabled by something else, as can be inferred from:

    watch -n 1 'grep i8042 /proc/interrupts'

which keeps updating with mouse/keyboard interrupts. The same happens from other entry points such as fops, or if I try a raw asm("cli"). We will need some more educated approach.
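If the goal is just to keep the acquisition CPU free of competing interrupts, a gentler alternative to disabling them globally is CPU isolation. Note this only applies to multi-core boards (the original Raspberry Pi is single-core); the parameter names below are real kernel command-line options, but the CPU numbers are illustrative, and irqaffinity= only exists on newer kernels (on older ones, write masks to /proc/irq/*/smp_affinity instead):

```shell
# Appended to the kernel command line in the bootloader config:
#   isolcpus=3 irqaffinity=0-2
# keeps CPU 3 out of the general scheduler and default IRQ routing.
# Then pin the collector there, and route only the ADC's IRQ (number
# hypothetical) to that CPU:
#   taskset -c 3 ./collector
#   echo 8 > /proc/irq/57/smp_affinity    # mask 0x8 = CPU 3
```

This keeps the rest of the system (and its hardware state) intact while still giving the acquisition path a quiet CPU.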
Saving data from kernel module into userspace
1,624,558,154,000
According to Documentation/x86/x86_64/mm.txt, the layout of kernel space in 64-bit Linux should look like this:

    0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
    hole caused by [48:63] sign extension
    ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
    ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
    ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
    ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
    ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
    ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
    ... unused hole ...
    ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
    ffffffffa0000000 - fffffffffff00000 (=1536 MB) module mapping space

But what does the kernel-space layout look like in 32-bit Linux? What descriptions I can find are all about ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM, but these don't tell me details like where the physmap is, or where kernel code and kernel modules live. What exactly are they like? Thanks for any help :)
Well, now I think I can give myself an answer :) In short, in 32-bit Linux some kernel regions share address space to avoid wasting the limited kernel virtual address range (e.g. modules live in the vmalloc arena, and the kernel image sits inside the physmap), so the layout of kernel space is not as clear-cut as in 64-bit. The layout of kernel space in 32-bit Linux on x86 should look like this (with some differences from AArch32: http://www.arm.linux.org.uk/developer/memory.txt):

    fixmap  : 0xffc57000 - 0xfffff000 (3744 kB)
    pkmap   : 0xff800000 - 0xffa00000 (2048 kB)
    vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
    lowmem  : 0xc0000000 - 0xf77fe000 ( 887 MB)
      .init : 0xc0906000 - 0xc0973000 ( 436 kB)
      .data : 0xc071ae6a - 0xc08feb78 (1935 kB)
      .text : 0xc0400000 - 0xc071ae6a (3179 kB)

According to the zone definitions in 32-bit Linux, ZONE_HIGHMEM covers the fixmap, pkmap and vmalloc regions (kernel modules are loaded in the vmalloc region). The lowmem area consists of ZONE_DMA and ZONE_NORMAL; it is mapped linearly from physical memory, and the so-called physmap refers exactly to this region. The .init, .data and .text sections inside lowmem belong to the kernel image, which is a separate area in 64-bit Linux.
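The region sizes quoted above can be sanity-checked with plain shell hex arithmetic; a throwaway helper (name is mine, nothing kernel-specific):

```shell
# span_mb START END -- size of the range [0xSTART, 0xEND) in whole MB
span_mb() { echo $(( (0x$2 - 0x$1) / 1024 / 1024 )); }

span_mb c0000000 f77fe000   # lowmem  -> 887
span_mb f7ffe000 ff7fe000   # vmalloc -> 120
```

Both match the figures the kernel printed in the layout dump above.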
What is the layout of kernel space in 32bit linux?
1,624,558,154,000
I'm using Linux in embedded systems, and want to configure the system to automatically reboot after a kernel panic. However, when the system comes back up, it's important for me to detect and log the fact that the kernel panicked (rather than, say, the user toggling the power switch). I could configure a kernel core dump on panic, and check for the dump on restart, but that seems like it could cause trouble if the file system isn't A-OK (plus I've been trying to set up kernel core dumping and have yet to succeed). Any suggestions?
If you run customized kernels for your embedded hardware and have some hardware register/bit available, you may be able to customize the kernel crash code to set a flag in that hardware location, which you'd then check after reboot. If not, AFAIK your only chance is to configure your kernel core-dumping facility. Indeed, it's risky to write to a 'live' filesystem, but you can use a swap partition or a small dedicated partition instead.
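A third option, on kernels with pstore support (CONFIG_PSTORE plus a backend such as ramoops, which suits embedded boards with a bit of persistent RAM), is that the panic-time console log survives the reboot and appears under /sys/fs/pstore, sidestepping the unreliable-filesystem problem entirely. A hedged sketch of a boot-time check; the helper name is mine, and the directory is overridable purely so the logic can be exercised without real crash hardware:

```shell
# Report whether the previous boot left pstore crash records behind.
panic_on_last_boot() {
    dir=${PSTORE_DIR:-/sys/fs/pstore}   # overridable for testing
    if ls "$dir"/dmesg-* >/dev/null 2>&1; then
        echo "previous boot ended in a panic/oops"
    else
        echo "clean boot"
    fi
}
```

Run it from an early init script, log the result, then delete the records so the next boot starts clean.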
How do I detect that my system has auto-rebooted after a kernel panic?
1,624,558,154,000
With the real-time executive approach, a small real-time kernel coexists with the Linux kernel. This real-time core uses a simple real-time executive that runs the non-real-time Linux kernel as its lowest priority task and routes interrupts to the Linux kernel through a virtual interrupt layer. All interrupts are initially handled by the core and are passed to standard Linux only when there are no real-time tasks to run. Real-time applications are loaded in kernel space and receive interrupts immediately, giving near hardware speeds for interrupt processing. I wonder how to test this in ordinary desktop Linux, e.g. Ubuntu? If it's even possible?
This sounds very much like the approach taken by RTLinux, which still seems to be available but not commercially supported. That being said, there's a community unto itself about real-time Linux concepts, and the CONFIG_PREEMPT_RT patch would seem to enable the functionality you're looking for. As with all kernel hacking, do so at your own risk. There's a HOWTO available to help you get started.
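To see what a running system already offers, a small probe can help; note the /sys/kernel/realtime node is specific to PREEMPT_RT-patched kernels, and the classification strings below are my own:

```shell
# Classify the running kernel's preemption capability.
rt_status() {
    if [ -e /sys/kernel/realtime ]; then
        echo "PREEMPT_RT"          # full RT patch applied
    elif uname -v | grep -q PREEMPT; then
        echo "preemptible"         # CONFIG_PREEMPT: soft real-time at best
    else
        echo "stock"               # voluntary or no preemption
    fi
}
rt_status
```

On a stock Ubuntu desktop kernel you would typically see "preemptible" or "stock"; only after installing an RT-patched kernel does the first branch fire.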
Real-time executive approach, can be run in desktop Linux?
1,624,558,154,000
I'm compiling a flavor of the Linux kernel based on the default configuration (for an ODROID system), with some additional features enabled. I want to automate this process so that I don't have to reselect the features again if I want to build a newer version of the kernel. I could have saved the whole .config file, but if the default configuration changes in the future release, then my .config file will get outdated. Is there some alternative to make menuconfig that will just take a set of features and enable them in an unattended fashion?
You can apply your current .config to a newer version of the kernel; the entries are tagged, and the make system will update the file appropriately without changing what you have. That's not a guarantee, of course; there may be some kind of incompatibility that requires a change. I can't recall ever noticing anything like this, though I usually go in short steps. You will probably be fine going from 2.6.x to 3.0, and from any version of 3.x to any higher version. However, you do have to run make menuconfig to perform this update: if you keep a copy of the original, run make menuconfig, change nothing and just save and exit, you will notice .config has changed. You can also run make oldconfig, which will step you through a (possibly long) list of new choices. I'm not sure what the policy is with respect to make menuconfig's automation of this process, but it seems that at least some new options compatible with your existing config are enabled, as modules where possible (the new .config is often substantially bigger). In any case, I recommend just running make menuconfig; again, you don't have to change anything. I've never had a problem this way, or at least not one serious enough for me to remember. You may be interested in "Where to start with configuring, compiling and installing a custom Linux kernel?".
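For fully unattended runs, `yes "" | make oldconfig` accepts the default for every new symbol (newer trees also grew a `make olddefconfig` target that does the same without the pipe). If you would rather keep only your feature deltas under version control, the idea of overlaying a small fragment onto a base config can be sketched in a few lines of awk. This is a toy stand-in for the kernel's own scripts/kconfig/merge_config.sh; it only handles `CONFIG_FOO=...` lines and ignores `# CONFIG_FOO is not set` comments:

```shell
# merge_config base.config fragment.config
# Prints the base config with any symbol redefined in the fragment removed,
# then appends the fragment's settings. Run `make oldconfig` afterwards to
# let Kconfig validate dependencies.
merge_config() {
    awk '
        NR == FNR {                                  # first file: the fragment
            if ($0 ~ /^CONFIG_/) { split($0, a, "="); seen[a[1]] = 1 }
            frag[n++] = $0
            next
        }
        { split($0, a, "="); if (!(a[1] in seen)) print }   # base minus overrides
        END { for (i = 0; i < n; i++) print frag[i] }
    ' "$2" "$1"
}
```

Usage: `merge_config /boot/config-$(uname -r) my-features.frag > .config`, then `yes "" | make oldconfig`.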
How to automate configuration of the Linux kernel build
1,624,558,154,000
I have a remote server which has some issues (seemingly hardware-related), which means that it logs KVM errors and then some time later becomes unresponsive and locks up. There is often an early indication of the failure in the dmesg log output, so I would like to know: is there a Debian utility which can send me (daily?) digests of the dmesg (/var/log/kern.log) output?
In the past I've used logwatch to do exactly this. Directions on customizing it are here, titled: HOWTO-Customize-LogWatch.

Installation

    $ sudo apt-get install logwatch

Setup

Logwatch runs daily but can be configured to run more frequently. It's typically kicked off from a crontab entry:

    $ ls -l /etc/cron.daily/0logwatch
    -rwxr-xr-x 1 root root 265 Feb 28  2011 /etc/cron.daily/0logwatch

Customizations can go in /etc/logwatch/conf/logwatch.conf. To email yourself the daily summary:

    MailTo = [email protected]

If you want to add additional rules around a particular log file, you can copy the existing rule file and modify it as needed:

    $ cp /usr/share/logwatch/default.conf/logfiles/syslog.conf \
         /etc/logwatch/conf/logfiles/

Take a look at this section of the conf file; you can add additional rules here:

    *ExpandRepeats
    *RemoveService = talkd,telnetd,inetd,nfsd,/sbin/mingetty
    *OnlyHost
    *ApplyStdDate

Going further

I'd consult this tutorial titled: Monitor System Logs with Logwatch on Debian 5 (Lenny) for more details if you'd like to expand the monitoring beyond the stock things that logwatch does out of the box.
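If pulling in logwatch feels heavy, the core idea is just a filtered daily mail job; a bare-bones stand-in is sketched below. The pattern list is a starting point, not exhaustive ("kvm" is included only because of the errors described in the question), and the mail step is left to whatever MTA you have installed:

```shell
# Print the alarming lines from a kernel log file.
kernel_digest() {
    grep -iE 'panic|oops|bug:|call trace|i/o error|kvm' "$1"
}
# e.g. from /etc/cron.daily:
#   kernel_digest /var/log/kern.log | mail -s "kernel digest" you@example.com
```

This obviously lacks logwatch's deduplication and date-range handling, but it is easy to audit on a box you already suspect of failing.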
dmesg email digest
1,624,558,154,000
I'm using Debian testing, and I have problems with MiniDLNA on kernels > 3.2. The DLNA client works for about 30 min; after that it loses the connection to the minidlna server and can't discover the server again. But if I boot an old kernel (3.2), everything works fine. The time that minidlna works (30 min) may be related to the notify_interval parameter, whose default is 15 min. Problem solved: it was related to my network configuration. My network config:

    iface br0 inet static
        address 192.168.5.2
        netmask 255.255.255.0
        gateway 192.168.5.1
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0

Kernel 3.5 added a multicast_querier toggle and disabled queries by default, which broke DLNA on my bridge interface. Now I just enable multicast_querier and everything works as before:

    # echo 1 > /sys/class/net/br0/bridge/multicast_querier
It was related to my network configuration. My network config:

    iface br0 inet static
        address 192.168.5.2
        netmask 255.255.255.0
        gateway 192.168.5.1
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0

Kernel 3.5 added a multicast_querier toggle and disabled queries by default, which broke DLNA on my bridge interface. Now I just enable multicast_querier and everything works as before:

    # echo 1 > /sys/class/net/br0/bridge/multicast_querier
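To make the fix survive reboots, one option (assuming Debian's ifupdown; newer bridge-utils may also accept a dedicated bridge_mcquerier stanza, but a post-up hook works everywhere) is to add it to the bridge definition in /etc/network/interfaces:

```shell
iface br0 inet static
    address 192.168.5.2
    netmask 255.255.255.0
    gateway 192.168.5.1
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    post-up echo 1 > /sys/class/net/br0/bridge/multicast_querier
```

The post-up line runs after the bridge is brought up, so the sysfs node is guaranteed to exist by then.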
What changed between kernels 3.2 and 3.9 that affect MiniDLNA?
1,624,558,154,000
I am trying to disassemble a compound file that consists of several parts, one of which is an uncompressed kernel sandwiched between several other files. I am trying to find the exact length of the kernel part which is proving difficult. Is there some indication in the vmlinux header of how much data the bootloader is supposed to read before executing, or is it assumed that all contents of the vmlinux file are to be loaded at handoff from the bootloader?
The quick answer is "no", but if this is an ELF image then with some hacking you can probalby find kernel. See the readelf hack below. The bootloader is responsible to know the format of whatever file in which the kernel and root filesystem reside. This includes knowing the size of the kernel file, in whatever fomat it is. On PowePC and Blackfin, the bootloader is responsible for decompressing the entire kernel if it is compressed, and writing it to it final location in RAM. On ARM, the kernel could be self-decompressing and the bootloader only needs to copy the raw kernel file to a convenient place in RAM and start execution. If the kernel is self-decompressing, then the symbols that indicate the size of the compressed kernel file might or might not be somewhere in the decompression code at the begining of the file, depending on the decompression algorithm used, but you have no way of knowing where without a linker map for the specific kernel build. Certainly the bootloader has no way of knowing. The uncompressed kernel code itself is bracketed by two symbols, _stext and _end whose addresses are the start and end of the kernel itself, but do not include the extent of any included initramfs, if an initramfs has been linked into the kernel binary. The extent of the initramfs is set by the linker in two kernel symbols, __initramfs_start and __initramfs_end. Bootloaders generally do not have the capability to read the kernel symbol table (it's in the System.map file), and without this capability they would have no way of knowing where the _end and __initrams_end symbols are in the kernel file. That is, the position of the symbols is not a fixed offset from the start of the binary file. It is determined at link time and can vary depending on the kernel build configuration. For an uncompressed kernel in ELF format you can probably identify the start of the vmlinux file by looking for the ELF header (177 E L F in the od -c dump of the compound file). 
You can then run readelf -e or objdump -h on the remainder of the file from this point to find the section with the highest file offset (.shstrtab). Add the section size to this offset and that brings you to the end of vmlinux. I tested this method using an unstripped PPC vmlinux and got a size that exactly matched the standalone vmlinux file size. After stripping the kernel, this method gave a result that was 1283 bytes short of the stripped image size. Embedded systems usually use a file format such as mkimage to pack the kernel, rootfs, device tree and other components. U-Boot, for example, is mkimage-aware, so it knows where the kernel binary begins and ends inside the mkimage file, whether the kernel is compressed, and to what RAM address to write the kernel file.
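The readelf hack above can be scripted. This sketch (function name is mine) parses `readelf -S --wide` output from stdin and prints the largest Off+Size in decimal; the column handling assumes binutils' usual section-table layout, and as noted above it undercounts anything placed after the last section, such as the section header table itself in a stripped image:

```shell
# usage: readelf -S --wide vmlinux | elf_end_offset
elf_end_offset() {
    sed -n 's/^ *\[ *[0-9][0-9]*\] *//p' |      # keep section rows, drop the [Nr] column
    while read -r name type addr off size rest; do
        # skip anything whose Off/Size fields are not pure hex
        case "$off$size" in (*[!0-9a-fA-F]*|'') continue ;; esac
        end=$(( 0x$off + 0x$size ))
        if [ "$end" -gt "${max:-0}" ]; then max=$end; fi
        echo "$max"                             # survive the pipeline subshell
    done | tail -n 1
}
```

Compare the result against the offset where the ELF magic was found in the compound file to get the kernel part's length.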
Does the vmlinux header contain the length of the kernel image?
1,624,558,154,000
The kernel configuration contains an NLS_UTF8 option. It can be found under File systems → Native language support. What does it do? Its description maintains that it is needed for using a FAT or JOLIET CD-ROM filesystem. Is it necessary for an ext[234] filesystem?
FAT filenames (not file contents) are encoded in a country-specific manner; DOS called these encodings "codepages". They need to be present in the kernel so your console can correctly display the characters. This also applies to the UTF-8 encoding of the Unicode character set. This doesn't apply to the ext filesystems, though; read up here.
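As a concrete example, mounting a FAT volume with UTF-8 filenames is where the option comes into play: iocharset=utf8 is what pulls in NLS_UTF8. The device path and mount point below are illustrative, and note that for vfat specifically the kernel documentation steers you toward the utf8=1 mount option instead of iocharset=utf8:

```shell
# /etc/fstab
/dev/sdb1  /mnt/usb  vfat  iocharset=utf8,codepage=437  0  0
```

With NLS_UTF8 disabled in the kernel config, a mount using iocharset=utf8 fails, while ext2/3/4 mounts are unaffected since they store filenames as opaque byte strings.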
What does the CONFIG_NLS_UTF8 kernel option do?
1,624,558,154,000
I want to compile and install a kernel.org kernel onto a custom HDD volume, say /dev/sda5, instead of having it merged with my current Ubuntu's directories. I can find information about the configuration and compilation process all over the web, but there's no trace of how to put the kernel on a custom volume (different from the booted distro you're using at the moment you compile). What I'm asking for is like installing 2 different distros on 2 different volumes of 1 HDD; now think of my custom kernel as another distro.
You can compile a kernel anywhere you like, including your home directory. The only time directories outside the build tree are modified is when you make one of the install* targets. So, to build a kernel you'd do the obvious:

    cd $SOME_DIRECTORY
    tar -xjvf linux-$VERSION.tar.bz2
    cd linux-$VERSION
    make mrproper
    make menuconfig    # configure the kernel here
    # Now build it using all your CPU threads in parallel.
    make -j$(grep -c processor /proc/cpuinfo) bzImage modules

After you configure the kernel, it'll be built. At this point, you'll have a kernel binary (vmlinux) and a bootable kernel image under arch/$YOUR_ARCHITECTURE/boot/bzImage. If you're building a monolithic kernel, you're done. Copy the uncompressed file (vmlinux) or compressed file (bzImage) to your intended volume, configure the boot manager if you need to, and off you go. If you need to install modules, and assuming you've mounted your target volume on /mnt, you could say:

    INSTALL_MOD_PATH=/mnt \
    INSTALL_PATH=/mnt/boot \
    make modules_install install

This will copy the kernel image to /mnt/boot (the install target, via INSTALL_PATH) and the modules to /mnt/lib/modules/$VERSION (the modules_install target, via INSTALL_MOD_PATH). Please note, I'm oversimplifying this. If you need help building the kernel manually, you should read some of the documents in the kernel source tree's Documentation/ subdirectory. The README file also tells you how to build and install it in detail. Booting the kernel is a different story, though. Most modern distributions use an initial RAMdisk image which contains a ton of drivers for the hardware needed to bring up the rest of the kernel (block devices, filesystems, networking, etc). This process won't make that image. Depending on what you need to do (what do you need to do?), you can use an existing one or make a new one using your distribution's toolchain. You should check the documentation on update-initramfs. There are other issues too, though. Using the standard toolchain you can't compile a kernel for a different architecture or sub-architecture.
Note that in some cases, even kernels compiled on a particular type of x86 box won't work on certain other types of x86 boxes. It all depends on the combination of sub-architectures and the kernel config. Compiling across architectures (e.g. building an ARM kernel on an x86 machine) is altogether out of the question unless you install an appropriate cross-compilation toolchain. If you're trying to rescue another installation or computer, however, a rescue disk might come in handier than building a custom kernel like that. One more thing: if you're trying to build a kernel for another computer which boots, is the same architecture as the one you're compiling on, and runs a Debian or Debian-like OS (Ubuntu counts), you could install the kernel-package package (sudo aptitude install kernel-package). Then unpack the kernel, cd to the root of the source tree, and say:

    sudo CONCURRENCY_LEVEL=$(grep -c processor /proc/cpuinfo) \
         make-kpkg --initrd binary-arch

(The CONCURRENCY_LEVEL assignment goes after sudo because sudo normally resets the environment.) This will apply any necessary patches, configure the kernel, build it and package it as a .deb package (a few packages, actually). All you need to do is install it on your target system and you're done.
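As an aside, mainline kernels of this era also carry their own in-tree Debian packaging target, make deb-pkg (see scripts/package/ in your tree), which is handy when the distro's make-kpkg lags behind. Either way the parallel job count can be derived portably instead of hardcoded; a small sketch (only the CPU-count fallback chain is actually exercised here, the make invocation is shown as a comment since it must run inside a kernel source tree):

```shell
# Pick a -j value without hardcoding the CPU count.
jobs=$(getconf _NPROCESSORS_ONLN 2>/dev/null || grep -c ^processor /proc/cpuinfo)
echo "would build with: make -j$jobs deb-pkg"
# then, inside the kernel source tree:
#   make -j"$jobs" deb-pkg
```

The resulting linux-image .deb installs on the target system with dpkg -i, just like the make-kpkg output.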
Compiling and installing a kernel.org kernel to a custom volume on disk
1,624,558,154,000
I recently migrated an existing Debian system to new hardware, a Core i3 chip on an Intel Sandy Bridge motherboard. I'm experiencing a very strange problem: when I ping my router, about 50% of the packets are dropped. I spent some time testing and can verify it's not the router; it works fine with multiple different machines, even when connected to the same Ethernet port. The pings that do come back have very low latency, less than 1 ms, as you'd expect from a router sitting across the room. I am using kernel 2.6.39 on Debian stable (I got the kernel from backports). Other than the kernel and a few related packages needed to get it going, the system is 100% Debian 6.0. The kernel detects the network hardware and loads the e1000e driver on boot. There is nothing strange in the logs. One other thing: in spite of the problem, the networking "works", if you can call it that. What I mean is I can also ping yahoo and google successfully. Of course I also lose ~50% of the packets in those cases, but some packets do come back. The other devices connected to this router are all working fine; I am typing this on a machine connected to the same router. I am relatively experienced with Linux, but not sure where to even start with this issue. The only other thing I can think of is that the router is 10/100, not gigabit. Obviously that shouldn't cause this issue, but maybe it's related? OTOH, I'm pretty sure the last machine had gigabit Ethernet too, and it was plugged into the same port on the same router. Yes, I've tried rebooting the router and the machine, multiple times. I'm hoping someone here will have an idea. UPDATE: @bdk makes some good suggestions... wish I had good news! :( I tried a bunch more things and got nowhere. I also grabbed some output from the system to include here. Sometimes when I try to ping, it can't find the host at all; if I try again, it connects. I assume this is just the first ping(s) failing.
@bdk, the failures seem intermittent; at least I cannot see a pattern. Here are the relevant lines from dmesg, am I missing some red flag?

    [    1.171187] e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10-k2
    [    1.171190] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
    [    1.171225] e1000e 0000:00:19.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
    [    1.171236] e1000e 0000:00:19.0: setting latency timer to 64
    [    1.171339] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X
    [    1.460976] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GB/s:Width x1) e0:69:95:dd:5d:d9
    [    1.460979] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
    [    1.461015] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: FFFFFF-0FF
    [   48.475222] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X
    [   48.530979] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X
    [   50.120859] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
    [   50.120863] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO

Things I tried that did not help:

- installed linux-firmware-free and linux-firmware-nonfree, in case there was a better firmware available (there wasn't, or at least the kernel didn't find it)
- played with ASPM in the BIOS; others have reported ASPM causing problems for e1000e Ethernet (didn't help)
- completely disabled pcie_aspm in the kernel, in case that was causing the problem (it wasn't, but disabling it did introduce new problems)

mii-tool is apparently not supported by this chip? Is there a special Intel tool to use instead? When I took a look at tcpdump, things started looking more grim: not only are some of the packets not making it back, some aren't even making it out!
    14:25:01.162331 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 1, length 64
    14:25:02.168630 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 2, length 64
    14:25:02.228192 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 2, length 64
    14:25:07.236359 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 3, length 64
    14:25:07.259431 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 3, length 64
    14:25:31.307707 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 9, length 64
    14:25:32.316628 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 10, length 64
    14:25:33.324623 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 11, length 64
    14:25:33.349896 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 11, length 64
    14:25:43.368625 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 17, length 64
    14:25:43.394590 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 17, length 64
    14:26:18.518391 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 30, length 64
    14:26:18.537866 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 30, length 64
    14:26:19.519554 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 31, length 64
    14:26:20.518588 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 32, length 64
    14:26:21.518559 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 33, length 64
    14:26:21.538623 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 33, length 64
    14:26:37.573641 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 35, length 64
    14:26:38.580648 IP debian.local > 74.125.224.80: ICMP echo request, id 2334, seq 36, length 64
    14:26:38.602195 IP 74.125.224.80 > debian.local: ICMP echo reply, id 2334, seq 36, length 64

Notice the request sequence: it goes 1, 2, 3... 9??! That can't be good.
I know Sandy Bridge is still relatively new, but Linux does work... right? Could this be bad hardware? No way... right? sigh.... maybe I should just go back to the old system.
Apparently this issue is already known to the Ubuntu folks. Got to hand it to 'em! For starters, the quick workaround: you can get your system running again by slowing the Ethernet down to 10 Mbps like this:

    sudo ethtool -s eth0 speed 10 autoneg off

(Note that mii-tool does NOT work with this Ethernet chip.) I actually don't have a confirmed fix yet, but apparently no one does. I chose to answer this question because the nature of this problem is something people need to be aware of. According to the Ubuntu bug report, this is a hardware fault that randomly affects only some recent Intel Ethernet chips. Not some models, but certain individual chips, meaning there's no way to tell which ones are good and which aren't. At a minimum, the 82579V (my chip) and the 82579LM are affected; the Ubuntu team has confirmed those. Who knows how many other models are affected. It may be wise to avoid motherboards that use Intel Ethernet chips, at least until the extent of the problem is fully understood. So it appears this actually is a hardware bug after all. There are rumors that you can download, compile, and install the latest Intel driver, which contains a permanent software workaround. The download is here; compiling and installing are left as an exercise for the reader. I'm curious what this software workaround is, and whether it permanently reduces any functionality or performance. There must be some tradeoff, right? Unfortunately I was unable to experiment with this myself, since I needed to send this motherboard back within the return window. The Ubuntu bug reports can be found here and here. Many thanks to the awesome Ubuntu team! They really do great things for Linux hardware compatibility. What surprised me most about this is that I was apparently among the first to come across this issue. The Ubuntu bug reports above are still active as of this writing. Is no one using Linux on Sandy Bridge yet? Am I the only person left on the planet with 10/100 network hardware?
Perhaps the most likely reason is that the Intel ethernet hardware problem only recently manifested itself. -- Eric
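To persist the ethtool workaround across reboots on Debian, a post-up hook in /etc/network/interfaces does the trick (this assumes ifupdown and the ethtool package; the dhcp stanza is illustrative, adjust to your static setup as needed):

```shell
allow-hotplug eth0
iface eth0 inet dhcp
    post-up ethtool -s eth0 speed 10 autoneg off
```

The hook runs each time the interface comes up, so the link is forced back to 10 Mbps before any traffic flows.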
Debian 6.0 system with 2.6.39 kernel dropping packets, sandy bridge hardware