1,356,020,314,000
I accidentally overwrote my /dev/sda partition table with GParted (full story on AskUbuntu). Since I haven't rebooted yet and my filesystem is still perfectly usable, I was told I might be able to recover the partition table from in-kernel memory. Is that possible? If so, how do I recover it and restore it?
Yes, you can do this with the /sys filesystem. /sys is a fake filesystem dynamically generated by the kernel and kernel drivers. In this specific case you can go to /sys/block/sda, where you will see a directory for each partition on the drive. There are two specific files in those folders that you need, start and size: start contains the offset from the beginning of the drive, and size is the size of the partition (both in 512-byte sectors). Just delete the partitions and recreate them with the exact same starts and sizes as found in /sys. For example, this is what my drive looks like:

Device     Boot     Start        End     Blocks  Id  System
/dev/sda1  *         2048     133119      65536  83  Linux
/dev/sda2  *       133120  134340607   67103744   7  HPFS/NTFS/exFAT
/dev/sda3       134340608  974675967  420167680  8e  Linux LVM
/dev/sda4       974675968  976773167    1048600  82  Linux swap / Solaris

And this is what I have in /sys/block/sda:

sda1/  start: 2048       size: 131072
sda2/  start: 133120     size: 134207488
sda3/  start: 134340608  size: 840335360
sda4/  start: 974675968  size: 2097200

I have tested this and verified that the information remains accurate after modifying the partition table on a running system.
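The procedure above can be scripted so the numbers are captured before anything else goes wrong. A minimal sketch, assuming the disk is sda (the show_parts helper name is made up for illustration; adjust the device name for your drive):

```shell
show_parts() {
    # $1: sysfs directory of the disk, e.g. /sys/block/sda
    # Prints each partition's start offset and size, in 512-byte sectors.
    disk=$(basename "$1")
    for part in "$1/$disk"*; do
        [ -d "$part" ] || continue
        printf '%s: start=%s size=%s\n' \
            "$(basename "$part")" "$(cat "$part/start")" "$(cat "$part/size")"
    done
}

show_parts /sys/block/sda
```

Save that output somewhere off the affected disk; the start values are exactly what fdisk needs when recreating the partitions.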
How to read the in-memory (kernel) partition table of /dev/sda?
1,356,020,314,000
How can I detect whether isolcpus is activated, and on which CPUs, when for example you connect to a server for the first time? Conditions: without spawning any process to see where it gets migrated. The use case is that isolcpus=1-7 on a 6-core i7 seems not to activate isolcpus at boot, and I would like to know whether it's possible, from /proc, /sys or any kernel internals readable from userspace, to get a clear status of the activation of isolcpus and which CPUs are concerned. Or even to read the active settings of the scheduler, which is the first thing affected by isolcpus. Consider that the uptime is so large that dmesg no longer shows the boot log, so boot-time errors cannot be spotted there. Basic answers like "look at the kernel cmdline" will not be accepted :)
What you are looking for should be found in this virtual file:

/sys/devices/system/cpu/isolated

and the reverse in:

/sys/devices/system/cpu/present

(Thanks to John Zwinck.) From drivers/base/cpu.c we see that the source displayed is the kernel variable cpu_isolated_map:

static ssize_t print_cpus_isolated(struct device *dev, ...)
{
        n = scnprintf(buf, len, "%*pbl\n",
                      cpumask_pr_args(cpu_isolated_map));
        ...
}
static DEVICE_ATTR(isolated, 0444, print_cpus_isolated, NULL);

and cpu_isolated_map is exactly what gets set by kernel/sched/core.c at boot:

/* Setup the mask of cpus configured for isolated domains */
static int __init isolated_cpu_setup(char *str)
{
        int ret;

        alloc_bootmem_cpumask_var(&cpu_isolated_map);
        ret = cpulist_parse(str, cpu_isolated_map);
        if (ret) {
                pr_err("sched: Error, all isolcpus= values must be between 0 and %d\n",
                       nr_cpu_ids);
                return 0;
        }
        return 1;
}

But as you observed, someone could have modified the affinity of processes, including daemon-spawned ones, cron, systemd and so on. If that happens, new processes will be spawned inheriting the modified affinity mask, not the one set by isolcpus. So the above will give you isolcpus as you requested, but that might still not be helpful. Supposing you find that isolcpus has been issued but has not "taken", this unwanted behaviour could stem from some process realizing that it is bound to CPU 0 only, mistakenly believing it is in monoprocessor mode, and helpfully attempting to "set things right" by resetting the affinity mask. If that is the case, you might try to isolate CPUs 0-5 instead of 1-6, and see whether this happens to work.
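A quick way to check both files from the answer at once; a small read-only sketch (the show_cpus helper name is made up for illustration):

```shell
show_cpus() {
    # $1: sysfs CPU directory, normally /sys/devices/system/cpu
    for f in isolated present; do
        if [ -r "$1/$f" ]; then
            printf '%s: %s\n' "$f" "$(cat "$1/$f")"
        else
            printf '%s: (not available on this kernel)\n' "$f"
        fi
    done
}

show_cpus /sys/devices/system/cpu
```

On a machine booted with isolcpus=1-7 you would expect "isolated: 1-7"; an empty isolated file means the option never took effect.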
how to detect if isolcpus is activated?
1,356,020,314,000
I woke up this morning to a notification email with some rather disturbing system log entries.

Dec 2 04:27:01 yeono kernel: [459438.816058] ata2.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x6 frozen
Dec 2 04:27:01 yeono kernel: [459438.816071] ata2.00: failed command: WRITE FPDMA QUEUED
Dec 2 04:27:01 yeono kernel: [459438.816085] ata2.00: cmd 61/08:00:70:0d:ca/00:00:08:00:00/40 tag 0 ncq 4096 out
Dec 2 04:27:01 yeono kernel: [459438.816088]          res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
Dec 2 04:27:01 yeono kernel: [459438.816095] ata2.00: status: { DRDY }

(the above five lines were repeated a few times at a short interval)

Dec 2 04:27:01 yeono kernel: [459438.816181] ata2: hard resetting link
Dec 2 04:27:02 yeono kernel: [459439.920055] ata2: SATA link down (SStatus 0 SControl 300)
Dec 2 04:27:02 yeono kernel: [459439.932977] ata2: hard resetting link
Dec 2 04:27:09 yeono kernel: [459446.100050] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 2 04:27:09 yeono kernel: [459446.314509] ata2.00: configured for UDMA/133
Dec 2 04:27:09 yeono kernel: [459446.328037] ata2.00: device reported invalid CHS sector 0

("reported invalid CHS sector 0" repeated a few times at a short interval)

I make full nightly backups of my entire system to an external (USB-connected) drive, and the above happened right in the middle of that backup run. (The backup starts at 04:00 through cron, and tonight's run logged completion just before 04:56.) The backup process itself claims to have completed without any errors. There are two internally connected SATA drives and two externally (USB) connected drives on my system; one of the external drives is currently dormant. I don't recall off the top of my head which physical SATA ports are used for which of the internal drives.

When googling I found the AskUbuntu question "Is this drive failure or something else?", which indicates that a very similar error occurred after 8-10 GB had been copied to a drive, but the actual failure mode was different, as that drive switched to a read-only state. The only real similarity is that I did add on the order of 7-8 GB of data to my main storage last night, which would have been backed up around the time that the error occurred. smartd is not reporting anything out of the ordinary on either of the internal drives. Unfortunately smartctl doesn't speak the language of the external backup drive's USB bridge, and simply complains about Unknown USB bridge [0x0bc2:0x3320 (0x100)]. Googling for that specific error was distinctly unhelpful. My main data storage as well as the backup is on ZFS, and zpool status reports 0 errors and no known data errors. Nevertheless I have initiated a full scrub on both the internal and external drives. It is currently slated to complete in about six hours for the internal drive (main storage pool) and 13-14 hours for the backup drive.

It seems that the next step should be to determine which drive was having trouble, and possibly replace it. The ata2.00 part probably tells me which drive was having problems, but how do I map that identifier to a physical drive?
I wrote a one-liner based on Tobi Hahn's answer. For example, if you want to know which device corresponds to ata3:

ata=3; ls -l /sys/block/sd* | grep $(grep $ata /sys/class/scsi_host/host*/unique_id | awk -F'/' '{print $5}')

It will produce something like this:

lrwxrwxrwx 1 root root 0 Jan 15 15:30 /sys/block/sde -> ../devices/pci0000:00/0000:00:1f.5/host2/target2:0:0/2:0:0:0/block/sde
Given a kernel ATA exception, how to determine which physical disk is affected? [duplicate]
1,356,020,314,000
I want to make "echo 1 > /sys/kernel/mm/ksm/run" persistent between boots. I know that I can edit /etc/sysctl.conf to make /proc filesystem changes persist, but this doesn't seem to work for /sys. How would I make this change survive reboots?
Most distros have some sort of rc.local script that you could use. Check your distro, as names and paths may vary; you would normally expect to find it under /etc.
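For the KSM knob from the question, the rc.local entry can be a one-liner; here is a guarded sketch (the apply_ksm helper name is made up for illustration, and /etc/rc.local itself is a distro-dependent path):

```shell
apply_ksm() {
    # $1: the KSM control file, normally /sys/kernel/mm/ksm/run
    # Write 1 only if the file exists and is writable, so rc.local
    # never fails on a kernel built without KSM.
    [ -w "$1" ] && echo 1 > "$1"
    return 0
}

# In /etc/rc.local (run as root at boot):
# apply_ksm /sys/kernel/mm/ksm/run
```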
Make changes to /sys persistent between boots
1,356,020,314,000
I am trying to run an update on FreeBSD 10 and I am being asked for the kernel sources:

===>>> Launching child to update lsof-4.89.b,8 to lsof-4.89.d,8
===>>> All >> lsof-4.89.b,8 (9/9)
===>>> Currently installed version: lsof-4.89.b,8
===>>> Port directory: /usr/ports/sysutils/lsof
===>>> This port is marked IGNORE
===>>> requires kernel sources
===>>> If you are sure you can build it, remove the IGNORE line in the Makefile and try again.
===>>> Update for lsof-4.89.b,8 failed
===>>> Aborting update

but sysinstall no longer exists:

sysinstall: not found

What is the new method of installing the kernel sources in FreeBSD 10? I thought bsdinstall, but it only tries to chop up my disk, which I do not want.
You can fetch the sources with Git:

git clone https://github.com/freebsd/freebsd.git /usr/src
cd /usr/src; make clean

If you want the sources matching your installed release rather than -CURRENT, check out the corresponding branch (e.g. stable/10) after cloning.
How do you install the FreeBSD10 kernel sources?
1,356,020,314,000
I've been trying to understand the booting process, but there's just one thing that is going over my head.

As soon as the Linux kernel has been booted and the root file system (/) mounted, programs can be run and further kernel modules can be integrated to provide additional functions. To mount the root file system, certain conditions must be met. The kernel needs the corresponding drivers to access the device on which the root file system is located (especially SCSI drivers). The kernel must also contain the code needed to read the file system (ext2, reiserfs, romfs, etc.). It is also conceivable that the root file system is already encrypted; in this case, a password is needed to mount the file system.

The initial ramdisk (also called initdisk or initrd) solves precisely the problems described above. The Linux kernel provides an option of having a small file system loaded to a RAM disk and running programs there before the actual root file system is mounted. The loading of initrd is handled by the boot loader (GRUB, LILO, etc.). Boot loaders only need BIOS routines to load data from the boot medium. If the boot loader is able to load the kernel, it can also load the initial ramdisk. Special drivers are not required.

If /boot is not a separate partition, but is present in the / partition, shouldn't the boot loader require the SCSI drivers to access the initrd image and the kernel image? And if it can access the images directly, then why exactly do we need the SCSI drivers?
Nighpher, I'll try to answer your question, but for a more comprehensive description of the boot process, try this article at IBM. I'll assume that you are using GRUB or GRUB2 as your bootloader for the explanation.

First off, when the BIOS accesses your disk to load the bootloader, it makes use of its built-in routines for disk access, which are reached through the famous 13h interrupt. The bootloader (and the kernel at its setup phase) make use of those routines when they access the disk. Note that the BIOS runs in real mode (16-bit mode) of the processor, and thus cannot address more than 2^20 bytes of RAM (2^20, not 2^16, because each address in real mode is comprised of segment_address*16 + offset, where both segment address and offset are 16-bit; see "x86 memory segmentation" at Wikipedia). Thus, these routines can't access more than 1 MiB of RAM, which is a strict limitation and a major inconvenience.

The BIOS loads the bootloader code right from the MBR – the first 512 bytes of your disk – and executes it. If you're using GRUB, that code is GRUB stage 1. That code loads GRUB stage 1.5, which is located either in the first 32 KiB of disk space, called the DOS compatibility region, or at a fixed address in the file system. It doesn't need to understand the file system structure to do this, because even if stage 1.5 is inside the file system, it is "raw" code and can be directly loaded into RAM and executed: see "Details of GRUB on the PC" at pixelbeat.org, which is the source for the below image. Loading stage 1.5 from disk to RAM uses the BIOS disk access routines.

Stage 1.5 contains the filesystem utilities, so it can read stage 2 from the filesystem (it still uses BIOS int 13h to read from disk into RAM, but now it can decipher filesystem information about inodes, etc., and get raw code out of the disk).
Older BIOSes might not be able to access the whole HD due to limitations in their disk addressing mode – they might use the Cylinder-Head-Sector scheme and be unable to address more than the first 8 GiB of disk space: http://en.wikipedia.org/wiki/Cylinder-head-sector.

Stage 2 loads the kernel into RAM (again, using the BIOS disk routines). If it's a 2.6+ kernel, it has the initramfs compiled within, so there is no need to load it separately. If it's an older kernel, the bootloader also loads a standalone initrd image into memory, so that the kernel can mount it and get the drivers needed for mounting the real file system from disk.

The problem is that the kernel (and ramdisk) are larger than 1 MiB; thus, to load them into RAM you have to load the kernel into the first 1 MiB, then jump to protected mode (32-bit), move the loaded kernel to high memory (freeing the first 1 MiB for real mode), then return to real (16-bit) mode again, get the ramdisk from disk into the first 1 MiB (if it's a separate initrd and an older kernel), possibly switch to protected (32-bit) mode again, put it where it belongs, possibly get back to real mode (or not: https://stackoverflow.com/questions/4821911/does-grub-switch-to-protected-mode) and execute the kernel code. Warning: I'm not entirely sure about the thoroughness and accuracy of this part of the description.

Now, when you finally run the kernel, you already have it and the ramdisk loaded into RAM by the bootloader, so the kernel can use the disk utilities from the ramdisk to mount your real root file system and pivot root to it. ramfs drivers are present in the kernel, so it can understand the contents of the initramfs, of course.
How does Linux load the 'initrd' image?
1,356,020,314,000
Imagine there's a company A that releases a new graphics adapter. Who manages the process that results in this new graphics adapter being supported by the Linux kernel in the future? How does that proceed? I'm curious how kernel support for any new hardware is handled; on Windows companies develop drivers on their own, but how does Linux get specific hardware support?
Driver support works the same way as with all of open source: someone decides to scratch their own itch.

Sometimes the driver is supplied by the company providing the hardware, just as on Windows. Intel does this for their network chips, 3ware does this for their RAID controllers, etc. These companies have decided that it is in their best interest to provide the driver: their "itch" is to sell product to Linux users, and that means ensuring that there is a driver. In the best case, the company works hard to get their driver into the appropriate source base that ships with Linux distros. For most drivers, that means the Linux kernel. For graphics drivers, it means X.org. There's also CUPS for printer drivers, NUT for UPS drivers, SANE for scanner drivers, etc. The obvious benefit of doing this is that Linux distros made after the driver gets accepted will have support for the hardware out of the box. The biggest downside is that it's more work for the company to coordinate with the open source project to get their driver in, for the same basic reasons it's difficult for two separate groups to coordinate anything.

Then there are those companies that choose to offer their driver source code directly, only. You typically have to download the driver source code from their web site, build it on your system, and install it by hand. Such companies are usually smaller or specialty manufacturers without enough employees to spare the effort of coordinating with the appropriate open source project to get their driver into that project's source base.

A rare few companies provide binary-only drivers instead of source code. An example is the more advanced 3D drivers from companies like NVIDIA. Typically the reason for this is that the company doesn't want to give away information they consider proprietary.
Such drivers often don't work with as many Linux distros as in the previous cases, because the company providing the hardware doesn't bother to rebuild their driver to track API and ABI changes. It's possible for the end user or the Linux distro provider to tweak a driver provided as source code to track such changes, so in the previous two cases the driver can usually be made to work with more systems than a binary driver can.

When the company doesn't provide Linux drivers, someone in the community simply decides to do it. There are some large classes of hardware where this is common, like UPSes and printers. It takes a rare user who a) has the hardware, b) has the time, c) has the skill, and d) has the inclination to spend the time to develop the driver. For popular hardware this usually isn't a problem, because with millions of Linux users these few people do exist. You run into trouble with uncommon hardware.
How is new hardware support added to the linux kernel?
1,356,020,314,000
I would like to understand the term "system call". I am familiar with the fact that system calls are used to get kernel services from a userspace application. The part I need clarification on is the difference between a "system call" and a "C implementation of the system call".

Here is a quote that confuses me:

On Unix-like systems, that API is usually part of an implementation of the C library (libc), such as glibc, that provides wrapper functions for the system calls, often named the same as the system calls that they call

What are the "system calls that they call"? Where is their source? Can I include them directly in my code?

Is the "system call" in a generic sense just a POSIX-defined interface, such that to actually see the implementation one could examine the C source and see how the actual userspace-to-kernel communication goes?

Background note: I'm trying to understand whether, in the end, each C function ends up interacting with devices from /dev.
System calls per se are a concept. They represent actions that processes can ask the kernel to perform.

Those system calls are implemented in the kernel of the UNIX-like system. This implementation (written in C, and in asm for small parts) actually performs the action in the system.

Then, processes use an interface to ask the system for the execution of the system calls. This interface is specified by POSIX. It is a set of functions of the C standard library. They are actually wrappers: they may perform some checks and then call a system-specific function in the kernel that tells it to do the actions required by the system call. The trick is that these interface functions are named the same as the system calls themselves and are often referred to directly as "the system calls".

You could call the function in the kernel that performs the system call directly through the system-specific mechanism. The problem is that this makes your code absolutely non-portable.

So, a system call is:
- a concept: a sequence of actions performed by the kernel to offer a service to a user process;
- the function of the C standard library you should use in your code to get this service from the kernel.
What is meant by "a system call" if not the implementation in the programing language?
1,356,020,314,000
I read through this popular IBM doc (I see it referred to quite often on the web) explaining the function of the initial RAM disk, but I hit a wall in conceptualizing how this works. The doc says:

The boot loader, such as GRUB, identifies the kernel that is to be loaded and copies this kernel image and any associated initrd into memory

I'm already confused: does it copy the entire kernel into memory or just part of it? If the entire kernel is in memory, then why do we even need the initial RAM disk? I thought the purpose of initrd was to be able to have a small generalized kernel image, with initrd installing the correct modules into it before the kernel image is loaded. But if the entire kernel is already in memory, why do we need initrd?

That also brings up another thing that confuses me: where are the modules that get loaded into the kernel located? Are all the kernel modules stored inside initrd?
The entire kernel is loaded into memory at boot, typically along with an initramfs nowadays. (It is still possible to set up a system to boot without an initramfs, but that's unusual on desktops and servers.) The initramfs's role is to provide the functionality needed to mount the "real" filesystems and continue booting the system. That involves kernel modules, and also various binaries: you need at least udev, perhaps some networking, and kmod, which loads modules.

Modules can be loaded into the kernel at any point after boot, so there's no special preparation of the kernel by the initramfs. They can be stored anywhere: in the initramfs, in /lib/modules on the real filesystem, in a development tree if you're developing a module... The initramfs only needs to contain the modules which are necessary to mount the root filesystem (which contains the rest).
Is the entire kernel loaded into memory on boot?
1,356,020,314,000
My server has two 1-Gbit and two 10-Gbit onboard network cards. I need to disable the 1-Gbit network cards completely, so that ifconfig -a does not show them. The network cards use different kernel modules: the 10-Gbit cards use ixgbe, and the 1-Gbit cards use igb.

01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Dell Ethernet 10G 4P X520/I350 rNDC
        Kernel driver in use: ixgbe
05:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Dell I350 Gigabit Network Connection
        Kernel driver in use: igb

Both ixgbe and igb are compiled statically into the kernel (not as loadable modules), so I need to disable the driver using the kernel boot parameters. I have tried appending the following to my kernel command line, but it has no effect:

igb.blacklist=yes
igb.enable=0
igb.disable=yes

The igb network cards still show up. How can I disable igb completely?
You should be able to blacklist the igb “module”, even when built-in, by blacklisting its initialisation function: add initcall_blacklist=igb_init_module to your kernel’s boot parameters. See How do I disable I2C Designware support when it's not built as a module? for background information. The general recipe here is to look for the module in the kernel source code, and look for functions which have the __init attribute — there should only be one readily identifiable as the main initialisation function (typically referred to in a module_init declaration). Blacklist that, and the driver won’t be initialised.
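To apply that recipe you need the init function's name. A quick way to pull it out of a driver source file is to grep for the module_init declaration; a sketch (the find_init helper name is made up for illustration, and the igb path in the comment is where the driver lives in recent kernel trees):

```shell
find_init() {
    # $1: driver source file; prints its module_init(...) declaration,
    # whose argument is the function to pass to initcall_blacklist=
    grep -o 'module_init([A-Za-z0-9_]*)' "$1"
}

# In a kernel source tree, for the igb driver:
# find_init drivers/net/ethernet/intel/igb/igb_main.c
```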
disable kernel module which is compiled in kernel (not loaded)
1,356,020,314,000
Is there a site someplace that lists the contents of /proc and what each entry means?
The documentation for Linux's implementation of /proc is in Documentation/filesystems/proc.txt in the kernel documentation. Beware that /proc is one of the areas where *ixes differ most. It started out as a System V specific feature, was then greatly extended by Linux, and is now in the process of being deprecated by things like /sys. The BSDs — including OS X — haven't adopted it at all. Therefore, if you write a program or script that accesses things in /proc, there is a good chance it won't work on other *ixes.
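To cross-check the documentation against what your kernel actually provides, you can list the non-process entries of /proc (everything that isn't a numeric PID directory):

```shell
# PID directories are per-process; the remaining names are the global
# entries described in Documentation/filesystems/proc.txt.
ls /proc | grep -v '^[0-9][0-9]*$'
```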
Where are the contents of /proc of the Linux kernel documented?
1,356,020,314,000
My question is: why is some operating system event handling nowadays still written in assembly language instead of a higher-level language such as C, when the kernel itself is written mostly in C?
C abstracts away access to the CPU registers, but when handling an event the OS has to save the full execution context, which requires access to the registers exactly as they were at the point of the event; that is something the C spec cannot express.
Why some operating systems event handling is written in asm instead of c?
1,356,020,314,000
How to add more /dev/loop* devices on Fedora 19? I do:

# uname -r
3.11.2-201.fc19.x86_64
# lsmod | grep loop
# ls /dev/loop*
/dev/loop0  /dev/loop1  /dev/loop2  /dev/loop3  /dev/loop4  /dev/loop5  /dev/loop6  /dev/loop7  /dev/loop-control
# modprobe loop max_loop=128
# ls /dev/loop*
/dev/loop0  /dev/loop1  /dev/loop2  /dev/loop3  /dev/loop4  /dev/loop5  /dev/loop6  /dev/loop7  /dev/loop-control

So nothing changes.
You have to create the device nodes in /dev with mknod. Device nodes in /dev have a type (block, character and so on), a major number and a minor number. You can find out the type and the major number by running ls -l /dev/loop0:

user@foo:/sys# ls -l /dev/loop0
brw-rw---- 1 root disk 7, 0 Oct  8 08:12 /dev/loop0

This means loop device nodes have the block type ('b') and major number 7. The minor numbers increment by one for each device node, starting from 0, so loop0 is simply 0 and loop7 is 7. To create loop8 you run, as root:

mknod -m 0660 /dev/loop8 b 7 8

This will create the device node /dev/loop8 with the permissions given by the -m switch (restricting the permissions isn't strictly necessary on a single-user desktop system, but it's a good idea not to let everyone read and write your device nodes).
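The mknod step can be put in a small loop if you need a whole batch of extra loop devices; a sketch (the make_loops helper name and the 8-15 range are just for illustration; run the commented call as root):

```shell
make_loops() {
    # $1: target directory (normally /dev)
    # $2, $3: first and last loop index to create
    # Loop devices are block ('b') nodes with major number 7;
    # the minor number equals the loop index.
    i=$2
    while [ "$i" -le "$3" ]; do
        [ -e "$1/loop$i" ] || mknod -m 0660 "$1/loop$i" b 7 "$i"
        i=$((i + 1))
    done
}

# As root:
# make_loops /dev 8 15
```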
How to add more /dev/loop* devices on Fedora 19
1,356,020,314,000
I'm interested in theoretical limits, perhaps with examples of systems having huge numbers of CPU's.
At least 2048 in practice. As a concrete example, SGI sells its UV system, which can use 256 sockets (2,048 cores) and 16TB of shared memory, all running under a single kernel. I know that there are at least a few systems that have been sold in this configuration. According to SGI: Altix UV runs completely unmodified Linux, including standard distributions from both Novell and Red Hat.
How many cores can Linux kernel handle?
1,356,020,314,000
I have always found it difficult to find information about the system itself in Unix, whether it be:

- Which OS I am using (version number and all, to compare it with the latest available builds)?
- Which desktop environment I am using? If I am using KDE, most of the programs begin with a K and I can guess I am using KDE, but there should be some way to query this, say from a script.
- Which kernel version I am using? (For example, I am using Fedora, and I want to know which Linux kernel version I am running.)

Basically, what I miss is a single point/utility that can get all this information for me. Most of the time the solutions to the above are themselves OS-specific, and then you are stuck.
In addition to uname -a, which gives you the kernel version, you can try:

lsb_release -idrc    # distro, version, codename, long release name

Most desktop environments like GNOME or KDE have an "About" or "Info" menu option that will tell you what you are currently using, so no command line is really needed there.
How to find information about the system/machine in Unix?
1,356,020,314,000
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won't fail to fork() or suffer from any of the other usual OOM issues). I found the OOM killer, which I understand is useful, but which doesn't really do what I'd need to do. Ideally, if I'm running out of memory, I want to know why. I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question... Is there even a syscall to be informed of something like that? A way of saying to the kernel "hey, wake me up when we've only got 128 MB of memory left"? I searched around the web and on here but I didn't find anything fitting that description. Seems like most people use polling on a time delay, but the obvious problem with that is it makes it way less likely you'll be able to know which process(es) caused the problem.
What you are asking for is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: when low on memory, it should immediately run the only thing that can free some memory, the OOM killer. Anything else could bring the machine to a halt.

Anyway, you can run a simple monitoring solution in userspace. I had the same low-memory debug/action requirement in the past, and I wrote a simple bash script which did the following:

- monitor for a soft watermark: if memory usage is above this threshold, collect some statistics (processes, free/used memory, etc.) and send a warning email;
- monitor for a hard watermark: if memory usage is above this threshold, collect some statistics, kill the most memory-hungry (or least important) processes, then send an alert email.

Such a script would be very lightweight, and it can poll the machine at a small interval (e.g. 15 seconds).
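A minimal version of such a poller might look like the sketch below (the check_mem name, the use of MemAvailable, and the 128 MiB watermark in the comment are all assumptions; the original script also emailed the collected statistics, which is omitted here):

```shell
check_mem() {
    # $1: soft watermark, in kB of MemAvailable
    # Returns 1 (and dumps the biggest consumers) when below the mark.
    avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    if [ -n "$avail" ] && [ "$avail" -lt "$1" ]; then
        ps aux --sort=-%mem | head -n 10
        return 1
    fi
    return 0
}

# Poll every 15 seconds with a 128 MiB watermark:
# while sleep 15; do check_mem 131072 || logger "low memory"; done
```

Because the snapshot is taken at the moment the watermark is crossed, the process list actually shows who was eating the memory, unlike a fixed-interval dump.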
How to trigger action on low-memory condition in Linux?
1,356,020,314,000
What are the differences in dependencies between select and depends on in the kernel's Kconfig files?

config FB_CIRRUS
	tristate "Cirrus Logic support"
	depends on FB && (ZORRO || PCI)
	select FB_CFB_FILLRECT
	select FB_CFB_COPYAREA
	select FB_CFB_IMAGEBLIT
	---help---
	  This enables support for Cirrus Logic GD542x/543x based boards on
	  Amiga: SD64, Piccolo, Picasso II/II+, Picasso IV, or EGS Spectrum.

In the example above, how is FB_CIRRUS related to FB && (ZORRO || PCI) differently than it is to FB_CFB_FILLRECT, FB_CFB_COPYAREA and FB_CFB_IMAGEBLIT?

Update: I've noticed that depends on doesn't really do much in terms of compilation order. For example, a successful build of AppB depends on a statically linked LibB being built first. Setting depends on LibB in the Kconfig for AppB will not force LibB to be built first. Setting select LibB will.
depends on A indicates the symbol(s) A must already be positively selected (=y) in order for this option to be configured. For example, depends on FB && (ZORRO || PCI) means FB must have been selected, and (&&) either ZORRO or (||) PCI. For things like make menuconfig, this determines whether or not an option will be presented. select positively sets a symbol. For example, select FB_CFB_FILLRECT will mean FB_CFB_FILLRECT=y. This fulfills a potential dependency of some other config option(s). Note that the kernel docs discourage the use of this for "visible" symbols (which can be selected/deselected by the user) or for symbols that themselves have dependencies, since those will not be checked. Reference: https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt
What is the difference between "select" vs "depends" in the Linux kernel Kconfig?
1,356,020,314,000
I'd like to block all distribution-shipped kernel updates due to a nasty thing that recently happened to me. (I'm on a Ubuntu 12.04 amd64 derivative.) I'd like to block all updates to installed kernels of the minor version 3.2 to the linux-headers, linux-headers-generic, linux-image, and linux-image-extra packages. The problem I'm encountering is that these all have a version and if I block a specific version, nothing is gained because a new version will be installed (eg: if I block linux-image-3.2.0-35, linux-image-3.2.0-36 is not blocked and could still potentially be installed with a dist-upgrade from apt.)
What you need to use is a feature of dpkg called holding. You can do this either via Synaptic or dpkg; here is how I would hold my kernel using the dpkg method.

First check your kernel image name:

dpkg -l | grep linux-image

output for me:

ii  linux-image-3.2.0-4-amd64      3.2.35-2    amd64    Linux 3.2 for 64-bit PCs
ii  linux-image-amd64              3.2+46      amd64    Linux for 64-bit PCs (meta-package)

then tell dpkg to hold the metapackage (the generic version without any version numbers):

echo linux-image-amd64 hold | sudo dpkg --set-selections

You can then check this worked via dpkg -l linux-image-amd64:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                Version    Architecture    Description
+++-==========================-==================-==================-==========================================================
hi  linux-image-amd64   3.2+46     amd64           Linux for 64-bit PCs (meta-package)

Notice the 'hi' at the bottom: 'h' means held and 'i' means currently installed. This package is installed but will not be upgraded.

You can reverse this via:

echo linux-image-amd64 install | sudo dpkg --set-selections

and again can check via dpkg -l linux-image-amd64:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                Version    Architecture    Description
+++-==========================-==================-==================-==========================================================
ii  linux-image-amd64   3.2+46     amd64           Linux for 64-bit PCs (meta-package)

Notice 'ii': the first 'i' means this package is set to install and the second 'i' means it is currently installed. This package is installed and will be upgraded.

For more information on these flags see man dpkg, specifically the 'package selection states' section.
Blocking kernel updates with dpkg
1,356,020,314,000
I have been using Linux Mint Debian with Debian unstable and noticed that when I press restart, instead of going all the way back to the BIOS, then grub, then booting up, I seem to be shutting down then loading back up without going back to the BIOS or GRUB. This is an amazing feature I have not seen before until now. What is this called and when did it happen? I had been a user of Ubuntu for a long time.
It looks like your system has kexec enabled. Kexec allows the Linux kernel to load another kernel and hand the system over to that system. It's named after the exec family of functions that replace a process by a new executable image. Instead of calling the reboot utility, your system is set up to call kexec when you reboot, and the kernel does the rest.
Rebooting without shutting off?
1,514,121,729,000
Just as a curiosity; something went wrong with a Linux machine, making the root file system show up as "64Z". A few commands work, like top, df, and kill, but others like reboot come up with "command not found" (since it can't read the root filesystem), and chmod comes up with a segmentation fault. Is there any way to restart the system anyway, i.e. without the reboot program? I tried kill -PWR 1 (sending SIGPWR to init), but this didn't seem to do anything. It's mostly an academic curiosity. The labmate who was doing whatever large-database work that caused the failure will be physically restarting the machine soon.
Try to reboot with magic sysrq key: echo b > /proc/sysrq-trigger For more information read wiki or kernel documentation.
Any way to restart a Linux machine via SSH if the root filesystem is not working?
1,514,121,729,000
I understand this is somewhat less Ubuntu related, but it affects it. So,what is so new about it that Linus decided to name it 3.0? I'm not trying to get information about the drivers that got into it or stuff that always gets improved. I want to know what really made it 3.0. I read somewhere that Linus wanted to get rid of the code that supports legacy hardware. Hm, not sure what that really meant because 3.0 is bigger (in MB), not smaller, than, say, 2.6.38. What was the cause of naming it 3.0?
Nothing new at all. Citations below are from https://lkml.org/lkml/2011/5/29/204 I decided to just bite the bullet, and call the next version 3.0. It will get released close enough to the 20-year mark, which is excuse enough for me, although honestly, the real reason is just that I can no longer comfortably count as high as 40. I especially like: The whole renumbering was discussed at last years Kernel Summit, and there was a plan to take it up this year too. But let's face it - what's the point of being in charge if you can't pick the bike shed color without holding a referendum on it? So I'm just going all alpha-male, and just renumbering it. You'll like it. And finally: So what are the big changes? NOTHING. Absolutely nothing. Sure, we have the usual two thirds driver changes, and a lot of random fixes, but the point is that 3.0 is just about renumbering, we are very much not doing a KDE-4 or a Gnome-3 here. No breakage, no special scary new features, nothing at all like that. We've been doing time-based releases for many years now, this is in no way about features. If you want an excuse for the renumbering, you really should look at the time-based one ("20 years") instead.
What is new in Kernel 3.0?
1,514,121,729,000
So I always thought the MMU is part of the Unix kernel that translates addresses to physical addresses, but the MMU wiki page says it's computer hardware that usually has its own memory. That page doesn't talk much about Unix/Linux operating systems, so I'm confused: does all the translation happen in hardware, with the kernel doing no translation at all? Does the operating system basically not know anything about the real physical addresses? I'm asking about Unix-based operating systems, but if you know about other operating systems as well, like Windows, or if it's a general thing in modern computers, let me know. Thanks.
The MMU (memory management unit) is a physical component of the computer system, typically part of the CPU (but not necessarily). It translates virtual addresses (also known as linear addresses in the x86 world) to physical addresses; it can also enforce memory access control, cache control, and bus arbitration. It doesn’t usually have its own memory, it relies on data in the system’s main memory to operate. The MMU performs this translation by using information stored in data structures such as page tables; these specify which physical address ranges correspond to linear address ranges (if any — a page can be “not present”). The page tables are set up by the kernel, and the kernel determines what the mappings should be — so the ultimate authority on physical addresses is the kernel, however it always operates with the help of the MMU. Put another way, the CPU always operates on linear addresses, which are translated to physical addresses by the MMU, but the kernel is aware of the translations and programs the MMU to perform them. User-space processes are oblivious to all this and aren’t (normally) aware of the physical addresses corresponding to the linear addresses they use, and typically can’t access the mappings either. There are some cases where physical mappings leak, but those are usually considered to be security vulnerabilities and quickly addressed. However, in Linux, processes with sufficient privileges can see their physical map in /proc/<pid>/pagemap. For Linux specifically, see the memory management documentation, and in particular the section on examining page tables.
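The pagemap interface mentioned above exposes one 64-bit entry per virtual page, which user space can decode itself. A minimal Python sketch of such a decoder (the function name is mine; the bit layout comes from the kernel's pagemap documentation):

```python
def decode_pagemap_entry(entry: int) -> dict:
    """Decode one 64-bit /proc/<pid>/pagemap entry.

    Bit layout (Documentation/admin-guide/mm/pagemap.rst):
      bits 0-54: page frame number (valid if present and not swapped)
      bit 62:    page is swapped out
      bit 63:    page is present in RAM
    """
    present = bool((entry >> 63) & 1)
    swapped = bool((entry >> 62) & 1)
    pfn = (entry & ((1 << 55) - 1)) if present and not swapped else None
    return {"present": present, "swapped": swapped, "pfn": pfn}

# A present page whose frame number is 0x1234:
info = decode_pagemap_entry((1 << 63) | 0x1234)
print(info)  # {'present': True, 'swapped': False, 'pfn': 4660}
```

To use it on a live process you would seek to `virtual_address // page_size * 8` in `/proc/self/pagemap` and read 8 bytes; note that unprivileged readers see the PFN field zeroed on modern kernels.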
Is the MMU inside of Unix/Linux kernel? or just in a hardware device with its own memory?
1,514,121,729,000
I'm building a custom kernel based off 4.11 (for Mintx64, if it matters). I've already compiled and installed it to prove that it works. Now I've made a few small changes to a couple of files (in the driver and net subsystems, this is why I need to compile a custom kernel in the first place!) Now I want to build the modified kernel. However when I run fakeroot make -j5 deb-pkg LOCALVERSION=myname KDEB_PKGVERSION=1 The build system appears to start by "clean"-ing a whole load of stuff, so I stopped it quickly. Unfortunately the computer I'm using is not blessed with a good CPU and takes many hours to build from scratch. Therefore I'd rather avoid doing it again if possible! Is it possible to make just an incremental build without everything be "clean"d or is this a requirement of the kernel build system? The output I got was: CHK include/config/kernel.release make clean CLEAN . CLEAN arch/x86/lib ...
The make clean is only for the deb-pkg target. Take a look at scripts/package/Makefile: deb-pkg: FORCE $(MAKE) clean $(call cmd,src_tar,$(KDEB_SOURCENAME)) $(MAKE) KBUILD_SRC= +$(call cmd,builddeb) bindeb-pkg: FORCE $(MAKE) KBUILD_SRC= +$(call cmd,builddeb) If you build the bindeb-pkg instead, it won't do a clean. You probably don't need the source packages anyway. I suspect it does a clean because it doesn't want to tar up build artifacts in the source tarball.
Re-building Linux kernel without "clean"
1,514,121,729,000
Recently saw a question that sparked this thought. Couldn't really find an answer here or via the Google machine. Basically, I'm interested in knowing how the kernel I/O architecture is layered. For example, does kjournald dispatch to pdflush or the other way around? My assumption is that pdflush (being more generic to mass storage I/O) would sit at a lower level and trigger the SCSI/ATA/whatever commands necessary to actually perform the writes, and kjournald handles higher level filesystem data structures before writing. I could see it the other way around as well, though, with kjournald directly interfacing with the filesystem data structures and pdflush waking up every now and then to write dirty pagecache pages to the device through kjournald. It's also possible that the two don't interact at all for some other reason. Basically: I need some way to visualize (graph or just an explanation) the basic architecture used for dispatching I/O to mass storage within the Linux kernel.
Before we discuss the specifics regarding pdflush, kjournald, and kswapd, let's first get a little background on the context of what exactly we're talking about in terms of the Linux kernel. The GNU/Linux architecture The architecture of GNU/Linux can be thought of as 2 spaces: User Kernel Between the User Space and Kernel Space sits the GNU C Library (glibc). This provides the system call interface that connects the kernel to the user-space applications. The Kernel Space can be further subdivided into 3 levels: System Call Interface Architectural Independent Kernel Code Architectural Dependent Code The System Call Interface, as its name implies, provides an interface between glibc and the kernel. The Architectural Independent Kernel Code is comprised of logical units such as the VFS (Virtual File System) and the VMM (Virtual Memory Management). The Architectural Dependent Code consists of the processor- and platform-specific code for a given hardware architecture. Diagram of GNU/Linux Architecture For the rest of this article, we'll be focusing our attention on the VFS and VMM logical units within the Kernel Space. Subsystems of the GNU/Linux Kernel VFS Subsystem With a high-level concept of how the GNU/Linux kernel is structured, we can delve a little deeper into the VFS subsystem. This component is responsible for providing access to the various block storage devices which ultimately map down to a filesystem (ext3/ext4/etc.) on a physical device (HDD/etc.). Diagram of VFS This diagram shows how a write() from a user's process traverses the VFS and ultimately works its way down to the device driver, where it's written to the physical storage medium. This is the first place where we encounter pdflush. This is a daemon which is responsible for flushing dirty data and metadata buffer blocks to the storage medium in the background.
The diagram doesn't show this, but there is another daemon, kjournald, which sits alongside pdflush, performing a similar task of writing dirty journal blocks to disk. NOTE: Journal blocks are how filesystems like ext4 & JFS keep track of changes to the disk in a file, prior to those changes taking place. The above details are discussed further in this paper. Overview of write() steps To provide a simple overview of the I/O subsystem operations, we'll use an example where the function write() is called by a User Space application. A process requests to write a file through the write() system call. The kernel updates the page cache mapped to the file. A pdflush kernel thread takes care of flushing the page cache to disk. The file system layer puts each block buffer together into a bio struct (refer to 1.4.3, “Block layer” on page 23) and submits a write request to the block device layer. The block device layer gets requests from upper layers, performs an I/O elevator operation, and puts the requests into the I/O request queue. A device driver such as SCSI or another device-specific driver takes care of the write operation. The disk device firmware performs hardware operations like head seek, rotation, and data transfer to the sector on the platter. VMM Subsystem Continuing our deeper dive, we can now look into the VMM subsystem. This component is responsible for maintaining consistency between main memory (RAM), swap, and the physical storage medium. The primary mechanism for maintaining consistency is bdflush. As pages of memory are deemed dirty, they need to be synchronized with the data that's on the storage medium. bdflush will coordinate with the pdflush daemons to synchronize this data with the storage medium. Diagram of VMM Swap When system memory becomes scarce or the kernel swap timer expires, the kswapd daemon will attempt to free up pages. So long as the number of free pages remains above free_pages_high, kswapd will do nothing.
However, if the number of free pages drops below that threshold, kswapd will start the page reclaiming process. After kswapd has marked pages for relocation, bdflush will synchronize any outstanding changes to the storage medium, through the pdflush daemons. References & Further Readings Conceptual Architecture of the Linux Kernel The Linux I/O Stack Diagram - ver. 0.1, 2012-03-06 - outlines Linux I/O stack as of Kernel 3.3 Local file systems update - specifically slide #7 Interactive Linux Kernel Map Understanding Virtual Memory In Red Hat Enterprise Linux 4 Linux Performance and Tuning Guidelines - specifically pages 19-24 Anatomy of the Linux kernel The Case for Semantic Aware Remote Replication
How do pdflush, kjournald, swapd, etc interoperate?
1,514,121,729,000
I would like to capture traffic on Linux virtual interfaces, for debugging purposes. I have been experimenting with veth, tun and dummy interface types; on all three, I am having trouble getting tcpdump to show anything. Here is how I set up the dummy interface: ip link add dummy10 type dummy ip addr add 99.99.99.1 dev dummy10 ip link set dummy10 up In one terminal, watch it with tcpdump: tcpdump -i dummy10 In a second, listen on it with nc: nc -l 99.99.99.1 2048 In a third, make an HTTP request with curl: curl http://99.99.99.1:2048/ Although in terminal 2 we can see the data from the curl request, nothing shows up from tcpdump. A Tun/Tap tutorial clarifies some situations where the kernel may not actually send any packets when one is operating on a local interface: Looking at the output of tshark, we see...nothing. There is no traffic going through the interface. This is correct: since we're pinging the interface's IP address, the operating system correctly decides that no packet needs to be sent "on the wire", and the kernel itself is replying to these pings. If you think about it, it's exactly what would happen if you pinged another interface's IP address (for example eth0): no packets would be sent out. This might sound obvious, but could be a source of confusion at first (it was for me). However, it is hard to see how this could apply to TCP data packets. Maybe tcpdump should be bound to the interface a different way?
The traffic is going over the lo interface. When an IP is added to a box, a route for that address is added to the 'local' table. All the routes in this table route traffic over the loopback interface. You can view the contents of the 'local' table with the following: ip route show table local Which on my system looks like this: local 10.230.134.38 dev tun0 proto kernel scope host src 10.230.134.38 broadcast 10.230.134.38 dev tun0 proto kernel scope link src 10.230.134.38 broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.42.1 local 172.17.42.1 dev docker0 proto kernel scope host src 172.17.42.1 broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.42.1 broadcast 192.168.0.0 dev enp6s0 proto kernel scope link src 192.168.0.20 local 192.168.0.20 dev enp6s0 proto kernel scope host src 192.168.0.20 broadcast 192.168.0.255 dev enp6s0 proto kernel scope link src 192.168.0.20 So basically if I send any traffic to 10.230.134.38, 127.0.0.0/8, 127.0.0.1(redundant), 172.17.42.1, or 192.168.0.20, the traffic will get routed over the loopback interface, even though those IPs are really on a different interface.
How does one capture traffic on virtual interfaces?
1,514,121,729,000
Can I take a Linux kernel and use it with, say, FreeBSD and vice versa (FreeBSD kernel in, say, a Debian)? Is there a universal answer? What are the limitations? What are the obstructions?
No, kernels from different implementations of Unix-style operating systems are not interchangeable, notably because they all present different interfaces to the rest of the system (user space) — their system calls (including ioctl specifics), the various virtual file systems they use... What is interchangeable to some extent, at the source level, is the combination of the kernel and the C library, or rather, the user-level APIs that the kernel and libraries expose (essentially, the view at the layer described by POSIX, without considering whether it is actually POSIX). Examples of this include Debian GNU/kFreeBSD, which builds a Debian system on top of a FreeBSD kernel, and Debian GNU/Hurd, which builds a Debian system on top of the Hurd. This isn’t quite at the level of kernel interchangeability, but there have been attempts to standardise a common application binary interface, to allow binaries to be used on various systems without needing recompilation. One example is the Intel Binary Compatibility Standard, which allows binaries conforming to it to run on any Unix system implementing it, including older versions of Linux with the iBCS 2 layer. I used this in the late 90s to run WordPerfect on Linux. See also How to build a FreeBSD chroot inside of Linux.
Are different Linux/Unix kernels interchangeable?
1,514,121,729,000
I am learning about journald and rsyslog, and while reading I saw that rsyslog reads from /dev/kmsg and that journald can read from both /dev/kmsg and /proc/kmsg. I know these both expose kernel logs, but what is the difference between /proc/kmsg and /dev/kmsg? Why does one appear under the process filesystem and the other appear to be a device?
/proc/kmsg provides a root-only, read-only, consuming view of the kernel log buffer. It’s equivalent to calling syslog(2) with the SYSLOG_ACTION_READ action. As mentioned in the proc manpage, A process must have superuser privileges to read this file, and only one process should read this file. This file should not be read if a syslog process is running which uses the syslog(2) system call facility to log kernel messages. /dev/kmsg provides access to the same kernel log buffer, but in an easier-to-use fashion. Reads are tracked per open, so multiple processes can read in parallel, and entries aren’t removed from the buffer as they are read. /dev/kmsg also provides write access to the log buffer, so it can be used to add entries to the log buffer. See the /dev/kmsg documentation for details. As for why both are present, and why one is in /proc (albeit not process-related) and one in dev, /proc/kmsg is an old convenience “export” of kernel internals, and /dev/kmsg is a more recent addition, designed as a usable interface to the log buffer.
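The records read from /dev/kmsg are plain text with a "priority,sequence,timestamp;message" prefix, so they are easy to pick apart in a script. A minimal Python sketch of a parser (the function name is mine; the record format is from the kernel's dev-kmsg ABI documentation):

```python
def parse_kmsg_record(line: str) -> dict:
    """Parse one record as read from /dev/kmsg.

    Format (Documentation/ABI/testing/dev-kmsg):
      "<priority>,<seq>,<timestamp_us>,<flags>[,...];<message>"
    The syslog priority packs facility and level: facility * 8 + level.
    """
    prefix, _, message = line.partition(";")
    fields = prefix.split(",")
    priority = int(fields[0])
    return {
        "facility": priority >> 3,
        "level": priority & 7,
        "seq": int(fields[1]),
        "timestamp_us": int(fields[2]),
        "message": message.rstrip("\n"),
    }

rec = parse_kmsg_record("6,339,5140900,-;NET: Registered protocol family 10\n")
print(rec["level"], rec["message"])  # 6 NET: Registered protocol family 10
```

In practice you would open /dev/kmsg and read one record per read() call; unlike /proc/kmsg, each opener gets its own cursor into the buffer.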
What is the difference between /proc/kmsg and /dev/kmsg?
1,514,121,729,000
I'd like to know more about the advanced uses of the /proc and /sys virtual filesystems, but I don't know where to begin. Can anyone suggest any good sources to learn from? Also, since I think sys has regular additions, what's the best way to keep my knowledge current when a new kernel is released.
Read this blog post: Solving problems with proc There are a few tips on what you can do with the proc filesystem. Among other things, there is a tip on how to get back a deleted disk image and how to stay ahead of the OOM killer. Don't forget to read the comments; there are good tips there, too.
How do I learn what I can do with /proc and /sys [closed]
1,514,121,729,000
I would like to try compile mmu-less kernel. From what I found in configuration there is no option for such a thing. Is it possible to be done?
You can compile a Linux kernel without MMU support on most processor architectures, including x86. However, because this is a rare configuration only for users who know what they are doing, the option is not included in the menu displayed by make menuconfig, make xconfig and the like, except on a few architectures for embedded devices where the lack of MMU is relatively common. You need to edit the .config file explicitly to change CONFIG_MMU=y to CONFIG_MMU=n. Alternatively, you can make the option appear in the menu by editing the file in arch/*/Kconfig corresponding to your architecture and replacing the stanza starting with CONFIG MMU by config MMU bool "MMU support" default y ---help--- Say yes. If you say no, most programs won't run. Even if you make the option appear in the menus, you may need to tweak the resulting configuration to make it internally consistent. MMU-less x86 systems are highly unusual. The easiest way to experiment with an MMU-less system would be to run a genuine MMU-less system in an emulator, using the Linux kernel configuration provided by the hardware vendor or with the emulator. In case this wasn't clear, normal Linux systems need an MMU. The Linux kernel can be compiled for systems with no MMU, but this introduces restrictions that prevent a lot of programs from running. Start by reading No-MMU memory mapping support. I don't think you can use glibc without an MMU, µClibc is usually used instead. Documentation from the µClinux project may be relevant as well (µClinux was the original project for a MMU-less Linux, though nowadays support for MMU-less systems has been integrated into the main kernel tree so you don't need to use µClinux).
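Flipping the option in .config can be scripted rather than edited by hand. A minimal Python sketch (the helper name is my own; afterwards you would run make oldconfig so Kconfig can settle any dependent options):

```python
def set_kconfig_option(config_text: str, option: str, value: str) -> str:
    """Rewrite (or append) a CONFIG_* option in a kernel .config.

    Handles both "CONFIG_FOO=y" lines and the
    "# CONFIG_FOO is not set" form Kconfig uses for disabled options.
    """
    out, found = [], False
    for line in config_text.splitlines():
        if line.startswith(option + "=") or line == f"# {option} is not set":
            out.append(f"{option}={value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{option}={value}")
    return "\n".join(out) + "\n"

print(set_kconfig_option("CONFIG_MMU=y\nCONFIG_SMP=y\n", "CONFIG_MMU", "n"))
# CONFIG_MMU=n
# CONFIG_SMP=y
```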
MMU-less kernel?
1,514,121,729,000
When I execute route -n, from where exactly (from which structs) is the information displayed retrieved? I tried executing strace route -n but I didn't help me finding the right place it's stored.
The route and ip utilities get their information from a pseudo-filesystem called procfs. It is normally mounted under /proc. There is a file called /proc/net/route, where you can see the kernel's IP routing table. You can print the routing table with cat instead, but the route utility formats the output to be human-readable, because the IP addresses are stored in hex. That file is not just a normal file. It is generated at exactly the moment it is opened for reading, as are all files in the proc filesystem. If you are interested in how that file is produced, you need to look at the kernel sources: that function outputs the routing table. You can see at line 2510 that the header of the routing table is printed. The routing table itself appears to be stored mostly in the struct fib_info that is defined in the header file ip_fib.h, line 98.
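Because /proc/net/route stores IPv4 addresses as little-endian hex, a little decoding is needed to recover the dotted-quad form that route -n shows. A minimal Python sketch parsing a sample row (the helper name and the sample row are mine):

```python
import socket
import struct

def decode_route_addr(hexaddr: str) -> str:
    """Convert a little-endian hex IPv4 address, as stored in
    /proc/net/route, into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

# A sample tab-separated row in /proc/net/route column order:
# Iface, Destination, Gateway, Flags, RefCnt, Use, Metric, Mask, ...
row = "eth0\t0002A8C0\t00000000\t0001\t0\t0\t0\t00FFFFFF\t0\t0\t0"
fields = row.split("\t")
dest = decode_route_addr(fields[1])
mask = decode_route_addr(fields[7])
print(dest, mask)  # 192.168.2.0 255.255.255.0
```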
Where is routing table stored internally in the Linux kernel?
1,514,121,729,000
When I type the command service vboxdrv setup in my CentOS 7 terminal, I get the following error: Your kernel headers for kernel 3.10.0-229.el7.x86_64 cannot be found How can I resolve this error? When I open the log file by typing vi /var/log/vbox-install.log, the contents are: Uninstalling modules from DKMS removing old DKMS module vboxhost version 5.0.4 ------------------------------ Deleting module version: 5.0.4 completely from the DKMS tree. ------------------------------ Done. Attempting to install using DKMS Creating symlink /var/lib/dkms/vboxhost/5.0.4/source -> /usr/src/vboxhost-5.0.4 DKMS: add completed. Failed to install using DKMS, attempting to install without Makefile:185: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. The error is happening in the process of installing VirtualBox 5.0.4 using the instructions from this tutorial. To summarize, so far, I have: vi /etc/yum.repos.d/virtualbox.repo Add the following text, then save and exit: [virtualbox] name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch enabled=1 gpgcheck=1 gpgkey=http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc Then at command prompt type: # rpm -Uvh http://ftp.jaist.ac.jp/pub/Linux/Fedora/epel/7/x86_64/e/epel-release-7-5.noarch.rpm # yum install gcc make patch dkms qt libgomp # yum install kernel-headers kernel-devel fontforge binutils glibc-headers glibc-devel ... Complete! # cd /usr/src/kernels # ls -al total 12 drwxr-xr-x. 3 root root 4096 Sep 25 16:14 . drwxr-xr-x. 4 root root 4096 Sep 25 14:17 .. drwxr-xr-x. 22 root root 4096 Sep 25 16:14 3.10.0-229.14.1.el7.x86_64 # export KERN_DIR=/usr/src/kernels/3.10.0-229.14.1.el7.x86_64 # yum install VirtualBox-5.0 ... Complete! 
# service vboxdrv setup Stopping VirtualBox kernel modules [ OK ] Uninstalling old VirtualBox DKMS kernel modules [ OK ] Removing old VirtualBox pci kernel module [ OK ] Removing old VirtualBox netadp kernel module [ OK ] Removing old VirtualBox netflt kernel module [ OK ] Removing old VirtualBox kernel module [ OK ] Trying to register the VirtualBox kernel modules using DKMS Error! echo Your kernel headers for kernel 3.10.0-229.el7.x86_64 cannot be found at /lib/modules/3.10.0-229.el7.x86_64/build or /lib/modules/3.10.0-229.el7.x86_64/source. [FAILED] (Failed, trying without DKMS) Recompiling VirtualBox kernel modules [FAILED] (Look at /var/log/vbox-install.log to find out what went wrong) See above for contents of vi /var/log/vbox-install.log Out of curiosity, I looked in /lib/modules/ and found the following: [root@localhost kernels]# cd /lib/modules [root@localhost modules]# ls -al total 16 drwxr-xr-x. 4 root root 4096 Sep 25 15:58 . dr-xr-xr-x. 30 root root 4096 Sep 25 16:23 .. drwxr-xr-x. 7 root root 4096 Sep 25 15:59 3.10.0-229.14.1.el7.x86_64 drwxr-xr-x. 8 root root 4096 Sep 25 16:24 3.10.0-229.el7.x86_64 As per @EricRenouf's advice, I typed uname -a, and the terminal replied with: Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux I have not rebooted the machine, but the tutorial did not say to reboot.
The solution is likely to be found at this question, the short version being: run sudo yum install "kernel-devel-uname-r == $(uname -r)" That will install the kernel headers for the version of the kernel you are currently running. I suspect that at some point you did a yum update or similar, and that actually installed a new kernel, but you have not yet started running it. What is probably happening is that when you do the yum install steps in your question, it is looking at the latest installed version and getting the headers for that. However, when you start vboxdrv it looks at the running kernel and tries to find the headers for that. Your running and installed kernels are out of sync (which isn't normally a huge problem), but you found a case where it matters.
Your kernel headers for kernel 3.10.0-229.el7.x86_64 cannot be found
1,514,121,729,000
I know that there are several websites that list the changelog of kernel versions (e.g. what is new in 4.17) (KernelNewbies, heise.de), but where do I find information about a minor change (e.g. 4.17.1 -> 4.17.2)? (I'm trying to hunt down a bug that appears in a very old kernel version but not in a slightly newer one, so I'm interested in the changes, but I do not want to crawl the whole Git log.)
The changelogs are on kernel.org. The URLs have a predictable pattern. The current kernel change log is at: https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.17.8 So, to read the changes from 4.17.1 to 4.17.2, you would go to: https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.17.2
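Since the URLs follow a predictable pattern, building them from a version string is a one-liner. A minimal Python sketch (the function name is mine):

```python
def changelog_url(version: str) -> str:
    """Build the kernel.org changelog URL for a stable release,
    e.g. "4.17.2" -> .../v4.x/ChangeLog-4.17.2."""
    major = version.split(".", 1)[0]
    return (f"https://cdn.kernel.org/pub/linux/kernel/"
            f"v{major}.x/ChangeLog-{version}")

print(changelog_url("4.17.2"))
# https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.17.2
```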
Where to find the Linux changelog of minor versions
1,514,121,729,000
What are the basic differences between spin locks and semaphores in action?
Both manage a limited resource. I'll first describe the difference between a binary semaphore (mutex) and a spin lock. Spin locks perform a busy wait - i.e. they keep running a loop: while (try_acquire_resource ()); ... release(); This makes locking/unlocking very lightweight, but if the locking thread is preempted by another thread that tries to access the same resource, the second one will simply keep trying to acquire the resource until it runs out of its CPU quantum. On the other hand, a mutex behaves more like: if (!try_lock()) { add_to_waiting_queue (); wait(); } ... process *p = get_next_process_from_waiting_queue (); p->wakeUp (); Hence if a thread tries to acquire a blocked resource, it will be suspended until the resource becomes available for it. Locking/unlocking is much heavier, but the waiting is 'free' and 'fair'. A semaphore is a lock that is allowed to be held a certain number of times simultaneously (known from initialization) - for example 3 threads are allowed to hold the resource at the same time, but no more. It is used for example in the producer/consumer problem, or in general for queues: P(resources_sem) resource = resources.pop() ... resources.push(resource) V(resources_sem)
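The busy-wait vs. sleep distinction can be demonstrated in user space. A rough Python sketch (not kernel code: the SpinLock class is my own, built on non-blocking try-acquire, and threading.Semaphore plays the counting-semaphore role):

```python
import threading
import time

class SpinLock:
    """Busy-wait lock mirroring the `while (try_acquire_resource ());` loop."""
    def __init__(self):
        self._flag = threading.Lock()  # used only for its atomic try-acquire

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass  # spin: keep retrying instead of sleeping

    def release(self):
        self._flag.release()

# 4 threads increment a shared counter under the spin lock.
counter = 0
spin = SpinLock()

def work():
    global counter
    for _ in range(1000):
        spin.acquire()
        counter += 1
        spin.release()

workers = [threading.Thread(target=work) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter)  # 4000

# A counting semaphore initialised to 3 lets at most 3 threads
# hold the resource simultaneously.
sem = threading.Semaphore(3)
active = 0
peak = 0
gate = threading.Lock()

def consumer():
    global active, peak
    with sem:                    # P(resources_sem)
        with gate:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # hold the resource briefly
        with gate:
            active -= 1
                                 # V(resources_sem) on block exit

consumers = [threading.Thread(target=consumer) for _ in range(10)]
for t in consumers:
    t.start()
for t in consumers:
    t.join()
print(peak)  # never exceeds 3
```

In a real kernel the spin lock's busy wait burns a CPU, which is why spin locks are used only for very short critical sections that cannot sleep.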
what is the difference between spin locks and semaphores?
1,514,121,729,000
When I visited the kernel.org website to download the latest Linux kernel, I noticed a package named 2.6.37-rc5 in the repository. What is the meaning of the "rc5" at the end?
Release Candidate. By convention, whenever an update for a program is almost ready, the test version is given an rc number. If critical bugs are found that require fixes, the program is updated and reissued with a higher rc number. When no critical bugs remain, or no additional critical bugs are found, the rc designation is dropped.
Meaning of "rc5" in "linux kernel 2.6.37-rc5"
1,514,121,729,000
If you ran the following, what would happen? # Do not run. # cat /dev/random > ~/randomFile Would it be written until the drive runs out of space, or would the system see a problem with this and stop it (like with an infinite symlink loop)?
It writes until the disk is full (usually there is still some space reserved for the root user). But as the pool of random data is limited, this could take a while. If you need a certain amount of random data, use dd. For 1MB: dd if=/dev/random iflag=fullblock of=$HOME/randomFile bs=1M count=1 Other possibilities are mentioned in answers to a related question. However, in almost all cases it is better to use /dev/urandom instead. It does not block when the kernel thinks it has run out of entropy. For better understanding, you can also read myths about /dev/urandom. Installing haveged speeds up /dev/random and also provides more entropy to /dev/urandom.
Writing /dev/random to file?
1,514,121,729,000
Once, I was installing some kernel patches and something went wrong on a live server where we had hundreds of clients. Only one kernel was on the system, so the server was down for some time; using a live CD, we got the system up and running and did the further repair work. Now my question: is it a good idea to have 2 versions of the kernel, so that if one kernel is corrupted we can always reboot with the other available kernel? Please let me know. Also, is it possible to have 2 versions of the same kernel, so that I can choose the other kernel when there is kernel corruption? Edited: My Server Details: 2.6.32-431.el6.x86_64 CentOS release 6.5 (Final) How can I keep a second copy of this kernel, so that when my kernel is corrupted, I can start the backup kernel?
Both RedHat and Debian-based distributions keep several versions of the kernel when you install a new one using yum or apt-get by default. That is considered a good practice and is done exactly for the case you describe: if something goes wrong with the latest kernel you can always reboot and in GRUB choose to boot using one of the previous kernels. In RedHat distros you control the number of kernels to keep in /etc/yum.conf with the installonly_limit setting. On my fresh CentOS 7 install it defaults to 5. Also, if on RedHat you're installing a new kernel from an RPM package, you should use rpm -ivh, not rpm -Uvh: the former will keep the older kernel in place while the latter will replace it. Debian keeps old kernels but doesn't automatically remove them. If you need to free up your boot partition you have to remove old kernels manually (remember to leave at least one of the previous kernels). To list all kernel-installing and kernel-headers packages use dpkg -l | egrep "linux-(im|he)".
Is it good to have multiple versions of the Linux kernel?
1,514,121,729,000
I have a kernel in which one initramfs is embedded. I want to extract it. I get the output x86 boot sector when I run file bzImage. I have the System.map file for this kernel image. Is there any way to extract the embedded initramfs image from this kernel, with or without the help of the System.map file? The interesting strings found in the System.map file are (just in case it helps): 57312:c17fd8cc T __initramfs_start 57316:c19d7b90 T __initramfs_size
There is some information about this in the gentoo wiki: https://wiki.gentoo.org/wiki/Custom_Initramfs#Salvaging It recommends using binwalk, which works exceedingly well. I'll give a quick walk-through with an example: first extract the bzImage file with binwalk: > binwalk --extract bzImage DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 0 0x0 Microsoft executable, portable (PE) 18356 0x47B4 xz compressed data 9772088 0x951C38 xz compressed data I ended up with three files: 47B4, 47B4.xz and 951C38.xz > file 47B4 47B4: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=aa47c6853b19e9242401db60d6ce12fe84814020, stripped Now let's run binwalk again on 47B4: > binwalk --extract 47B4 DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 0 0x0 ELF, 64-bit LSB executable, AMD x86-64, version 1 (SYSV) 9818304 0x95D0C0 Linux kernel version "4.4.6-gentoo (root@host) (gcc version 4.9.3 (Gentoo Hardened 4.9.3 p1.5, pie-0.6.4) ) #1 SMP Tue Apr 12 14:55:10 CEST 2016" 9977288 0x983DC8 gzip compressed data, maximum compression, from Unix, NULL date (1970-01-01 00:00:00) <snip> This came back with a long list of found paths and several potentially interesting files. Let's have a look. > file _47B4.extracted/* <snip> _47B4.extracted/E9B348: ASCII cpio archive (SVR4 with no CRC) The file E9B348 is an (already decompressed) cpio archive, just what we are looking for! Bingo! To unpack the uncompressed cpio archive (your initramfs!) in your current directory just run > cpio -i < E9B348 That was almost too easy. binwalk is absolutely the tool you are looking for. For reference, I was using v2.1.1 here.
extract Embedded initramfs
1,514,121,729,000
So I was just wondering if my explanation of udev and how it works seems correct; if my understanding is correct please let me know. My understanding of udev is that it is a dynamic device manager on Linux which runs as a daemon. When a change to a device occurs, such as a device being plugged in, the kernel sends a uevent to udev; udev can then go to sysfs to find details on the device such as the vendor, device name, model, etc. Once it has the details of the device, udev matches these attributes against the rules set for a specific kind of device, in this case a USB device. If a rule like "create symlink for all usb devices" exists then udev will do this. Udev checks the rules, matches the attributes to identify the device, and then applies whatever changes the rules specify to the device. That is basically my current understanding; please correct me if I'm wrong and tell me extra info.
UDEV Udev stands for "userspace /dev"; it is a device manager for the Linux kernel. It is part of systemd (an init system used to bootstrap user space and manage user processes). Originally udev was independent from systemd; it was merged with systemd in 2012, which led to some complications for distributions running without systemd, as explained here for the Gentoo distribution. This application (udev) is meant to replace devfsd and hotplug; udev primarily manages device nodes in the /dev directory. At the same time, udev also handles all user space events raised when hardware devices are added into the system or removed from it, including firmware loading as required by certain devices (via kernel modules). Concretely, udev runs as a systemd service (systemd-udevd.service) to achieve its tasks; it listens to kernel uevents. For every event, systemd-udevd executes matching instructions specified in udev rules (/etc/udev/rules.d/); details about rule writing are available in this article. At the Linux kernel level the required device information is exported by the sysfs file system. For every device the kernel has detected and initialized, a directory with the device name is created. It contains attribute files with device-specific properties. Every time a device is added or removed, the kernel sends a uevent to notify udev of the change. The behavior of the udev daemon (service) can be configured using udev.conf(5) (/etc/udev/udev.conf), its command line options, environment variables, and the kernel command line, or changed dynamically with udevadm control. Udev, as a whole, is divided into three parts: the library libudev that allows access to device information; the user space daemon udevd (now systemd-udevd) that manages the virtual /dev; and the administrative command-line utility udevadm for diagnostics.
Udev itself is divided into those three parts, but it relies completely on the kernel's device management and its uevent calls; the system receives those calls from the kernel via a netlink socket. Earlier versions used hotplug, adding a link to themselves in /etc/hotplug.d/default for this purpose. Note that other applications/daemons may listen to uevent calls via libudev, gudev, or directly from the kernel with GUdevClient. Further information on udev is available in the sources of this answer: debian wiki, arch linux wiki, opensource.com, the geek diary, freedesktop.org, wikipedia, pks.mpg.de and other linked sites. Details about the operation layers of udev are explained here and illustrated with this diagram:
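To make the rule matching described above concrete, here is a sketch of a rules file; the vendor/product IDs and the symlink name are made-up examples, not tied to any particular device:

```
# /etc/udev/rules.d/99-usb-backup.rules (hypothetical example)
# When a block device whose USB parent reports these IDs appears,
# create a stable symlink /dev/backupdrive pointing at it.
SUBSYSTEM=="block", ATTRS{idVendor}=="0781", ATTRS{idProduct}=="5567", SYMLINK+="backupdrive"
```

After editing rules you can apply them without rebooting with udevadm control --reload-rules && udevadm trigger, and inspect the attributes available for matching with udevadm info -a -n /dev/sdX.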
How does udev/uevent work?
1,514,121,729,000
What does it mean when code is executed in kernel or user mode?
Kernel Mode A program running in this mode has full access to the underlying hardware. It can execute any CPU instruction, access any memory address and essentially do anything it wants. User Mode Code executing in this mode is restricted to hardware modification via the OS's API. It cannot access the hardware directly at all. The interesting thing here is that on the common architectures, this is enforced via hardware--not just the OS. In particular, the x86 architecture has protection rings. The big advantage to this kind of separation is that when a program crashes running in user mode, it isn't always fatal. In fact, on modern systems, it usually isn't. Check out Jeff's writeup. It's his usual good stuff.
What does it mean when code is executed in [kernel|user] mode?
1,514,121,729,000
I'm just trying to understand the modinfo output that describes a kernel module. For instance, in the case of the module i915, the output looks like this: $ modinfo i915 filename: /lib/modules/4.2.0-1-amd64/kernel/drivers/gpu/drm/i915/i915.ko license: GPL and additional rights description: Intel Graphics author: Intel Corporation [...] firmware: i915/skl_dmc_ver1.bin alias: pci:v00008086d00005A84sv*sd*bc03sc*i* [...] depends: drm_kms_helper,drm,video,button,i2c-algo-bit intree: Y vermagic: 4.2.0-1-amd64 SMP mod_unload modversions parm: modeset:Use kernel modesetting [KMS] (0=DRM_I915_KMS from .config, 1=on, -1=force vga console preference [default]) (int) [...] I'm able to understand some of the fields, but I have no idea what the following mean: firmware alias intree vermagic Does anyone know how to interpret them?
firmware: firmware: i915/skl_dmc_ver1.bin Many devices need two things to run properly: a driver and a firmware. The driver requests the firmware from the filesystem at /lib/firmware. This is a special file needed by the hardware; it's not a host binary. The driver then does what it needs to do to load the firmware into the device. The firmware programs the hardware inside the device. alias: alias: pci:v00008086d00005A84sv*sd*bc03sc*i* This can be split up into the parts after the marker characters: v00008086: v stands for the vendor id; it identifies a hardware manufacturer. That list is maintained by the PCI Special Interest Group. Your number 0x8086 stands for "Intel Corporation". d00005A84: d stands for the device id, which is selected by the manufacturer. This ID is usually paired with the vendor ID to make a unique 32-bit identifier for a hardware device. There is no official list and I wasn't able to find an Intel device id list to look up that number. sv*, sd*: The subsystem vendor ID and subsystem device ID are for further identification of a device (* indicates that it will match anything). bc03: The base class. It defines what kind of device it is: IDE interface, Ethernet controller, USB controller, ... bc03 stands for Display controller. You may recognize these from the output of lspci, because lspci maps the number to the device class. sc*: A sub class of the base class. i*: interface. intree: intree: Y All kernel modules start their development out-of-tree. Once a module gets accepted for inclusion, it becomes an in-tree module. A module without that flag (set to N) could taint the kernel. vermagic: vermagic: 4.2.0-1-amd64 SMP mod_unload modversions When loading a module, the strings in the vermagic value are checked against the running kernel. If they don't match you will get an error and the kernel refuses to load the module. You can overcome that by using the --force flag of modprobe.
Naturally, these checks are there for your protection, so using this option is dangerous.
How to understand the modinfo output?
1,514,121,729,000
I have just set up a Gentoo base system (which means I can boot and log in and do stuff with it now). My root partition is in an LVM2 virtual group (with a separated /boot partition). In order to boot I need to pass the parameters below to the kernel: root=/dev/ram0 real_root=/dev/vg/rootlv init=/linuxrc dolvm Apparently it is using an initial ramdisk to do something (I guess loading the LVM things) before mounting root. Is there a way that I can put this code into the kernel itself so that no initrd is needed? If not, how can I make the initrd myself? It might be useful to add that I had tried compiling the kernel for non-LVM root, without initrd and it worked perfectly. Then I tried to put the whole thing under LVM and couldn't get the machine to boot (I guess it cannot deal with the LVM stuff). Then I used the genkernel tool with the --lvm option and it creates the working kernel and initrd that I am currently using. Now I want to skip genkernel and do everything on my own, preferably without initrd so that the machine will boot somewhat faster (I don't need the flexibility anyway).
Simple answer: No. If you want LVM you need an initrd. But as others have said before: LVM doesn't slow your system down or do anything bad in other ways; the initrd just creates an environment that allows your kernel to load and do its job. The initrd allows your kernel to be loaded: if your kernel is on an LVM drive, the whole LVM environment has to be established before the binary that contains the kernel can be loaded. Check out the Wikipedia entry on initrd, which explains what the initrd does and why you need it. Another note: I see your point in wanting to do things yourself, but you can get your hands dirty even with genkernel. Use genkernel --menuconfig all and you can basically set everything as if you were building your kernel completely without tool support; genkernel just adds the make bzImage, make modules and make modules_install steps for you and does that nasty initrd stuff. You can obviously build the initrd yourself, as outlined here for initramfs or here for initrd.
Is it possible to put root in LVM without using initrd?
1,514,121,729,000
I want to check if my Linux kernel is preemptive or non-preemptive. How can I check this using a command, something such as uname -a?
Whether a kernel is preemptive or not depends on what you want to preempt, as in the Linux kernel, there are various things that can have preemption enabled/disabled separately. If your kernel has CONFIG_IKCONFIG and CONFIG_IKCONFIG_PROC enabled, you can find out your preemption configuration through /proc/config.gz (if you don't have this, some distributions ship the kernel config in /boot instead): $ gzip -cd /proc/config.gz | grep PREEMPT CONFIG_TREE_PREEMPT_RCU=y CONFIG_PREEMPT_RCU=y CONFIG_PREEMPT_NOTIFIERS=y # CONFIG_PREEMPT_NONE is not set # CONFIG_PREEMPT_VOLUNTARY is not set CONFIG_PREEMPT=y CONFIG_PREEMPT_COUNT=y # CONFIG_DEBUG_PREEMPT is not set # CONFIG_PREEMPT_TRACER is not set If you have CONFIG_IKCONFIG, but not CONFIG_IKCONFIG_PROC, you can still get it out of the kernel image with extract-ikconfig.
How can I check my kernel preemption configuration?
1,514,121,729,000
In Arch Linux, after installing the most recent updates today, I see the following errors in the journal: kernel: FS-Cache: Duplicate cookie detected kernel: FS-Cache: O-cookie There are about 20 lines in total that are like these. I don't find any info on this via a search. Is this a serious or known problem? My CPU is an Intel Core i7 with an Asus motherboard. I can provide any requested relevant info. However, at this moment, I don't know what I'm looking at, so I am not sure what info is relevant. UPDATE: on a 2nd reboot there are fewer of the messages. Here is the complete output of journalctl -b -p err kernel: FS-Cache: Duplicate cookie detected kernel: FS-Cache: O-cookie c=000000001e72b895 [p=0000000089da8da7 fl=222 nc=0 na=1] kernel: FS-Cache: O-cookie d=00000000c3a2cbed n=00000000f757123a kernel: FS-Cache: O-key=[10] '040002000801c0a805c3' kernel: FS-Cache: N-cookie c=00000000ea48db1d [p=0000000089da8da7 fl=2 nc=0 na=1] kernel: FS-Cache: N-cookie d=00000000c3a2cbed n=000000000f72327e kernel: FS-Cache: N-key=[10] '040002000801c0a805c3'
This appears to be working as intended. The Duplicate cookie detected errors are not indicative of a situation that requires action by the sysadmin. As has been pointed out on the upstream bug report this may well be working as intended https://bugzilla.kernel.org/show_bug.cgi?id=200145#c12 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ec0328e46d6e5d0f17372eb90ab8e333c2ac7ca9 And: fscache: Maintain a catalogue of allocated cookies Maintain a catalogue of allocated cookies so that cookie collisions can be handled properly. For the moment, this just involves printing a warning and returning a NULL cookie to the caller of fscache_acquire_cookie(), but in future it might make sense to wait for the old cookie to finish being cleaned up. This requires the cookie key to be stored attached to the cookie so that we still have the key available if the netfs relinquishes the cookie. This is done by an earlier patch. The catalogue also renders redundant fscache_netfs_list (used for checking for duplicates), so that can be removed.
kernel: FS-Cache: Duplicate cookie detected - what is this?
1,514,121,729,000
I configured and compiled a Linux kernel with the nouveau driver built into the kernel, i.e. with <*> as opposed to <M> when doing make menuconfig inside the Linux kernel source directory. Now, I intend to use another driver rather than nouveau. If nouveau were a module, I would add a line like blacklist nouveau inside /etc/modprobe.d/blacklist.conf What should I do now?
You can also temporarily blacklist them on the grub command line (the linux line) when you boot, with the syntax module_to_blacklist.blacklist=yes OR modprobe.blacklist=module_to_blacklist You need to modify the grub.cfg to make the changes permanent. Mind you, this solution will not work for some modules.
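To persist the parameter rather than typing it at every boot, bake it into the generated grub.cfg via the distro's config file. A sketch (Debian/Ubuntu file paths assumed; nouveau is just the example module from the question):

```
# /etc/default/grub -- append the parameter to the kernel command line:
#   GRUB_CMDLINE_LINUX="modprobe.blacklist=nouveau"
# then regenerate grub.cfg:
#   sudo update-grub                              # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS/Fedora
```

Note that modprobe.blacklist only affects modules loaded through modprobe, so (as the answer warns) it cannot stop a driver that is truly compiled into the kernel image.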
How to block drivers built-into Kernel, i.e. drivers who are not a module
1,514,121,729,000
I'm trying to figure out how to blacklist modules, and I'm trying it on USB storage. Unfortunately it seems to have no effect: the module still gets loaded even though it's apparently not in use. My experiment is taking place on an Ubuntu 12.04.3 LTS. raptor@raptor-VirtualBox:/etc/modprobe.d$ lsmod | grep usb usb_storage 39720 0 usbhid 46054 0 hid 82511 2 hid_generic,usbhid raptor@raptor-VirtualBox:/etc/modprobe.d$ cat blacklist.conf | grep usb blacklist usb_storage blacklist usbmouse blacklist usbkbd
Your problem probably results from the fact that a copy of /etc/modprobe.d/blacklist.conf is located in the initramfs. When you reboot your computer, it is still using the old copy that doesn't contain your change. Try to rebuild the initramfs with the following command and then reboot: sudo update-initramfs -u
Kernel module blacklist not working
1,514,121,729,000
I have the following one-liner to show files opened by processes: sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }' However, I get plenty of repeated errors such as: dtrace: error on enabled probe ID 4 (ID 946: syscall::open_nocancel:entry): invalid user access in action #2 at DIF offset 24 dtrace: error on enabled probe ID 7 (ID 160: syscall::open:entry): invalid user access in action #2 at DIF offset 24 I'm aware that I can suppress them by redirecting with 2> /dev/null. What do these errors mean and why are they happening? Is it dtrace's fault, or is some specific process causing it? And how can this problem be addressed? I'm using OS X 10.11.2
This is potentially related to El Capitan and its System Integrity Protection (csrutil status), which can affect dtrace's behaviour. The potential fix involves rebooting the Mac into recovery mode (⌘-R at boot time), then in Terminal running: csrutil enable --without dtrace to keep SIP enabled, but disable DTrace restrictions (note: this is an undocumented parameter). Or disable SIP completely with: csrutil disable # Not recommended. See: What is the “rootless” feature in El Capitan, really? at Apple.SE How do I disable System Integrity Protection (SIP) on OS X?
Error on enabled probe: syscall::open_nocancel:entry): invalid user access in action #2 at DIF
1,514,121,729,000
What is the reason for the root filesystem being mounted ro in the initramfs (and in initrd). For example the Gentoo initramfs guide mounts the root filesystem with: mount -o ro /dev/sda1 /mnt/root Why not the following? mount -o rw /dev/sda1 /mnt/root I can see that there is a probably a good reason (and it probably involves switchroot), however it does not seem to be documented anywhere.
The initial ramdisk (initrd) is typically a stripped-down version of the root filesystem containing only that which is needed to mount the actual root filesystem and hand off booting to it. The initrd exists because in modern systems, the boot loader can't be made smart enough to find the root filesystem reliably. There are just too many possibilities for such a small program as the boot loader to cover. Consider NFS root, nonstandard RAID cards, etc. The boot loader has to do its work using only the BIOS plus whatever code can be crammed into the boot sector. The initrd gets stored somewhere the boot loader can find, and it's small enough that the extra bit of space it takes doesn't usually bother anyone. (In small embedded systems, there usually is no "real" root, just the initrd.) The initrd is precious: its contents have to be preserved under all conditions, because if the initrd breaks, the system cannot boot. One design choice its designers made to ensure this is to make the boot loader load the initrd read-only. There are other principles that work toward this, too, such as that in the case of small systems where there is no "real" root, you still mount separate /tmp, /var/cache and such for storing things. Changing the initrd is done only rarely, and then should be done very carefully. Getting back to the normal case where there is a real root filesystem, it is initially mounted read-only because initrd was. It is then kept read-only as long as possible for much the same reasons. Any writing to the real root that does need to be done is put off until the system is booted up, by preference, or at least until late in the boot process when that preference cannot be satisfied. The most important thing that happens during this read-only phase is that the root filesystem is checked to see if it was unmounted cleanly. 
That is something the boot loader could certainly do instead of leaving it to the initrd, but what then happens if the root filesystem wasn't unmounted cleanly? Then it has to call fsck to check and possibly fix it. So, where would initrd get fsck, if it was responsible for this step instead of waiting until the handoff to the "real" root? You could say that you need to copy fsck into the initrd when building it, but now it's bigger. And on top of that, which fsck will you copy? Linux systems regularly use a dozen or so different filesystems. Do you copy only the one needed for the real root at the time the initrd is created? Do you balloon the size of initrd by copying all available fsck.foo programs into it, in case the root filesystem later gets migrated to some other filesystem type, and someone forgets to rebuild the initrd? The Linux boot system architects wisely chose not to burden the initrd with these problems. They delegated checking of the real root filesystem to the real root filesystem, since it is in a better position to do that than the initrd. Once the boot process has proceeded far enough that it is safe to do so, the initrd gets swapped out from under the real root with pivot_root(8), and the filesystem is remounted in read-write mode.
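Putting the sequence together, a heavily simplified initramfs /init looks roughly like this. It is a sketch only, not a drop-in script; the device name, paths and busybox interpreter are examples, and it obviously cannot run outside of early boot:

```
#!/bin/busybox sh
# Minimal initramfs /init sketch.
mount -t proc none /proc
mount -t sysfs none /sys
# The real root is deliberately mounted read-only here; the real init
# will fsck it and remount it read-write (mount -o remount,rw /) once
# boot is far enough along.
mount -o ro /dev/sda1 /mnt/root
# Hand control over to the real root filesystem.
exec switch_root /mnt/root /sbin/init
```

The only writing the initramfs itself ever needs is on its own in-memory filesystem, which is why nothing in this script touches /dev/sda1 in write mode.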
Why does initramfs mount the root filesystem read-only
1,514,121,729,000
I use Knoppix (or other Live CDs/DVDs) as a secure environment for creating valuable crypto keys. Unfortunately entropy is a limited resource in such environments. I just noticed that each program start consumes quite some entropy. This seems to be due to some stack protection feature that needs address randomization. Nice feature but completely useless and - worse - destructive in my scenario. Is there any possibility to disable this feature? I would prefer one that allows me to continue using the original Knoppix (or whatever) image and just need some configuration at runtime. I read that this was caused by glibc. I am surprised that an strace -p $PID -f -e trace=open against bash does not show any accesses to /dev/random when I start programs. But I am not familiar with the interaction of execve() and the linker.
If this is indeed due to address randomization (ASLR has to do with where the program is loaded, see here: http://en.wikipedia.org/wiki/Address_space_layout_randomization) then you can disable it by passing norandmaps to the kernel in the boot options (see here: http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re30.html).
Can entropy consumption at program start be prevented?
1,514,121,729,000
I'm running Ubuntu 11.10, which came with kernel version 3.0.0-14. I downloaded and built a kernel from the 3.1.0 branch. After installing the new kernel, I see that my /boot/initrd.img-3.1.0 file is HUGE. It's 114MB, while my /boot/initrd.img-3.0.0-14-generic is about 13MB. I want to get rid of the bloat, which is clearly unnecessary. When building the new kernel, I copied my /boot/config-3.0.0-14-generic to .config in my build directory, as to keep the configuration of my original kernel. I ran make oldconfig, selected the defaults for all the new options, and then built the kernel. Looking at the file sizes within each of the initrd cpio archives, I see that all of my .ko modules are larger in size in the 3.1.0 ramdisk, than the 3.0.0-14. I assumed there was an unnecessary debug flag checked in my config file, but I don't see anything different that was not already enabled in the 3.0.0-14 config file. My /boot/config-3.0.0-14-generic is here: http://pastebin.com/UjH7nEqd And my /boot/config-3.0.1 is here: http://pastebin.com/HyT0M2k1 Can anyone explain where all the unnecessary bloat is coming from?
When building the kernel and module using make oldconfig, make and make install, the resulting modules will have debug information available in the files. Use the INSTALL_MOD_STRIP option for removing debugging symbols: make INSTALL_MOD_STRIP=1 modules_install Similarly, for building the deb packages: make INSTALL_MOD_STRIP=1 deb-pkg
Why is my initial ramdisk so big?
1,514,121,729,000
The default PID max number is 32768. To get this information type: cat /proc/sys/kernel/pid_max 32768 or sysctl kernel.pid_max kernel.pid_max = 32768 Now, I want to change this number... but I can't. Well, actually I can change it to a lower value or the same. For example: linux-6eea:~ # sysctl -w kernel.pid_max=32768 kernel.pid_max = 32768 But I can't do it for a greater value than 32768. For example: linux-6eea:~ # sysctl -w kernel.pid_max=32769 error: "Invalid argument" setting key "kernel.pid_max" Any ideas ? PS: My kernel is Linux linux-6eea 3.0.101-0.35-pae #1 SMP Wed Jul 9 11:43:04 UTC 2014 (c36987d) i686 i686 i386 GNU/Linux
The value can only be extended up to a theoretical maximum of 32768 for 32 bit systems or 4194304 for 64 bit. From man 5 proc: /proc/sys/kernel/pid_max This file (new in Linux 2.5) specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million).
How to change the kernel max PID number? [duplicate]
1,514,121,729,000
If I write a program that tries to read memory at every possible address, and I run it on a "full" Unix, it will not be able to access all of the physical RAM. But how does the operating system prevent it from doing so? I am more familiar with small CPU architectures where any piece of assembly code can access everything. I don't understand how a program (the kernel) can detect such malicious operations.
It's not the kernel that's preventing bad memory accesses, it's the CPU. The role of the kernel is only to configure the CPU correctly. More precisely, the hardware component that prevents bad memory accesses is the MMU. When a program accesses a memory address, the address is decoded by the CPU based on the content of the MMU. The MMU establishes a translation from virtual addresses to physical addresses: when the CPU does a load or a store at a certain virtual address, it calculates the corresponding physical address based on the MMU content. The kernel sets the MMU configuration in such a way that each program can only access memory that it's entitled to. Other programs' memory and hardware registers are not mapped at all in a program's memory: these physical addresses have no corresponding virtual address in the MMU configuration for that program. On a context switch between different processes, the kernel modifies the MMU configuration so that it contains the desired translation for the new process. Some virtual addresses are not mapped at all, i.e. the MMU translates them to a special “no such address” value. When the processor dereferences an unmapped address, this causes a trap: the processor branches to a predefined location in kernel code. Some traps are legitimate; for example the virtual address could correspond to a page that's in swap space, in which case the kernel code will load the page content from swap then switch back to the original program in such a way that the memory access instruction is executed again. Other traps are not legitimate, in which case the process receives a signal which by default kills the program immediately (and if not branches to the signal handler in the program: in any case the memory access instruction is not completed).
How does the kernel prevent a malicious program from reading all of physical RAM?
1,514,121,729,000
According to the man page of lsmod the command shows “what kernel modules are currently loaded”. I wrote a script that uses modinfo to show what kernel object (.ko) files are actually in use: #!/bin/sh for i in `lsmod | awk '{print $1}' | sed -n '1!p'`; do echo "###############################$i###############################" echo "" modinfo $i echo "" echo "" done Now I found out that modinfo nvidia shows the following output: ERROR: modinfo: could not find module nvidia Do you guys have any explanation for this?
Your nvidia module is perfectly loaded and working. The problem lies in modinfo. modinfo fetches the list of known modules by reading the /lib/modules/$(uname -r)/modules.* files, which are usually updated with depmod. If depmod -a has not been run after installing the nvidia module, then modinfo does not know about it. This does not prevent anybody from loading the module with insmod, and lsmod will show it just fine if loaded.
Why does modinfo say “could not find module”, yet lsmod claims the module is loaded?
1,514,121,729,000
I've noticed that when I run write-heavy applications, the whole system slows down. To test this further I ran this to generate (relatively) low-CPU, high disk activity: john -incremental > file_on_SSD This pumps out tens of thousands of strings per second to a file on my system disk. When it's doing this, the mouse lags, TTYs become unresponsive, applications "fade" and generally the whole computer becomes unusable. When I can eventually Control+C john, the system comes back to full strength after a few seconds. This is an extreme example but I have similar issues with slightly less write-intensive activities like copying big files from fast sources or transcoding. My main OS disk is a quite fast SSD (OCZ Agility 60GB) with EXT4. If I write john's output to a mechanical disk with EXT4, I don't experience the same slow-downs, though the rate is a lot slower (the SSD does ~42,000 words per second, the mechanical disk does 8,000 w/s). The throughput may be relevant. The mechanical disk also has nothing to do with the system; it's just data. And I'm using kernel 2.6.35-2, but I've noticed this issue since I got this SSD, when I was probably using .31 or something from around that time. So what's causing the slowdown? EXT4 issue? Kernel issue? SSD issue? All of the above? Something else? If you think I need to run an additional test, just drop a comment telling me what to do and I'll append the result to the question.
This has been a known issue for a while. Using an SSD-tuned FS like Btrfs might help, but it might not. Ultimately, it is a bug in the IO scheduler/memory management systems. Recently, there have been some patches that aim to address this issue. See Fixed: The Linux Desktop Responsiveness Problem? These patches may eventually make their way into the mainline kernel, but for now, you will probably have to compile your own kernel if you want to fix this issue.
Heavy write activity on SSD nukes system performance
1,514,121,729,000
I'm trying to get a deeper understanding of how system calls and hardware interrupts are implemented, and something that keeps confusing me is how they differ with respect to how they're handled. For example, I am aware that one way a system call (at least used to) be initiated is through the x86 INT 0x80 instruction. Does the processor handle this the exact same way as if, say, a hardware peripheral would have interrupted the CPU? If not, at what point do they differ? My understanding is they both index the IDT, just with different indices in the vector. In that same sense, my understanding is there's this idea of a softirq to handle the "bottom half" processing, but I only see this form of "software interrupt" in reference to being enqueued to run by physical hardware interrupts. Do system call "software interrupts" also trigger softirqs for processing? That terminology confuses me a bit as well, as I've seen people refer to system calls as "software interrupts" yet softirqs as "software interrupts" as well.
INT 0x80 is an old way to call kernel services (system functions). Currently, the sysenter/syscall instructions are used to invoke these services, as they are faster than raising the interrupt. You can check this mapping in the kernel's Interrupt Descriptor Table idt.c and at line 50 of the irq_vectors.h file. The important bit that I believe answers your question is the header of that last file, where you can see how interrupt requests (IRQs) are organized. This is the general layout of the IDT entries: Vectors 0 ... 31 : system traps and exceptions - hardcoded events Vectors 32 ... 127 : device interrupts Vector 128 : legacy int80 syscall interface Vectors 129 ... INVALIDATE_TLB_VECTOR_START-1 except 204 : device interrupts Vectors INVALIDATE_TLB_VECTOR_START ... 255 : special interrupts It really does not matter whether the interrupt is triggered by electrical means or by software. Whenever an interrupt is triggered, the kernel looks for its ID in the IDT and runs (in kernel mode) the associated interrupt handler. As they have to be really fast, they normally set some info to be handled later on by a softirq or a tasklet. Read chapter 2 (fast read...) of the Unreliable Guide To Hacking The Linux Kernel Let me recommend also reading this really good and thorough answer at stackoverflow to the Intel x86 vs x64 system call question, where int 0x80, sysenter and syscall are put in context... I wrote my own (very modest and still under construction) self-learning page about interrupts and signals to help me understand the relation of signals and traps with interrupts (for instance SIGFPE - divide by zero).
Do system calls actually "interrupt" the CPU the same way that hardware interrupts do?
1,299,848,475,000
Running Xubuntu 16.04.1 LTS 64-bit. /proc/sys/kernel/yama/ptrace_scope keeps resetting to 1 if I reboot, despite me changing it to 0 manually. How can I keep ptrace_scope set to a value of 0?
/proc values are stored in RAM, so they aren't persistent; the initial value is read from a configuration file at boot. You can permanently change the value of /proc/sys/kernel/yama/ptrace_scope to 0 by editing the file /etc/sysctl.d/10-ptrace.conf and changing the line:

kernel.yama.ptrace_scope = 1

to

kernel.yama.ptrace_scope = 0
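If the drop-in file doesn't exist yet, its entire contents are just the one line below (the path comes from the answer above; this is a sketch of the usual Ubuntu/Debian layout):

```
# /etc/sysctl.d/10-ptrace.conf
kernel.yama.ptrace_scope = 0
```

To apply it without rebooting, run `sudo sysctl -p /etc/sysctl.d/10-ptrace.conf` (or `sudo sysctl --system`, which re-reads all the sysctl drop-in directories in order).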
/proc/sys/kernel/yama/ptrace_scope keeps resetting to 1
I have a perplexing problem. I have a library which uses sg for executing customized CDBs. There are a couple of systems which routinely have issues with memory allocation in sg. Usually, the sg driver has a hard limit of around 4mb, but we're seeing it on these few systems with ~2.3mb requests. That is, the CDBs are preparing to allocate for a 2.3mb transfer. There shouldn't be any issue here: 2.3 < 4.0. Now, the profile of the machine. It is a 64 bit CPU but runs CentOS 6.0 32-bit (I didn't build them nor do I have anything to do with this decision). The kernel version for this CentOS distro is 2.6.32. They have 16gb of RAM. Here is what the memory usage looks like on the system (though, because this error occurs during automated testing, I have not verified yet if this reflects the state when this errno is returned from sg):

top - 00:54:46 up 5 days, 22:05,  1 user,  load average: 0.00, 0.01, 0.21
Tasks: 297 total,   1 running, 296 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  15888480k total,  9460408k used,  6428072k free,   258280k buffers
Swap:  4194296k total,        0k used,  4194296k free,  8497424k cached

I found this article from Linux Journal which is about allocating memory in the kernel. The article is dated but does seem to pertain to 2.6 (some comments about the author at the head). The article mentions that the kernel is limited to about 1gb of memory (though it's not entirely clear from the text whether that is 1gb each for physical and virtual, or in total). I'm wondering if this is an accurate statement for 2.6.32. Ultimately, I'm wondering if these systems are hitting this limit. Though this isn't really an answer to my problem, I'm wondering about the veracity of the claim for 2.6.32. So then, what is the actual limit of memory for the kernel? This may need to be a consideration for troubleshooting. Any other suggestions are welcome.
What makes this so baffling is that these systems are identical to many others which do not show this same problem.
The 1 GiB limit for Linux kernel memory in a 32-bit system is a consequence of 32-bit addressing, and it's a pretty stiff limit. It's not impossible to change, but it's there for a very good reason; changing it has consequences.

Let's take the wayback machine to the early 1990s, when Linux was being created. Back in those days, we'd have arguments about whether Linux could be made to run in 2 MiB of RAM or if it really needed 4 whole MiB. Of course, the high-end snobs were all sneering at us, with their 16 MiB monster servers.

What does that amusing little vignette have to do with anything? In that world, it's easy to make decisions about how to divide up the 4 GiB address space you get from simple 32-bit addressing. Some OSes just split it in half, treating the top bit of the address as the "kernel flag": addresses 0 to 2^31-1 had the top bit cleared, and were for user space code, and addresses 2^31 through 2^32-1 had the top bit set, and were for the kernel. You could just look at the address and tell: 0x80000000 and up, it's kernel-space, otherwise it's user-space.

As PC memory sizes ballooned toward that 4 GiB memory limit, this simple 2/2 split started to become a problem. User space and kernel space both had good claims on lots of RAM, but since our purpose in having a computer is generally to run user programs, rather than to run kernels, OSes started playing around with the user/kernel divide. The 3/1 split is a common compromise.

As to your question about physical vs virtual, it actually doesn't matter. Technically speaking, it's a virtual memory limit, but that's just because Linux is a VM-based OS. Installing 32 GiB of physical RAM won't change anything, nor will it help to swapon a 32 GiB swap partition. No matter what you do, a 32-bit Linux kernel will never be able to address more than 4 GiB simultaneously.

(Yes, I know about PAE. Now that 64-bit OSes are finally taking over, I hope we can start forgetting that nasty hack.
I don't believe it can help you in this case anyway.) The bottom line is that if you're running into the 1 GiB kernel VM limit, you can rebuild the kernel with a 2/2 split, but that directly impacts user space programs. 64-bit really is the right answer.
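For concreteness, here is how the 4 GiB of 32-bit virtual address space divides up under the two schemes described above (the usual values; a kernel's PAGE_OFFSET can be configured differently):

```
2/2 split:  user 0x00000000-0x7FFFFFFF (2 GiB) | kernel 0x80000000-0xFFFFFFFF (2 GiB)
3/1 split:  user 0x00000000-0xBFFFFFFF (3 GiB) | kernel 0xC0000000-0xFFFFFFFF (1 GiB)
```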
memory limit of the Linux kernel
What is the loopback interface and how does it differ from the eth0 interface? And why do I need to use it when mounting an ISO or running a service on localhost?
The loopback networking interface is a virtual network device implemented entirely in software. All traffic sent to it "loops back" and just targets services on your local machine. eth0 tends to be the name of the first hardware network device (on linux, at least), and will send network traffic to remote machines. You might see it as en0, ent0, et0, or various other names depending on which OS you're using at the time. (It could also be a virtual device, but that's another topic) The loopback option used when mounting an ISO image has nothing to do with the networking interface, it just means that the mount command has to first associate the file with a device node (/dev/loopback or something with a similar name) before mounting it to the target directory. It "loops back" reads (and writes, if supported) to a file on an existing mount, instead of using a device directly.
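The two meanings are easy to tell apart from the shell (Linux; the mount example needs root, so it is left as a comment):

```shell
# The loopback *network* interface is always present; the kernel lists it
# alongside the real interfaces:
grep 'lo:' /proc/net/dev
# The *loop device* used for mounting an ISO is a block device instead
# (needs root; mount sets up the /dev/loopN association for you):
#   mount -o loop image.iso /mnt/iso
```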
What is the loopback interface
I have an application that reads a file. Let's call it processname and the file ~/.configuration. When processname runs it always reads ~/.configuration and can't be configured differently. There are also other applications that rely on "~/.configuration", before and after, but not while processname is running. Wrapping processname in a script that replaces the contents of ~/.configuration is an option, but I recently had a power outage (while the contents were swapped out), where I lost the previous contents of said file, so this is not desirable. Is there a way (perhaps using something distantly related to LD_DEBUG=files processname?) for fooling a process into reading different contents when it tries to read a specific file? Searching and replacing the filename in the executable is a bit too invasive, but should work as well. I know it's possible to write a kernel module that takes over the open() call (https://news.ycombinator.com/item?id=2972958), but is there a simpler or cleaner way? EDIT: When searching for ~/.configuration in the processname executable I discovered that it tried to read another filename right before reading ~/.configuration. Problem solved.
In recent versions of Linux, you can unshare the mount namespace. That is, you can start processes that view the virtual file system differently (with file systems mounted differently). That can also be done with chroot, but unshare is better adapted to your case. Like chroot, you need superuser privileges to unshare the mount namespace. So, say you have ~/.configuration and ~/.configuration-for-that-cmd files. You can start a process for which ~/.configuration is actually a bind-mount of ~/.configuration-for-that-cmd, and execute that-cmd in there, like:

sudo unshare -m sh -c "
  mount --bind '$HOME/.configuration-for-that-cmd' '$HOME/.configuration' &&
  exec that-cmd"

that-cmd and all its descendant processes will see a different ~/.configuration. that-cmd above will run as root; use sudo -u another-user that-cmd if it needs to run as another user.
Making a process read a different file for the same filename
After configuring and building the kernel using make, why don't I have vmlinuz-<version>-default.img and initrd-<version>.img, but only got a huge vmlinux binary (~150MB)?
The compressed images are under arch/xxx/boot/, where xxx is the arch. For example, for x86 and amd64, I've got a compressed image at /usr/src/linux/arch/x86/boot/bzImage, along with /usr/src/linux/vmlinux. If you still don't have the image, check that the compression tool selected in your kernel config (gzip by default; bzip2, LZMA and others are also offered) is installed and working — but I guess if that were the problem, you'd get a descriptive error message, such as "gzip not found". As others already mentioned, initrds are not generated by the Linux compilation process, but by other tools. Note that unless, for some reason, you need external files (e.g. you need modules or udev to identify or mount /), you don't need an initrd to boot.
vmlinuz and initrd not found after building the kernel?
I saw a kernel option today in menuconfig that used braces for its checkbox. {*} Button This isn't listed in the legend at the top of the screen. [*] built-in [ ] excluded <M> module < > module capable What do the braces signify?
It represents an option that has been implied to a specific value by another option. The Gentoo wiki has a clear explanation and lists all the available markers that menuconfig can display — the hyphen, for example, is also listed there.
What do the kernel options in braces mean?
I have CentOS 7 64-bit installed on my desktop. After a recent system update, I am getting the below error while booting CentOS 7. Sometimes the system is able to boot and I can work on it, but it gives the same error at the next boot. After entering this:

systemctl status kdump.service

I get this:

● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled)
   Active: failed (Result: exit-code) since Thu 2015-01-22 02:55:49 MST; 39min ago
 Main PID: 1139 (code=exited, status=1/FAILURE)

Jan 22 02:55:49 localhost.localdomain kdumpctl[1139]: No memory reserved for crash kernel.
Jan 22 02:55:49 localhost.localdomain kdumpctl[1139]: Starting kdump: [FAILED]
Jan 22 02:55:49 localhost.localdomain systemd1: kdump.service: main process exited, code=exited, status=1/FAILURE
Jan 22 02:55:49 localhost.localdomain systemd1: Failed to start Crash recovery kernel arming.
Jan 22 02:55:49 localhost.localdomain systemd1: Unit kdump.service entered failed state.
Jan 22 02:55:49 localhost.localdomain systemd1: kdump.service failed.

system-config-kdump: command not found...
Install the required packages:

yum --enablerepo=debug install kexec-tools crash kernel-debug kernel-debuginfo-`uname -r`

Modify grub: a kernel argument must be added to /etc/default/grub to enable kdump. It's called crashkernel and it can be either auto or a predefined value, e.g. 128M, 256M, 512M etc. The line will look similar to the following:

GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet"

Change the value crashkernel=auto to crashkernel=128M, crashkernel=256M, ... Then regenerate the grub configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg

On a system with UEFI firmware, execute the following instead:

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

On IBM Z systems booting with zipl instead of grub: open the /etc/zipl.conf configuration file, locate the parameters= section, and edit the crashkernel= parameter (or add it if not present). For example, to reserve 128 MB of memory, use crashkernel=128M. Save and exit, then regenerate the zipl configuration by running zipl.

Enabling the service: to start the kdump daemon at boot time, type the following command as root:

chkconfig kdump on

This will enable the service for runlevels 2, 3, 4, and 5. Similarly, typing chkconfig kdump off will disable it for all runlevels. To start the service in the current session, use the following command as root:

service kdump start
Kdump.service FAILED centOS 7
I want to enable reversed path filtering to prevent source ip spoofing on my server. I noticed that I have the following settings at current: net.ipv4.conf.all.rp_filter = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.lo.rp_filter = 0 net.ipv4.conf.p4p1.rp_filter = 1 net.ipv4.conf.eth0.rp_filter = 1 The setting in all and the one in default are not the same. There are no explicit settings on my /etc/sysctl.conf file. I would like to what is the impact to the rest of the configurations between setting net.ipv4.conf.all.rp_filter = 1 and net.ipv4.conf.default.rp_filter = 1 Do I have to set both or just one of them?
According to this post titled: all vs. default in /proc/sys/net/ipv4/conf [message #3139]: When you change variables in the /proc/sys/net/ipv4/conf/all directory, the variable for all interfaces and default will be changed as well. When you change variables in /proc/sys/net/ipv4/conf/default, all future interfaces will have the value you specify. This should only affect machines that can add interfaces at run time, such as laptops with PCMCIA cards, or machines that create new interfaces via VPNs or PPP, for example. References Linux Firewall-related /proc Entries /proc/sys/net/ipv4/* Variables:
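To see every interface's effective value at once — including the special all and default entries — you can read the /proc tree directly (a sketch; the interface list will differ per machine):

```shell
# One rp_filter file per interface, plus the special "all" and "default":
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```

This is handy after changing either knob: it shows whether existing interfaces actually picked up the new value, or whether only future interfaces (via default) will.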
What is the difference between all and default in kernel setting? [duplicate]
I don't have enough confidence to do this alone and risk the server not to boot or something. I would like to upgrade kernel from: $ uname -r 4.9.0-6-amd64 $ uname -v #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) to kernel version 4.15 or 4.16. Whichever you recommend. I just think I know how to list versions available: $ apt-cache search linux-image | grep amd64 linux-headers-4.9.0-6-amd64 - Header files for Linux 4.9.0-6-amd64 linux-headers-4.9.0-6-rt-amd64 - Header files for Linux 4.9.0-6-rt-amd64 linux-image-4.9.0-6-amd64 - Linux 4.9 for 64-bit PCs linux-image-4.9.0-6-amd64-dbg - Debug symbols for linux-image-4.9.0-6-amd64 linux-image-4.9.0-6-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT linux-image-4.9.0-6-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-6-rt-amd64 linux-image-amd64 - Linux for 64-bit PCs (meta-package) linux-image-amd64-dbg - Debugging symbols for Linux amd64 configuration (meta-package) linux-image-rt-amd64 - Linux for 64-bit PCs (meta-package), PREEMPT_RT linux-image-rt-amd64-dbg - Debugging symbols for Linux rt-amd64 configuration (meta-package) linux-headers-4.9.0-3-amd64 - Header files for Linux 4.9.0-3-amd64 linux-headers-4.9.0-3-rt-amd64 - Header files for Linux 4.9.0-3-rt-amd64 linux-headers-4.9.0-4-amd64 - Header files for Linux 4.9.0-4-amd64 linux-headers-4.9.0-4-rt-amd64 - Header files for Linux 4.9.0-4-rt-amd64 linux-headers-4.9.0-5-amd64 - Header files for Linux 4.9.0-5-amd64 linux-headers-4.9.0-5-rt-amd64 - Header files for Linux 4.9.0-5-rt-amd64 linux-image-4.9.0-3-amd64 - Linux 4.9 for 64-bit PCs linux-image-4.9.0-3-amd64-dbg - Debug symbols for linux-image-4.9.0-3-amd64 linux-image-4.9.0-3-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT linux-image-4.9.0-3-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-3-rt-amd64 linux-image-4.9.0-4-amd64 - Linux 4.9 for 64-bit PCs linux-image-4.9.0-4-amd64-dbg - Debug symbols for linux-image-4.9.0-4-amd64 linux-image-4.9.0-4-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT 
linux-image-4.9.0-4-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-4-rt-amd64 linux-image-4.9.0-5-amd64 - Linux 4.9 for 64-bit PCs linux-image-4.9.0-5-amd64-dbg - Debug symbols for linux-image-4.9.0-5-amd64 linux-image-4.9.0-5-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT linux-image-4.9.0-5-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-5-rt-amd64 linux-headers-4.15.0-0.bpo.2-amd64 - Header files for Linux 4.15.0-0.bpo.2-amd64 linux-headers-4.15.0-0.bpo.2-cloud-amd64 - Header files for Linux 4.15.0-0.bpo.2-cloud-amd64 linux-headers-4.16.0-0.bpo.1-amd64 - Header files for Linux 4.16.0-0.bpo.1-amd64 linux-headers-4.16.0-0.bpo.1-cloud-amd64 - Header files for Linux 4.16.0-0.bpo.1-cloud-amd64 linux-image-4.15.0-0.bpo.2-amd64 - Linux 4.15 for 64-bit PCs linux-image-4.15.0-0.bpo.2-amd64-dbg - Debug symbols for linux-image-4.15.0-0.bpo.2-amd64 linux-image-4.15.0-0.bpo.2-cloud-amd64 - Linux 4.15 for x86-64 cloud linux-image-4.15.0-0.bpo.2-cloud-amd64-dbg - Debug symbols for linux-image-4.15.0-0.bpo.2-cloud-amd64 linux-image-4.16.0-0.bpo.1-amd64 - Linux 4.16 for 64-bit PCs linux-image-4.16.0-0.bpo.1-amd64-dbg - Debug symbols for linux-image-4.16.0-0.bpo.1-amd64 linux-image-4.16.0-0.bpo.1-cloud-amd64 - Linux 4.16 for x86-64 cloud linux-image-4.16.0-0.bpo.1-cloud-amd64-dbg - Debug symbols for linux-image-4.16.0-0.bpo.1-cloud-amd64 linux-headers-4.9.0-4-grsec-amd64 - Header files for Linux 4.9.0-4-grsec-amd64 linux-image-4.9.0-4-grsec-amd64 - Linux 4.9 for 64-bit PCs, Grsecurity protection (unofficial patch) linux-image-grsec-amd64 - Linux image meta-package, grsec featureset linux-image-cloud-amd64 - Linux for x86-64 cloud (meta-package) linux-image-cloud-amd64-dbg - Debugging symbols for Linux cloud-amd64 configuration (meta-package) I need headers too. On Ubuntu there is also package called extra or similarly, so I am confused not to see it here. What is the proper way of installing new kernel manually on Debian 9?
If you want to install a newer Debian-packaged kernel, you should use one from the backports repository. You seem to have that repository already added to your apt configuration, so you're all set. Since your current kernel is the basic amd64 version, I assume you won't need the realtime scheduler version, nor the cloud version. Just run apt-get install linux-image-4.16.0-0.bpo.1-amd64 linux-headers-4.16.0-0.bpo.1-amd64 i.e. "install the basic -amd64 version of the 4.16 kernel backported for Debian 9, and the corresponding headers package". Unlike for regular packages, the new version linux-image package will not outright replace the existing 4.9.0 kernel, but will install alongside it. (That's because the version number is included as part of the package name.) The bootloaders will automatically be configured at linux-image post-install to either present the available kernels in a version-number-based order, or if that is not possible for some bootloaders, just automatically set the most recently installed one as the preferred one. If it turns out that your new kernel won't boot, you can just select the previous kernel from the bootloader, and then remove the kernel package that proved to be non-functional. If you accidentally tell the package manager to remove the kernel you're currently running on, it is smart enough to know that isn't a good thing to do, and will abort the operation.
What is the proper way of installing new kernel manually on Debian 9?
Is there a way to tell from Bash what distro version # I'm running and also what Kernel version is included?
Basic commands will be the following:

# cat /etc/gentoo-release
Gentoo Base System release 2.1
# uname -r
3.1.6-gentoo

Also you can obtain this information in a "gentoo way" using the app-portage/gentoolkit package utils:

# equery list baselayout
 * Searching for baselayout ...
[IP-] [  ] sys-apps/baselayout-2.1:0
# eselect kernel list
Available kernel symlink targets:
  [1]   linux-3.1.4-gentoo
  [2]   linux-3.1.5-gentoo
  [3]   linux-3.1.6-gentoo *
  [4]   linux-3.1.7-gentoo
  [5]   linux-3.2.0-gentoo
  [6]   linux-3.2.0-gentoo-r1
How do I tell what version of Gentoo & Linux is running?
I know that many of the same programs run flawlessly on top of both kernels. I know that historically, the two kernels came from different origins. I know philosophically too that they stood for different things. My question is, today, in 2011, what makes a Unix kernel different from a Linux one, and vice versa?
There is no unique thing named "the Unix kernel". There are multiple descendants of the original Unix kernel source code trunk that forked branches from it at different stages and that have evolved separately according to their own needs. The mainstream ones these days are found in operating systems created either from System V source code (AIX, HP-UX, Solaris) or from BSD source code (OpenBSD, FreeBSD and Mac OS X). All of these kernels have their particular strengths and weaknesses, just like Linux and other "from scratch" Unix-like kernels (Minix, GNU Hurd, ...). Here is a non-exhaustive list of the areas where differences can be observed, in no particular order:

CPU architecture support
Availability of drivers
File systems supported
Virtualization capabilities
Scheduling features (alternate scheduling classes, real-time, ...)
Modularity
Observability
Tunability
Reliability
Performance
Scalability
API stability between versions
Open/closed source, license used
Security (e.g. privilege granularity)
Memory management
What are the main differences between Unix and Linux kernels today?
I've read that the color red indicates "kernel processes." Does that mean little daemons that are regulating which task gets to use the CPU? And by extension, transaction costs in an oversubscribed system? I'm running some large-scale geoprocessing jobs, and I've got two scripts running in parallel at the same time. The first script does the actual processing, on all 96 cores. It is responsible for almost all of the memory use. The second script uses curl to download the data to feed the first process, and it does so in parallel. I wrote it to download only until there are n_cores * 3 files downloaded. If that constraint isn't met, it waits for a minute or so and then check again. So, most of the time it isn't running -- or rather it is executing the Sys.sleep() command in R. I've experimented with using fewer cores for the downloading process. When I do so, it can't keep up with the processing script (I'm DLing from S3). TL;DR: Would my processes run faster if I could make htop less red? And are they red because there are more processes than cores?
Red represents the time spent in the kernel, typically processing system calls on behalf of processes. This includes time spent on I/O. There’s no point in trying to reduce it just for the sake of reducing it, because it’s not time that’s wasted — it’s time that’s spent by the kernel doing useful stuff (as long as you’re not thrashing, so look at the number of context switches etc.). I've experimented with using fewer cores for the downloading process. When I do so, it can't keep up with the processing script (I'm DLing from S3). suggests that your current setup is evenly balanced between the I/O needed to feed the processing, and the processing itself, which is a rather nice result. If you suspect that you’ve got too many processes running, and that that’s causing waste (by thrashing), then you could try reducing the number of geoprocessing jobs, to see if your overall throughput increases. The usual benchmarking tips apply: identify what you’re going to tweak, determine what resulting variations could occur and what they mean, only tweak one thing at a time, and measure everything.
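If you want a number rather than a colour: the fourth field of the cpu line in /proc/stat is the cumulative kernel-mode time, and a syscall-heavy command makes it climb (a sketch; dd with bs=1 is just a convenient way to generate lots of syscalls):

```shell
# System-wide: field 4 of the "cpu" line is cumulative jiffies spent in
# kernel mode since boot (the red part in htop, summed over all CPUs):
awk '/^cpu /{print "system jiffies:", $4}' /proc/stat
# One read()+write() pair per byte means almost all of dd's CPU time
# is kernel ("sys") time:
dd if=/dev/zero of=/dev/null bs=1 count=100000 2>/dev/null
awk '/^cpu /{print "system jiffies:", $4}' /proc/stat
```

Comparing the two readings before and after a workload tells you how much kernel time that workload cost, which is more useful for tuning than eyeballing the red bars.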
Lots of red in htop -- does that mean my tasks are tripping over each other?
What is the difference between the likely and unlikely calls in the kernel? While searching through the kernel source I found these statements:

# define likely(x)    __builtin_expect(!!(x), 1)
# define unlikely(x)  __builtin_expect(!!(x), 0)

Could somebody shed some light on it?
They are compiler hints for GCC. They're used in conditionals to tell the compiler if a branch is likely to be taken or not. It can help the compiler laying down the code in such a way that's optimal for the most frequent outcome. They are used like this:

if (likely(some_condition)) {
    // the compiler will try and make the code layout optimal for the case
    // where some_condition is true, i.e. where this block is run
    most_likely_action();
} else {
    // this block is less frequently used
    corner_case();
}

It should be used with great care (i.e. based on actual branch profiling results). A wrong hint can degrade performance (obviously). Some examples of how the code can be optimized are easily found by searching for GCC __builtin_expect. This blog post gcc optimisation: __builtin_expect for example details a disassembly with it. The kind of optimizations that can be done is very processor-specific. The general idea is that often, processors will run code faster if it does not branch/jump all over the place. The more linear it is, and the more predictable the branches are, the faster it will run. (This is especially true for processors with deep pipelines for example.) So the compiler will emit the code such that the most likely branch will not involve a jump, if that's what the target CPU prefers, for instance.
What is the difference between likely and unlikely calls in Kernel?
I want to add a particular new system call in the linux kernel 3.2.x but as a loadable kernel module (as I don't want to recompile the kernel again and again) I read through many posts on internet and also on SO, and some places claim that implementing system calls as loadable modules is not possible, while others say it is possible. Which is it? How is it done if it's possible?
It is not possible because the system call table (called sys_call_table) is a static-size array, and its size is determined at compile time by the number of registered syscalls. This means there is no space for another one. You can check the implementation for the x86 architecture in the arch/x86/kernel/syscall_64.c file, where sys_call_table is defined. Its size is exactly __NR_syscall_max+1. __NR_syscall_max is defined in arch/x86/kernel/asm-offsets_64.c as sizeof(syscalls) - 1 (it's the number of the last syscall), where syscalls is a table with all the syscalls. One possible solution is to reuse some existing (or deprecated, if your architecture has one — see sys_setaltroot for example) syscall number for yours, as this won't require more space in memory. Some architectures may also have holes in the syscall table (like the 64-bit version of x86), so you can use those too. You can use this technique if you are developing a new syscall and just want to avoid rebooting while experimenting. You will have to define your new system call, find an existing entry in the syscall table, and then replace it from your module. Doing this from a kernel module is not trivial, as the kernel does not export the sys_call_table symbol to modules as of version 2.6 (the last kernel version that had this symbol exported was 2.5.41). One way to work around this is to change your kernel to export the sys_call_table symbol to modules. To do this, you have to add the following two lines to kernel/kallsyms.c (don't do this on production machines):

extern void *sys_call_table;
EXPORT_SYMBOL(sys_call_table);

Another technique is to find the syscall table dynamically: you iterate over kernel memory, comparing each word with a pointer to a known system call function. Since you know the offset of that known syscall in the table, you can compute the table's beginning address.
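You can see the "present but not exported" situation from userspace (a sketch; on kernels with kptr_restrict set, the addresses print as zeros for unprivileged readers, and some hardened kernels hide the symbol entirely — hence the fallback message):

```shell
# sys_call_table appears in the kernel's full symbol table, even though
# it is not on the list of symbols exported to modules:
grep -w sys_call_table /proc/kallsyms || echo "symbol hidden on this kernel"
```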
Adding a new System call to Linux 3.2.x with a loadable kernel module [closed]
I'm modifying a bunch of initramfs archives from different Linux distros in which normally only one file is being changed. I would like to automate the process without switching to the root user to extract all files inside the initramfs image and pack them again. First I tried to generate a list of files for gen_init_cpio without extracting all contents of the initramfs archive, i.e. parsing the output of cpio -tvn initrd.img (like ls -l output) through a script which changes all permissions to octal and arranges the output into the format gen_init_cpio wants, like:

dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0

This involves some replacements and the script may be hard for me to write, so I've found a better way and I'm asking how safe and portable it is: In some distros we have an initramfs file with concatenated parts, and apparently the kernel parses the whole file, extracting all parts packed at a 1-byte boundary, so there is no need to pad each part to a multiple of 512 bytes. I thought this 'feature' could be useful for me to avoid recreating the archive when modifying files inside it. Indeed it works, at least for Debian and CloneZilla. For example, if we have modified the /init file of the initrd.gz of Debian 8.2.0, we can append it to the initrd.gz image with:

$ echo ./init | cpio -H newc -o | gzip >> initrd.gz

so initrd.gz has two concatenated archives: the original and its modifications. Let's see the result of binwalk:

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             gzip compressed data, maximum compression, has original file name: "initrd", from Unix, last modified: Tue Sep  1 09:33:08 2015
6299939       0x602123        gzip compressed data, from Unix, last modified: Tue Nov 17 16:06:13 2015

It works perfectly. But is it reliable? What restrictions do we have when appending data to initramfs files?
Is it safe to append without padding the original archive to a multiple of 512 bytes? From which kernel version is this feature supported?
It's very reliable and supported by all kernel versions that support initrd, AFAIK. It's a feature of the cpio archives that initramfs are made up of. cpio just keeps on extracting its input....we might know the file is two cpio archives one after the other, but cpio just sees it as a single input stream. Debian advises use of exactly this method (appending another cpio to the initramfs) to add binary-blob firmware to their installer initramfs. For example: DebianInstaller / NetbootFirmware | Debian Wiki Initramfs is essentially a concatenation of gzipped cpio archives which are extracted into a ramdisk and used as an early userspace by the Linux kernel. Debian Installer's initrd.gz is in fact a single gzipped cpio archive containing all the files the installer needs at boot time. By simply appending another gzipped cpio archive - containing the firmware files we are missing - we get the show on the road!
Appending files to initramfs image - reliable?
When you compile a kernel source, you can choose to sign kernel modules using the CONFIG_MODULE_SIG* options. The modinfo tool should handle the task of verifying the module signature, but there has been some bug in it for years, and the tool simply can't do the job anymore. All I get is the following: sig_id: PKCS#7 signer: sig_key: sig_hashalgo: md4 signature: 30:82:02:F4:06:09:2A:86:48:86:F7:0D:01:07:02:A0:82:02:E5:30: ... So there's no key and the hash algorithm is md4, which isn't even compiled in the kernel. So how to manually check and verify the module signature? Is that even possible?
Yes, that's possible, but it's quite involved. First you have to extract the module signature -- you can use the extract-module-sig.pl script from the kernel source for that:

$ scripts/extract-module-sig.pl -s MODULE.ko >/tmp/modsig
Read 789006 bytes from module file
Found magic number at 789006
Found PKCS#7/CMS encapsulation
Found 670 bytes of signature [3082029a06092a864886f70d010702a0]

Second, you have to extract the certificate and public key from the kernel; you can use the extract-sys-certs.pl script for that:

$ scripts/extract-sys-certs.pl /PATH/TO/vmlinux /tmp/cert.x509
Have 32 sections
Have 28167 symbols
Have 1346 bytes of certs at VMA 0xffffffff81be6db8
Certificate list in section .init.data
Certificate list at file offset 0xde6db8

$ openssl x509 -pubkey -noout -inform der -in /tmp/cert.x509 -out /tmp/pubkey

You can also extract the public key from the certs/signing_key.x509 or certs/signing_key.pem files from the linux kernel's build directory. Having done that, you have all the data you need in /tmp/modsig and /tmp/cert.x509 and can continue with the dozen or so steps necessary to verify a PKCS#7 signature. You can look at this blog post for the whole recipe.

I've tried to put the whole process (except for the extract-sys-certs.pl step) in a perl script. You can use it like this:

perl checkmodsig.pl /path/to/cert.x509 mod1.ko mod2.ko ...

YMMV. I've only tried this with a custom built kernel using sha512 signatures. This would of course be much better done by using the openssl libraries directly, instead of kludging together slow and fragile openssl x509, asn1parse and rsautl invocations.

checkmodsig.pl

use strict;

sub through {
    my ($cmd, $data, $cb) = @_;
    use IPC::Open2;
    my $pid = open2 my $from, my $to, ref $cmd ? @$cmd : $cmd;
    print $to $data;
    close $to;
    my $out;
    if($cb){
        while(<$from>){ last if $out = $cb->($_) }
    } else {
        local $/;
        $out = <$from>;
    }
    waitpid ($pid, 0);
    die "status $?" if $? != 0;
    $out;
}

sub gethash {
    my ($d) = @_;
    my ($alg, $hash);
    through [qw(openssl asn1parse -inform der)], $d, sub {
        if(/(\d+):d=\d+ +hl= *(\d+) +l= *(\d+) +prim: +OCTET STRING/){
            $hash = substr $d, $1 + $2, $3
        }elsif(/prim: +OBJECT +:(sha\w+)/){
            $alg = $1;
        }
        undef
    };
    $alg, $hash
}

use File::Temp;
my $tf = new File::Temp;
my $pub_key;
my @type = qw(PGP X509 PKCS7);
my $r = 0;

if((my $cert = shift) =~ /(\.x509)$|\.pem$/i){
    $pub_key = $tf->filename;
    system qw(openssl x509 -pubkey -noout), '-inform', $1 ? 'der' : 'pem',
        '-in', $cert, '-out', $pub_key;
    die "status $?" if $? != 0;
}
die "no certificate/key file" unless $pub_key;

for my $kof (@ARGV){
    open my $ko, '<', $kof or die "open $kof: $!\n";
    seek $ko, -4096, 2 or die "seek: $!";
    read $ko, my $d, 4096 or die "read: $!";
    my ($algo, $hash, $type, $signer_len, $key_id_len, $sig_len, $magic) =
        unpack 'C5x3Na*', substr $d, -40;
    die "no signature in $kof" unless $magic eq "~Module signature appended~\n";
    die "this script only knows about PKCS#7 signatures"
        unless $type[$type] eq 'PKCS7';
    my $hash = gethash substr $d, - 40 - $sig_len, $sig_len;
    die "hash not found" unless $hash;
    my ($alg, $vhash) = gethash through [qw(openssl rsautl -verify -pubin -inkey), $pub_key], $hash;
    seek $ko, 0, 0 or die "seek: $!";
    read $ko, my $d, (-s $ko) - $sig_len - 40 or die "read: $!";
    use Digest::SHA;
    my $fhash = new Digest::SHA($alg)->add($d)->digest;
    if($fhash eq $vhash){
        print "OK $kof\n";
    }else{
        print "**FAIL** $kof\n";
        $r = 1;
        warn 'orig=', unpack('H*', $vhash), "\n";
        warn 'file=', unpack('H*', $fhash), "\n";
    }
}
exit $r;
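The 40-byte trailer that the perl script unpacks with 'C5x3Na*' is the kernel's module-signature footer (five u8 fields, three pad bytes, a big-endian sig_len, then the magic string). Here is a minimal Python sketch of just that trailer parse, with the field layout taken from the unpack in the script above; the bytes in the example are synthetic, not from a real module:

```python
import struct

MAGIC = b"~Module signature appended~\n"  # 28 bytes, ends every signed .ko

def parse_sig_trailer(tail40):
    """Parse the 40-byte module-signature trailer at the end of a signed .ko."""
    if len(tail40) != 40 or not tail40.endswith(MAGIC):
        raise ValueError("no module signature trailer")
    # u8 algo, hash, id_type, signer_len, key_id_len; 3 pad bytes; __be32 sig_len
    algo, hash_, id_type, signer_len, key_id_len, sig_len = struct.unpack(
        ">5B3xI", tail40[:12])
    id_types = ("PGP", "X509", "PKCS7")
    return {"id_type": id_types[id_type], "sig_len": sig_len,
            "signer_len": signer_len, "key_id_len": key_id_len}
```

Given a module file, the PKCS#7 blob is then the sig_len bytes immediately preceding this trailer, just as the script reads it.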
How to verify a kernel module signature?
1,299,848,475,000
I updated Arch Linux with "pacman -Syu" and then, when I restarted, the system couldn't start. This is the report:

Warning: /lib/modules/4.11.9-1-ARCH/modules.devname not found - ignoring
version 232
Error: device 'UUID=b5a9a977-e9a7-4d3d-96a9-dcf9c3a9010d' not found. Skipping fsck.
Error: can't find UUID=b5a9a977-e9a7-4d3d-96a9-dcf9c3a9010d
You are now being dropped into a emergency shell.
Can't access tty: job control turned off

In that shell my keyboard doesn't work. I'm trying with a livecd of Arch Linux: mounting the partitions and using chroot. I checked the uuid of the root partition in "/etc/fstab". This is my fstab:

# /dev/sda2
UUID=b5a9a977-e9a7-4d3d-96a9-dcf8c3a9010d / ext4 rw,relatime,data=ordered 0 1
# /dev/sda1
UUID=FBA9-977B /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
# /dev/sda4
UUID=a43b8426-c93a-4f32-99c8-9dd5cf645373 /home ext4 rw,relatime,data=ordered 0 2
# /dev/sda3
UUID=9eec735e-3157-4e0e-a5c6-ef3a7c674201 none swap defaults 0

And this is the result of "lsblk -f":

NAME   FSTYPE   LABEL UUID                                 MOUNTPOINT
loop0  squashfs                                            /run/archiso/sfs/airootfs
sda
├─sda1 vfat           FBA9-977B
├─sda2 ext4           b5a9a977-e9a7-4d3d-96a9-dcf8c3a9010d /mnt
├─sda3 swap           9eec735e-3157-4e0e-a5c6-ef3a7c674201
└─sda4 ext4           a43b8426-c93a-4f32-99c8-9dd5cf645373 /mnt/home

I've updated the system again with "pacman -Syu" and I tried running "mkinitcpio -p linux", but it hasn't solved the problem (even though the command's output looks fine).
This is the report:

==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'default'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
==> Starting build: 4.11.9-1-ARCH
  -> Running build hook: [base]
  -> Running build hook: [udev]
  -> Running build hook: [block]
WARNING: Possibly missing firmware for module: aic94xx
WARNING: Possibly missing firmware for module: wd719x
  -> Running build hook: [autodetect]
  -> Running build hook: [modconf]
  -> Running build hook: [filesystems]
  -> Running build hook: [keyboard]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip-compressed initcpio image: /boot/initramfs-linux.img
==> Image generation successful
==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'fallback'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
==> Starting build: 4.11.9-1-ARCH
  -> Running build hook: [base]
  -> Running build hook: [udev]
  -> Running build hook: [block]
WARNING: Possibly missing firmware for module: aic94xx
WARNING: Possibly missing firmware for module: wd719x
  -> Running build hook: [modconf]
  -> Running build hook: [filesystems]
  -> Running build hook: [keyboard]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip-compressed initcpio image: /boot/initramfs-linux-fallback.img
==> Image generation successful

I tried to change the order of HOOKS in "/etc/mkinitcpio.conf", but it didn't work. This is the current order:

base udev block autodetect modconf filesystems keyboard fsck

"uname -r" returns: 4.11.7-1-ARCH
"pacman -Q linux" returns: linux 4.11.9-1

The file from the warning, "/lib/modules/4.11.9-1-ARCH/modules.devname", exists. I tried to install and use "linux-lts" but the result is the same. I use grub and I tried to reconfigure it too. What can I do?
I just forgot to mount /boot (thank you, jasonwryan). The solution to this problem, in my case, was:

Use a livecd to mount all partitions and use chroot.
Update: pacman -Syu
Regenerate the initramfs using: mkinitcpio -p linux
If you use grub: grub-mkconfig -o /mnt/boot/grub/grub.cfg
Restart.
Cannot start archlinux after update: Cannot find uuid
1,299,848,475,000
There used to be a kernel config option called sched_user or similar under cgroups. This allowed (to my knowledge) all users to fairly share system resources. In 2.6.35 it is not available. Is there a way I can configure my system to automatically share io/cpu/memory resources between all users (including root?). I have never set up a cgroup before, is there a good tutorial for doing so? Thank you very much.
The kernel documentation provides a general coverage of cgroups with examples. The cgroups-bin package (which depends on libcgroup1) already provided by the distribution should be fine. Configuration is done by editing the following two files:

/etc/cgconfig.conf: used by libcgroup to define control groups, their parameters and mount points.
/etc/cgrules.conf: used by libcgroup to define the control groups a process belongs to.

Those configuration files already contain examples, so try adjusting them to your requirements. The man pages cover their configuration quite well. Afterwards, start the workload manager and rules daemon:

service cgconfig restart
service cgred restart

The workload manager (cgconfig) is responsible for allocating the resources.

Adding a new process to the manager:

cgexec [-g <controllers>:<path>] command [args]

Adding an already running process to the manager:

cgclassify [-g <controllers>:<path>] <pidlist>

Or automatically via the cgrules.conf file and the CGroup Rules Daemon (cgred), which forces every newly spawned process into the specified group.

Example /etc/cgconfig.conf:

group group1 {
    perm {
        task {
            uid = alice;
            gid = alice;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    cpu {
        cpu.shares = 500;
    }
}
group group2 {
    perm {
        task {
            uid = bob;
            gid = bob;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    cpu {
        cpu.shares = 500;
    }
}
mount {
    cpu = /dev/cgroups/cpu;
    cpuacct = /dev/cgroups/cpuacct;
}

Example /etc/cgrules.conf:

alice cpu group1/
bob cpu group2/

This will share the CPU resources about 50-50 between the users 'alice' and 'bob'.
How can I configure cgroups to fairly share resources between users?
1,299,848,475,000
Given a distribution and its version, I can find which version of kernel it uses, e.g. for Ubuntu they are listed here, and for currently supported versions of Fedora they are here. In general, however, I'm interested in a reverse lookup: given kernel version X I'd like to find which distros are still using X or older versions. Is there any easy way to do this, at least for the most popular distributions? The use case of this is to decide whether I should bother supporting older Linux versions than version X in my software, if the newer one offers some feature I'd like to use.
So I'm not sure if you're looking to do this programmatically or not. But the first step you'd need to accomplish this is a database that catalogues all of this sort of information for each distribution and their respective releases. Luckily… that is exactly what distrowatch.com is. You can gather this information using their advanced search page, which has a cool feature that allows you to search for distribution releases that include a specific version of a package. In this case, you're interested in the linux package. Searching for a specific version of that package (which corresponds to the kernel version) will give you a nice list of distributions followed by the releases of that distribution that ship with that package version. I don't know of any DistroWatch API, so if you need to do this programmatically, you'll probably have to do some html parsing. But the format for the query to generate the results page for a given kernel version would be as follows: distrowatch.com/search.php?pkg=linux&pkgver=VERSION&distrorange=InAny#pkgsearch Play around with that, and you might be able to get a nice little tool to do exactly what you're trying to do. If anyone knows of a better way to search DistroWatch's Database, please chime in. It'd be really nice, since they have such a treasure-trove of information.
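Since the query format is fixed, building the search URL can be scripted. A small sketch, with the URL pattern taken from the answer above (the https:// prefix is my addition; DistroWatch has no official API, so treat any HTML parsing built on top of this as fragile):

```python
def distrowatch_kernel_query(version, distrorange="InAny"):
    """Build the DistroWatch package-search URL for a given kernel version."""
    return ("https://distrowatch.com/search.php"
            f"?pkg=linux&pkgver={version}&distrorange={distrorange}#pkgsearch")

# e.g. distrowatch_kernel_query("4.9") gives the results page for kernel 4.9
```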
How to find out what distros are using particular Linux version?
1,299,848,475,000
How to know whether a particular patch is applied to the kernel? Especially RT-Preempt Patch.
In the case of preempt you can just use uname:

uname -v
#23 SMP PREEMPT RT Fri Oct 16 11:52:29 CET 2012

The string PREEMPT RT shows that you are using a kernel version with the realtime patch. Some other patches also change the uname string, so this can help there as well. If this is not the case you can try to look at your .config. The file can be found in the /boot directory or (if enabled) by using cat /proc/config.gz. Maybe there is also a copy in /usr/src/linux (or wherever you put the kernel sources). Once you have found the config file, you can grep for specific settings and find out whether a patch is used.
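If you go the config-file route, the grep can be scripted. A sketch that classifies the preemption model from a config text; the option names (CONFIG_PREEMPT_RT_FULL from older RT patches, CONFIG_PREEMPT_RT in newer kernels) are assumptions worth checking against your tree, and the config snippets in the example are synthetic:

```python
import re

# Assumed RT option names; older RT patch sets used CONFIG_PREEMPT_RT_FULL,
# newer kernels use CONFIG_PREEMPT_RT.
RT_OPTIONS = ("CONFIG_PREEMPT_RT_FULL", "CONFIG_PREEMPT_RT")

def preemption_model(config_text):
    """Return 'rt' if an RT option is set, 'preempt' for CONFIG_PREEMPT=y, else 'other'."""
    enabled = set(re.findall(r"^(CONFIG_\w+)=y$", config_text, re.M))
    if any(opt in enabled for opt in RT_OPTIONS):
        return "rt"
    if "CONFIG_PREEMPT" in enabled:
        return "preempt"
    return "other"
```

Feed it the contents of /boot/config-$(uname -r) or the decompressed /proc/config.gz.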
Checking linux Kernel for RT-Preempt Patch
1,299,848,475,000
I'd like to have all my modules built-in, but this fails with iwlagn: iwlagn 0000:03:00.0: request for firmware file 'iwlwifi-6000-4.ucode' failed. iwlagn 0000:03:00.0: no suitable firmware found! The microcode file exists in /lib/firmware and the whole thing works just fine if I compile iwlagn as module. I have no idea where it's looking for the file or what's wrong - any ideas?
Have a look at the CONFIG_FIRMWARE_IN_KERNEL, CONFIG_EXTRA_FIRMWARE, and CONFIG_EXTRA_FIRMWARE_DIR configuration options (found at Device Drivers -> Generic Driver Options). The first option will enable firmware being built into the kernel, the second one should contain the firmware filename (or a space-separated list of names), and the third where to look for the firmware. So in your example, you would set those options to:

CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE='iwlwifi-6000-4.ucode'
CONFIG_EXTRA_FIRMWARE_DIR='/lib/firmware'

A word of advice: compiling all modules into the kernel is not a good idea. I think I understand your ambition because at some point I was also desperate to do it. The problem with such an approach is that you cannot unload the module once it is built in - and, unfortunately, wireless drivers especially tend to be buggy, which makes it necessary to reload their modules. Also, in some cases, a module version of a recent driver will just not work.
Custom kernel: fails to load firmware when module built-in
1,299,848,475,000
I just updated one of our Debian Jessie servers and the kernel was updated; nothing special, as we have done this many times. But for the first time there were some warnings when the grub configuration file was being generated. I have never seen them before. As far as I can tell the system runs nicely after a reboot.

Setting up linux-image-3.16.0-4-amd64 (3.16.7-ckt25-2+deb8u3) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
done

I searched for the warning online, but I couldn't find a decent explanation that made sense to me (maybe I did not understand it?) and also couldn't tell whether it can be ignored. Does anyone here have an idea? Thanks
According to info from Peter Rajnoha on an old 2014 Fedora bug, 1152185: "The warning is there because if lvmetad is already instantiated and running, then using use_lvmetad=0 will cause LVM commands run under this setting to not notify lvmetad about any changes - therefore lvmetad may miss some information - hence the warning." https://bugzilla.redhat.com/show_bug.cgi?id=1152185

However, in our case use_lvmetad = 0, so I tend to believe the warnings appear only during the update and the grub reconfiguration. According to the explanations in the bug report, this is connected with lvm2-monitor, which is happily running on my system, and I believe on yours too. Please check out the Process line:

# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/lib/systemd/system/lvm2-monitor.service; enabled)
   Active: active (exited) since Sat 2016-07-09 04:04:49 EEST; 34min ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 328 ExecStart=/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 328 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/lvm2-monitor.service

I do not see any traces of the warning after reboot, and based on the other information I believe the warning is safe to ignore at this stage. If you get any more or other warnings, you should look into it further. Also, I used to receive LVM warnings on each image update or grub reconfiguration, about the names I believe, which turned out to be unimportant and most probably connected to the old hardware. So this is not uncommon.

Preexo, I hope that this has answered your two concerns. Rubo77, I hope I have been helpful for you too. Kind regards!
kernel update - WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
1,299,848,475,000
I have an Intel wireless card driven by iwlwifi, and I can see the following message in dmesg:

iwlwifi 0000:03:00.0: loaded firmware version 17.168.5.3 build 42301

Given that I know which blob is loaded, how can I find out the version of this blob (.ucode file)? If you look below, where the ucode is loaded, it doesn't tell me the version information, just that a blob was loaded. But I know Intel versions these.

$ sudo dmesg | grep ucode
[   26.132487] iwlwifi 0000:03:00.0: firmware: direct-loading firmware iwlwifi-6000g2a-6.ucode
[40428.475015] (NULL device *): firmware: direct-loading firmware iwlwifi-6000g2a-6.ucode
The iwlwifi driver loads the microcode file for your wifi adapter at startup. If you want to know the version of the blobs you have on your machine, try Andrew Brampton's script. Run:

## Note the firmware may be stored in `/usr/lib`
./ucode.py /lib/firmware/iwlwifi-*.ucode

And compare the output to your journal (dmesg output). Note that the script works with python2.
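If you'd rather not depend on the python2 script, the version string can be pulled out of the file header directly. The layout below is an assumption taken from the kernel's struct iwl_tlv_ucode_header (a zero word, the magic 0x0A4C5749, then a 64-byte human-readable version string), so double-check it against your kernel tree; the bytes in the example are synthetic:

```python
import struct

IWL_TLV_UCODE_MAGIC = 0x0A4C5749  # b"IWL\n" read as a little-endian u32

def iwl_ucode_version(data):
    """Return the human-readable version string from a TLV-format iwlwifi blob."""
    zero, magic = struct.unpack_from("<II", data, 0)
    if zero != 0 or magic != IWL_TLV_UCODE_MAGIC:
        raise ValueError("not a TLV-format iwlwifi ucode file")
    # 64-byte NUL-padded human-readable version follows the two header words
    human = data[8:8 + 64]
    return human.split(b"\0", 1)[0].decode("ascii")
```

Usage would be iwl_ucode_version(open("/lib/firmware/iwlwifi-6000g2a-6.ucode", "rb").read()); note this only applies to the newer TLV firmware format, not the very old pre-TLV blobs.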
How can I parse the microcode (ucode) in iwlwifi to get the version numbers?
1,299,848,475,000
I've been wondering for the last few days how exactly this works. We can set kernel runtime parameters using sysctl or echo (e.g. echo 1 > /proc/sys/module/exactParameter), but in /sys/module/module/parameters/parameter we can also set values. Are the parameters in /proc/sys/ related only to code hard-compiled into the kernel, or can there be parameters for loadable kernel modules there too? LKMs, after being loaded into the running kernel, reveal their parameters in /sys/module/module/parameters/param. Does that mean there are no parameters there for modules compiled into the kernel? What is the difference between the two directories?
There is little relation between /proc/sys and /sys other than the fact that both are kernel interfaces and a coincidence of names. /proc/sys is an interface to sysctl, which are kernel configuration parameters. Reading or modifying /proc/sys/foo/bar is equivalent to getting or setting the foo.bar sysctl. Sysctl values are organized by semantic categories, they are not intrinsically related to the structure of the kernel. Many sysctl values are settings that are present on every Linux system regardless of what drivers or features are compiled in; some are related to optional features (e.g. certain network protocols) but never to specific hardware devices. /sys/module is, as the name indicates, an interface to kernel modules. Each directory corresponds to one kernel module. You can read, and sometimes modify, the parameters of the module foo by writing to /sys/module/foo/parameters/*. Components that are loaded in the kernel read their parameters from the kernel command line. These parameters cannot be set at runtime (at least not through an automatically-generated interface like /sys/module: the component can expose a custom interface for this).
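The name-to-path mapping described above is mechanical: a sysctl name maps to /proc/sys with dots turned into slashes, and a module parameter lives under /sys/module. A tiny sketch (note the caveat that a handful of real sysctl names contain literal dots, e.g. some net.ipv4.conf interface names, which this naive translation would mangle):

```python
def sysctl_path(name):
    """Map a sysctl name like 'kernel.sysrq' to its /proc/sys file."""
    return "/proc/sys/" + name.replace(".", "/")

def module_param_path(module, param):
    """Map a module parameter to its /sys/module file."""
    return f"/sys/module/{module}/parameters/{param}"
```

So reading sysctl_path("kernel.sysrq") is equivalent to getting the kernel.sysrq sysctl, while module_param_path("iwlwifi", "swcrypto") points at a loadable-module parameter.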
/proc/sys vs /sys/modules/mod/parameter
1,299,848,475,000
Is there any automatic Linux kernel configuration tool? I have found the make localmodconfig method, but it is certainly very limited. I have searched the web but unfortunately have not come to any acceptable result. Although I am quite conversant with kernel configuration issues, I would like to cut down the time wasted on configuring every new system with particular hardware, since it is technical rather than creative work.
Now that we talked about this a bit in the comments, the answer for you is: no, there isn't. The main reason for that conclusion is that I think you are not looking for a tool to configure a kernel, but to automatically tune the kernel for your specific (and yet unstated) use case. As stated in the comments, you can skip unneeded drivers and compile the wanted drivers statically into the kernel. That saves you some time during the boot process, but not after that, because the important code is the same whether built in or a module.

Kernel tuning
The kernel offers some alternatives; you mentioned the scheduler yourself. Which scheduler works best for you depends on your use case, the applications you use, and the kind of load you put on your system. No install-and-run program will determine the best scheduler for you, if there even is such a thing. The same holds for buffers and buffer sizes. Also, a lot of (most?) settings are, or at least can be, set at runtime, not compile time.

Optimal build options
Even without automation, you can optimize the build options when compiling the kernel if you have a very specialized CPU. I know of the Buildroot environment, which gives you a nice framework for that. This may also help you if you are looking to create the same OS for many platforms. While this helps you with building, it will not automate kernel tuning. That's why I and others tell you to use a generic kernel. Without a specific problem to solve, building your own kernel is not worthwhile. Maybe you can get more help by identifying/stating the problem you are trying to solve.
Automatic kernel configuration tool
1,299,848,475,000
I have my terminal always opened (Fedora 22), because all my work I do from there. Sometimes I search some info in browser or just have fun. After 20-30 minutes of browsing (browser starts not from command line) I return to terminal and saw something strange - it was in all tabs of terminal: Message from syslogd@localhost at Jul 17 23:17:19 ... kernel:NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [migration/2:21] Message from syslogd@localhost at Jul 17 23:17:38 ... kernel:CPU: 2 PID: 21 Comm: migration/2 Not tainted 4.0.7-300.fc22.i686 #1 Message from syslogd@localhost at Jul 17 23:17:39 ... kernel:Hardware name: LENOVO 20126/123456789, BIOS 5BCN30WW 10/10/2012 Message from syslogd@localhost at Jul 17 23:17:39 ... kernel:task: f45f0000 ti: f45ec000 task.ti: f45ec000 Message from syslogd@localhost at Jul 17 23:17:39 ... kernel:Stack: Message from syslogd@localhost at Jul 17 23:17:40 ... kernel:Call Trace: Message from syslogd@localhost at Jul 17 23:17:40 ... kernel: <IRQ> Message from syslogd@localhost at Jul 17 23:17:40 ... 
kernel:#000<IRQ> #000868>] do_softirq_own_stack+0x28/0x30#0000xc0 [mac80211]#000c80211]#000014#000es iptable_nat nf_conntrack_localhost#000frag_ipv4 nf_nat_ipv4 nf_kernel#000conntrack#000#000#000#000el:#000_mangle iptable_security#000ul 17 23:17:40#000#000hda_codec_realtek snd_hda_codec_#000eneric#000arc4 s#000d_hda_intel#000rtl8192ce s#000d_hda_co#000#000#000#000�#001#000#000-#000#000#000�s#003�09b3e98>] ip_rcv+0x2e8/0x410#000#000#000#000%#000#000#000localhost.localdomain#000videob#025#000#000#000kernel#000Y#0009#000#000#025#000#000#000_MACHINE_ID#000-#000#000#000#006#000#000#000�'g�p&g�#001#000#000#000#000#000#000#000#020#026#000�#001#000#000#000#000#000#000#000#000#000#000#000#025#000#000#000_TRANSPORT#0001#025#000#000#000PRIORITY#0002#000#000-#000#000#000#006#000#000#000�'g�p&g�#001#000#000#000#000#000#000#000Pw#003�#006#000#000#000#000#000#000#000#000#000#000#000-#000#000#0000r#003��'g�p&g�#000#000#000#000#000#000#000#0008r#003� #000#000#000#000#000#000#000#000#000#000#000#025#000#000#0006036995285#000#0005#000#000#000 k#003�045c0c0>]... and a bit more stuff like these last long line. Laptop didn't behave like something wrong, it was just this log in all tabs of terminal. What's this???
Seems like a bug in the updated kernel; but this may be related to your laptop battery's poor performance. You can confirm this by checking the ACPI (Advanced Configuration and Power Interface) modules. When my kernel was updated, I restarted my system and started the new kernel; however, it failed to load and the same error messages were sent to the terminal. I reverted back to my old kernel, which is still working for me. Maybe (I'm not sure) the newer kernel modules have some enhancements which cannot be supported by the current power source. Like, it needs more power or something. Also, my laptop's battery performance has declined severely and it needs to be replaced in my case.

EDIT (based on Nikos Alexandris's comment): You may consider replacing your charge source; it may have something to do with power management.
What does "kernel:NMI watchdog: BUG: soft lockup" followed by other errors mean?
1,299,848,475,000
While troubleshooting an Oracle Linux 6.3 server (RHEL derivative) I tried to use some of the Magic SysRq key commands for the first time. No such luck, so I had to hard reboot. When it came back up I checked if SysRq was enabled...

> sysctl kernel.sysrq
kernel.sysrq = 0

But on our Oracle Linux 7.2 (RHEL derivative) systems...

> sysctl kernel.sysrq
kernel.sysrq = 16

Looking at the kernel documentation for sysrq:

0 - disable sysrq completely
1 - enable all functions of sysrq
>1 - bitmask of allowed sysrq functions (see below for detailed function description):
  2 =   0x2 - enable control of console logging level
  4 =   0x4 - enable control of keyboard (SAK, unraw)
  8 =   0x8 - enable debugging dumps of processes etc.
 16 =  0x10 - enable sync command
 32 =  0x20 - enable remount read-only
 64 =  0x40 - enable signalling of processes (term, kill, oom-kill)
128 =  0x80 - allow reboot/poweroff
256 = 0x100 - allow nicing of all RT tasks

According to Fedora's QA for SysRq:

Stock Fedora and RHEL kernels do have this functionality enabled at compile-time, but the distributions disable it at boot time, by default, using sysctl.conf.

Enabling this functionality by default on all of our systems seems like a good idea. On the off chance a system locks up, you can at least semi-gracefully shut it down. My questions...

If it's such an obviously good idea, why is the feature disabled in 6.X, and restricted to just filesystem syncs in 7.X?
Are there any risks in setting kernel.sysrq to 1 on all of our systems?
You might not want some random person to be able to walk up to the keyboard and reset the machine, or even worse, start printing registers, syslog or all tasks to the console, all without logging in. It's a potential security issue. I selectively enable it, for example, on hardware in our datacenter hooked up to a serial console concentrator. I disable it on our end-user workstations.
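The bitmask from the question can be decoded mechanically. A small sketch using the flag values listed in the kernel documentation quoted above, which makes it easy to see why kernel.sysrq = 16 permits only the sync command:

```python
# Flag values from the kernel's sysrq documentation
SYSRQ_FLAGS = {
    0x2:   "console logging level control",
    0x4:   "keyboard control (SAK, unraw)",
    0x8:   "debugging dumps of processes",
    0x10:  "sync command",
    0x20:  "remount read-only",
    0x40:  "signalling of processes (term, kill, oom-kill)",
    0x80:  "reboot/poweroff",
    0x100: "nicing of all RT tasks",
}

def decode_sysrq(value):
    """Return the sysrq functions enabled by a kernel.sysrq value."""
    if value == 0:
        return []          # sysrq disabled completely
    if value == 1:
        return list(SYSRQ_FLAGS.values())  # everything enabled
    return [name for bit, name in SYSRQ_FLAGS.items() if value & bit]
```

Feeding it the value read from /proc/sys/kernel/sysrq tells you exactly what the distribution default allows.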
Why is Magic SysRq not enabled by default on some systems? Is there a risk?
1,299,848,475,000
ksplice is an open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. (From Wikipedia.) Is there a downside to using ksplice? Does it introduce any kind of instability? If not, why is it not included by default in more Linux distributions?
Technically it's very sound. I think the reasons distributions don't provide this method of patching yet are:

It does not integrate with the existing update methods (packaging wise).
It adds to the burden of the distro to provide another method of upgrading.
Is there a downside to ksplice?
1,299,848,475,000
I'm running a fresh install of Kubuntu 20.04. Many times when I shutdown (not every time, but often), it pauses for a minute or so before showing the error: [drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe A FIFO underrun This is usually associated with screen flickers. The screen also sometimes flickers during normal usage. Googling has yielded MANY reports of similar issues, but the given solution always seems to be "update your kernel" or to use a workaround that's deprecated (because it was for an older kernel). Example: drm/i915: Resetting chip after gpu hang I'm currently on kernel 5.4.42. I've also tried 5.4.0.29 (since 5.4.0 is what originally shipped with Kubuntu) and 5.6.14 (the latest stable). All have the same issue. I've tried updating drivers via sudo ubuntu-drivers autoinstall but the behavior is the same.
I was able to prevent this from happening by disabling C-States in my laptop's firmware configuration ("BIOS"). For reference, it's a Dell Latitude 5490. Found the solution here: https://askubuntu.com/questions/895329/flickering-screen-cpu-pipe-b-fifo-underrun-when-i-use-the-termnal
[drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe A FIFO underrun
1,299,848,475,000
In my systemd journal (journalctl) I often see this message:

hibernation is restricted; see man kernel_lockdown.7

This seems to stem from the kernel lockdown feature that (only?) is active when you boot in UEFI mode with secure boot enabled. As far as I understand, this feature is supposed to prevent a program running in user space from modifying the kernel. While I do understand that much, I just don't get one thing: Why does kernel lockdown disable hibernation altogether? What exactly is "insecure" about hibernation that it is disabled? It seems a locked-down kernel does not want me to hibernate my device.

Linux kernel v5.6.15
Fedora 32 Silverblue

Cross-posted at Fedora Ask.
As mentioned in the manpage, Unencrypted hibernation/suspend to swap are disallowed as the kernel image is saved to a medium that can then be accessed. Unencrypted hibernation stores the contents of the hibernated system’s memory as-is on disk. This allows an attacker to modify those contents while the system is hibernated, resulting in changes to the running system when it is resumed, thus defeating the lockdown. The manpage gives false hope that encrypted hibernation would be supported in lockdown, but that’s currently not the case, and the real requirement appears to be signed hibernation images rather than (or presumably in addition to, depending on the lockdown mode) encrypted images. Matthew Garrett has been working on fixing this; he described his proposal to get hibernation working with lockdown in February 2021, and gave an update with practical solutions to a couple of the remaining issues in December 2021. The general idea is to tie hibernation images to TPM states, such that a locked down system will only resume a hibernation image generated on that system and not modified since; getting there requires both knowing what TPM state is valid for the image, and that the TPM state was arrived at by the kernel on its own.
Why does the kernel lockdown prevent hibernation?
1,299,848,475,000
On Linux and Windows, I'm used to the situation that I require a 64-bit kernel to have a system with multiarch/WoW where I can run 32-bit and 64-bit software side-by-side. And then, years ago it blew my mind when someone showed me that MacOS 10.6 Snow Leopard could run 64-bit applications with the kernel in 32-bit mode. This may be largely forgotten now because it was a one-time technology transition. With the hardware ahead of the curve in the mobile space, as far as I know this was never needed in the move to 64-bit for iOS and Android. My question: What would it take to get the same capability in a 32-bit Linux kernel (i386 or armhf)? I understand that this probably isn't trivial. If it was, Microsoft could have put the feature into Windows XP 32-bit. What are the general requirements though? Has there ever been a proposed patch or proof-of-concept? In the embedded world I think this would be especially helpful, as 64-bit support can lag behind for a long time in device drivers.
Running 64-bit applications requires some support from the kernel: the kernel needs to at least set up page tables, interrupt tables etc. as necessary to support running 64-bit code on the CPU, and it needs to save the full 64-bit context when switching between applications (and from applications to the kernel and back). Thus a purely 32-bit kernel can’t support 64-bit userspace. However a kernel can run 32-bit code in kernel space, while supporting 64-bit code in user space. That involves handling similar to the support required to run 32-bit applications with a 64-bit kernel: basically, the kernel has to support the 64-bit interfaces the applications expect. For example, it has to provide some mechanism for 64-bit code to call into the kernel, and preserve the meaning of the parameters (in both directions). The question then is whether it’s worth it. On the Mac, and some other systems, a case can be made since supporting 32-bit kernel code means drivers don’t all have to make the switch simultaneously. On Linux the development model is different: anything in the kernel is migrated as necessary when large changes are made, and anything outside the kernel isn’t really supported by the kernel developers. Supporting 32-bit userland with a 64-bit kernel is certainly useful and worth the effort (at least, it was when x86-64 support was added), I’m not sure there’s a case to be made for 64-bit on 32-bit...
What does it take to run 64-bit userland software on a 32-bit kernel?
1,299,848,475,000
There are some tools inside the kernel source tree, in <kernel source root directory>/tools; perf is one of them. In Ubuntu I think the tools inside this folder are available as the package linux-tools. How can I compile them from source, install them and run them?
What's wrong with the following?

make -C <kernel source root directory>/tools/perf
How can I compile, install and run the tools inside kernel/tools?
1,299,848,475,000
I already posted this over at reddit, but got no response until now. I bought this cable just to find out my system doesn't do anything. Both lsusb and tail -f /var/log/kern.log don't show any difference when plugging the cable in and out. Is it worth trying to get this to work or should I just send it back directly? What is the status of DP via USB-C in Linux? (Found a lot of rather confusing questions and answers out there)

$ lspci -d ::0c03 -k
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
        Subsystem: CLEVO/KAPOK Computer Sunrise Point-LP USB 3.0 xHCI Controller
        Kernel driver in use: xhci_hcd
01:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
        Subsystem: CLEVO/KAPOK Computer ASM1142 USB 3.1 Host Controller
        Kernel driver in use: xhci_hcd

OS: elementary OS 0.4.1 Loki
Kernel: 4.9.18-040918-generic
Hardware: Dual-Core Intel® Core™ i5-7200U CPU @ 2.50GHz, Intel Corporation Device 5916 (rev 02)
[EDIT: I append at the end of this answer a very brief update, one year after I gave the answer here. If this update should be a second, separate answer, please let me know. Apart from this update at the end, the answer is unchanged.]

Your questions are very timely, even though you asked them 7 months ago. And you asked two questions, so you get two answers:

Is it worth trying to get this to work or should I just send it back directly?

A set of kernel patches to support DisplayPort over USB-C have just been published to the Linux-kernel archive here. So for the moment, you need to apply patches and roll your own kernel for it to be possibly worthwhile. (This is less scary than it might seem at first, so I hope you'll consider this encouragement and not the opposite.)

A second constraint is that according to that post in the Linux-kernel archive, the patches are good for hardware platforms that use FUSB controllers. He will soon also publish support for UCSI controllers -- and I think (but am not positive) that both Intel and ASMedia controllers are of this type. To quote him:

    I've tested these with a platform that has fusb302, and also with UCSI platforms. The UCSI driver will need separate support for alternate modes that I'm not including to this series. I'm still working on it.

In other words, "soon."

What is the status of DP via USB-C in Linux?

I learned about the above in an article in Phoronix, and the article states that the hope is to merge these patches into the 4.19 kernel.

Finally, it's worth noting that for the particular case of DisplayPort over USB-C, the cable is entirely passive and there is a rather mature standard, so you can be close to certain that your cable WILL work once there is OS support for it. This is also true of Thunderbolt over USB-C, but not true of HDMI, for example: a USB-C to HDMI cable is likely to be a DP-to-HDMI adapter on the inside, with the DP side simply using the standard USB-C connector.
If you are not going to deal with kernel patches, I would guess that your cable will 'just work' sometime between three months and one year from now.

EDIT/UPDATE: My day-to-day machine is a Dell 7577 Inspiron laptop, running stock Arch Linux. It has a USB-C port and an HDMI port, and I run X/openbox on it with THREE side-by-side monitors: one of them is connected with a stock/standard HDMI cable, and the other with a stock/standard USB-C-to-DisplayPort cable. "Three monitors with Arch Linux and this particular Dell laptop: it just works." It seems that the prediction I made in the last sentence of the original answer has proved to be accurate.

That being said, there are two important little caveats/nits that I would certainly consider if I were buying a machine today and wanted this configuration of monitors:

I find the whole "hybrid/mixed/dual discrete and integrated GPU" architecture to be a pain to understand and manage. It's a pain, but it is possible (barely). On Dell systems this architecture is called "Optimus", and how you set things up will have an enormous impact on the kind of video function and performance you get. I realize that I'm being very generic, but there isn't any one thing that's true for all setups. Basically: if you are looking at a machine that has BOTH an integrated GPU AND a discrete GPU, do some research to make sure that the OS you intend to install can support the configuration you wish to use.

In particular, it seems that many (most? all?) modern laptops hard-wire each monitor output port to exactly ONE of the two GPUs. So, for example, if the laptop's built-in LCD display is hard-wired to the integrated GPU, then any time you use the discrete NVIDIA or Radeon GPU with an application, each frame will be copied at the end over to the integrated GPU in order to actually get displayed on the screen. It may well be that the performance gain from the discrete GPU is so enormous that this extra copy is a negligible price to pay. But it might not be; and even if it is, intensive users of discrete GPU power often are the type of person who doesn't like to pay even the most negligible of prices.

I am no true expert, but I think that's where Linux support for three monitors is today. (If by "three monitors" one means "using simultaneously the built-in LCD screen and the two external monitor ports on the laptop.")
USB C → DisplayPort Adapter support
1,299,848,475,000
I'm running Linux Mint, version 19 Tara. My battery life is really bad right now and my fan is always on because my computer is constantly at 70% CPU usage on this kworker thread. It's really starting to annoy me. I run top as soon as I boot up and before I even open a single program (other than the terminal), this process is already taking up 70% CPU.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+   COMMAND
5   root 20  0    0   0   0 I 66.1  0.0 1:27.86 kworker/0:0-kac

When I run htop it identifies the kworker thread as going back and forth between being called kacpi_notify and kacpid. I tried editing grub to set acpi=off, but then my system boots to a black screen with a blinking underscore and that's it. It won't boot. I upgraded my kernel, so I'm now running 5.3.0-51-generic. My research so far makes me think I might need to update my BIOS, but my computer manufacturer only provides a BIOS update in .exe form. I've downloaded the exe, but I don't know where to go from here. Can anybody please help me?
I've been researching this problem also. I've tried changing the BIOS settings and all kinds of tweaks. I finally came across this link (https://forum.manjaro.org/t/kworker-kacpid-cpu-100/131532) and it worked for a while. As I have been switching between Ubuntu, Mint and Win10, once the problem happens, it becomes consistent even when I switch/boot into all the OSes. Once I applied the above fix while in Ubuntu 20, it went away on every OS I booted into.

Well, the problem came back today when I booted up with Mint 19.3. I figured that since the problem comes from the interrupt handling in the ACPI area, maybe I could trigger an ACPI event in the hope of "resetting" the problem. I decided to try putting the machine into "Suspend" mode, wait for it to complete, then hit the mouse/keyboard to wake it up, to see if it would correct or re-initialize the ACPI handling. Bingo! When it wakes up, the CPU usage drops right back down to less than 5%.

This is not just a Linux issue: when it happens, it also happens when I boot into Windows. It does not seem to be a manufacturer-specific issue either. This might be a basic PC architecture/design issue. I suspect it may be the ACPI init routine that causes the CPU spike. There might be timing issues in setting up the ISR handling the ACPI interrupts, so when the interrupts do occur, there is no handling or resetting of the interrupt, hence the interrupt keeps occurring. I hope this info may give the developers some new ideas for a fix. I have not tested it long enough to say this works all the time, but it's worth trying.

Best regards, Jim C

My setup: HP Z220, i5-3470, 16G DDR3, nVidia Quadro K1200, Adata 960G SSD + WD 160G ATA HD, APC UPS connected to USB port, IBM Model M keyboard (1989) and HP optical mouse on PS/2 input. Not the greatest, not for gaming, but an old reliable. ;-)
kworker thread kacpid_notify/kacpid hogging 60-70% of CPU
1,299,848,475,000
Based on part of the first answer of this question: read from a file (the kernel must check that the permissions allow you to read from said file, and then the kernel carries out the actual instructions to the disk to read the file) It requires to have root privilege to change permission to a file. With root privilege, a user can access any file without worrying about permissions. So, are there any relationships between root and the kernel?
First, a clarification: It requires to have root privilege to change permission to a file. From man 2 chmod we can see that the chmod() system call will return EPERM (a permissions error) if: The effective UID does not match the owner of the file, and the process is not privileged (Linux: it does not have the CAP_FOWNER capability). This typically means that you either need to be the owner of the file or the root user. But we can see that the situation in Linux might be a bit more complicated. So, are the any relationships between root and kernel? As the text you quoted has pointed out, the kernel is responsible for checking that the UID of the process making a system call (that is, the user it is running as) is allowed to do what it is asking. Thus, root's superpowers come from the fact that the kernel has been programmed to always permit an operation requested by the root user (UID=0). In the case of Linux, most of the various permissions checks that happen check whether the given UID has the necessary capability. The capabilities system allows more fine grained control over who is allowed to do what. However, in order to preserve the traditional UNIX meaning of the "root" user, a process executed with the UID of 0 has all capabilities. Note that while processes running as UID=0 have superuser privileges they still have to make requests of the kernel via the system call interface. Thus, a userspace process, even running as root, is still limited in what it can do as it is running in "user mode" and the kernel is running in "kernel mode" which are actually distinct modes of operation for the CPU itself. In kernel mode a process can access any memory or issue any instruction. In user mode (on x86 CPUs there are actually a number of different protected modes), a process can only access its own memory and can only issue some instructions. Thus a userspace process running as root still only has access to the kernel mode features that the kernel exposes to it.
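To make the answer's point concrete, here is a deliberately simplified toy model of the check the kernel performs for chmod(): the operation succeeds if the caller owns the file or holds the CAP_FOWNER capability, and UID 0 implicitly holds all capabilities. This is an illustrative sketch of the logic described above, not the kernel's actual code.

```python
def may_chmod(euid, file_owner_uid, has_cap_fowner=False):
    """Toy model of the kernel's chmod() permission check.

    Root (UID 0) implicitly holds every capability, including
    CAP_FOWNER, so it always passes; otherwise the caller must
    own the file or hold CAP_FOWNER explicitly.
    """
    if euid == 0 or has_cap_fowner:
        return True
    return euid == file_owner_uid

print(may_chmod(0, 1000))     # root may chmod anyone's file
print(may_chmod(1000, 1000))  # the owner may chmod their own file
print(may_chmod(500, 1000))   # another unprivileged user may not
```

The key takeaway is that root's power is nothing magical: it is just the kernel's permission checks being programmed to short-circuit for UID 0 (or, on Linux, for processes holding the relevant capability).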
What is the relationship between root and kernel? [closed]
1,394,457,423,000
I understand that /dev/kmem and /dev/mem provide access to the memory (i.e. raw RAM) of the system. I am also aware, that /dev/kmem can be completely disabled in kernel and that access can be restricted for /dev/mem. It seems to me, having raw access to memory can be useful for developers and hackers, but why should I need access to memory through /dev/mem. AFAIK it cannot be disabled in kernel (unlike /dev/kmem). Having access to raw memory that can be potentially abused/exploited seems to me to be just asking for trouble. Is there some practical use for it? Do any user programs require it to work properly?
There's a slide deck from SCALE 7x 2009 titled Undermining the Linux Kernel: Malicious Code Injection via /dev/mem that contained these 2 bullets.

Who needs this?

- X Server (Video Memory & Control Registers)
- DOSEmu

From everything I've found from searching thus far, it would appear that these 2 bullets are the front-runners for legitimate uses.

References

- Anthony Lineberry on /dev/mem Rootkits - LJ 8/2009 by Mick Bauer
- Who needs /dev/kmem?
kernel: disabling /dev/kmem and /dev/mem
1,394,457,423,000
I've been reading Linux Kernel Development and there's something that's not entirely clear to me -- when an interrupt is triggered by the hardware, what's the criterion to decide on which CPU to run the interrupt handling logic? I could imagine it having to be always the same CPU that raised the IO request, but as the thread is for all purposes now sleeping there would not really be that much of a point in doing that. On the other hand, there may be timing interrupts (for the scheduler, for instance) that need to be raised. On an SMP system are they always raised on the same core (let's say, #0) or they're always pretty much raised at any core? How does it actually work? Thanks
On a multiprocessor/multicore system, you might find a daemon process named irqbalance. Its job is to adjust the distribution of hardware interrupts across processors. At boot time, when the firmware hands over the control of the system to the kernel, initially just one CPU core is running. The first core (usually core #0, sometimes called the "monarch CPU/core") initially takes over all the interrupt handling responsibilities from the firmware before initializing the system and starting up the other CPU cores. So if nothing is done to distribute the load, the core that initially started the system ends up with all the interrupt handling duties. https://www.kernel.org/doc/Documentation/IRQ-affinity.txt suggests that on modern kernels, all CPU cores are allowed to handle IRQs equally by default. But this might not be the optimal solution, as it may lead to e.g. inefficient use of CPU cache lines with frequent IRQ sources. It is the job of irqbalance to fix that. irqbalance is not a kernel process: it's a standalone binary /usr/sbin/irqbalance that can run either in one-shot mode (i.e. adjust the distribution of interrupts once as part of the boot process, and exit) or as a daemon. Different Linux distributions can elect to use it differently, or to omit it altogether. It allows easy testing and implementation of arbitrarily complex strategies for assigning IRQs to processors by simply updating the userspace binary. It works by using per-IRQ /proc/irq/%i/smp_affinity files to control which IRQs can be handled by each CPU. If you're interested in details, check the source code of irqbalance: the actual assignment of IRQ settings happens in activate.c.
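The smp_affinity files mentioned above hold a hexadecimal CPU bitmask; on machines with many CPUs the mask is split into comma-separated 32-bit words. As a small illustration (an assumption about the file format based on the kernel's IRQ-affinity documentation, not code from irqbalance itself), here is how such a mask decodes into the set of CPUs allowed to handle the IRQ:

```python
def affinity_cpus(mask_text):
    """Decode a /proc/irq/N/smp_affinity hex bitmask into the list
    of CPU numbers allowed to handle that IRQ.  Large masks are
    comma-separated 32-bit words, so strip the commas first."""
    mask = int(mask_text.strip().replace(",", ""), 16)
    return [cpu for cpu in range(mask.bit_length()) if (mask >> cpu) & 1]

print(affinity_cpus("f"))                  # CPUs 0-3 may handle the IRQ
print(affinity_cpus("8"))                  # only CPU 3
print(affinity_cpus("00000000,00000001"))  # only CPU 0, on a >32-CPU box
```

Writing a new mask back to the same file (as root) is exactly how irqbalance pins an IRQ to a subset of CPUs.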
What's the policy determining which CPU handles which interrupt in the Linux Kernel?
1,394,457,423,000
I have a Lenovo IdeaPad Yoga 13 with Ubuntu 13.10 installed. The device has a "Toggle TouchPad" button on the keyboard (F5). The keyboard's F* buttons are reversed (so to get F5, I need to press Fn + F5, and F5 is actually the toggle key). I've found out that the button is actually read by the keyboard (rather than the TouchPad like certain devices), which is at /dev/input/event3. So using sudo input-events 3 I was able to figure out that the button sends the scan code 190:

Output of sudo lsinput:

/dev/input/event3
   bustype : BUS_I8042
   vendor  : 0x1
   product : 0x1
   version : 43907
   name    : "AT Translated Set 2 keyboard"
   phys    : "isa0060/serio0/input0"
   bits ev : EV_SYN EV_KEY EV_MSC EV_LED EV_REP

Output of sudo input-events 3:

23:13:03.849392: EV_MSC MSC_SCAN 190
23:13:03.849392: EV_SYN code=0 value=0
23:13:03.855413: EV_MSC MSC_SCAN 190
23:13:03.855413: EV_SYN code=0 value=0

No other programs (such as xev) seem to be able to read it except for input-events. Is there any way to map this button to make it toggle the TouchPad on my laptop? If so, how can I do so?
As it turns out the kernel did pick it up, but kept complaining that it's not recognised. For anyone else having this issue, or anyone who wants to map a key that's not read by the OS, read on.

Open a terminal and run dmesg | grep -A 1 -i setkeycodes. This will give you multiple entries like this:

[    9.307463] atkbd serio0: Unknown key pressed (translated set 2, code 0xbe on isa0060/serio0).
[    9.307476] atkbd serio0: Use 'setkeycodes e03e <keycode>' to make it known.

What we are interested in is the hexadecimal value after "setkeycodes", in this case e03e. If you have multiple of these, you can run tail -f /var/log/kern.log. Once you've done so, you can tap the button you're looking for, and this will give you the same line as above; again, we only need the hexadecimal value. Make a note of this.

Now run xmodmap -pke | less and find the appropriate mapping. In my case, I needed to map this to toggle my touchpad, which means I was interested in the following line:

keycode 199 = XF86TouchpadToggle NoSymbol XF86TouchpadToggle

If you can't find whatever you're interested in, read @Gilles' answer too, as you can define custom mappings as well; then read on (if the kernel reads it, you won't need to add it to xorg.conf.d).

Now I ran the following command: sudo setkeycodes [hexadecimal] [keycode], so in my case that became: sudo setkeycodes e03e 199. Now you can use the following line to test whether it worked and/or you have the correct mapping:

xev | grep -A2 --line-buffered '^KeyRelease' | sed -n '/keycode /s/^.*keycode \([0-9]*\).* (.*, \(.*\)).*$/\1 \2/p'

When you run this command, you need to focus on the newly opened window (xev) and check the console output. In my case it read as follows:

207 NoSymbol

This was obviously wrong, as I had requested keycode 199, which is mapped to XF86TouchpadToggle. I checked xmodmap -pke again, and noticed that keycode 207 is actually mapped to NoSymbol. I also noticed that there was an offset difference of 8, so I tried the setkeycodes command again, this time with keycode 191:

sudo setkeycodes e03e 191

This worked perfectly.

EDIT -- the solution I provided for having this work on start-up does not actually work. I will figure this out tomorrow and update this answer. For now I suppose you can run it on start-up manually.
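The "offset difference of 8" observed above is a known X11 quirk: X core keycodes are historically shifted by 8 relative to Linux kernel (evdev) keycodes. A one-line sketch of the relationship, matching the numbers in this answer:

```python
# X11 core keycodes are offset from Linux kernel/evdev keycodes by 8,
# i.e. X keycode = kernel keycode + 8.
EVDEV_TO_X_OFFSET = 8

def x_keycode(kernel_keycode):
    return kernel_keycode + EVDEV_TO_X_OFFSET

# Kernel keycode 191 (what setkeycodes wants) corresponds to X keycode
# 199 (the XF86TouchpadToggle entry seen in xmodmap -pke).
print(x_keycode(191))
```

So when you find the symbol you want in xmodmap -pke, subtract 8 from that keycode before passing it to setkeycodes.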
Capturing key input from events device and mapping it (toggle TouchPad key is unmapped)
1,394,457,423,000
Whenever there is high disk I/O, the system tends to be much slower and less responsive than usual. What's the progress on Linux kernel regarding this? Is this problem actively being worked on?
I think for the most part it has been solved. My performance under heavy IO has improved in 2.6.36 and I expect it to improve more in 2.6.37. See these Phoronix articles:

    Wu Fengguang and KOSAKI Motohiro have published patches this week that they believe will address some of these responsiveness issues, for which they call the "system goes unresponsive under memory pressure and lots of dirty / writeback pages" bug. Andreas Mohr, one of the users that has reported this problem to the LKML and tested the two patches that are applied against the kernel's vmscan reported success. Andreas' problem was the system becoming fully unresponsive (and switching to a VT took 20+ seconds) when making an EXT4 file-system when a solid-state drive was connected via USB 1.1. On his system when writing 300M from the /dev/zero file the problem was even worse.

Here's a direct link to the bug. Also from Phoronix:

    Fortunately, from our testing and the reports of other Linux users looking to see this problem corrected, the relatively small vmscan patches that were published do seem to better address the issue. The user-interface (GNOME in our case) still isn't 100% fluid if the system is sustaining an overwhelming amount of disk activity, but it's certainly much better than before and what's even found right now with the Linux 2.6.35 kernel.

There's also the Phoronix 2.6.36 release announcement. It seems block barriers are going away and that should also help performance:

    In practice, barriers have an unpleasant reputation for killing block I/O performance, to the point that administrators are often tempted to turn them off and take their risks. While the tagged queue operations provided by contemporary hardware should implement barriers reasonably well, attempts to make use of those features have generally run into difficulties.
    So, in the real world, barriers are implemented by simply draining the I/O request queue prior to issuing the barrier operation, with some flush operations thrown in to get the hardware to actually commit the data to persistent media. Queue-drain operations will stall the device and kill the parallelism needed for full performance; it's not surprising that the use of barriers can be painful.

There's also this LWN article on fair I/O scheduling.

I would say IO reawakened as a big deal around the time of the release of ext4 in 2.6.28. The following links are to the Linux Kernel Newbies kernel releases; you should review the Block and Filesystems sections. This may of course be unfair sentiment, or just the time I started watching FS development. I'm sure it's been improving all along, but I feel that some of the ext4 issues caused people to look hard at the IO stack; or it might be that they were expecting ext4 to resolve all the performance issues, and when it didn't, they realized they had to look elsewhere for the problems.

2.6.28, 2.6.29, 2.6.30, 2.6.31, 2.6.32, 2.6.33, 2.6.34, 2.6.35, 2.6.36, 2.6.37
What's the progress regarding improving system performance/responsiveness during high disk I/O?
1,394,457,423,000
I'm just wondering where these values are being set and what they default to? Mine is currently 18446744073692774399. I didn't set it anywhere that I can see. $ cat /proc/sys/kernel/shmmax 18446744073692774399 $ sysctl kernel.shmmax kernel.shmmax = 18446744073692774399
The __init function ipc_ns_init sets the initial value of shmmax by calling shm_init_ns, which sets it to the value of the SHMMAX macro. The definition of SHMMAX is in <uapi/linux/shm.h>: #define SHMMAX (ULONG_MAX - (1UL << 24)) /* max shared seg size (bytes) */ On 64-bit machines, that definition equals the value you found, 18446744073692774399.
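You can verify that the macro produces exactly the value observed in /proc/sys/kernel/shmmax on a 64-bit machine, where ULONG_MAX is 2^64 - 1:

```python
# Reproduce the kernel's default SHMMAX from <uapi/linux/shm.h>:
#   #define SHMMAX (ULONG_MAX - (1UL << 24))
ULONG_MAX = 2**64 - 1           # unsigned long on a 64-bit machine
SHMMAX = ULONG_MAX - (1 << 24)  # subtract 16 MiB

print(SHMMAX)  # 18446744073692774399, matching /proc/sys/kernel/shmmax
```

On a 32-bit machine ULONG_MAX would be 2^32 - 1 instead, giving a correspondingly smaller default.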
Where does Linux set the default values for SHMMAX?
1,394,457,423,000
Sometimes I see the Linux kernel being mentioned in the list of upgrades, when running pacman -Syu (updating my packages in Arch Linux). Whenever this happens, after installation of the packages, I can not mount USB drives anymore until I restart. I would just like to know if this is something that is common and expected (and if so, why, do I wonder), or if this is something not expected that I should investigate.
Probably, on that distribution, it is normal. It depends on how the package manager installs the new kernel. I suppose that your package manager (when upgrading the kernel) deletes the old kernel-modules directory immediately. This way, when you try to mount a vfat-formatted usb stick, the kernel will fail to load the needed vfat kernel module. To verify my supposition, next time you upgrade the kernel, you can check the existence of kernel module directory: before the upgrade, you should find that it exists a directory named as the current (the old) kernel version. ~> ls -d /lib/modules/`uname -r` /lib/modules/3.0.0-1.2-desktop after the upgrade but before the reboot, you should find that the directory does not exist any more (so you cannot manage new hardware). ~> ls -d /lib/modules/`uname -r` ls: cannot access /lib/modules/3.0.0-1.2-desktop: No such file or directory after the reboot, you should find that it exist a new kernel module directory named as the current (the new) kernel version. ~> ls -d /lib/modules/`uname -r` /lib/modules/3.1.0-1.4 To avoid this problem, other distributions (like openSuSE) delay the directory deletion till you reboot.
Is it normal that a restart is required to mount USB after a kernel upgrade?
1,394,457,423,000
I have a USB rocket launcher that I wish to experiment with through libusb. However, libusb cannot claim the interface, presumably because the output of usb-devices lists 'usbhid' as the driver for the device. From reading around on the internet, I've only come to the conclusion that I need to detach this driver from the device so I can use it with libusb. However, I have not found a single definitive way to do that, only several different ideas and bug reports. So, is there a way to detach the usbhid driver from a device that would be relevant with the kernel and tools supplied with Ubuntu 11.04?

EDIT: I tried creating the file /etc/udev/rules.d/10-usbhid.rules and writing the following:

ATTRS{idVendor}=="0a81", ATTRS{idProduct}=="0701", OPTIONS=="ignore_device"

Saving, then rebooting. The file is still there, but it doesn't appear to be working at all.

EDIT: Okay, I tried this:

sudo -i
echo -n "0003:0A81:0701.0006" > /sys/bus/hid/drivers/generic-usb/unbind

After that, navigating to /sys/bus/hid/devices/0003:0A81:0701.0006 and running ls yields:

drwxr-xr-x 2 root root    0 2011-05-29 15:46 power
lrwxrwxrwx 1 root root    0 2011-05-29 13:19 subsystem -> ../../../../../../../../../bus/hid
-rw-r--r-- 1 root root 4096 2011-05-29 13:19 uevent

It no longer lists a "driver" symlink like it did before, so I would assume that it unbound it. However, all evidence seems to suggest that the driver is still usbhid. For example, usb-devices yields:

T:  Bus=02 Lev=03 Prnt=07 Port=00 Cnt=01 Dev#= 9 Spd=1.5 MxCh= 0
D:  Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1
P:  Vendor=0a81 ProdID=0701 Rev=00.01
S:  Manufacturer=Dream Link
S:  Product=USB Missile Launcher v1.0
C:  #Ifs= 1 Cfg#= 1 Atr=a0 MxPwr=100mA
I:  If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid

libusb still returns -1 on usb_claim_interface()....
If you simply run the libusb program as root, usb_detach_kernel_driver_np() actually works as expected.
Prevent claiming of novelty usb device by usbhid so I can control it with libusb?
1,394,457,423,000
If I disable memory overcommit by setting vm.overcommit_memory to 2, by default the system will allow allocating memory up to the size of swap + 50% of physical memory, as explained here. I can change the ratio by modifying the vm.overcommit_ratio parameter. Let's say I set it to 80%, so 80% of physical memory may be used. My questions are:

- what will the system do with the remaining 20%?
- why is this parameter required in the first place?
- why should I not always set it to 100%?
What the system will do with the remaining 20%? The kernel will use the remaining physical memory for its own purposes (internal structures, tables, buffers, caches, whatever). The memory overcommitment setting handle userland application virtual memory reservations, the kernel doesn't use virtual memory but physical one. Why is this parameter required in first place? The overcommit_ratio parameter is an implementation choice designed to prevent applications to reserve more virtual memory than what will reasonably be available for them in the future, i.e. when they actually access the memory (or at least try to). Setting overcommit_ratio to 50% has been considered a reasonable default value by the Linux kernel developers. It assumes the kernel won't ever need to use more than 50% of the physical RAM. Your mileage may vary, the reason why it is a tunable. Why I should not always set it to 100%? Setting it to 100% (or any "too high" value) doesn't reliably disable overcommitment because you cannot assume the kernel will use 0% (or too little) of RAM. It won't prevent applications to crash as the kernel might preempt anyway all the physical memory it demands.
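Putting numbers on this: with vm.overcommit_memory=2 the kernel's commit limit (shown as CommitLimit in /proc/meminfo) is essentially swap plus RAM scaled by the ratio. The sketch below is a simplification of that formula; it ignores refinements such as hugetlb pages and admin reserves that the real kernel accounting includes.

```python
def commit_limit_kib(mem_total_kib, swap_total_kib, overcommit_ratio):
    """Simplified CommitLimit for vm.overcommit_memory=2:
    CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
    (ignores hugetlb pages and admin-reserve adjustments)."""
    return swap_total_kib + (mem_total_kib * overcommit_ratio) // 100

# Example: 16 GiB RAM, 8 GiB swap, both expressed in KiB.
print(commit_limit_kib(16 * 1024**2, 8 * 1024**2, 50))  # default ratio: 16 GiB total
print(commit_limit_kib(16 * 1024**2, 8 * 1024**2, 80))  # ratio 80: ~20.8 GiB total
```

Raising the ratio from 50 to 80 in this example lets userspace reserve roughly 4.8 GiB more virtual memory, at the cost of a smaller cushion left for the kernel's own physical-memory needs.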
Where does the remaining memory of vm.overcommit_ratio go?
1,394,457,423,000
When my kernel boots, apart from the useful important information, it prints lots of debugging info, such as:

kernel: [0.00000] BIOS-e820: [mem 0x0000000000000000-0x000000000009d3ff] usable
kernel: [0.00000] BIOS-e820: [mem 0x000000000009d400-0x000000000009ffff] reserved
kernel: [0.00000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
...
kernel: [0.00000] MTRR variable ranges enabled:
kernel: [0.00000] 0 base 0000000000 mask 7E00000000 write-back
...
kernel: [0.00000] init_memory_mapping: [mem 0x00100000-0xcf414fff]
kernel: [0.00000] [mem 0x00100000-0x001fffff] page 4k
kernel: [0.00000] [mem 0x00200000-0xcf3fffff] page 2M
kernel: [0.00000] [mem 0xcf400000-0xcf414fff] page 4k
...
kernel: [0.00000] ACPI: XSDT 0xD8FEB088 0008C (v01 DELL CBX3 01072009 AMI 10013)
kernel: [0.00000] ACPI: FACP 0xD8FFC9F8 0010C (v05 DELL CBX3 01072009 AMI 10013)
...
kernel: [0.00000] Early memory node ranges
kernel: [0.00000] node 0: [mem 0x00001000-0x0009cfff]
kernel: [0.00000] node 0: [mem 0x00100000-0xcf414fff]
kernel: [0.00000] node 0: [mem 0xcf41c000-0xcfdfcfff]
...
kernel: [0.00000] ACPI: Local APIC address 0xfee00000
kernel: [0.00000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
kernel: [0.00000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)

and much, much more. I don't see how this can be useful to anybody other than a kernel developer/debugger. I have found that I can get rid of these by using loglevel=5 as a boot parameter. The debugging logs are no longer printed on the terminal, but they are still in dmesg and in syslog. Is it possible to decrease the boot log verbosity globally, so that dmesg and syslog are not flooded by this useless information? I am using a self-compiled kernel 3.18.

ACCEPTED SOLUTION

Turns out, putting the following lines into /etc/rsyslog.conf solved the problem for me:

kern.debug /dev/null
& ~
For syslog, you can add the following line to /etc/syslog.conf:

kern.info; kern.debug /dev/null

It will discard kernel .info and .debug messages (which are the ones discarded with loglevel=5). Also, dmesg can be used with the option -n to show only messages up to a certain loglevel.
decrease kernel boot log verbosity level
1,394,457,423,000
I'm running Debian Testing (last updated 31/10/2017) and when I play a video in full screen through a browser from either Twitch or iView it hangs the GPU, so the GUI is all frozen. The computer I have is an 'Up Squared' with an Intel 505HD. The kernel is still running though, as I can still access it via ssh. I'm running kernel 4.12:

Linux BB-8 4.12.0-0.bpo.2-amd64 #1 SMP Debian 4.12.13-1~bpo9+1 (2017-09-28) x86_64 GNU/Linux

I'm also using a workaround for video tearing in my /etc/X11/xorg.conf:

Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "TearFree" "true"
EndSection

Error message (dmesg output):

[52661.796383] [drm] GPU HANG: ecode 9:1:0xeeffefa1, in Xorg [688], reason: Hang on bcs, action: reset
[52661.796642] drm/i915: Resetting chip after gpu hang
[52661.799118] BUG: unable to handle kernel NULL pointer dereference at 0000000000000070
[52661.807992] IP: reset_common_ring+0x8d/0x110 [i915]
[52661.813475] PGD 0
[52661.813476] P4D 0
[52661.819653] Oops: 0000 [#1] SMP
[52661.823178] Modules linked in: ftdi_sio usbserial bnep 8021q garp mrp stp llc cpufreq_conservative cpufreq_userspace cpufreq_powersave iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat snd_hda_codec_hdmi nf_conntrack libcrc32c usb_f_acm i2c_designware_platform iptable_mangle i2c_designware_core usb_f_fs iptable_filter usb_f_serial u_serial libcomposite udc_core snd_soc_skl snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core configfs snd_soc_sst_match snd_soc_core snd_compress bluetooth ecdh_generic rfkill intel_rapl x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_rapl_perf binfmt_misc pcspkr nls_ascii efi_pstore nls_cp437 vfat fat efivars lpc_ich evdev snd_hda_intel snd_hda_codec snd_hda_core i915 joydev idma64
[52661.902704] hid_generic snd_hwdep snd_pcm snd_timer drm_kms_helper snd mei_me intel_lpss_pci drm soundcore sg mei intel_lpss shpchp i2c_algo_bit mfd_core video button i2c_dev
parport_pc ppdev lp parport efivarfs ip_tables x_tables autofs4 hid_logitech_hidpp hid_logitech_dj usbhid hid ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache sd_mod mmc_block crc32c_intel aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ahci i2c_i801 sdhci_pci xhci_pci libahci sdhci xhci_hcd mmc_core usbcore usb_common libata r8169 scsi_mod mii [52661.955100] CPU: 0 PID: 10403 Comm: kworker/0:1 Not tainted 4.12.0-0.bpo.2-amd64 #1 Debian 4.12.13-1~bpo9+1 [52661.966039] Hardware name: AAEON UP-APL01/UP-APL01, BIOS UPA1AM18 06/23/2017 [52661.973996] Workqueue: events_long i915_hangcheck_elapsed [i915] [52661.980744] task: ffff94e405403180 task.stack: ffffa3c603344000 [52661.987428] RIP: 0010:reset_common_ring+0x8d/0x110 [i915] [52661.993492] RSP: 0000:ffffa3c603347b98 EFLAGS: 00010206 [52661.999356] RAX: 0000000000003e60 RBX: ffff94e417fec900 RCX: ffff94e5322fb6f8 [52662.007364] RDX: 0000000000003ea0 RSI: ffff94e5350e8000 RDI: ffff94e5322fb6c0 [52662.015372] RBP: ffffa3c603347bb8 R08: 000000000009fbe0 R09: ffffa3c6300137a0 [52662.023380] R10: 00000000ffffffff R11: 0000000000000070 R12: ffff94e5320f6000 [52662.031406] R13: 0000000000000000 R14: ffff94e533b8a900 R15: ffff94e533b88000 [52662.039418] FS: 0000000000000000(0000) GS:ffff94e53fc00000(0000) knlGS:0000000000000000 [52662.048499] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [52662.054945] CR2: 0000000000000070 CR3: 0000000254761000 CR4: 00000000003406f0 [52662.062955] Call Trace: [52662.065706] ? bit_wait_io_timeout+0x90/0x90 [52662.070534] ? i915_gem_reset+0xbe/0x370 [i915] [52662.075661] ? intel_uncore_forcewake_put+0x36/0x50 [i915] [52662.081845] ? bit_wait_io_timeout+0x90/0x90 [52662.086673] ? i915_reset+0xd9/0x160 [i915] [52662.091424] ? i915_reset_and_wakeup+0x17d/0x190 [i915] [52662.097309] ? i915_handle_error+0x1df/0x220 [i915] [52662.102789] ? scnprintf+0x49/0x80 [52662.106644] ? hangcheck_declare_hang+0xce/0xf0 [i915] [52662.112456] ? fwtable_read32+0x83/0x1b0 [i915] [52662.117569] ? 
i915_hangcheck_elapsed+0x2b1/0x2e0 [i915]
[52662.123533]  ? process_one_work+0x181/0x370
[52662.128227]  ? worker_thread+0x4d/0x3a0
[52662.132531]  ? kthread+0xfc/0x130
[52662.136246]  ? process_one_work+0x370/0x370
[52662.140935]  ? kthread_create_on_node+0x70/0x70
[52662.146018]  ? do_group_exit+0x3a/0xa0
[52662.150215]  ? ret_from_fork+0x25/0x30
[52662.154420] Code: c8 01 00 00 89 50 14 48 8b 83 80 00 00 00 8b 93 c8 01 00 00 89 50 28 48 8b bb 80 00 00 00 e8 2b 29 00 00 4d 8b ac 24 60 02 00 00 <49> 8b 45 70 48 39 43 70 74 51 4d 85 ed 74 14 4c 89 ef e8 cc c1
[52662.175681] RIP: reset_common_ring+0x8d/0x110 [i915] RSP: ffffa3c603347b98
[52662.183401] CR2: 0000000000000070
[52662.201377] ---[ end trace c9ac8dcf9dad3202 ]---
[52665.887380] asynchronous wait on fence i915:Xorg[688]/0:9fbe1 timed out
[52665.947423] pipe A vblank wait timed out
[52665.951876] ------------[ cut here ]------------
[52665.957155] WARNING: CPU: 0 PID: 8318 at /build/linux-RdeW6Z/linux-4.12.13/drivers/gpu/drm/i915/intel_display.c:12636 intel_atomic_commit_tail+0xf21/0xf50 [i915]
[52665.973347] Modules linked in: ftdi_sio usbserial bnep 8021q garp mrp stp llc cpufreq_conservative cpufreq_userspace cpufreq_powersave iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat snd_hda_codec_hdmi nf_conntrack libcrc32c usb_f_acm i2c_designware_platform iptable_mangle i2c_designware_core usb_f_fs iptable_filter usb_f_serial u_serial libcomposite udc_core snd_soc_skl snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core configfs snd_soc_sst_match snd_soc_core snd_compress bluetooth ecdh_generic rfkill intel_rapl x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_rapl_perf binfmt_misc pcspkr nls_ascii efi_pstore nls_cp437 vfat fat efivars lpc_ich evdev snd_hda_intel snd_hda_codec snd_hda_core i915 joydev idma64
[52666.052546] hid_generic snd_hwdep snd_pcm snd_timer drm_kms_helper snd mei_me intel_lpss_pci drm soundcore sg mei
intel_lpss shpchp i2c_algo_bit mfd_core video button i2c_dev parport_pc ppdev lp parport efivarfs ip_tables x_tables autofs4 hid_logitech_hidpp hid_logitech_dj usbhid hid ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache sd_mod mmc_block crc32c_intel aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ahci i2c_i801 sdhci_pci xhci_pci libahci sdhci xhci_hcd mmc_core usbcore usb_common libata r8169 scsi_mod mii
[52666.104723] CPU: 0 PID: 8318 Comm: kworker/u8:2 Tainted: G D 4.12.0-0.bpo.2-amd64 #1 Debian 4.12.13-1~bpo9+1
[52666.116991] Hardware name: AAEON UP-APL01/UP-APL01, BIOS UPA1AM18 06/23/2017
[52666.124918] Workqueue: events_unbound intel_atomic_commit_work [i915]
[52666.132146] task: ffff94e3d1a27140 task.stack: ffffa3c604d1c000
[52666.138825] RIP: 0010:intel_atomic_commit_tail+0xf21/0xf50 [i915]
[52666.145673] RSP: 0018:ffffa3c604d1fda8 EFLAGS: 00010286
[52666.151517] RAX: 000000000000001c RBX: ffff94e533b88000 RCX: 0000000000000000
[52666.159498] RDX: 0000000000000000 RSI: ffff94e53fc0dee8 RDI: ffff94e53fc0dee8
[52666.167486] RBP: 0000000000000000 R08: ffff94e5339a7a18 R09: 000000000000036e
[52666.175473] R10: ffffa3c604d1fda8 R11: ffffffffbb6cddcd R12: 0000000000000000
[52666.183489] R13: 0000000000000000 R14: ffff94e5321b9000 R15: 0000000000000001
[52666.191476] FS:  0000000000000000(0000) GS:ffff94e53fc00000(0000) knlGS:0000000000000000
[52666.200533] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[52666.206964] CR2: 00007f0ca6011670 CR3: 0000000200009000 CR4: 00000000003406f0
[52666.214954] Call Trace:
[52666.217729]  ? remove_wait_queue+0x60/0x60
[52666.222316]  ? process_one_work+0x181/0x370
[52666.227027]  ? worker_thread+0x4d/0x3a0
[52666.231311]  ? kthread+0xfc/0x130
[52666.235018]  ? process_one_work+0x370/0x370
[52666.239711]  ? kthread_create_on_node+0x70/0x70
[52666.244792]  ? do_group_exit+0x3a/0xa0
[52666.248988]  ?
ret_from_fork+0x25/0x30
[52666.253181] Code: 4c 89 44 24 08 48 83 c7 08 e8 5c 4b fe f9 4c 8b 44 24 08 4d 85 c0 0f 85 36 fe ff ff 8d 75 41 48 c7 c7 b0 9e 94 c0 e8 05 2b 0b fa <0f> ff e9 20 fe ff ff 8d 70 41 48 c7 c7 80 9e 94 c0 e8 ef 2a 0b
[52666.274391] ---[ end trace c9ac8dcf9dad3203 ]---

Full dmesg: https://gist.github.com/anonymous/9cf0a1768cbcc950bba593e50ca024f1

This is pretty easy to replicate: load Twitch in full screen and after about 5 minutes it will hang, sometimes right at the start and sometimes a little way through. How can I fix this?

Update

I updated the kernel to 4.13, which didn't help. I then removed the TearFree option, which I believe did help, but now I have tearing again.

** Error **

Here is a slightly different error, with the same result: a frozen GUI, so I have to use the magic SysRq keys to reboot.

[125311.098771] systemd-gpt-auto-generator[16633]: Failed to dissect: Input/output error
[125311.375868] systemd-gpt-auto-generator[16649]: Failed to dissect: Input/output error
[244118.272043] [drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe A FIFO underrun
[324053.730179] usb 1-3.4: reset low-speed USB device number 7 using xhci_hcd
[425672.192351] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=21284282 end=21284283) time 232 us, min 1074, max 1079, scanline start 1069, end 1082
[428291.432332] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=21415244 end=21415245) time 136 us, min 1074, max 1079, scanline start 1073, end 1080
[597930.852731] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=29897215 end=29897216) time 158 us, min 1074, max 1079, scanline start 1071, end 1080
[664909.893109] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=33246167 end=33246168) time 346 us, min 1074, max 1079, scanline start 1066, end 1088
[678368.073058] [drm:intel_pipe_update_end
[i915]] *ERROR* Atomic update failure on pipe A (start=33919076 end=33919077) time 165 us, min 1074, max 1079, scanline start 1072, end 1081
[682058.832485] [drm] GPU HANG: ecode 9:1:0xeeffefa1, in Xorg [786], reason: Hang on bcs0, action: reset
[682058.832609] drm/i915: Resetting chip after gpu hang
[682058.835055] BUG: unable to handle kernel NULL pointer dereference at 0000000000000070
[682058.844025] IP: reset_common_ring+0x99/0xf0 [i915]
Please do the following as root:

EDIT /etc/default/grub: find the line that starts with GRUB_CMDLINE_LINUX and append i915.enable_rc6=0, giving you for example:

GRUB_CMDLINE_LINUX="splash quiet i915.enable_rc6=0"

EXECUTE: update-grub

REBOOT

(Optional) EXECUTE: systool -m i915 -av | grep enable_rc6 to check whether you have set the option correctly.
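If systool (from sysfsutils) isn't installed, the same runtime value can be read straight from sysfs; a minimal sketch, assuming the standard /sys/module layout and falling back to a message when the i915 module isn't loaded:

```shell
#!/bin/sh
# Report the runtime value of i915's enable_rc6 module parameter.
# /sys/module/<name>/parameters/ exposes module parameters; if i915
# is not loaded (e.g. on a non-Intel box), say so instead of failing.
rc6_status() {
    param=/sys/module/i915/parameters/enable_rc6
    if [ -r "$param" ]; then
        echo "enable_rc6 = $(cat "$param")"
    else
        echo "i915 not loaded"
    fi
}

rc6_status
```

After rebooting with the grub change above, this should report 0.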
drm/i915: Resetting chip after gpu hang
1,394,457,423,000
Does the latest version of the Linux kernel (3.x) still use the Completely Fair Scheduler (CFS) for process scheduling, which was introduced in 2.6.x? If not, which scheduler does it use, and how does it work? Please provide a source.
That's still the default, yes, though I would not call it the same, as it is constantly in development. You can read how it works with links to the code at http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=blob;f=Documentation/scheduler/sched-design-CFS.txt
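You can also see this on a live system: ordinary tasks run under the SCHED_OTHER (a.k.a. SCHED_NORMAL) policy, and those are exactly the tasks CFS manages, while real-time policies like SCHED_FIFO and SCHED_RR bypass it. A quick check, assuming chrt (from util-linux) is installed:

```shell
#!/bin/sh
# Print the scheduling policy of the current shell. An unmodified
# process shows SCHED_OTHER, the default time-sharing policy that
# CFS schedules.
chrt -p $$
```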
Does Linux kernel 3.x use the CFS process scheduler?