date: 2018-12-18
As I spawn several processes in a loop, I hit the maximum number of open file handles and the loop breaks. When I count the open file handles using lsof I get:

$ lsof | wc -l
1464377

However, checking fs.file-nr returns:

$ sysctl fs.file-nr
fs.file-nr = 35328 0 6553201

I expected the first number to be 1464377. I have a couple of questions:
1. What's the difference between the output of lsof (1464377) and file-nr (35328)?
2. The maximum seems to be 6553201, which looks rather arbitrary. What determines the maximum value for this number?
As per man lsof: "An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket.) A specific file or all the files in a file system may be selected by path." So the number of lines in lsof output is likely much higher than the actual number of file descriptors. See this related question: https://serverfault.com/questions/485262/number-of-file-descriptors-different-between-proc-sys-fs-file-nr-and-proc-pi
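To make the distinction above concrete: lsof prints one line per (process, open file) pair, while fs.file-nr reports kernel-level counts. A minimal sketch parsing the sysctl output from the question (parse_file_nr is a hypothetical helper, not a standard tool):

```python
def parse_file_nr(line):
    """Parse `sysctl fs.file-nr` output into (allocated, free, max).

    Format: "fs.file-nr = <allocated> <allocated-but-unused> <max>"
    """
    fields = line.split("=", 1)[1].split()
    return tuple(int(f) for f in fields)

allocated, free_handles, maximum = parse_file_nr("fs.file-nr = 35328 0 6553201")
# lsof can report the same open file once per process (and once per thread
# on some systems), so its line count far exceeds the allocated handle count.
```

The third field is the fs.file-max limit, which is tunable, not a hard architectural constant.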
Open file handles
date: 2018-12-18
Embedded device, Linux 2.6.26.5, U-Boot 2009.03 bootloader. The ARM Linux kernel image is on NAND flash, loading from NAND. How can I access the filesystem as the root user and reset the root password? Is it possible to do this by supplying the single boot argument (single-user mode) to the Linux kernel via U-Boot parameters, or by adding the init=/bin/bash argument to the end of the boot parameters? The output of the bootargs and bootcmd environment variables:

Kernel command line: console=ttyS1,115200n8 rootfstype=squashfs noalign half_image=0 verify=y Hw_Model=RZU017 Router_Mode=0
The correct command for this board is: setenv bootargs ${bootargs} single init=/bin/sh (there is no bash installed)
Access the filesystem as the root user
date: 2018-12-18
I've been having problems booting CentOS 7 since the last update. The CentOS 7 partition does not boot due to a soft kernel panic after the following error is thrown: *ERROR* atombios stuck in loop for more than 5secs aborting. I cannot find any trace of the error in /var/log/messages. Also, there is no hardware error as far as I know, since I can boot perfectly well into an older version of CentOS. Below are the specifics about my particular situation, but first, more generally: how should I proceed in troubleshooting a kernel panic during boot (when there are no logs stored)? Here are screenshots of the kernel panic message. Note: a lot of similar problems are mentioned in other posts at first boot after installation. This is not the case for me; I have been working perfectly fine on CentOS for a couple of months now, and I would like to avoid reinstalling CentOS for exactly this reason. As I said before, I have several versions of CentOS 7 to choose from at boot time, see the particular kernel versions below. The top one (newest) is the one failing to boot. Unable to find a solution, I've been using the second from the top for some time now. Possible solutions already taken into account: SELinux is disabled (from this post); choosing legacy boot mode in the BIOS (from another post). The last one didn't help, since I get the following error at boot time:
I found an ongoing discussion of this issue on the Red Hat discussions and the CentOS bug tracker. It has to do with the graphics driver of my PC (an AMD driver, on a Dell computer). The bug appears in the 3.10.0-693 kernel version. A workaround for the moment is to add nomodeset to the GRUB boot command line, as suggested in the links. A good explanation of the meaning of this option and of how to change the GRUB boot command line (with some small adjustments on CentOS) can be found in this Ubuntu forum discussion.
troubleshooting kernel panic during boot (centos7)
date: 2018-12-18
Under /sys/kernel/debug/dri there are a couple of folders. What do those numbers represent? Are they bus numbers? PCI numbers? Is it possible to map that information with lshw or even lspci? P.S. I'm using CentOS.
They map to device minors; you can see the correspondence with the numbers in /dev/dri/:

$ sudo ls /sys/kernel/debug/dri
0  128  64
$ ls -l /dev/dri/
total 0
crw-rw----+ 1 root video 226,   0 Sep 19 23:28 card0
crw-rw----  1 root video 226,  64 Aug 30 18:44 controlD64
crw-rw----+ 1 root video 226, 128 Aug 30 18:44 renderD128

The name file in each directory will tell you which driver and device these map to:

i915 dev=0000:00:02.0 master=pci:0000:00:02.0 unique=0000:00:02.0
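The correspondence shown above (0 ↔ card0, 64 ↔ controlD64, 128 ↔ renderD128) follows DRM's fixed minor-number ranges. A small sketch of that mapping (dri_node_name is a hypothetical helper, assuming the standard DRM ranges):

```python
def dri_node_name(minor):
    """Map a DRM device minor to its /dev/dri node name.

    DRM reserves minors 0-63 for primary (card) nodes, 64-127 for
    legacy control nodes and 128-191 for render nodes.
    """
    if 0 <= minor < 64:
        return "card%d" % minor
    if 64 <= minor < 128:
        return "controlD%d" % minor
    if 128 <= minor < 192:
        return "renderD%d" % minor
    raise ValueError("minor out of DRM range: %d" % minor)

# The three directories listed under /sys/kernel/debug/dri above:
names = [dri_node_name(m) for m in (0, 64, 128)]
```

A second card would then get minors 1, 65 and 129 (card1, controlD65, renderD129).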
What do the /sys/kernel/debug/dri numbers mean?
date: 2018-12-18
I have an OS with a kernel version of about 3.3, and the driver requires the kernel to be at least 4.4; there's no way to upgrade due to hardware limitations. Would it be feasible to just change the configuration of the driver, or would I have to build it from scratch?
The interface between the core of the Linux kernel and drivers evolves very quickly. You may of course be lucky and find that the interfaces that the driver uses have remained mostly compatible, but chances are that the interfaces have changed a lot and porting a driver across 4 years of kernel development would be difficult. Unless you need to support both newer hardware and some antique hardware that recent kernels no longer support, compile a 4.4 kernel. The interfaces between the kernel and applications are extremely stable. You should be able to replace any kernel since 2.0 or so by a newer kernel on any Linux system.
Rebuilding a driver for a different kernel version
date: 2018-12-18
I am using WiFi with an rtl8188cus. I know that the rtl8188cus supports IEEE 802.11b/g/n. Below is an example of accessing an AP from my system. SSID "test" is an AP that only supports 802.11b (CCK).

root@test:~# iwconfig
wlan0  IEEE 802.11b  ESSID:"test"  Nickname:"<WIFI@REALTEK>"
       Mode:Managed  Frequency:2.412 GHz  Access Point: 00:26:66:63:1C:54
       Bit Rate:11 Mb/s  Sensitivity:0/0
       Retry:off  RTS thr:off  Fragment thr:off
       Encryption key:****-****-****-****-****-****-****-****  Security mode:open
       Power Management:off
       Link Quality=97/100  Signal level=-78 dBm  Noise level=0 dBm
       Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
       Tx excessive retries:0  Invalid misc:0  Missed beacon:0

lo     no wireless extensions.
eth0   no wireless extensions.
eth1   no wireless extensions.

I want the rtl8188cus not to support IEEE 802.11b.

[Hope]
1. I do not want the rtl8188cus to support IEEE 802.11b.
2. I do not want to use IEEE 802.11b when connecting to an IEEE 802.11b/g/n AP.
3. When connecting to an AP that supports only IEEE 802.11b, I want the connection to fail.

I looked for settings related to IEEE 802.11b in my kernel (3.10) but could not find any. I did not find a possible way via user programs (commands) either. I looked into the rtl8188cus driver, but I have not found it there either, nor via Google. I tried commenting out rate-related code in the rtl8188cus driver, but that failed. Can I not just avoid using IEEE 802.11b?
802.11n and 802.11g are extensions to 802.11b, and include the older protocol. You can't "turn 802.11b off". Also, your WLAN driver will automatically switch to the bitrate and protocol that works best under the current circumstances, given signal strength and channel use. There's a reason for that: if it tried to use a higher bitrate, it would get too many errors, so throughput would decrease with this attempt, not increase. The signal strength varies a bit with drivers and hardware, but in my experience, your -78 dBm is right at the edge where communication is only just possible. So if you want higher throughput, find a better place for your antennas, use better antennas, or bring computer and router closer together. BTW, you can get more detailed information with iw wlan0 station dump instead of using the older iwconfig. Edit: Don't say "I am seeing 802.11b bitrates". If you see a bitrate of, say, 11 Mb/s, you may still be connected using protocols as defined by 802.11g or 802.11n; in fact, your WLAN card and the access point may have negotiated that bitrate using a protocol that isn't part of 802.11b. And as I said, 802.11g is an extension of 802.11b: 802.11g supports all bitrates and protocol information of 802.11b, and 802.11n supports all bitrates and protocol information of both 802.11g and 802.11b. You usually only see the new bitrates listed in wikis etc., because repeating the old ones would be tedious.
I do not want to use IEEE 802.11b(CCK)
date: 2018-12-18
I am using the following kernel device driver:

/**
 * @file ebbchar.c
 * @author Derek Molloy
 * @date 7 April 2015
 * @version 0.1
 * @brief An introductory character driver to support the second article of my series on
 * Linux loadable kernel module (LKM) development. This module maps to /dev/ebbchar and
 * comes with a helper C program that can be run in Linux user space to communicate with
 * the LKM.
 * @see http://www.derekmolloy.ie/ for a full description and follow-up descriptions.
 */

#include <linux/init.h>    // Macros used to mark up functions e.g. __init __exit
#include <linux/module.h>  // Core header for loading LKMs into the kernel
#include <linux/device.h>  // Header to support the kernel Driver Model
#include <linux/kernel.h>  // Contains types, macros, functions for the kernel
#include <linux/fs.h>      // Header for the Linux file system support
#include <asm/uaccess.h>   // Required for the copy to user function

#define DEVICE_NAME "ebbchar" ///< The device will appear at /dev/ebbchar using this value
#define CLASS_NAME  "ebb"     ///< The device class -- this is a character device driver

MODULE_LICENSE("GPL");         ///< The license type -- this affects available functionality
MODULE_AUTHOR("Derek Molloy"); ///< The author -- visible when you use modinfo
MODULE_DESCRIPTION("A simple Linux char driver for the BBB"); ///< The description -- see modinfo
MODULE_VERSION("0.1");         ///< A version number to inform users

static int majorNumber;         ///< Stores the device number -- determined automatically
static char message[256] = {0}; ///< Memory for the string that is passed from userspace
static short size_of_message;   ///< Used to remember the size of the string stored
static int numberOpens = 0;     ///< Counts the number of times the device is opened
static struct class* ebbcharClass = NULL;   ///< The device-driver class struct pointer
static struct device* ebbcharDevice = NULL; ///< The device-driver device struct pointer

// The prototype functions for the character driver -- must come before the struct definition
static int dev_open(struct inode *, struct file *);
static int dev_release(struct inode *, struct file *);
static ssize_t dev_read(struct file *, char *, size_t, loff_t *);
static ssize_t dev_write(struct file *, const char *, size_t, loff_t *);

/** @brief Devices are represented as file structures in the kernel. The file_operations structure from
 * /linux/fs.h lists the callback functions that you wish to associate with your file operations
 * using a C99 syntax structure. char devices usually implement open, read, write and release calls
 */
static struct file_operations fops =
{
   .open = dev_open,
   .read = dev_read,
   .write = dev_write,
   .release = dev_release,
};

/** @brief The LKM initialization function
 * The static keyword restricts the visibility of the function to within this C file. The __init
 * macro means that for a built-in driver (not a LKM) the function is only used at initialization
 * time and that it can be discarded and its memory freed up after that point.
 * @return returns 0 if successful
 */
static int __init ebbchar_init(void){
   printk(KERN_INFO "EBBChar: Initializing the EBBChar LKM\n");

   // Try to dynamically allocate a major number for the device -- more difficult but worth it
   majorNumber = register_chrdev(0, DEVICE_NAME, &fops);
   if (majorNumber<0){
      printk(KERN_ALERT "EBBChar failed to register a major number\n");
      return majorNumber;
   }
   printk(KERN_INFO "EBBChar: registered correctly with major number %d\n", majorNumber);

   // Register the device class
   ebbcharClass = class_create(THIS_MODULE, CLASS_NAME);
   if (IS_ERR(ebbcharClass)){        // Check for error and clean up if there is
      unregister_chrdev(majorNumber, DEVICE_NAME);
      printk(KERN_ALERT "Failed to register device class\n");
      return PTR_ERR(ebbcharClass);  // Correct way to return an error on a pointer
   }
   printk(KERN_INFO "EBBChar: device class registered correctly\n");

   // Register the device driver
   ebbcharDevice = device_create(ebbcharClass, NULL, MKDEV(majorNumber, 0), NULL, DEVICE_NAME);
   if (IS_ERR(ebbcharDevice)){       // Clean up if there is an error
      class_destroy(ebbcharClass);   // Repeated code but the alternative is goto statements
      unregister_chrdev(majorNumber, DEVICE_NAME);
      printk(KERN_ALERT "Failed to create the device\n");
      return PTR_ERR(ebbcharDevice);
   }
   printk(KERN_INFO "EBBChar: device class created correctly\n"); // Made it! device was initialized
   return 0;
}

/** @brief The LKM cleanup function
 * Similar to the initialization function, it is static. The __exit macro notifies that if this
 * code is used for a built-in driver (not a LKM) that this function is not required.
 */
static void __exit ebbchar_exit(void){
   device_destroy(ebbcharClass, MKDEV(majorNumber, 0)); // remove the device
   class_unregister(ebbcharClass);                      // unregister the device class
   class_destroy(ebbcharClass);                         // remove the device class
   unregister_chrdev(majorNumber, DEVICE_NAME);         // unregister the major number
   printk(KERN_INFO "EBBChar: Goodbye from the LKM!\n");
}

/** @brief The device open function that is called each time the device is opened
 * This will only increment the numberOpens counter in this case.
 * @param inodep A pointer to an inode object (defined in linux/fs.h)
 * @param filep A pointer to a file object (defined in linux/fs.h)
 */
static int dev_open(struct inode *inodep, struct file *filep){
   numberOpens++;
   printk(KERN_INFO "EBBChar: Device has been opened %d time(s)\n", numberOpens);
   return 0;
}

/** @brief This function is called whenever the device is being read from user space i.e. data is
 * being sent from the device to the user. In this case it uses the copy_to_user() function to
 * send the buffer string to the user and captures any errors.
 * @param filep A pointer to a file object (defined in linux/fs.h)
 * @param buffer The pointer to the buffer to which this function writes the data
 * @param len The length of the buffer
 * @param offset The offset if required
 */
static ssize_t dev_read(struct file *filep, char *buffer, size_t len, loff_t *offset){
   int error_count = 0;
   // copy_to_user has the format ( * to, *from, size) and returns 0 on success
   error_count = copy_to_user(buffer, message, size_of_message);

   if (error_count==0){           // if true then have success
      printk(KERN_INFO "EBBChar: Sent %d characters to the user\n", size_of_message);
      return (size_of_message=0); // clear the position to the start and return 0
   }
   else {
      printk(KERN_INFO "EBBChar: Failed to send %d characters to the user\n", error_count);
      return -EFAULT;             // Failed -- return a bad address message (i.e. -14)
   }
}

/** @brief This function is called whenever the device is being written to from user space i.e.
 * data is sent to the device from the user. The data is copied to the message[] array in this
 * LKM using the sprintf() function along with the length of the string.
 * @param filep A pointer to a file object
 * @param buffer The buffer that contains the string to write to the device
 * @param len The length of the array of data that is being passed in the const char buffer
 * @param offset The offset if required
 */
static ssize_t dev_write(struct file *filep, const char *buffer, size_t len, loff_t *offset){
   sprintf(message, "%s(%zu letters)", buffer, len); // appending received string with its length
   size_of_message = strlen(message);                // store the length of the stored message
   printk(KERN_INFO "EBBChar: Received %zu characters from the user\n", len);
   return len;
}

/** @brief The device release function that is called whenever the device is closed/released by
 * the userspace program
 * @param inodep A pointer to an inode object (defined in linux/fs.h)
 * @param filep A pointer to a file object (defined in linux/fs.h)
 */
static int dev_release(struct inode *inodep, struct file *filep){
   printk(KERN_INFO "EBBChar: Device successfully closed\n");
   return 0;
}

/** @brief A module must use the module_init() module_exit() macros from linux/init.h, which
 * identify the initialization function at insertion time and the cleanup function (as
 * listed above)
 */
module_init(ebbchar_init);
module_exit(ebbchar_exit);

When I open the device driver in user space, the dev_open() function is executed:

static struct file_operations fops =
{
   .open = dev_open,
   .read = dev_read,
   .write = dev_write,
   .release = dev_release,
};

static int dev_open(struct inode *inodep, struct file *filep){
   numberOpens++;
   printk(KERN_INFO "EBBChar: Device has been opened %d time(s)\n", numberOpens);
   return 0;
}

I want to know who has open()ed the device driver node (/dev/ebbchar).
When a process calls the dev_open() function of the device driver, I want to debug the process name via printk(). What should I do?
Johan Myréen gave me the link below:
https://stackoverflow.com/questions/11915728/getting-user-process-pid-when-writing-linux-kernel-module

So I was able to debug as below:

static int dev_open(struct inode *inodep, struct file *filep){
   numberOpens++;
   printk(KERN_INFO "EBBChar: Device has been opened %d time(s)\n", numberOpens);
   printk(KERN_INFO "Loading Module\n");
   printk("The process id is %d\n", (int) task_pid_nr(current));
   printk("The process vid is %d\n", (int) task_pid_vnr(current));
   printk("The process name is %s\n", current->comm);
   printk("The process tty is %d\n", current->signal->tty); // note: tty is a pointer, printed here as an int, hence the odd value below
   printk("The process group is %d\n", (int) task_tgid_nr(current));
   printk("\n\n");
   return 0;
}

In user space, the following confirmation is made:

root@Test:~# ./ebbchar_open
EBBChar: Device has been opened 1 time(s)
Loading Module
The process id is 458
The process vid is 458
The process name is ebbchar_open
The process tty is -294157312
The process group is 458

Johan Myréen, thanks!
I want to know which process open()ed the kernel device driver
date: 2018-12-18
This is a request for official documentation. As I understand it, kernel command line parameters (e.g. root=/dev/sda1 foo=bar) that are not recognized by the kernel itself are passed on to init, which on Linux is now most commonly systemd. While the kernel's own parameters are largely documented, I can't find any documentation for systemd's. man systemd does refer to a handful of things but does not seem to cover this, and searching online has led me nowhere.
As @Bigon notes, the official documentation you are looking for is in man kernel-command-line.
Systemd invocation parameters
date: 2018-12-18
When I have some Linux distribution installed on an x64 system, I can pretty much unplug my storage drive, put it into another x64 machine, install a few HL drivers, like the graphics driver, and it will most likely run without any hassle. When it comes to ARM systems, especially ARM SoCs, like smartphones of any sort, the picture is completely different: there is a different build of the same OS (for example, an OEM Android distro) for every single smartphone. My question is: why is that? I understand that unlike PCs with their standardized architecture, there are lots and lots of SoC chips and architectures. But with the device tree in mind, I ask myself why there isn't a way to put the device tree, as the hardware description, together with the bootloader on some ROM chip, and build the Linux OS independently from any hardware specs, at least within some defined limits.
I ask myself why there isn't a way to put the device tree, as the hardware description, together with the bootloader on some ROM chip and build the Linux OS independently from any hardware specs, at least within some defined limits. Answer: Cheapness. Nobody wants to pay for the ROM chip. The SoC has a boot ROM in it, but the device tree varies depending on the circuit the SoC is in, so that's no good. You'd need a separate "BIOS chip" like x86 boards have to make this work. You can sort of make it work by treating the SD card that most ARM boards boot from as the BIOS chip; just put U-Boot and the device tree on it, and have U-Boot load the kernel from a USB drive. Then the USB drive would be (fairly) portable from ARM board to ARM board. In terms of optimization, while you can compile for ARM generically, it really pays off to target a specific processor (much more so than on x86_64).
Why are ARM SoCs so seemingly hard to handle with the Kernel?
date: 2018-12-18
I tried to figure out the build year of my Linux kernel, but it won't show me. When I enter uname -a, the output is like this:

Linux xx-xx-xx-xx 3.2.0-4-amd64 #1 SMP Debian 3.2.82-1 x86_64 GNU/Linux

I even tried this:

$ cat /proc/version
Linux version 3.2.0-4-amd64 ([email protected]) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.82-1

Is there any way to find the complete information about my kernel? I want some output like this (it's from another system):

Linux xx-xx-xx-xx 3.14.32-xxxx-std-ipv6-64 #7 SMP Wed Jan 27 18:35:08 CET 2016 x86_64 GNU/Linux
In the output from uname -a, the 3.2.0-4-amd64 part is the kernel release (uname -r) and the #1 SMP Debian 3.2.82-1 part is the kernel version (uname -v). The kernel release always has the same format; the version string can be changed at compile time. Some distributions include the compilation date in the version string, but this is not an obligation. Since you appear to have a kernel compiled by Debian scripts, you can find when the source was last patched by looking at the changelog (/usr/share/doc/linux-image-3.2.0-4-amd64/changelog.Debian.gz) and you can find when the package was built by looking at the file times (ls -l /boot/vmlinuz-3.2.0-4-amd64). There is no generic way to find the date when a kernel was built, but the date of the kernel image file is usually the same.
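The release/version split described above can be illustrated on the question's own uname -a line (the field positions assume the common GNU coreutils layout; split_uname is a hypothetical helper):

```python
def split_uname(line):
    """Split a `uname -a` line into its components.

    GNU coreutils layout: sysname nodename release version... machine os
    The version string may contain spaces, so it is everything between
    the release field and the machine field.
    """
    parts = line.split()
    return {
        "sysname": parts[0],
        "nodename": parts[1],
        "release": parts[2],               # what `uname -r` prints
        "version": " ".join(parts[3:-2]),  # what `uname -v` prints
        "machine": parts[-2],
    }

info = split_uname("Linux xx-xx-xx-xx 3.2.0-4-amd64 #1 SMP Debian 3.2.82-1 x86_64 GNU/Linux")
```

Note how the second sample in the question has a build date inside the version field, while the Debian one does not, which is exactly the point made above: including the date is a compile-time choice.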
how to find out the kernel's information?
date: 2018-12-18
Suppose I have a memory-mapped peripheral which can be read from or written to at some address, say 0x43C00000. I want to be able to read from that same memory location in my Linux OS in order to communicate with that peripheral. Since the address in question would be a physical address, I should be able to write a kernel module that can read from that address location. In the kernel, if I have something like

#define BASE_ADDR 0x43C00000
#define OFFSET 4

int * mem_addr;
mem_addr = BASEADDR + OFFSET;

that, I think, should give me the pointer to the second writing block of the peripheral, at 0x43C00004. Printing printk(KERN_INFO "%p\n", mem_addr) seems to tell me this is right. Now if I try to do something like printk(KERN_INFO "%d\n", *mem_addr); I would have thought that that should read the data being written to memory by my peripheral, accomplishing what I was trying to do. But if I put a statement like this into a module, Linux kills it. Looking at my /var/log/messages I see this:

Oops: 0000 [#1] SMP
Modules linked in: TEST_MOD(0+)

...followed by a bunch of information about register states. So I'm apparently not allowed to just read memory like that. Is there some way to grant a kernel module access to read that memory?
You need to set up a kernel virtual address mapping for the location, e.g. mem_addr = ioremap_nocache(BASEADDR + OFFSET, SIZE); (you appear to have asked the same question twice).
How to allow access to memory in a kernel module? [closed]
date: 2018-12-18
I have a little Linux system with 256 MB RAM. I'm a little bit confused: where may the RAM be lost? It is running the old Linux kernel 2.6.38 and I'm not able to upgrade it (some specific ARM board). SHM and all tmpfs-mounted filesystems are almost empty (shmem: 448kB). Everything is consumed by active_anon pages, but the running processes do not correspond with this. The sum of total_vm is just 90MB, and that includes duplicates, shared memory, unallocated memory... But active_anon is reported as 235MB. Why? How can I resolve this problem? Is there some memory leak in the kernel? Here is the relevant dmesg:

Mem-info:
Normal per-cpu:
CPU 0: hi: 90, btch: 15 usd: 14
active_anon:60256 inactive_anon:67 isolated_anon:0
active_file:0 inactive_file:185 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
free:507 slab_reclaimable:120 slab_unreclaimable:463
mapped:108 shmem:112 pagetables:217 bounce:0
Normal free:2028kB min:2036kB low:2544kB high:3052kB active_anon:241024kB inactive_anon:268kB active_file:0kB inactive_file:740kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:260096kB mlocked:0k
lowmem_reserve[]: 0 0
Normal: 37*4kB 139*8kB 42*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2028kB
305 total pagecache pages
65536 pages of RAM
622 free pages
1976 reserved pages
404 slab pages
393 pages shared
0 pages swap cached
[ pid ]   uid  tgid total_vm   rss cpu oom_adj oom_score_adj name
[  713]     0   713      666    40   0       0             0 busybox
[  719]     0   719      634    18   0       0             0 busybox
[  725]     0   725      634    15   0       0             0 busybox
[  740]     0   740      654    19   0       0             0 inetd
[  752]     0   752      634    17   0       0             0 ifplugd
[  761]     0   761      634    21   0       0             0 busybox
[  790]     0   790     4297   110   0       0             0 app
[  792]     0   792      635    15   0       0             0 getty
[  812]     0   812      634    16   0       0             0 exe
[  849]   101   849      630    57   0       0             0 lighttpd
[  850]   101   850     3005   218   0       0             0 php-cgi
[  851]   101   851     3005   218   0       0             0 php-cgi
[ 3172]     0  3172    72156 59739   0       0             0 app
[ 3193]     0  3193      675    23   0       0             0 ntpd
[ 4003]     0  4003      634    15   0       0             0 ntpd_prog
[ 4004]     0  4004      634    15   0       0             0 hwclock
[ 4005]     0  4005      634    20   0       0             0 hwclock
Out of memory: Kill process 3172 (app) score 912 or sacrifice child
Killed process 3172 (app) total-vm:288624kB, anon-rss:238684kB, file-rss:272kB

Here is a list of mounted filesystems. The root filesystem is r/w YAFFS2 on MTD flash.

rootfs on / type rootfs (rw)
/dev/root on / type yaffs2 (rw,relatime)
none on /proc type proc (rw,relatime)
none on /sys type sysfs (rw,relatime)
mdev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
none on /proc/bus/usb type usbfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime,mode=622)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
none on /tmp type tmpfs (rw,relatime,size=102400k,mode=777)
none on /run type tmpfs (rw,relatime,size=10240k,mode=755)
Total_vm was badly calculated by me, and the OOM report is correct: app has allocated 59739 resident pages, which is 233MB. So this is the correct reason for the OOM.
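The arithmetic behind that conclusion, assuming the usual 4 KiB page size on this board:

```python
PAGE_SIZE = 4096  # bytes per page; typical for ARM and x86

def pages_to_mib(pages):
    """Convert a page count (as reported in the OOM dump) to MiB."""
    return pages * PAGE_SIZE / (1024 * 1024)

# app's resident set from the OOM report: 59739 pages is roughly 233 MiB,
# which accounts for nearly all of the board's RAM
# ("65536 pages of RAM" in the dump is exactly 256 MiB).
rss_mib = pages_to_mib(59739)
total_mib = pages_to_mib(65536)
```

The same conversion applied to total_vm (72156 pages, about 282 MiB) matches the "total-vm:288624kB" figure in the kill message.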
Embedded Linux OOM - help with lost RAM
date: 2018-12-18
I have recently accessed an STMicroelectronics Base Distribution with BusyBox v1.18.2 as the built-in shell. I read all the information in the STLinux documentation regarding my problems and followed all the steps. There was no directory /home/STLinux, so I just created a new one.

# Step 2
[root@stlinux]# cp -r /opt/STM/STLinux-X.X/devkit/sources/kernel/linux-sh4/
BusyBox v1.18.2 (2011-07-13 20:52:52 CST) multi-call binary.

Usage: cp [OPTIONS] SOURCE DEST

Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY

Options:
  -a      Same as -dpR
  -R,-r   Recurse
  -d,-P   Preserve symlinks (default if -R)
  -L      Follow all symlinks
  -H      Follow symlinks on command line
  -p      Preserve file attributes if possible
  -f      Overwrite
  -i      Prompt before overwrite
  -l,-s   Create (sym)links

# Step 3
[root@stlinux]# cd /home/STLinux/linux-sh4
-sh: cd: can't cd to /home/STLinux/linux-sh4

# Step 4
[root@stlinux]# make ARCH=sh CROSS_COMPILE=sh4-linux- vmlinux
-sh: make: not found

# Step 5
[root@stlinux]# make ARCH=sh CROSS_COMPILE=sh4-linux- help | grep ^mb
-sh: make: not found

# All other steps were associated with make.

# Step 9
[root@stlinux]# find . | grep "\.ko$"
[root@stlinux]#

# Step 10
[root@stlinux]# make ARCH=sh CROSS_COMPILE=sh4-linux- vmlinux
-sh: make: not found

Is it possible to build the kernel without having these toolkits? If not, how can I install them without having any package managers?
Unfortunately, you cannot compile a kernel for STLinux on STLinux. You are not supposed to, at least: these are embedded devices with limited resources. What you have to do is have or install Linux on another (Intel) machine, cross-compile the kernel, and then copy it over to the destination machine. BTW, cross compilation is the act of building binaries/tools for a machine of a different architecture. If you look at the CROSS_COMPILE directive you have in your post, it seems evident the tutorial you are following is not written to be used directly on the device. As an example, I cross-compiled my ARM NetBSD toolchain, binaries and custom kernel on a Debian Jessie machine. You have to build the toolkit using the native development tools. After that, it seems STLinux has some additions to the toolkit; you will then be using the new compiler and environment of the cross-compiler toolkit to generate native binaries. There is another related thread here: How do I install another distro on a linux DVR. Furthermore, I doubt you would be successful compiling a new kernel without knowing what you are doing. The ARM/MIPS architecture has some quirks that differ substantially from the standard Intel one, and these devices often ship with substantially hacked kernels. Nonetheless, to give a very short answer to your question: no, you won't be able to compile a new kernel on your device. The RAM and disk are usually pretty limited for development needs.
Compiling a kernel on STLinux
date: 2018-12-18
My Ubuntu server returns Linux 3.13.0-63-generic when I run uname -rs. From what I found on the internet, uname is also a system call, which can't be easily overridden when a third-party program makes this system call from C++, for example. Does anyone know if there is a way to spoof the return value, e.g. by manipulating the /proc/sys/kernel/ostype files? It would be even better if I could spoof it per process instance.
As far as I know, there are only limited ways in which the release (uname -r) and machine (uname -m) can be customized per process, using the personality() system call, all exposed through the setarch command, and the sysname (uname -s) cannot be customized at all.

$ uname -rsm; setarch i386 --uname-2.6 --32bit uname -rsm
Linux 3.16.0-4-amd64 x86_64
Linux 2.6.56-4-amd64 i686

If you want to spoof uname in a different way and the program is dynamically linked, you can use LD_PRELOAD to override the uname function; see Redirect a file descriptor before execution for an example of function overriding through LD_PRELOAD. If the program is statically linked, you can use ptrace to spoof its system calls, but that requires fancier programming.
How to spoof uname -rs per process
date: 2018-12-18
My system crashed, and I haven't been able to determine what caused it even after going through the usual logs: messages, dmesg, secure, etc. There's nothing valuable in them, so I decided to follow a tutorial on running the crash application to see what might have happened. Every time I run it with:

$ sudo crash System.map-3.10.0-123.el7 vmlinuz-3.10.0-123.el7 vmcore

I get the following error:

crash: vmlinuz-3.10.0-123.el7: not a supported file format

Here's the output:

crash 7.0.9-5.el7_1
Copyright (C) 2002-2014 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.

crash: vmlinuz-3.10.0-123.el7: not a supported file format

Usage:
  crash [OPTION]... NAMELIST MEMORY-IMAGE[@ADDRESS] (dumpfile form)
  crash [OPTION]... [NAMELIST]                      (live system form)

Enter "crash -h" for details.

I'm using CentOS 7. What might be wrong? Also, any ideas on how to get better information about the crash would be appreciated.
The vmlinuz file is a compressed kernel image, which crash cannot read; it wants the uncompressed vmlinux, ideally with debug symbols (on CentOS that comes from the kernel-debuginfo package matching your kernel). The steps to extract a vmlinux are too long to copy-paste here, so I'm gonna provide a link instead. https://stackoverflow.com/a/30514503/3979290
After my system crashed and I run crash, but I get the following error: not a supported file format
1,545,111,506,000
So, I've recently purchased the named keyboard and have been doing some reverse engineering as to how the Logitech Gaming Software does things with it. In this process I've discovered that a few magic packets are sent to the device to unbind the default f1-6 from g1-6; however after this part things get tricky. None of the special keys (m1-3, mr, g1-6) report any scancode according to any standard tool; they all send hid reports on the same usage, ff00.0003, using bitwise logic. Each key sends an hid report in the following format: 03 gg mm where gg is g# = (0x01 << #-1) and mm is m# = (0x01 << #-1) (mr treated as m4 for this math), so pressing g1 and g2 at the same time yields 04 03 01 and so on; the values are ORed together. As such, I cannot find any particularly useful way of mapping these hid reports to a known scancode (say, BTN_TRIGGER_HAPPY?) for easy userspace remapping with xbindkeys or the like. You can find an extensive dump of information on this keyboard at https://github.com/GSeriesDev/gseries-tools/blob/master/g105/info , if it's of any help.
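For what it's worth, the bit layout described above is easy to decode in a few lines; this sketch assumes the 03 gg mm layout holds exactly as written:

```c
/* Decode the 3-byte vendor report described above:
   byte 0: report ID (0x03)
   byte 1: G-key bitmask, g1..g6 = bits 0..5
   byte 2: M-key bitmask, m1..m3 = bits 0..2, MR treated as m4 = bit 3 */
#include <stdint.h>

/* n = 1..6: is G<n> held in this report? */
int g_key_down(const uint8_t report[3], int n)
{
    return report[0] == 0x03 && (report[1] & (1u << (n - 1))) != 0;
}

/* n = 1..4: is M<n> held? (MR is m4) */
int m_key_down(const uint8_t report[3], int n)
{
    return report[0] == 0x03 && (report[2] & (1u << (n - 1))) != 0;
}
```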
There is now a Linux driver for the Logitech G105 keyboard: it's called sidewinderd and is available on GitHub.
Map non-standard hid reports to scancodes for Logitech G105 Gaming Keyboard
1,545,111,506,000
I need to install some iptables rules to block traffic that originates from a certain country. I found this script example on http://www.cyberciti.biz/faq/block-entier-country-using-iptables/ and it works great on another host I have, but on this one (an embedded box) I get:

./iptable_rules.sh
modprobe: module ip_tables not found in modules.dep
iptables v1.4.16.3: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Now, upgrading the kernel is not an option, due to the nature of the device. Does anyone know a way I can get around this? This system is running on kernel 3.2.34
You should be able to re-compile iptables against your kernel source using something similar to the following commands:

make KERNEL_DIR=/usr/src/linux
make install KERNEL_DIR=/usr/src/linux

and, if the kernel itself lacks the netfilter support, rebuild the kernel:

make dep
make bzImage
make
make install
make modules

Source: iptables: Table does not exist (do you need to insmod?)
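For reference, if you do end up rebuilding the kernel, the options that provide the missing pieces are roughly these (a sketch; exact option names can vary between kernel versions):

```
CONFIG_NETFILTER=y
CONFIG_NETFILTER_XTABLES=m
CONFIG_IP_NF_IPTABLES=m      # builds the ip_tables module
CONFIG_IP_NF_FILTER=m        # builds iptable_filter, i.e. the `filter' table
```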
iptables - why do I get "Table does not exist (do you need to insmod?)"
1,545,111,506,000
On all my Unixes (CentOS, FreeBSD, MacOS X) I activate system accounting as a basic security rule. On MacOS X (Yosemite, 10.10.3) I see a misbehaviour that I am investigating in order to fix it. Every time I run lastcomm to analyze a recent set of terminated processes, I find processes that really terminated but with dates coming from the future, as in this example:

••My_Mac••$ lastcomm | more
lastcomm    -X  bob    ttys007  0.00 secs Mon Jul 26 14:13 (0:00:03.05)
more        -   bob    ttys007  0.00 secs Mon Jul 26 14:13 (0:00:03.05)
stty        -   bob    ttys007  0.00 secs Mon Jul 26 14:13 (0:00:00.05)
path_helpe  -   bob    ttys007  0.00 secs Mon Jul 26 14:13 (0:00:00.50)
sshd        -SF _sshd  __       0.00 secs Mon Jan 27 17:31 (0:00:04.91)
procmail    -S  bob    __       0.00 secs Mon Jul 26 14:11 (0:00:00.09)
cron        -F  root   __       0.00 secs Mon Jun  1 13:10 (0:00:00.33)
sendmail    -S  root   __       0.00 secs Mon Jun  1 13:10 (0:00:00.31)
postdrop    -   root   __       0.00 secs Mon Jun  1 13:10 (0:00:00.09)
[...]
••My_Mac••$ date
Mon Jun  1 13:12:07 CEST 2015
••My_Mac••$

At first sight, the problem isn't as simple as a timezone error: Jul 26, 14:13 (wrong timestamp) and Jun 1, 13:10 (correct timestamp) are many days apart. Nor is this a huge time drift caused by an erroneous use of date. Moreover, this server is NTP-synchronized against a stratum 1 NTP server. Did someone else see the same misbehaviour? Do you see a path to better investigate this problem? Is this a known bug?
Found the same misbehavior on El Capitan (MacOSX 10.11.x). On investigation, I saw that the timestamp data in the /var/account/acct file was correct. Compiling the source of lastcomm.c in http://opensource.apple.com/tarballs/shell_cmds/shell_cmds-187.tar.gz showed the compiler message "warning: incompatible pointer types passing 'u_int32_t *' (aka 'unsigned int *') to parameter of type 'const time_t *' (aka 'const long *')". The following patch to fix the bad accounting timestamp worked for me on El Capitan. It may work for Yosemite (10.10.x) and earlier.

*** lastcomm.c  2016/08/26 19:44:23  1.1
--- lastcomm.c  2016/08/27 00:30:49  1.2
***************
*** 135,138 ****
--- 135,140 ----
        if (!*argv || requested(argv, &ab)) {
+           time_t timelong;
+           timelong = ab.ac_btime;
            t = expand(ab.ac_utime) + expand(ab.ac_stime);
            (void)printf(
***************
*** 144,148 ****
            user_from_uid(ab.ac_uid, 0), UT_LINESIZE, UT_LINESIZE,
            getdev(ab.ac_tty),
!           t / (double)AHZ, ctime(&ab.ac_btime));
        delta = expand(ab.ac_etime) / (double)AHZ;
        printf("  (%1.0f:%02.0f:%05.2f)\n",
--- 146,150 ----
            user_from_uid(ab.ac_uid, 0), UT_LINESIZE, UT_LINESIZE,
            getdev(ab.ac_tty),
!           t / (double)AHZ, ctime(&timelong));
        delta = expand(ab.ac_etime) / (double)AHZ;
        printf("  (%1.0f:%02.0f:%05.2f)\n",
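The compiler warning above is the whole bug: on an LP64 system time_t is 8 bytes, so handing ctime() a pointer to the 4-byte accounting field makes it read 4 bytes of the neighbouring field as well. A small sketch of the misread (the struct and field names here are made up to mimic two adjacent 32-bit accounting fields):

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Mimic two adjacent 32-bit fields in the accounting record. */
struct rec {
    uint32_t ac_btime;   /* the timestamp we want         */
    uint32_t ac_next;    /* whatever happens to follow it */
};

/* What the patched code does: widen the 32-bit value first. */
time_t widened(const struct rec *r)
{
    return (time_t)r->ac_btime;
}

/* What the original pointer cast effectively did on LP64:
   reinterpret sizeof(time_t) bytes starting at the 4-byte field. */
time_t misread(const struct rec *r)
{
    time_t t;
    memcpy(&t, &r->ac_btime, sizeof t);
    return t;
}
```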
System accounting misbehaviour on MacOS X (Yosemite)?
1,545,111,506,000
I am trying to modify the default congestion control algorithm in FreeBSD (NewReno) by creating a copy of the source file (cc_newreno.c, located in /usr/src/sys/netinet/cc) called cc_newreno_mod.c and making changes to it. Suppose I have made some modifications. How do I test them? Compiling the cc_newreno_mod.c directly (using the built-in C compiler) results in multiple errors, some of which seem strange (for example netinet/cc/cc_module.h file not found, although the file clearly is there). Should I build a new Kernel? Will the module from the changed file be created automatically? Or am I totally wrong and I should take a different approach?
To compile a kernel module you should create a Makefile that includes the kernel module makefile /usr/src/share/mk/bsd.kmod.mk, for example:

# Note: It is important to make sure you include the <bsd.kmod.mk> makefile after declaring the KMOD and SRCS variables.

# Declare name of kernel module
KMOD = module

# Enumerate source files for kernel module
SRCS = module.c

# Include kernel module makefile
.include <bsd.kmod.mk>

Finally, run make to compile it, so you can test whether it compiles properly. Since the congestion control code is not present among the loadable kernel modules (/boot/kernel/*.ko) but is listed in sys/conf/files, I think you should recompile your kernel to apply the changes. For more info you can see this page. As your file is a copy of cc_newreno.c, you can rename the original /usr/src/sys/netinet/cc/cc_newreno.c to something else to save a copy, put your new one in its place, and recompile.
How to test modified FreeBSD source code?
1,545,111,506,000
I am conducting a kind of research in which I schedule multiple parallel applications (e.g., OpenMP/pthreaded applications) and execute them on specific (partitioned) cores on Linux-based multi-processor platforms. We can set CPU affinities for each application by using the sched_setaffinity() system call. But, as you know, Linux manages (all) running programs as well. So the executions of the applications that I scheduled are sometimes interrupted by other processes that Linux scheduled. I want to pin all processes and daemons (except for the applications that I scheduled) to CPU 0. My first thought was to set CPU 0 manually by traversing all tasks from the init task in a kernel module. But the result would be affected by Linux load balancing. We need another way to somehow turn off or manage Linux CPU load balancing. Is there any possible way, or system configuration, to do this? My target platform is an AMD Opteron server (containing 64 cores) and the Linux version is 3.19.
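For reference, the affinity call mentioned above boils down to something like this minimal sketch (Linux-specific, hence _GNU_SOURCE):

```c
/* Pin the calling process to a single CPU via sched_setaffinity(2). */
#define _GNU_SOURCE
#include <sched.h>

int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* pid 0 means "the calling thread"; the mask is inherited
       across fork()/exec(), so children stay pinned too. */
    return sched_setaffinity(0, sizeof set, &set);
}
```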
You should be able to disable the automated load balancing by telling the kernel to only use the first N CPUs. E.g., adding the following to your boot parameters should effectively run the entire system on CPU #0 (as the system will only use a single CPU):

maxcpus=1

then use taskset or similar to run your process on a different CPU (you may first have to bring that CPU online via /sys/devices/system/cpu/cpuN/online, since maxcpus=1 leaves the other CPUs offline at boot).
setting (system-wide) CPU affinities for running processes on a Linux platform
1,545,111,506,000
I am setting up my Macbook Air 6.2 with Debian Jessie. Since I do not have any wireless, I need to set it up manually, but all the documentation I find on how to do it uses apt-get:

apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') broadcom-sta-dkms

source

How can I do it without apt-get? I did install the linux-headers and other essential packages manually, but now I am stuck rebuilding my kernel without this short and beautiful command .. -.-'
I think this could work:

apt-get install --print-uris linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') broadcom-sta-dkms

That will print the URLs of the .deb files you need to download. You can download them on another computer and copy them over on a USB drive. Then:

dpkg -i downloadedfile.deb

And so on...
rebuilding kernel without using apt-get
1,545,111,506,000
I am running GNU/Linux (CentOS 6) on kernel 2.6.32-431.17.1.el6.x86_64. I am trying to update the kernel to 3.2.61. I performed the following steps inside the 3.2.61 folder structure:

1. make menuconfig (took defaults- didn't add anything)
2. make
3. make modules
4. make modules_install
5. make install

On step 5, I received the following error:

ERROR: modinfo: could not find module lpc_ich

I tried yum install lpc_ich, but that did not exist. This is my first time trying to install a new kernel. I am not really sure if I am doing this correctly. Could someone please help point me in the right direction?
It's important to give the toolchain used to build the kernel the location of the kernel source tree. Otherwise, even if the compilation runs perfectly, the installation may fail with errors about missing modules or parts. The kernel source tree is specified through the KERNEL_TREE environment variable. It defaults to /usr/src/linux. So either export this variable in the terminal in which you make the kernel:

export KERNEL_TREE=/usr/src/linux-3.2.61

or define a symlink from /usr/src/linux-3.2.61 to /usr/src/linux:

ln -s /usr/src/linux-3.2.61 /usr/src/linux

Of course, replace /usr/src/linux-3.2.61 with the corresponding kernel source directory.
Error installing kernel on Centos (from source)
1,545,111,506,000
I am facing a similar problem as this: Kernel does not recognize nand bad blocks marked by u-boot I'm using a friendlyARM micro2440 board that contains the s3c2440 ARM processor. u-Boot has found some bad blocks and written their positions in the bad block table, but when I boot the kernel it seems to be unable to find those bad blocks and then crashes. I wanted to try the obscure solution found by that user before, but I can't figure out how to do it: figuring out the BBT offset (maybe s3c2440's BBT offset is also an unusual value and not the one used by uboot). Also, if that's the case, how would I change u-Boot's BBT offset?
It was found that the problem did not reside in the BBT offset as previously stated. The source of the problem was the usage of squashfs, as said in this link: http://elinux.org/Support_read-only_block_filesystems_on_MTD_flash The solutions would be to either use another filesystem or to use UBI to detect the bad blocks.
How to find out the bad block table offset and how to change it in u-Boot
1,545,111,506,000
While trying to read some kernel parameters, I think I have mistakenly set some instead: # sysctl --system -r ^net.*tcp * Applying /usr/lib/sysctl.d/00-system.conf ... * Applying /usr/lib/sysctl.d/50-default.conf ... kernel.sysrq = 16 kernel.core_uses_pid = 1 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 fs.protected_hardlinks = 1 fs.protected_symlinks = 1 * Applying /etc/sysctl.d/99-sysctl.conf ... * Applying /etc/sysctl.conf ... Now, is there a way to undo those changes?
During boot, sysctl settings are initially set to default values hardcoded into the kernel. You most likely don't want to revert to these, as system-specific settings are loaded from the various system configuration files by a sysctl init script, in a manner similar to the command you executed, except not limited to settings matching a certain pattern. Unless you've actually edited some settings in any of the configuration files, or configured certain sysctl settings directly via sysctl, chances are that you've not actually changed any of the settings by reloading the configuration. If you've actually set some settings directly with sysctl, without recording the corresponding change in a particular configuration file, the change will be lost at reboot. The command sysctl -a displays all available sysctl values. To read only the settings matching a pattern, without applying anything, drop the --system option (that is what loads and applies the configuration files) and use e.g. sysctl -a -r '^net.*tcp'.
How to undo kernel settings?
1,545,111,506,000
I have to load a few additional modules. One of them generates the /dev/knem file. I have to set the permissions to 0666; a basic chmod 0666 /dev/knem works, but I would like to assign them directly at boot time. Where shall I write the config so that it is directly set by the kernel when it loads the module? Thanks in advance
I might be wrong, but can't you use udev rules to assign 0666 permissions when /dev/knem is created?

http://www.reactivated.net/writing_udev_rules.html#syntax

Controlling permissions and ownership

udev allows you to use additional assignments in rules to control ownership and permission attributes on each device. The GROUP assignment allows you to define which Unix group should own the device node. Here is an example rule which defines that the video group will own the framebuffer devices:

KERNEL=="fb[0-9]*", NAME="fb/%n", SYMLINK+="%k", GROUP="video"

The OWNER key, perhaps less useful, allows you to define which Unix user should have ownership permissions on the device node. Assuming the slightly odd situation where you would want john to own your floppy devices, you could use:

KERNEL=="fd[0-9]*", OWNER="john"

udev defaults to creating nodes with Unix permissions of 0660 (read/write to owner and group). If you need to, you can override these defaults on certain devices using rules including the MODE assignment. As an example, the following rule defines that the inotify node shall be readable and writable to everyone:

KERNEL=="inotify", NAME="misc/%k", SYMLINK+="%k", MODE="0666"

A step-by-step guide on creating udev rules can be found in this post: http://ubuntuforums.org/showthread.php?t=168221
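Applied to the /dev/knem case, a rule along these lines should do it (assuming the kernel names the node knem; the rules file name is arbitrary):

```
# /etc/udev/rules.d/99-knem.rules  (hypothetical file name)
KERNEL=="knem", MODE="0666"
```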
set permission 0666 on /dev files at boot time
1,398,327,992,000
I am currently running the 3.5.0-48 kernel on my Ubuntu 12.04.4, and I was wondering whether I should upgrade to 3.8 or 3.11. This makes it seem easy enough, though I will hold off on throwing away my current kernel for now. Can someone list reasons not to upgrade? And potentially other/better ways to do it?
I am running the Saucy kernel on both my development machine and my in-house server without problems. I followed the official Ubuntu guide for the LTS Enablement Stacks. As with any change, your primary question should be: what do I need this for? If there is no need, and you are not just investigating things for fun, why run the risk of breaking your setup? The main reason for me to do so (I mean upgrading, not breaking) was that I have btrfs filesystems that had performance problems.
Should I upgrade Ubuntu 12.04 kernel [closed]
1,398,327,992,000
I'm using Buildroot to generate an embedded Linux with a kernel v. 2.6.39, which in the end starts BusyBox. Everything works fine when building with initramfs as the rootfs. But initramfs isn't the best for my needs, so I want to switch to another fs like SquashFS, or even better, not compressing it at all. Anyway, I can't figure out how to tell the kernel that it shall boot, for instance, the SquashFS file. What I do know is that this is done by some kernel command line parameters. Unfortunately I can't find more about this with different search engines or here, and so it doesn't work. It always ends, as expected, with a kernel panic. And how is it done if I haven't got it compressed and therefore it just has to be copied from flash to RAM?
Make sure you build whatever filesystem you want directly into the kernel and not as a module. SquashFS is read-only, so you can't use that alone. You may be better off booting from an initramfs and then loading the root filesystem from an image, but that's your call.
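The root filesystem is indeed selected with kernel command-line parameters. As a sketch, typical bootargs for a SquashFS root on raw flash might look like this (the MTD partition number and console device are assumptions about your board):

```
console=ttyS0,115200 root=/dev/mtdblock2 rootfstype=squashfs ro
```

For an uncompressed image that has to be copied from flash to RAM first, the usual route is to have the bootloader load it as a ramdisk and point the kernel at it with root=/dev/ram0.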
Change the root filesystem on an embedded system
1,398,327,992,000
I know that with a GNU/Linux distribution you can switch between different distributions at runtime when they use the same kernel, or when the kernel is compatible between 2 or more different distributions; I have never used this feature of Linux-based OSs, but I would like to ask what software makes this possible and how to perform this switch at runtime correctly. It would also be nice to have a list of pros and cons, for example what settings I lose and what I can keep during the switch.
Generally speaking, this is virtualization, which can take many forms. One OS is running on the physical hardware, and the other OSes are running in a more or less virtual environment. If you want to run very different operating systems (for example Linux and Windows), run one of them in a virtual machine. At the other extreme, if you want to have access to programs from several distributions (for example a stable distribution and a bleeding edge distribution), all running the same kernel, you can install one of the distributions in a directory subtree and run its programs inside the directory subtree thanks to the chroot command. For an example of how to do this on Debian, Ubuntu and derivatives, see How do I run 32-bit programs on a 64-bit Debian/Ubuntu? Linux offers more complex features to run multiple Linux systems off the same kernel: LXC, namespaces, …
What software allows the change of runtime-environment, while keeping the same kernel running?
1,398,327,992,000
I often use lspci -v to check the LKM in use for a particular hardware device. LKMs are listed as "Kernel modules" and can be seen with lsmod. However, what is a "Kernel driver"? For example here: is the "bcma-pci-bridge" a module built into the kernel (I'm using 3.11.0), and thus not loadable, so that it will not appear in lsmod, cannot be unloaded with modprobe -r, and cannot be checked with modinfo?
From checking /boot/config-3.11.0-13-generic (yours might be different), I would guess that it's built into the kernel, and thus you can't unload/reload it:

$ grep -i BCMA /boot/config-3.11.0-13-generic
[...]
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
[...]

(An option set to =y is compiled into the kernel image; =m would mean it is built as a loadable module and would show up in lsmod.)
"kernel driver" in "lspci" output
1,398,327,992,000
My Debian computer will not install the new kernel. It says I have unmet dependencies, and those dependencies say they have unmet dependencies. Many of these dependencies are already installed, however. Running apt-get update, apt-get upgrade, and apt-get install -f does not fix the problem. My sources list is as follows:

deb http://ftp.us.debian.org/debian stable main contrib non-free
deb-src http://ftp.us.debian.org/debian stable main contrib non-free
deb http://ftp.debian.org/debian/ squeeze-updates main contrib non-free
deb-src http://ftp.debian.org/debian/ squeeze-updates main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free
# Debian Squeeze Backports
deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free
deb-src http://backports.debian.org/debian-backports squeeze-backports main contrib non-free

I've tried installing from squeeze-backports and still no luck. Do you guys know what might be going on? Thanks for the help :)
If you want to install from squeeze-backports, you should tell apt-get with the -t parameter, and specify the version of the package you want, i.e.:

apt-get install -t squeeze-backports <package-name>=<version>

In order to know which version is provided by a given repository, you can use the apt-cache show command and look for the info of the package in that repository. In your case, the command should be:

apt-get install -t squeeze-backports linux-image-2.6-amd64=3.2+45~bpo60+1

for an amd64 Linux kernel. Of course, you may have to run the usual apt-get update first, and if a new kernel package has been uploaded, you might have to replace 3.2+45~bpo60+1 with the new package version. If you want to install the 3.2 kernel, then the command is:

apt-get install -t squeeze-backports linux-image-3.2.0-0.bpo.3-amd64=3.2.23-1~bpo60+2

but you might omit the version number if you don't have any other repository hosting that kernel in your sources list (i.e., you don't have testing or unstable).
Apt-Get Install Unmet Dependencies
1,398,327,992,000
Is it possible to compile the Linux kernel so that, when it is booting up, it displays an image, and ONLY an image, instead of the boot-up text (like OS X)?
There are typically two main components:

The bootloader: nowadays typically grub2; on Linux boot CDs it is typically isolinux.
A program displaying some kind of graphical interface: nowadays typically plymouth.

If you are using a distribution targeted at consumers, both should be automatically configured and installed by your distribution.
Displaying just an image instead of text
1,398,327,992,000
From here: http://www.xenomai.org/index.php/FAQs#Which_kernel_settings_should_be_avoided.3F

Which kernel settings should be avoided? Note that Xenomai will warn you about known invalid combinations during kernel configuration.

- CONFIG_CPU_FREQ
- CONFIG_APM
- CONFIG_ACPI_PROCESSOR

Now, when I look in the .config, I do find these options clearly, but I don't know their dependencies. So, is it wise to simply put an n next to these options in the .config file? Will the make procedure take care of the dependencies? The make menuconfig window does not present these options explicitly.
make menuconfig does present this option. If you are in the menu press / and search for CPU_FREQ. This will show all CONFIG parameters containing CPU_FREQ. It does also show how you can access it through the menu, e.g: │ Symbol: CPU_FREQ [=y] │ Type : boolean │ Prompt: CPU Frequency scaling │ Defined at drivers/cpufreq/Kconfig:3 │ Location: │ -> Power management and ACPI options │ -> CPU Frequency scaling This means you find it under Power managment and ACPI options -> CPU Frequency scaling and the name of the entry is CPU Frequency scaling.
Edit the .config file when en/disabling a particular option like CONFIG_CPU_FREQ?
1,398,327,992,000
I've updated my Mint 12 system to be running a kernel built from Linus' git repo. Now, whenever I run apt-get upgrade, I see that linux*-generic are still on the update list (they're being "kept back", but they're still present). How do I remove these from the list of packages for APT to track?
Since you're running your own kernel, you don't need to keep the official kernel installed. Remove (apt-get remove or - in Aptitude) the linux*-generic packages.
Removing generic kernel updates from aptitude post custom kernel install
1,398,327,992,000
I rebooted into a compiled kernel 3.1.0, and these are the errors that I am getting:

linux-dopx:/usr/src/linux-3.1.0-1.2 # make install
sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \
System.map "/boot"
Kernel image:   /boot/vmlinuz-3.1.0
Initrd image:   /boot/initrd-3.1.0
Root device:    /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4)
Resume device:  /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1)
find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory
modprobe: Module ata_generic not found.
WARNING: no dependencies for kernel module 'ata_generic' found.
modprobe: Module ext4 not found.
WARNING: no dependencies for kernel module 'ext4' found.
Features:       block usb resume.userspace resume.kernel
Bootsplash:     openSUSE (1280x1024)
41713 blocks

Rebooting says:

Could not load /lib/modules/3.1.0/modules.dep

EDIT1: Here's what I did:

linux-dopx:/usr/src/linux-3.1.0-1.2 # make bzImage
CHK     include/linux/version.h
CHK     include/generated/utsrelease.h
CALL    scripts/checksyscalls.sh
CHK     include/generated/compile.h
Kernel: arch/x86/boot/bzImage is ready  (#1)
linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules
CHK     include/linux/version.h
CHK     include/generated/utsrelease.h
CALL    scripts/checksyscalls.sh
Building modules, stage 2.
MODPOST 3 modules
linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules install
CHK     include/linux/version.h
CHK     include/generated/utsrelease.h
CALL    scripts/checksyscalls.sh
CHK     include/generated/compile.h
Building modules, stage 2.
MODPOST 3 modules sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \ System.map "/boot" Kernel image: /boot/vmlinuz-3.1.0 Initrd image: /boot/initrd-3.1.0 Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4) Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1) find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory modprobe: Module ata_generic not found. WARNING: no dependencies for kernel module 'ata_generic' found. modprobe: Module ext4 not found. WARNING: no dependencies for kernel module 'ext4' found. Features: block usb resume.userspace resume.kernel Bootsplash: openSUSE (1280x1024) 41713 blocks linux-dopx:/usr/src/linux-3.1.0-1.2 # make install sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \ System.map "/boot" Kernel image: /boot/vmlinuz-3.1.0 Initrd image: /boot/initrd-3.1.0 Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4) Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1) find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory modprobe: Module ata_generic not found. WARNING: no dependencies for kernel module 'ata_generic' found. modprobe: Module ext4 not found. WARNING: no dependencies for kernel module 'ext4' found. 
Features: block usb resume.userspace resume.kernel Bootsplash: openSUSE (1280x1024) 41713 blocks EDIT2: linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules_install install INSTALL arch/x86/kernel/test_nx.ko INSTALL drivers/scsi/scsi_wait_scan.ko INSTALL net/netfilter/xt_mark.ko DEPMOD 3.1.0 sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \ System.map "/boot" Kernel image: /boot/vmlinuz-3.1.0 Initrd image: /boot/initrd-3.1.0 Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4) Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1) find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory modprobe: Module ata_generic not found. WARNING: no dependencies for kernel module 'ata_generic' found. modprobe: Module ext4 not found. WARNING: no dependencies for kernel module 'ext4' found. Features: block usb resume.userspace resume.kernel Bootsplash: openSUSE (1280x1024) 41713 blocks EDIT 3: This message is still getting shown after make install: /lib/modules/2.6.35.13/kernel/drivers/ata': No such file or directory I set to '[*]' the "Generic ATA support" under "Serial ATA and Parallel ATA driver", but that's of no avail. The kernel version is different this time, but the problem is same. EDIT 4: linux-dopx:~ # lspci -vvv 00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx- Latency: 0 Capabilities: [e0] Vendor Specific Information: Len=0b <?> Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. 
P5KPL-VM Motherboard Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 18 Region 0: Memory at fea80000 (32-bit, non-prefetchable) [size=512K] Region 1: I/O ports at dc00 [size=8] Region 2: Memory at e0000000 (32-bit, prefetchable) [size=256M] Region 3: Memory at fe900000 (32-bit, non-prefetchable) [size=1M] Expansion ROM at <unassigned> [disabled] Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit- Address: fee0100c Data: 4149 Capabilities: [d0] Power Management version 2 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Kernel driver in use: i915 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) Subsystem: ASUSTeK Computer Inc. Device 83a1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 20 Region 0: Memory at fea78000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=55mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Address: 00000000fee0100c Data: 4159 Capabilities: [70] Express (v1) Root Complex Integrated Endpoint, MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us ExtTag- RBE- FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ MaxPayload 128 bytes, MaxReadReq 128 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #0, 
Speed unknown, Width x0, ASPM unknown, Latency L0 <64ns, L1 <1us ClockPM- Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; Disabled- Retrain- CommClk- ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt- Capabilities: [100 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00 Status: NegoPending- InProgress- Capabilities: [130 v1] Root Complex Link Desc: PortNumber=0f ComponentID=00 EltType=Config Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+ Addr: 00000000fed1c000 Kernel driver in use: snd_hda_intel 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 00001000-00001fff Memory behind bridge: 7f900000-7fafffff Prefetchable memory behind bridge: 000000007fb00000-000000007fcfffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [40] Express (v1) Root Port (Slot+), MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited ExtTag- RBE- FLReset- DevCtl: Report errors: 
Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 128 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #1, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <1us, L1 <4us ClockPM- Surprise- LLActRep+ BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk- ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x0, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Slot #4, PowerLimit 25.000W; Interlock- NoCompl- SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg- Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock- SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock- Changed: MRL- PresDet- LinkState- RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible- RootCap: CRSVisible- RootSta: PME ReqID 0000, PMEStatus- PMEPending- Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Address: fee0100c Data: 4129 Capabilities: [90] Subsystem: ASUSTeK Computer Inc. 
Device 8179 Capabilities: [a0] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [100 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed+ WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00 Status: NegoPending- InProgress- Capabilities: [180 v1] Root Complex Link Desc: PortNumber=01 ComponentID=00 EltType=Config Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+ Addr: 00000000fed1c001 Kernel driver in use: pcieport 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: feb00000-febfffff Prefetchable memory behind bridge: 000000007f700000-000000007f8fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [40] Express (v1) Root Port (Slot+), MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited ExtTag- RBE- FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 
bytes, MaxReadReq 128 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #2, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <1us, L1 <4us ClockPM- Surprise- LLActRep+ BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk- ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt- SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Slot #0, PowerLimit 0.000W; Interlock- NoCompl- SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg- Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock- SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock- Changed: MRL- PresDet+ LinkState+ RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible- RootCap: CRSVisible- RootSta: PME ReqID 0000, PMEStatus- PMEPending- Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Address: fee0100c Data: 4141 Capabilities: [90] Subsystem: ASUSTeK Computer Inc. 
Device 8179 Capabilities: [a0] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [100 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed+ WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00 Status: NegoPending- InProgress- Capabilities: [180 v1] Root Complex Link Desc: PortNumber=02 ComponentID=00 EltType=Config Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+ Addr: 00000000fed1c001 Kernel driver in use: pcieport 00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 5 Region 4: I/O ports at d400 [size=32] Kernel driver in use: uhci_hcd 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) Subsystem: ASUSTeK Computer Inc. 
P5KPL-VM,P5LD2-VM Mainboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 7 Region 4: I/O ports at d480 [size=32] Kernel driver in use: uhci_hcd 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin C routed to IRQ 3 Region 4: I/O ports at d800 [size=32] Kernel driver in use: uhci_hcd 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin D routed to IRQ 10 Region 4: I/O ports at d880 [size=32] Kernel driver in use: uhci_hcd 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) Subsystem: ASUSTeK Computer Inc. 
P5KPL-VM,P5LD2-VM Mainboard Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 5 Region 0: Memory at fea77c00 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [58] Debug port: BAR=1 offset=00a0 Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=32 I/O behind bridge: 0000f000-00000fff Memory behind bridge: fff00000-000fffff Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [50] Subsystem: ASUSTeK Computer Inc. Device 8179 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Capabilities: [e0] Vendor Specific Information: Len=0c <?> 00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01) (prog-if 8a [Master SecP PriP]) Subsystem: ASUSTeK Computer Inc. 
P5KPL-VM Motherboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+ Latency: 0 Interrupt: pin A routed to IRQ 3 Region 0: I/O ports at 01f0 [size=8] Region 1: I/O ports at 03f4 [size=1] Region 2: I/O ports at 0170 [size=8] Region 3: I/O ports at 0374 [size=1] Region 4: I/O ports at ffa0 [size=16] Kernel driver in use: ata_piix 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 7 Region 0: I/O ports at d080 [size=8] Region 1: I/O ports at d000 [size=4] Region 2: I/O ports at cc00 [size=8] Region 3: I/O ports at c880 [size=4] Region 4: I/O ports at c800 [size=16] Capabilities: [70] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard Control: I/O+ Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 7 Region 4: I/O ports at 0400 [size=32] Kernel driver in use: i801_smbus 01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 01) Subsystem: ASUSTeK Computer Inc. 
Device 8136 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 19 Region 0: I/O ports at e800 [size=256] Region 2: Memory at febff000 (64-bit, non-prefetchable) [size=4K] Expansion ROM at febc0000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0-,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [48] Vital Product Data Unknown small resource type 05, will not decode more. Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Address: 00000000fee0100c Data: 4151 Capabilities: [60] Express (v1) Endpoint, MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <128ns, L1 unlimited ExtTag+ AttnBtn+ AttnInd+ PwrInd+ RBE- FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr+ TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Latency L0 unlimited, L1 unlimited ClockPM- Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk- ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt- Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- 
BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- AERCap: First Error Pointer: 14, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [12c v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [148 v1] Device Serial Number 01-00-00-00-36-4c-e0-00 Capabilities: [154 v1] Power Budgeting <?> Kernel driver in use: r8169 linux-dopx:~ #
You must enable the ata_generic and ext4 modules in menuconfig. The options are:
CONFIG_ATA_GENERIC=y: http://cateee.net/lkddb/web-lkddb/ATA_GENERIC.html
CONFIG_EXT4_FS=y: http://cateee.net/lkddb/web-lkddb/EXT4_FS.html
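Once set in menuconfig, the options can be verified by grepping the generated .config. The sketch below uses a fabricated sample file so it can run anywhere; in a real build tree you would grep .config directly:

```shell
# Fabricated excerpt of a kernel build tree's .config after menuconfig:
cat > /tmp/dotconfig.sample <<'EOF'
CONFIG_ATA_GENERIC=y
CONFIG_EXT4_FS=y
EOF

# Both options must be built in (=y), not modules (=m), when the root
# filesystem is ext4 and no initramfs is used:
grep -E '^CONFIG_(ATA_GENERIC|EXT4_FS)=' /tmp/dotconfig.sample
```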
modprobe: Module ext4 not found. WARNING: no dependencies for kernel module 'ext4' found
1,398,327,992,000
While reading through Linux Kernel Development by Robert Love, I came across the following line: The kernel stores the list of processes in a circular doubly linked list called the task list. I would like to know the size of this task list.
The task list is stored in a circular doubly linked list; each node is a struct task_struct. The list structure is specifically in the tasks field. There's no separate object in memory to represent the list: each node contains pointers to the previous and next nodes (some_task->tasks.prev and some_task->tasks.next). This data structure doesn't have an inherent maximum size. The limiting factor on the number of tasks will be either the memory available for task structures and the other resources consumed by the tasks, or the number of process (more precisely, task group) identifiers, which are limited to 15 bits by default. Read chapter 5 of Linux Kernel Development, or chapter 11 of Linux Device Drivers, for more information on this data structure in the Linux kernel.
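There is no fixed size to query, but both the current length of the task list and the default PID ceiling are visible from user space via procfs (a sketch; these are standard procfs paths):

```shell
# Each numeric directory under /proc is one entry on the task list
# (one per thread-group leader):
ls /proc | grep -c '^[0-9][0-9]*$'

# The 15-bit default ceiling on PIDs (32768); raising pid_max raises
# the practical bound on the number of tasks:
cat /proc/sys/kernel/pid_max
```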
What is the size of the TaskList?
1,398,327,992,000
I found that the System.map file contains the addresses of symbols. Does it include system calls? I read that it is only updated when a new kernel is compiled. Does that mean that, except after a new kernel compilation, these symbols are always stored at the same addresses?
System.map contains a symbol table, i.e. a list of function names in the Linux kernel, with for each function the address at which its code is loaded in memory (the addresses are not physical addresses, they're in the kernel's address space, like any executable symbol table is in the loaded process address space). This isn't limited to system calls (the interfaces exposed to user processes): the file also lists functions that might be called by a loaded module, and even internal functions. The system calls are the symbols whose name begins with sys_. The addresses are associated to a particular kernel binary (vmlinux, bzImage or other format; the image format doesn't change the addresses, it's just an encoding); they are reproducible for a given kernel source, configuration and compiler. The file is generated by scripts/mksysmap near the end of the kernel build process; it is the output of the nm command. The file is used mainly for debugging, but it's also read when compiling some third-party modules that use unstable kernel interfaces (unstable as in changing from one version to the next).
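For example, filtering a System.map for the sys_ prefix lists the system-call entry points. The sketch below uses a fabricated excerpt so it is self-contained; on a real system the file lives at /boot/System.map-$(uname -r) and the addresses differ:

```shell
# Fabricated excerpt in System.map format: address, symbol type, name
cat > /tmp/System.map.sample <<'EOF'
ffffffff81060010 T sys_getpid
ffffffff81060120 T sys_read
ffffffff810a0300 t internal_helper
EOF

# System calls are the symbols whose names begin with sys_:
grep ' sys_' /tmp/System.map.sample
```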
System.map file update
1,398,327,992,000
I have a Sony VPCZ12 laptop. It has those dual video cards that are a pain to get working in Linux. The new 2.6.35 kernel is supposed to support that with the vga_switcheroo module, which is supposed to be located in /sys/kernel/debug on >2.6.35. The problem is that when I boot my laptop, it freezes at a black screen unless I boot with the option i915.nomodeset=0. It won't boot into X, but I can get to a terminal, which is fine. But then vga_switcheroo isn't in /sys/kernel/debug. Is this an Ubuntu bug in how the kernel is compiled? Or is it because I have to boot with i915.nomodeset? The live CD boots into X just fine, but I've never found a way to get X working on the installation. There's a lot of information about Linux on the Z12, but most of it is either outdated, doesn't work, or just plain doesn't make any sense.
You need a kernel with vga_switcheroo enabled and KMS (kernel mode setting) active, which you don't have, since you boot with nomodeset. To check whether vga_switcheroo is enabled in the kernel, have a look at the Ubuntu config of your kernel. You should find it in /boot with a name along the lines of /boot/config-2.6.35-XX-generic, with XX some number corresponding to your kernel. If it is not enabled, you can find a custom Ubuntu kernel with vga_switcheroo enabled at http://www.ramoonus.nl/2010/08/linux-kernel-2-6-35-installation-guide-for-ubuntu-linux/
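The config check can be scripted; this sketch greps a fabricated config excerpt so it runs anywhere (on the real system, point grep at /boot/config-2.6.35-XX-generic instead):

```shell
# Fabricated excerpt of an Ubuntu kernel config:
cat > /tmp/kconfig.sample <<'EOF'
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM_I915=y
EOF

# vga_switcheroo only shows up under /sys/kernel/debug if this option
# is =y AND KMS is active at runtime (i.e. no nomodeset):
grep '^CONFIG_VGA_SWITCHEROO' /tmp/kconfig.sample
```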
vga_switcheroo not in /sys/kernel/debug in 2.6.35-22 (kubuntu maverick)
1,398,327,992,000
I want to write a small Linux driver extension. More specifically: I want to write all the communication between the host and an M.2 NVMe SSD into a userspace file. The nvme driver is pretty big, though, and I'm having difficulty pinpointing a place to start. A colleague of mine has done something similar for SD cards. He traced the I/O after the host has received the response from the card and is about to wrap up the operation (the function is sdhci_request_done). The trace shows requests and responses with opcode, data and timestamps. Something like this would be my goal. I have found programs that trace I/O, but they operate in userspace. That is a problem, as I might send a message to the card directly from the driver. So my question is: where do I tap into the host driver to get the data, without delaying the operations or allocating much memory? Or is there a driver function that does this?
A colleague of mine has done something similar for SD cards. He traced the IO after the host has received the response from the card and is about to wrap up the operation (the function is sdhci_request_done). Unlike SD cards, most data will actually be exchanged with an NVMe device via DMA (usually), so your Linux can't know the content of the transfer, only that it happened. I'm sure you can disable DMA, at a huge cost in performance. I don't know how to do that, but you may be able to achieve it with a kernel boot flag. Other than that, you can already trace all commands exchanged, without having to extend anything. Linux has tracepoints, and nvme is just one family of them; so sudo perf trace -e nvme:nvme_\* > logfile
Where do i trace NVME IO within the Linux driver?
1,398,327,992,000
I have tried to mount an SMB share using cifs with this command:
sudo mount.cifs -o vers=3.0,uid=user,credentials=/home/user/credentials,file_mode=0644,dir_mode=0755 //path/to/share /mnt/share
And I got:
mount error: cifs filesystem not supported by the system
mount error(19): No such device
So I debugged it, and when I ran modinfo cifs this is what it returned:
filename: /lib/modules/5.14.0-162.6.1.el9_1.x86_64/extra/mlnx-ofa_kernel/fs/cifs/cifs.ko
version: 2.31
license: Dual BSD/GPL
description: cifs dummy kernel module
author: Mohammad Kabat
rhelversion: 9.1
srcversion: 01E451882B55F354B7F130B
depends: mlx_compat
retpoline: Y
name: cifs
vermagic: 5.14.0-162.6.1.el9_1.x86_64 SMP preempt mod_unload modversions
sig_id: PKCS#7
signer: Mellanox Technologies signing key
sig_key: BA:B0:F5:CD:23:24:A0
sig_hashalgo: sha256
I use MLNX_OFED on my system, and it seems that the Mellanox kernel module package ships a dummy cifs module and thereby disables CIFS (as referred to in the MLNX_OFED v5.8-2.0.3.0 known issues). My question is whether there's a way to work around it and re-enable CIFS manually.
After further investigating the issue, I found that I was using OFED-5.8-1.1.2.1 on Rocky Linux 9.1, and it supports only up to Rocky Linux 9.0. So, updating the OFED to OFED-5.8-2.0.3.0, which does support RL 9.1, resolved the issue for me.
Rocky Linux 9 can't mount share because CIFS is a dummy module
1,398,327,992,000
I'm running Linux Mint 21.1 Xfce and it was all good up to and including 5.15.0-60-generic, but after updating to -67 I started getting an "out of memory" error at boot (hit any key) followed by "kernel panic, not syncing, VFS unable to mount rootfs on unknown block". Booting trusty old version -60 via advanced options worked fine, so I figured I'd just skip -67 and wait until the next version. But now -69 is out and doing the same thing. So a bit of Googling turned up these: https://www.geekswarrior.com/2019/07/solved-how-to-fix-kernel-panic-on-linux.html https://forums.linuxmint.com/viewtopic.php?t=338544 So I tried this:
sudo mount /dev/nvme0n1p1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
update-initramfs -u -k 5.15.0-69-generic
update-grub2
But I only ended up losing my Windows GRUB entry, and -69 STILL won't boot. It's fairly new hardware (13700K, DDR5, 4070 Ti), so I wondered if that has anything to do with it, but it works fine on -60 and earlier? I'm happy enough to stick with an older kernel for now, but eventually that's going to become undesirable security-wise if I'm way out of date. Any help would be appreciated.
I found a related answer: https://unix.stackexchange.com/a/717710
It says to modify MODULES and COMPRESS in initramfs.conf. Using an editor of your choice (you will need to do this with sudo privileges):
Set MODULES=dep
Set COMPRESS=xz
Execute sudo update-initramfs -u
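The two edits can also be done non-interactively with sed. This sketch operates on a stand-in copy so it can be run safely anywhere; the real file is /etc/initramfs-tools/initramfs.conf:

```shell
# Stand-in for /etc/initramfs-tools/initramfs.conf:
cat > /tmp/initramfs.conf <<'EOF'
MODULES=most
COMPRESS=zstd
EOF

# MODULES=dep includes only the modules the current hardware needs,
# which shrinks the initramfs; xz compresses it further:
sed -i -e 's/^MODULES=.*/MODULES=dep/' \
       -e 's/^COMPRESS=.*/COMPRESS=xz/' /tmp/initramfs.conf
cat /tmp/initramfs.conf
```

After editing the real file, run sudo update-initramfs -u to regenerate the image.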
Kernel panic when booting new kernels
1,398,327,992,000
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
5520396 3165734 57% 0.38K 131438 42 2103008K mnt_cache
I searched the kernel source code; there is only one hit, and I'm unable to interpret it. https://github.com/torvalds/linux/search?q=mnt_cache
This is the slab cache for struct mount allocations which are used to store information about mount points. The cache is initialised in mnt_init and used whenever a struct mount needs to be instantiated in alloc_vfsmnt.
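You can relate the slabtop columns to each other directly; the awk sketch below parses the line from the question (on a live system, sudo grep '^mnt_cache' /proc/slabinfo gives the raw counters):

```shell
# The slabtop line from the question, parsed field by field:
echo "5520396 3165734 57% 0.38K 131438 42 2103008K mnt_cache" |
awk '{ printf "objects allocated: %s, in use: %s (%s), per-object size: %s\n",
       $1, $2, $3, $4 }'
```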
What is mnt_cache in slabtop output on Linux?
1,398,327,992,000
How to fix this reset? This is a Debian server. There is a constant and repeating message on the console/syslog: kernel: usb 1-7.4: reset low-speed USB device number 4 using xhci_hcd I know that this device is the IPMI's virtual keyboard and mouse. After resetting the IPMI, the following log entries occur, but after a while, the problem starts up again. kernel: usb 1-7.3: USB disconnect, device number 3 kernel: cdc_ether 1-7.3:2.0 enXXXXX: unregister 'cdc_ether' usb-0000:02:00.0-7.3, CDC Ethernet Device kernel: usb 1-7.4: USB disconnect, device number 4 kernel: usb 1-7: USB disconnect, device number 2 kernel: usb 1-7: new high-speed USB device number 5 using xhci_hcd kernel: usb 1-7: New USB device found, idVendor=046b, idProduct=ff01, bcdDevice= 1.00 kernel: usb 1-7: New USB device strings: Mfr=1, Product=2, SerialNumber=3 kernel: usb 1-7: Product: Virtual Hub kernel: usb 1-7: Manufacturer: American Megatrends Inc. kernel: usb 1-7: SerialNumber: serial kernel: hub 1-7:1.0: USB hub found kernel: hub 1-7:1.0: 5 ports detected kernel: usb 1-7.3: new high-speed USB device number 6 using xhci_hcd kernel: usb 1-7.3: New USB device found, idVendor=046b, idProduct=ffb0, bcdDevice= 1.00 kernel: usb 1-7.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3 kernel: usb 1-7.3: Product: Virtual Ethernet kernel: usb 1-7.3: Manufacturer: American Megatrends Inc. kernel: usb 1-7.3: SerialNumber: 1234567890 kernel: cdc_ether 1-7.3:2.0 usb0: register 'cdc_ether' at usb-0000:02:00.0-7.3, CDC Ethernet Device, 00:00:00:00:00:00 systemd-udevd: Using default interface naming scheme 'v247'. systemd-udevd: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. kernel: cdc_ether 1-7.3:2.0 enXXXXX: renamed from usb0 systemd-udevd: Using default interface naming scheme 'v247'. systemd-udevd: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. 
kernel: usb 1-7.4: new low-speed USB device number 7 using xhci_hcd kernel: usb 1-7.4: New USB device found, idVendor=046b, idProduct=ff10, bcdDevice= 1.00 kernel: usb 1-7.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0 kernel: usb 1-7.4: Product: Virtual Keyboard and Mouse kernel: usb 1-7.4: Manufacturer: American Megatrends Inc. kernel: input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:01.2/0000:02:00.0/usb1/1-7/1-7.4/1-7.4:1.0/0003:046B:FF10.0003/input/input5 kernel: hid-generic 0003:046B:FF10.0003: input,hidraw0: USB HID v1.10 Keyboard [American Megatrends Inc. Virtual Keyboard and Mouse] on usb-0000:02:00.0-7.4/input0 kernel: input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:01.2/0000:02:00.0/usb1/1-7/1-7.4/1-7.4:1.1/0003:046B:FF10.0004/input/input6 kernel: hid-generic 0003:046B:FF10.0004: input,hidraw1: USB HID v1.10 Mouse [American Megatrends Inc. Virtual Keyboard and Mouse] on usb-0000:02:00.0-7.4/input1 systemd-logind: Watching system buttons on /dev/input/event2 (American Megatrends Inc. Virtual Keyboard and Mouse) Here is the reset again, but reflecting the new device number assigned after the IPMI has been reset. kernel: usb 1-7.4: reset low-speed USB device number 7 using xhci_hcd This is the output from lsusb. The IPMI components are the only USB devices. Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 007: ID 046b:ff10 American Megatrends, Inc. Virtual Keyboard and Mouse Bus 001 Device 006: ID 046b:ffb0 American Megatrends, Inc. Virtual Ethernet Bus 001 Device 005: ID 046b:ff01 American Megatrends, Inc. Virtual Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub The message is filling up the logs. Is there a way to a) make a change to the system software to correct for what is happening? 
or b) suppress the log message?
From a comment by Utkonos: The issue appears to have gone away after flashing the IPMI firmware.
Low-speed USB Device Reset Using xhci_hcd
1,398,327,992,000
I'm currently trying to install netmap, which needs to modify the kernel. However, after installing the kernel headers, I noticed that in /lib/linux-kbuild-5.10/scripts, some of the scripts that are expected to be there are missing. This includes pahole-flags.sh as well as mkmakefile, and as a result I get "not found" and "No such file" errors, respectively. I found this recent bug report, stating that this is a regression on stable. It also states that it has been fixed for a previous version via this commit. However, all this commit does is modify the make file. Is there an easy fix like "just copy the script from repo xy into this directory", or do I need to recompile/upgrade parts of the kernel somehow? I'm not very experienced with Linux systems yet, so any help is appreciated.
The commit you referred to modifies the part of the Makefile that is used for building the Debian kernel headers packages (linux-headers-<kernel version>-<package version>_<arch>.deb). In other words, the commit makes the pahole-flags.sh script be included in the appropriate package in future kernel versions. Since the kernel configuration & build process does not appear to make any changes to the script, you could just grab the script from the Linux kernel source of the appropriate version (e.g. here for 5.10 series kernels) and place it into your /lib/linux-kbuild-5.10/scripts where netmap expects to find it. The same applies to the mkmakefile script (here for 5.10 series kernels). However, you should be aware that mkmakefile's functionality that was needed for building the kernel was merged into the main kernel Makefile at about the 5.15 kernel series (here's the diff), and mkmakefile ceased to exist as a separate script. So if netmap still relies on it, it will need to implement the functionality it needs on its own. However, it seems that even in kernel version 5.10, mkmakefile just outputs a two-line Makefile, and one of those lines is just a comment, so it should be trivial to replace mkmakefile.
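If all that is needed from mkmakefile is that trivial wrapper Makefile, a stand-in is only a few lines of shell. This is a sketch, not the original script: the generated content only roughly mirrors what 5.10's mkmakefile emitted, and /usr/src/linux below is just an example source path:

```shell
# Minimal mkmakefile replacement: writes a wrapper Makefile into the
# current (output) directory that defers to the real kernel Makefile.
cat > /tmp/mkmakefile <<'EOF'
#!/bin/sh
# Usage: mkmakefile <kernel source dir> (run from the build directory)
cat > Makefile <<MK
# Automatically generated: don't edit
include $1/Makefile
MK
EOF
chmod +x /tmp/mkmakefile

# Example run with a hypothetical kernel source path:
(cd /tmp && ./mkmakefile /usr/src/linux)
cat /tmp/Makefile
```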
How can I fix scripts in /lib/linux-kbuild on Debian 11: pahole-flags.sh not found
1,398,327,992,000
I have an initrd image compressed with xz. This is how I created it from the image file initrd: e2image -ar initrd - | xz -9 --check=crc32 > initrd.xz Now I need the same image compressed using the zstd algorithm. What command/parameters do I have to use for the kernel to be able to boot from this initrd image? I have CONFIG_RD_ZSTD=y enabled in my kernel.
The equivalent with zstd would be: e2image -ar initrd - | zstd -19 --check > initrd.zst
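A quick sanity check on the result is zstd's built-in test mode plus the four-byte magic the kernel's decompressor looks for. The sketch below uses a dummy input file, since a real initrd image isn't needed to demonstrate it:

```shell
# Dummy stand-in for the e2image output:
head -c 65536 /dev/zero > /tmp/initrd

# Same flags as for the real image:
zstd -19 --check -f /tmp/initrd -o /tmp/initrd.zst

# Verify integrity, then show the zstd magic bytes (28 b5 2f fd),
# which is how the kernel recognises a zstd-compressed initrd:
zstd -t /tmp/initrd.zst
od -An -tx1 -N4 /tmp/initrd.zst
```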
create initrd image compressed with zstd
1,398,327,992,000
I've compiled Linux kernel 5.18.4 from source, enabling all the EFI-related options, without any built-in parameters or a default init path; it's also worth mentioning that I'm not making use of an initramfs/initrd. I'm trying to boot this kernel through VirtualBox 6.1.34, on a VM with EFI support. The installation disk (/dev/sda) has two partitions: /dev/sda1, a 512 MB EFI system partition formatted as FAT32, mounted at /boot; and /dev/sda2, a 15.5 GB root partition formatted as ext4, mounted at /. The kernel is located at /boot/EFI/BOOT/bootx64.efi; it seems this naming convention makes it boot automatically, skipping the UEFI shell and removing the need for creating a boot entry through efibootmgr, but I'm not sure if that's the norm across different UEFI implementations on different hardware. Whenever I try to boot it, I get the following error: On previous attempts, by making minor adjustments like changing the kernel path and doing some other tweaks, the result was similar; whenever I tried to execute vmlinuz.efi through the EFI shell, the machine would hang forever, without displaying a single error message (this is the case even when passing root=/dev/sda2 and init=/bin/init as parameters)
I had to enable the framebuffer in the kernel .config in order to avoid running into a blank screen:
CONFIG_FB_EFI=y
CONFIG_FRAMEBUFFER_CONSOLE=y
Thanks to d9ngle for his answer
Unable to boot Linux kernel directly through EFISTUB
1,398,327,992,000
I checked CentOS 8/Red Hat 8 and Ubuntu 22.04; in all of their default kernel configs, CONFIG_PREEMPT is not set:
# CONFIG_PREEMPT is not set
In my understanding, the kernel should be able to preempt by default, shouldn't it?
You can read a description of the CONFIG_PREEMPT configuration item here, which says: This option reduces the latency of the kernel by making all kernel code (that is not executing in a critical section) preemptible. This allows reaction to interactive events by permitting a low priority process to be preempted involuntarily even if it is in kernel mode executing a system call and would otherwise not be about to reach a natural preemption point. This allows applications to run more 'smoothly' even when the system is under load, at the cost of slightly lower throughput and a slight runtime overhead to kernel code. Normally, only user space code is preemptible. With CONFIG_PREEMPT enabled, code executing in kernel space is also preemptible.
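The preemption models are mutually exclusive config options, so "CONFIG_PREEMPT is not set" just means one of the other models was chosen at build time. The sketch below greps a fabricated config excerpt (on a real system, grep /boot/config-$(uname -r) instead); the choice of voluntary preemption here is illustrative of typical distro kernels, not a claim about any specific one:

```shell
# Fabricated excerpt of a distro kernel config: exactly one of the
# preemption models is =y, the others read "is not set":
cat > /tmp/config.sample <<'EOF'
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
EOF

grep 'PREEMPT' /tmp/config.sample
```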
Without CONFIG_PREEMPT, kernel can't preempt?
1,398,327,992,000
https://elixir.bootlin.com/linux/v5.19/source/fs/pipe.c#L247

When the pipe was full, shouldn't it wake up the reader to read out data?

/*
 * We only wake up writers if the pipe was full when we started
 * reading in order to avoid unnecessary wakeups.
 *
 * But when we do wake up writers, we do so using a sync wakeup
 * (WF_SYNC), because we want them to get going and generate more
 * data for us.
 */
was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
This code runs when the pipe is read; since the pipe is being read, it’s not going to be full after the read completes, so there will be room for more writes. If the pipe was full before the start of the read, that implies that there may be writers blocked because of the pipe being full; those writers are useful to wake up now to minimise the wait time. If the pipe wasn’t full, then any blocked writers aren’t blocked because of a full pipe, so freeing room up in the pipe isn’t going to help them and waking them up isn’t useful. Readers are woken up when the pipe is written to.
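The full/not-full distinction is easy to observe from userspace. Here is a small Python sketch (an illustration, not the kernel code): with a non-blocking write end, writes fail with EAGAIN once the pipe is full, and a single read frees capacity again, which is exactly the point at which pipe_read() bothers to wake blocked writers.

```python
import fcntl
import os

r, w = os.pipe()
# make the write end non-blocking so "pipe full" raises instead of blocking
fcntl.fcntl(w, fcntl.F_SETFL, fcntl.fcntl(w, fcntl.F_GETFL) | os.O_NONBLOCK)

filled = 0
try:
    while True:
        filled += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass  # the pipe is now full; a blocking writer would be asleep here

# one read drains some data -- in the kernel, this is when was_full triggers
# a (sync) wakeup of the blocked writers
chunk = os.read(r, 4096)
wrote = os.write(w, b"y" * len(chunk))  # succeeds again
print(filled, len(chunk), wrote)
```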
Why wake up writer when pipe is full?
1,398,327,992,000
I have the kernel configuration file available via /proc (IKCONFIG_PROC=y), so I can show the config of the running kernel with:

zcat /proc/config.gz

However, how do I get the config of a vmlinuz image that is not running? I have a vmlinuz image stored on disk. How do I extract its config?
In theory the extract-ikconfig script from the kernel source can extract the embedded configuration from a kernel built with CONFIG_IKCONFIG=y. I don't have access to a kernel built this way so I can't test it. See also this answer on Stack Overflow for what to do if your kernel was built with CONFIG_IKCONFIG=m.
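The core of what extract-ikconfig does is simple: the embedded config is a gzip stream delimited by the magic strings IKCFG_ST and IKCFG_ED inside the image. Here is a rough Python equivalent, demonstrated on a synthetic blob since running it for real needs a kernel built with CONFIG_IKCONFIG; note the real script also tries several decompressors to first unpack a compressed bzImage, which this sketch skips.

```python
import gzip

MAGIC_START = b"IKCFG_ST"
MAGIC_END = b"IKCFG_ED"

def extract_ikconfig(image: bytes) -> str:
    """Find the embedded kernel config between the magic markers and gunzip it."""
    start = image.find(MAGIC_START)
    if start == -1:
        raise ValueError("no IKCFG_ST marker: kernel built without CONFIG_IKCONFIG?")
    payload = image[start + len(MAGIC_START):]
    end = payload.find(MAGIC_END)
    if end != -1:
        payload = payload[:end]
    return gzip.decompress(payload).decode()

# demo on a fake "kernel image": ELF-ish padding + markers around a gzipped config
config = "CONFIG_IKCONFIG=y\nCONFIG_IKCONFIG_PROC=y\n"
blob = b"\x7fELF" + b"\x00" * 60 + MAGIC_START + gzip.compress(config.encode()) + MAGIC_END
print(extract_ikconfig(blob))
```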
extract kernel config (/proc/config.gz) from linux image file
1,398,327,992,000
My assumption is that sysfs is built using ioctl queries, meaning all the information you would want (or at least most of it) is already available by simply reading files on sysfs. I notice some programs (e.g., hdparm) still use ioctl calls rather than simply hitting sysfs, and I'm curious whether there's a reason for that. Is sysfs unreliable? If you're only interested in hardware info, is there a reason to use ioctl over sysfs?
As rightly asserted by Tilman in the comments, sysfs and ioctl both provide userland access to kernel data structures. Since the kernel does not need system calls to access its own data, the sysfs tree is not built using ioctl calls, nor does any user action on its files translate into ioctl calls.

You write "…information is already available by simply reading files…", and this, I believe, answers your final question: why can it appear simpler to use the sysfs interface?

First, because from a basic ASCII terminal running some shell, the sysfs tree gives access to (binary) kernel data via the most basic cat and echo commands. Thanks to other basic shell commands (ls, cd), you can also follow the symlinks and get some deep understanding of the relationships between kernel objects. On top of this, the user benefits from some (at least minimal) control over the validity of the changes they wish to commit.

This indeed makes sysfs the right way to go when, at a console, you wish to tune your system, write scripts or rules, or comfortably debug some driver from userspace (the initial purpose of sysfs; before it, /dev/mem was your only friend for that).

However, there are cases where you just don't care about all these facilities, cases where accessing kernel objects via the sysfs interface would just force you to write (much) more code, for instance when writing a C program. Just imagine: you have to open some file, transcode your data, manage additional error conditions, deal with race conditions, when a simple ioctl system call is enough (provided you know what you are doing, of course).

So there is the answer to your question of when you should prefer one way or the other: simply, whichever will be much simpler for you, here and now, to achieve what you want to achieve.
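To make the contrast concrete: some kernel information has no sysfs file at all and is only one ioctl away. A small Python example using FIONREAD, which asks how many bytes are pending on a file descriptor; the answer comes back as packed binary data, with no file opening or text parsing involved.

```python
import fcntl
import os
import struct
import termios

r, w = os.pipe()
os.write(w, b"hello")

# FIONREAD: "how many bytes are waiting to be read on this fd?"
# One syscall in, one packed int out.
buf = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
(pending,) = struct.unpack("i", buf)
print(pending)
```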
Is there ever a reason to query ioctl for hardware info when we have sysfs?
1,398,327,992,000
Having a bsd.rd extracted from an installation image and mounted as a vnode I can see there is 0.2MB free space available for additional files such as used during unattended installation. I want to copy a file 1MB in size but it obviously won't fit. Having that said, is there any way to increase the size of the ramdisk kernel without building it from source? My idea was to copy its content to newcontent.d, move my additional file into it, run makefs newcontent.fs newcontent.d on it, then rdsetroot bsd.rd.uc newcontent.fs and finally compress it and put back on an installation media. Sadly, while the size of original bsd.rd is 3.3MB the copy of it takes 180MB... I measure the size of directories using du -hs /path/to/directory.
Someone came up with a similar question some time ago on the [email protected] mailing list. Quoting directly Stuart Henderson's answer: Hello, I want to build "bigger" bsd.rd image. Does rebuilding it only way to increase it? Can I somehow increase its size and just rdsetroot new disk.fs? You'll need to build a release(8) after adjusting at least FSSIZE in the relevant Makefiles under src/distrib, maybe also MINIROOTSIZE in kernel config. So, apparently not, you can't do it without rebuilding the kernel.
Is it possible to increase the size of OpenBSD's bsd.rd without building it from source?
1,649,323,123,000
tlp-stat tells me to do the following:

+++ Recommendations
* Reconfigure your Linux kernel with PM_RUNTIME=y to reduce your laptop's power consumption.

And I have no idea how I am supposed to do this.

Machine info:
CPU: Intel Core i5-1135G7
Model: Lenovo Ideapad Slim 5
RAM: 16 GB
OS: KDE Neon 5.24
Kernel: 5.11.0-43-generic (64-bit)
That’s a long-obsolete recommendation — PM_RUNTIME was removed in 2014. Check your kernel for CONFIG_PM, most distribution kernels enable it by default.
How to reconfigure kernel with PM_RUNTIME=y?
1,649,323,123,000
When seeing a call trace such as:

WARNING: CPU: 1 PID: 0 at arch/x86/kernel/cpu/mce/core.c:1490 mcheck_cpu_init+0x71/0x420

1490 is the source code line number. What about the +0x71/0x420 here?
The first number is the offset inside the function, the second is the size of the function. The warning was produced as a result of something which happened when the instruction pointer was 0x71 bytes after the start of mcheck_cpu_init (this gives the line in the source code), and mcheck_cpu_init is a 0x420-byte-long function.
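A hypothetical helper (the names are mine, not from any kernel tool) to decode the notation: the hex value before the slash is the offset of the faulting instruction from the start of the symbol, the one after it is the symbol's total size. The offset is what tools like the kernel tree's scripts/faddr2line use to map back to a source line.

```python
import re

def parse_trace_symbol(sym: str):
    """Split 'name+0xOFF/0xSIZE' into (name, offset, size)."""
    m = re.fullmatch(r"(\w+)\+0x([0-9a-fA-F]+)/0x([0-9a-fA-F]+)", sym)
    if m is None:
        raise ValueError(f"unexpected format: {sym!r}")
    return m.group(1), int(m.group(2), 16), int(m.group(3), 16)

name, off, size = parse_trace_symbol("mcheck_cpu_init+0x71/0x420")
print(f"{name}: offset {off} of {size} bytes ({off / size:.0%} into the function)")
```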
What is the meaning of the number +0x71/0x420 in a call trace?
1,649,323,123,000
I want to increase the number of open files, and on Google there are so many examples of this, with plenty of different numbers too. So: is there any limit to increasing the number of open files in Linux?
/proc/sys/fs/file-max contains the currently set system-wide maximum number of open files. On my x86_64 system, that's 9223372036854775807 (which is an incredibly large number, namely 2⁶³-1, the largest integer that you can represent in a signed 64-bit int). You can increase that number (if it's problematically small) until your kernel complains that the value you set can't be applied, e.g. echo 1000000 > /proc/sys/fs/file-max (as root). But usually, that limit is already very high (even on smaller machines, half a million), unless you're using a very old kernel or Linux distro.
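On a Linux box you can check both numbers from a script. The sketch below reads the current limit (falling back gracefully where /proc isn't available) and shows that the x86_64 default is exactly the largest signed 64-bit integer:

```python
LONG_MAX = 2**63 - 1  # 9223372036854775807, the default fs.file-max seen above

def read_file_max(path="/proc/sys/fs/file-max"):
    """Return the system-wide open-file limit, or None off-Linux."""
    try:
        with open(path) as f:
            return int(f.read().split()[0])
    except (FileNotFoundError, PermissionError):
        return None  # not Linux, or /proc not mounted

current = read_file_max()
print(LONG_MAX, current)
```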
Is there any limit to increasing the number of open files in Linux?
1,649,323,123,000
I don't quite understand this picture from the CS:APP book. It shows how the kernel virtual memory of a process has a region different from other processes. Does this mean that the kernel, in the context of process A, won't be able to see the process-specific data in the context of process B? Is the only way the kernel can access this data by context switching to process B and using B's page table, or can it be accessed from process A?
Your confusion is understandable; the diagram is mistaken, as indicated in the errata for the book: p. 829, Figure 9.26. The kernel portion of the address space is identical for each process. There is no part of the kernel virtual memory that is different for each process. (On x86-64 specifically, which is the architecture used in the diagram — as indicated by the reference to %rsp — the kernel has a full mapping of physical memory, so any page in memory appears in the kernel’s virtual memory anyway. See What's inside the kernel part of virtual memory of 64 bit linux processes?)
Does a process' kernel virtual memory contain process specific data?
1,649,323,123,000
I feel like this should be a pretty simple question, but I haven't been able to find answers anywhere online. I installed the lowlatency kernel on my laptop (Ubuntu 20.04.2 LTS) a while back in order to use my laptop as an amplifier for an electric guitar, but I'm not doing that on this laptop anymore, and the laptop often gets really overworked from doing basic tasks like running Firefox and PyCharm at the same time, so I want to switch back to only using the generic kernel. How do I uninstall the lowlatency kernel, tell Ubuntu not to install the lowlatency kernel in future, and ensure that it only boots into the generic kernel? I realise that I will need to reboot the laptop into the generic kernel before doing these steps, that's not a problem, I just want to know which steps to take to permanently uninstall the lowlatency kernel, and for some reason I couldn't find any instructions for that. I'm happy to provide the output of any commands if needed. Thanks!
Remove the lowlatency kernel:

sudo apt remove linux-image-<version number>-lowlatency

Install the generic kernel:

sudo apt install linux-image-<version number>-generic
Remove lowlatency kernel
1,649,323,123,000
I'm trying to learn more about the swapping system of the Linux kernel. I figured out that if a read-only or code part of a binary in memory needs to be swapped out, it shouldn't be moved to the swap file/partition, because it is already backed by a file on disk. Does it actually work that way? Do pages from r/o or r-x mappings backed by a file get "swapped" to that file? If so, can someone please point me to the code that handles this? I can't seem to find it.
Yes, it works that way. Pages whose content is available on disk are discarded, they don’t even need to be swapped to “the dedicated file”. Dirty pages with a non-swap backing store (e.g., memory-mapped files) are written out to that backing store. Swap is only used for evictable pages with no backing store. Most of the time this is handled by kswapd, doing what is known as reclaim: It will asynchronously scan memory pages and either just free them if the data they contain is available elsewhere, or evict to the backing storage device (remember those dirty pages?). See mm/vmscan.c for the implementation.
What parts of a process memory can get swapped?
1,649,323,123,000
I turned process accounting on with /usr/sbin/accton soon after boot, and nothing is getting logged. The default file is used: /var/log/account/pacct. All the commands that read this file return nothing, because the file is empty: dump-acct /var/log/account/pacct is empty, and the same goes for lastcomm and sa -a. The 5.8 kernel has CONFIG_BSD_PROCESS_ACCT turned on in /boot/config*:

CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y

What's missing? This is a fresh install of Ubuntu 20.04.2 with the Mate desktop.
The accton command's main role is to run a single syscall, acct(2), as seen when stracing the command:

acct("/var/log/account/pacct") = 0

The rest is handled by the kernel. As process accounting can generate a lot of logs, the kernel periodically checks the percentage of free space on the filesystem and suspends accounting until enough space appears. The kernel.acct sysctl controls this. By default the trigger values are:

4 2 30

That is: suspend accounting if free space drops below 2%; resume it if it increases to at least 4%; consider information about the amount of free space valid for 30 seconds.

From the OP's comments, it appears 4.8 GB were free, but on this filesystem that was right at the ~2% suspend threshold: kernel process accounting got suspended. So this was a case of a nearly full filesystem. Accounting would resume within the next 30 s once available space reached ~9.6 GB (~4%), or maybe just with a new syscall and >2% free. It would still have been possible to halve the suspend percentage to 1%, which in the OP's case would have kept accounting working down to ~2.4 GB free, but not much more can be done, and it would be ill-advised to use 0%. The OP chose to increase the filesystem size to solve the problem.
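A quick back-of-the-envelope check of those numbers, assuming a filesystem of about 240 GB (the size implied by the 2.4 GB at 1% and 9.6 GB at 4% figures in the answer):

```python
fs_gb = 240.0             # assumed filesystem size, not stated explicitly in the post
suspend_pct, resume_pct = 2, 4   # from the kernel.acct defaults "4 2 30"

suspend_below_gb = fs_gb * suspend_pct / 100       # accounting stops below this free space
resume_at_gb = fs_gb * resume_pct / 100            # and starts again here
halved_gb = fs_gb * (suspend_pct / 2) / 100        # after halving the suspend trigger to 1%

print(suspend_below_gb, resume_at_gb, halved_gb)
```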
why is kernel process accounting not working?
1,649,323,123,000
I want to run qemu on Debian Buster, and have therefore installed the qemu-system-x86_64 package. The issue is that when I run: qemu-system-x86_64 \ -m 128M \ -cpu kvm64,+smep,+smap \ -kernel vmlinuz \ -initrd initramfs.cpio.gz \ -hdb flag.txt \ -snapshot \ -nographic \ -monitor /dev/null \ -no-reboot \ -append "console=ttyS0 kaslr kpti=1 quiet panic=1" \ I get the following error: qemu-system-x86_64: symbol lookup error: /lib/x86_64-linux-gnu/libvirglrenderer.so.0: undefined symbol: drmPrimeHandleToFD How can I fix this error? According to apt, I've the latest version of libvirglrenderer0 and there are no versions available in debian-backports. Versions: qemu-system-x86/stable,stable,now 1:3.1+dfsg-8+deb10u8 amd64 [installed] libvirglrenderer0/stable,now 0.7.0-2 amd64 [installed] Debian: uname -a Linux debian-parallels 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux Update: $ nm -D /lib/x86_64-linux-gnu/libvirglrenderer.so.0 | grep drm U drmPrimeHandleToFD $ ldd /lib/x86_64-linux-gnu/libvirglrenderer.so.0 linux-vdso.so.1 (0x00007ffedad5f000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f0d0bed8000) libgbm.so.1 => /lib/x86_64-linux-gnu/libgbm.so.1 (0x00007f0d0bcd3000) libepoxy.so.0 => /lib/x86_64-linux-gnu/libepoxy.so.0 (0x00007f0d0bba1000) libX11.so.6 => /lib/x86_64-linux-gnu/libX11.so.6 (0x00007f0d0ba60000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0d0ba3f000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0d0b87e000) /lib64/ld-linux-x86-64.so.2 (0x00007f0d0c0ee000) libPrlDRI.so.1 => /lib/x86_64-linux-gnu/libPrlDRI.so.1 (0x00007f0d0b581000) libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f0d0b3fd000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f0d0b3e3000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f0d0b3de000) libxcb.so.1 => /lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f0d0b3b4000) libXau.so.6 => /lib/x86_64-linux-gnu/libXau.so.6 (0x00007f0d0b1ae000) 
libXdmcp.so.6 => /lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f0d0afa8000) libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f0d0af8e000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f0d0af84000) Update 1: Before and after installing Parallels Tools:
The sign that something is amiss is the

libPrlDRI.so.1 => /lib/x86_64-linux-gnu/libPrlDRI.so.1 (0x00007f0d0b581000)

line in ldd's output: there's no such library in Debian, so a library requiring it can't come from a Debian package. If the libvirglrenderer.so.0 library weren't the version provided by Debian, sudo apt install --reinstall libvirglrenderer0 would fix that; but apparently that's not the issue here.

libPrlDRI.so.1 comes from Parallels Tools, which ships a number of replacement libraries: libEGL.so.1, libgbm.so.1, and libGL.so.1, along with two Parallels-specific libraries, libPrlDRI.so.1 and libPrlWl.so.1. The guilty party here is probably libgbm.so.1: QEMU requires that library, and if it loads Parallels' version (either through LD_LIBRARY_PATH, or an updated ld.so configuration), it ends up loading libPrlDRI.so.1 instead of libdrm.so.2. To fix that, you should be able to override the override:

LD_PRELOAD=/lib/x86_64-linux-gnu/libdrm.so.2 qemu-system-x86_64 ...
How to fix "qemu-system-x86_64: symbol lookup error: /lib/x86_64-linux-gnu/libvirglrenderer.so.0: undefined symbol: drmPrimeHandleToFD"?
1,649,323,123,000
I want to run some tests using linux network bonding. I am using qemu VMs and I am using a custom-built kernel to run them where I set it up to have bonding statically linked. I see in the documentation that it's possible to set up parameters for bonding (like miimon) when loading the module. But how can these values be set when it's statically linked?
There are two APIs for handling bonding interfaces, in addition to the obsolete (on Linux) commands ifconfig and ifenslave, which probably aren't able to create new bond interfaces (and thus require the bonding module's max_bonds parameter to be non-zero):

the kernel's (rt)netlink API: used by most of the modern commands provided by iproute2, including the ip link command;
the kernel's sysfs API via pseudo-files: usually mounted in /sys/.

I can note that "recent" versions of ifenslave actually rely on the sysfs API, and a little on the rtnetlink API through ip link.

You should probably configure bonding to create zero interfaces instead of the default one or two: the default is a historical feature that became obsolete with these two APIs, since interfaces can be created or removed dynamically later. Of course, for a very minimal system, leaving the default or setting it to the required number is still an option. Adding this to the kernel command line options (usually found in /etc/default/grub's GRUB_CMDLINE_LINUX= entry, but doing this correctly might depend on the distribution) should do it:

bonding.max_bonds=0

Configuring Bonding Manually with iproute2 (ip link):

Warning: the linked documentation is quite incomplete/obsolete and doesn't show that most if not all of the bonding features are available through ip link. You can create, delete, alter settings, enslave, etc. with the right ip link command. The command should include the keywords type bond whenever it has to specify a bond-specific option after them.
Command syntax reminder, to display specific bond settings:

ip link add type bond help

Example (interfaces must be down before being enslaved, but can be set down in the same single command in case their former state is not known):

ip link add dev mybond0 type bond mode active-backup miimon 100
ip link set dev eth0 down master mybond0
ip link set dev eth1 down master mybond0
ip link set dev mybond0 up
ip link set dev eth0 up
ip link set dev eth1 up

Changing settings:

ip link set dev mybond0 type bond miimon 200

ifenslave equivalent:

ip link set dev mybond0 type bond active_slave eth1

Some settings can't be changed while there are any enslaved devices, e.g. redefining the bond type:

# ip link set dev mybond0 type bond mode balance-rr
RTNETLINK answers: Directory not empty

Enslaved devices can be set free with:

ip link set dev eth0 nomaster

and the bond device can be deleted with:

ip link delete dev mybond0

Various information related to a bond device (or to a bond_slave device) can be seen with the additional -details option. At some point, using JSON output can be easier for scripts (e.g. this Q/A: iproute2: How to display the TYPE of a network devices?).

ip -details link show dev mybond0
ip -details link show dev eth0
ip -details link show dev eth1

Configuring Bonding Manually via Sysfs

I don't think using this API makes much sense today unless a specific feature isn't available through ip link. It was probably created before the iproute2 tools could handle all of it. Maybe reading from it could still help in case the tools on a restricted/embedded system no longer have access to an ip link command, but even this seems unlikely.

As described in the linked documentation, you can create and delete bonding interfaces through /sys/class/net/bonding_masters, which is present whenever the bonding module is present (here as built-in). Here are the equivalent commands from above, simply using echo ... > ...
from a root shell (or working around redirections with echo ... | sudo tee ...):

echo +mybond0 > /sys/class/net/bonding_masters
echo active-backup > /sys/class/net/mybond0/bonding/mode
echo 100 > /sys/class/net/mybond0/bonding/miimon

I don't see any way to set interfaces up or down through sysfs anyway. To stay safe:

ip link set dev eth0 down
ip link set dev eth1 down

echo +eth0 > /sys/class/net/mybond0/bonding/slaves
echo +eth1 > /sys/class/net/mybond0/bonding/slaves

Again, there's no other way:

ip link set dev mybond0 up
ip link set dev eth0 up
ip link set dev eth1 up

Then continuing like the previous example:

echo 200 > /sys/class/net/mybond0/bonding/miimon
echo eth1 > /sys/class/net/mybond0/bonding/active_slave

# echo balance-rr > /sys/class/net/mybond0/bonding/mode
bash: echo: write error: Directory not empty

echo -eth0 > /sys/class/net/mybond0/bonding/slaves
echo -mybond0 > /sys/class/net/bonding_masters

Specific bonding information is available there:

cat /sys/class/net/bonding_masters
grep ^ /sys/class/net/mybond0/bonding/*
grep ^ /sys/class/net/eth0/bonding_slave/*
grep ^ /sys/class/net/eth1/bonding_slave/*
linux bonding - changing params if statically linked
1,649,323,123,000
I have never done anything with kernels, so I need advice on how to do it. The reason is to implement this: https://www.pclinuxos.com/forum/index.php/topic,143371.msg1225171.html?PHPSESSID=feu49cfn3vvi6qpjqbh8e4mht3#msg1225171 because I have the same issue with a Toshiba Satellite A300-1eb laptop as in that thread. Should I do the downgrade on Debian Buster or Stretch? How do I downgrade to the 4.4 kernel?
Through apt, the Linux kernel 4.4 can be installed on Debian Jessie using the backported version from snapshot. On Debian Buster you need to compile the 4.4* kernel. Here is some documentation: nixcraft: How to compile and install Linux Kernel 5.6.9 from source code Kernel handbook: Chapter 4. Common kernel-related tasks Link to Linux kernel source V4.x
Downgrade Debian to 4.4 kernel
1,649,323,123,000
If I write a few times in quick succession on a socket (with the POSIX function write), usually all the data I wrote gets sent in a single TCP packet, unless I write too much or wait too long between the writes. Is the kernel buffering the data I write on a socket and sending packets out at regular intervals? Or does the libc handle this? How long does the kernel wait before sending a packet? Can I request that a mostly empty packet be sent immediately? Are UDP or other protocols handled differently? I'm curious to understand how all of this works, but I've struggled to find information on the topic.
Your OS likely does some short-term buffering before sending. E.g. the Linux man page tcp(7) mentions the TCP_NODELAY option to disable that (see also setsockopt(2)):

TCP_NODELAY
If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.

Nagle's algorithm is the actual buffering algorithm. If I followed the breadcrumbs correctly, RFC 1122 appears to be the currently-valid definition for it.

It's the kernel, not libc. The C library does buffering for stdio streams (FILE *), but write() and send() go directly to the kernel.

With UDP, the sizes of the datagrams are significant and visible to the upper layers, so similar merging is not possible there.
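From an application, disabling Nagle is one setsockopt call; here is a minimal Python illustration (the option can be set before the socket is even connected):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# disable Nagle: send small segments immediately instead of coalescing them
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)
s.close()
```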
How does buffering for TCP packets work?
1,649,323,123,000
I'm on a 32-bit CentOS 7 machine. I just run these commands and a kernel panic is observed:

cd repos/
git clone https://github.com/SergioBenitez/Rocket
cd Rocket/
cd examples/hello_world/
cargo run -v

kernel BUG at kernel/auditsc.c:1532!
invalid opcode: 0000 [#1] SMP

What should I do? Where do I report this? I have no idea how to react.
Solution Dumped 32 bit CentOS 7 and installed 32 bit Ubuntu 16.04 LTS which seems to be the last 32 bit Ubuntu LTS. No kernel panic is observed with 32 bit Ubuntu 16.04 LTS while installing Rust or building/running Rust applications. History This 32 bit machine previously had Ubuntu 12.04 LTS and 14.04 LTS and the experience was smooth. So, 16.04 LTS looks to be a sensible choice. Service/update The only problem is that Ubuntu 16.04 LTS is going to be out-of-service in April 2021. So no more updates! To work around that, another solution might be to install 32 bit Debian on the machine. The machine didn't have any Debian before, so anything might happen :( Final solution openSUSE Tumbleweed 32 bit Eventually I installed openSUSE Tumbleweed 32-bit which is updated regularly due to being a rolling release. It works great =)
kernel bug at kernel/auditsc.c:1532
1,649,323,123,000
After running make, I run sudo insmod example1.ko, enter my password, and then get an error like this:

insmod error: could not insert module example1.ko: Invalid module format

My running kernel is 5.8.0-44-generic, and my example1.ko was built for 5.8.0-38-generic. Maybe these versions should be the same? I have tried changing /lib/modules/$(shell uname -r)/build in my Makefile to /lib/modules/5.8.0-38-generic/build, but the error stayed the same. I have no idea how to fix this problem. I used dmesg to look for more information about the problem and found errors like:

example1: version magic '5.8.0-38-generic SMP mod_unload' should be '5.8.0-44-generic SMP mod_unload'

(there are six more lines with the same problem) and two "Failed to send host log message" errors. My Makefile looks like this:

obj-m := example1.o

default:
	make -C /lib/modules/5.8.0-44-generic/build M=$(PWD) modules

clean:
	make -C /lib/modules/5.8.0-44-generic/build M=$(PWD) clean

I ran make clean ; make but it had no effect.
The comments on the question led to a solution that I'll summarize here. The OP wanted to build a Linux kernel module; they built it against the wrong version of the kernel source (i.e., against a version of the kernel sources different from the running kernel). This was indicated by a log of the dmesg buffer. To resolve that problem, you need to point to the kernel sources that correspond to the kernel into which you want to load the module. You can do that explicitly on the command line: $ make -C <path_to_kernel_src> M=$PWD Or create a Makefile that does the same. Be sure to clean up any existing artifacts from a previous build before trying to build against the correct kernel sources.
Invalid module format error
1,649,323,123,000
Background I'm trying to download about 150GB to a newly-created Linux box (AWS EC2) with 100gbps network connection at full speed (12.5GB/s) or close to that. The network end is working well. However, I'm struggling to find anywhere on the box that I can put all the data fast enough to keep up, even though the box has 192GB of RAM. My most successful attempt so far is to use the brd kernel module to allocate a RAM block device large enough, and write into that in parallel. This works at the required speed (using direct io) when the block device has already been fully written to, for example using dd if=/dev/zero ... Unfortunately, when the brd device is newly created, it will only accept a write rate of around 2GB/s. My guess is that this is because brd hooks into 'normal' kernel-managed memory, and therefore when each new block is used for the first time, the kernel has to actually allocate it, which it does no faster than 2GB/s. Everything I've tried so far has the same problem. Seemingly, tmpfs, ramfs, brd, and everything else that provides RAM storage hooks into the normal kernel memory allocation system. Question Is there any way in Linux to create a block device out of real memory, without going through the normal kernel's memory management? I'm thinking that perhaps there is a kernel module out there that will split off an amount of memory at boot time, to be treated like a disk. This memory would not be considered normal memory to the kernel and so there would be no issue with it wanting to use it for anything else. Alternatively, is there some way to get the kernel to fully initialise a brd ramdisk (or similar) quickly? I tried writing to the last block of the disk alone, but unsurprisingly that didn't help. Non-RAM alternative In theory, a RAID of NVMe SSDs could achieve the required write speed, although it seems likely there would be some kind of bottleneck preventing such high overall I/O. 
My attempts to use mdadm RAID 0 with 8 NVMe SSDs have been unsuccessful, partly I think because of difficulties around block sizes. To use direct io and bypass the kernel's caching (which seems necessary), the only block size that can be used is 4096, and this is apparently far too small to make efficient use of the SSDs themselves. Any alternative here would be appreciated. Comments I know 2GB/s sounds like a lot, and it only takes a couple of minutes to download the lot, but I need to go from no EC2 instance at all to an EC2 instance with 150GB loaded in less than a minute. In theory it should be completely possible: the network stack and the physical RAM are perfectly capable of transferring data that fast. Thanks!
On a tmpfs filesystem I can copy 64 files of 1.6 GB (100 GB in total) in 7.8 sec by running 64 jobs in parallel. That is pretty close to your 100 Gbit/s.

So if you run this in parallel (meta code):

curl byte 1G..2G | write_to file.out position 1G..2G

write_to could be implemented with mmap.

Maybe you can simply write to different files, use loop devices, and use RAID in linear mode: https://raid.wiki.kernel.org/index.php/RAID_setup#Linear_mode

If you control both ends, then set up the source as 150 1 GB files used as loop devices and RAID in linear mode. Then you should copy those in parallel and set up the RAID linear again.
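The meta code above can be sketched with os.pwrite, which lets many workers write into disjoint regions of one preallocated file without seek races; the fetch_range function here is a stand-in for the real ranged curl/HTTP request:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def fetch_range(start: int, end: int) -> bytes:
    # stand-in for: curl -r {start}-{end-1} URL ; here we just synthesize bytes
    return b"x" * (end - start)

def parallel_download(path: str, size: int, jobs: int = 8) -> None:
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        os.ftruncate(fd, size)    # preallocate so every worker has its slot
        step = -(-size // jobs)   # ceiling division
        def worker(i: int) -> None:
            start = i * step
            end = min(size, start + step)
            os.pwrite(fd, fetch_range(start, end), start)  # positional write
        with ThreadPoolExecutor(jobs) as ex:
            list(ex.map(worker, range(jobs)))
    finally:
        os.close(fd)

out = os.path.join(tempfile.mkdtemp(), "out.bin")
parallel_download(out, 1 << 20)
print(os.path.getsize(out))
```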
Allocate RAM block device faster than Linux kernel can normally allocate memory
1,649,323,123,000
I am a software developer and I come into contact with a lot of unstable software. Recently I made a small game, which for some reason memleaks until the system hangs and is unresponsive. Usually, REISUB helps, but sometimes not even that and you need to do a hard poweroff. Then it happened to me again with another program, so I thought to myself that this could easily be prevented by writing a script that monitors mem usage and if it crosses a certain value per PID over a certain amount of time, it gets a SIGKILL to take it down immediately. Any ideas? Thanks
Simple idea: check if memory exceeds a given value, and check again after some time. Hand out strikes; three strikes in a row lead to a kill. You need to know the PID of the process.

#!/bin/bash
pid=$1
strike=0
# as long as the process exists
while kill -0 "$pid" 2>/dev/null ; do
    # get RAM usage in kB
    ram=$(pmap -x "$pid" | tail -1 | awk '{print $3}')
    # compare to threshold, 1,000,000 kB
    if [[ 1000000 -lt $ram ]] ; then
        strike=$((strike+1))
        if [[ $strike -eq 3 ]] ; then
            kill "$pid"
            exit
        fi
    else
        strike=0
    fi
    sleep 5
done
Kill process if it takes more than xGiB of RAM in a given amount of time
1,649,323,123,000
From an assembler point of view, when we write code that just jumps a few instructions back and never jumps to any control function the scheduler might use, how can Unix interrupt such code? I assume it is done using a timer and interrupts. So the question then is: can we implement a Unix system on hardware without interrupts and still stop infinite-loop code in finite time? In other words, am I right to assume that the only way Unix can deal with code like while(true){} is through a hardware timer with interrupts? And if so, what is the minimum hardware requirement for implementing a Unix-like system without a hardware timer and interrupts?
What you discuss is the difference between non-preemptive and preemptive scheduling. Non-preemptive (also called cooperative) scheduling is a bit simpler (no timer needed), and it does not need locks when threads communicate. Examples: Apple Mac OS 9 and earlier, I think early MS Windows, many embedded systems, and micro-threads. So yes, a timer is needed. However, regarding your question about the simplest hardware: Unix needs an MMU, and that is far more complex than a timer. (Actually there are Unix-like systems that have no MMU, and in a lot of situations they work the same; the differences are no memory protection and no swap/paging.) Another way to allow task switching for the while-true case is code injection: the compiler or loader injects code that cooperatively yields. This can be hard to do: for a loader there may not be enough information to know where it is needed, and it may break assumptions of atomicity. Given the right language and compiler it could probably be done well, but I don't know of any examples.
Infinite loop vs scheduler
1,649,323,123,000
I am trying to get my own custom Real-time Linux on a Raspberry Pi 4B. My status is this: I built the Linux 5.9.1 version, and have my own version of U-Boot, RFS with which I am able to successfully load and start the kernel, mount RFS, as well as reach the Kernel console also. I need to apply the Real-time patch on top of the Linux Kernel that I am building and so I used the corresponding patch for Linux 5.9.1. Since I am building a 64-bit kernel, I use the following command to get into the Kernel config and update the preemptible option: make ARCH=arm64 CROSS_COMPILE=aarch64-rpi3-linux-gnu- menuconfig But I do not see the fully preemptible kernel option here: .config - Linux/arm64 5.9.1 Kernel Configuration General setup ───────────────────────────────────────────────────────────────── ┌────────────────────── Preemption Model ───────────────────────┐ │ Use the arrow keys to navigate this window or press the │ │ hotkey of the item you wish to select followed by the <SPACE │ │ BAR>. Press <?> for additional information about this │ │ ┌───────────────────────────────────────────────────────────┐ │ │ │ ( ) No Forced Preemption (Server) │ │ │ │ ( ) Voluntary Kernel Preemption (Desktop) │ │ │ │ (X) Preemptible Kernel (Low-Latency Desktop) │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └───────────────────────────────────────────────────────────┘ │ ├───────────────────────────────────────────────────────────────┤ │ <Select> < Help > │ When I run: make menuconfig I do see that option though for the x86 option: .config - Linux/x86 5.9.1 Kernel Configuration General setup ───────────────────────────────────────────────────────────────── ┌────────────────────── Preemption Model ───────────────────────┐ │ Use the arrow keys to navigate this window or press the │ │ hotkey of the item you wish to select followed by the <SPACE │ │ BAR>. 
Press <?> for additional information about this │ │ ┌───────────────────────────────────────────────────────────┐ │ │ │ ( ) No Forced Preemption (Server) │ │ │ │ ( ) Voluntary Kernel Preemption (Desktop) │ │ │ │ (X) Preemptible Kernel (Low-Latency Desktop) │ │ │ │ ( ) Fully Preemptible Kernel (Real-Time) │ │ │ │ │ │ │ │ │ │ Linux Kernel: 5.9.1 Linux RT patch used: patch-5.9.1-rt19.patch.xz I have enabled the expert mode also, as instructed in another post on unix.stackexchange .config - Linux/x86 5.9.1 Kernel Configuration General setup ───────────────────────────────────────────────────────────────── ┌────────────────────────────── General setup ───────────────────────────────┐ │ Arrow keys navigate the menu. <Enter> selects submenus ---> (or empty │ │ submenus ----). Highlighted letters are hotkeys. Pressing <Y> includes, │ │ <N> excludes, <M> modularizes features. Press <Esc><Esc> to exit, <?> │ │ for Help, </> for Search. Legend: [*] built-in [ ] excluded <M> module │ │ ┌────^(-)────────────────────────────────────────────────────────────────┐ │ │ │ [*] Support initial ramdisk/ramfs compressed using LZMA │ │ │ │ [*] Support initial ramdisk/ramfs compressed using XZ │ │ │ │ [*] Support initial ramdisk/ramfs compressed using LZO │ │ │ │ [*] Support initial ramdisk/ramfs compressed using LZ4 │ │ │ │ [*] Support initial ramdisk/ramfs compressed using ZSTD │ │ │ │ [ ] Boot config support │ │ │ │ Compiler optimization level (Optimize for performance (-O2)) ---> │ │ │ │ -*- Configure standard kernel features (expert users) ---> │ │ │ │ -*- Enable membarrier() system call │ │ │ │ -*- Load all symbols for debugging/ksymoops │ │ │ │ -*- Include all symbols in kallsyms │ │ │ │ [*] Enable bpf() system call │ │ │ │ [ ] Enable userfaultfd() system call │ │ │ │ [*] Enable rseq() system call │ │ │ │ [ ] Enabled debugging of rseq() system call │ │ │ │ [*] Embedded system │ │ │ │ [ ] PC/104 support │ │ │ │ Kernel Performance Events And Counters ---> │ │ I see that this problem does not happen in the previous RT-patch that was
released for Linux 5.6.19. Is there something missing for the 64-bit case from my side?
I raised it on kernel.org, and got a response that it was intended to behave that way: https://lore.kernel.org/linux-rt-users/[email protected]/ This basically means that when using version 5.9.1 with the arm64 architecture, we need to disable KVM; then the Fully Preemptible option comes up immediately. I was able to test it successfully.
Real-time patch on Linux 5.9.1 does not show fully-preemptible option for arm64 option
1,649,323,123,000
I made a small Linux distro to use in my projects involving an Orange Pi One (H3), but the HDMI output never works. To check whether the device is supported by the Linux kernel, I tested another distro (Armbian), which worked fine. With that in mind, I tried to change my kernel config based on theirs, adding every relevant feature, but my version was still not working. I decided to take a look at dmesg after every try, and found one error that I can't get rid of: [ 0.827899] sun4i-drm display-engine: bound 1100000.mixer (ops 0xc0851c2c) [ 0.835081] sun4i-drm display-engine: bound 1c0c000.lcd-controller (ops 0xc084e2dc) [ 0.842821] sun8i-dw-hdmi 1ee0000.hdmi: supply hvcc not found, using dummy regulator [ 0.851453] sun8i-dw-hdmi 1ee0000.hdmi: Detected HDMI TX controller v1.32a with HDCP (sun8i_dw_hdmi_phy) [ 0.861330] sun8i-dw-hdmi 1ee0000.hdmi: registered DesignWare HDMI I2C bus driver [ 0.869108] sun4i-drm display-engine: bound 1ee0000.hdmi (ops 0xc0851228) [ 0.875927] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013). [ 0.882941] [drm] Initialized sun4i-drm 1.0.0 20150629 for display-engine on minor 0 [ 0.995934] random: fast init done [ 1.001697] sun4i-drm display-engine: [drm] *ERROR* fbdev: Failed to setup generic emulation (ret=-12) [ 1.013330] lima 1c40000.gpu: gp - mali400 version major 1 minor 1 I couldn't find anything useful about this specific error on the internet, and I couldn't find the explanation for the return code in the kernel source. What could I do to try to fix this problem? I'm using - Linux version 5.8.13 (arm-linux-musleabihf-gcc (GCC) 10.2.0, GNU ld (GNU Binutils) 2.35) - No modules, no initrd/initramfs - Machine model: Xunlong Orange Pi One - U-Boot (orangepi_one_defconfig)
This error is an ENOMEM (out-of-memory) error, because the CMA size needs to be bigger than one raw frame at the resolution the display will use. 1920x1080 at 32 bpp needs about 8 MB, and the default is 16 MB, so that case was working; but 3840x2160 at 32 bpp needs a bit more than 32 MB. Armbian changes the default size to 128 MB in the kernel configuration, using CONFIG_CMA_SIZE_MBYTES=128. Setting the CMA size to 64 MB with the boot argument cma=64M made it work without any patch or change in configuration.
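The sizes involved are easy to check with a quick calculation (32 bpp means 4 bytes per pixel; integer arithmetic rounds down):

```shell
#!/bin/sh
# One raw frame at 32 bpp, in MiB (rounded down by integer division)
fb_mib() { echo $(( $1 * $2 * 4 / 1024 / 1024 )); }

fb_mib 1920 1080   # ~8 MiB: fits in the 16 MiB default CMA
fb_mib 3840 2160   # ~32 MiB: needs cma=64M (or CONFIG_CMA_SIZE_MBYTES=128)
```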
Why the hdmi output doesn't work on orange pi one?
1,649,323,123,000
I recently installed Arch on one of my machines, with grub in UEFI mode. While setting up Arch I had installed linux-lts. I used it for some days and later decided to use both the LTS and the regular kernel, so I installed the linux (regular) package. After its installation I expected grub to boot into the latest kernel, but it continued to boot into the older linux-lts. I tried to regenerate the initramfs and updated grub a few times, but didn't succeed. To get grub to boot the latest kernel, I had to edit the grub menu entries using grub-customizer. Is this normal grub behavior? I had read somewhere that grub actually prioritizes the latest kernel, if found, and boots into it directly. Why is it different in my case? Have I misconfigured something?
I found the expected behaviour of grub's default kernel selection. As I stated in the question, grub is able to detect the greatest kernel version number and set that kernel as the default. When grub-mkconfig is called, it runs the shell scripts in /etc/grub.d. One of those scripts is /etc/grub.d/10_linux, which has a function version_find_latest that detects the newest version. More info here: https://askubuntu.com/questions/1254758/how-does-update-grub-decide-which-kernel-to-set-as-the-default I was not able to figure out what was wrong with my Arch system, though. I had to reinstall the system due to a critical error caused by my own mistake, and later I switched distributions. But as Arch uses the same grub-mkconfig command and has the same scripts in /etc/grub.d, it should show the same behaviour. See: https://archlinux.org/packages/core/x86_64/grub/
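The effect of version_find_latest can be approximated with GNU sort -V, which compares version fields numerically; this is why 4.18 outranks 4.9 even though a plain lexical sort would pick 4.9. A sketch in that spirit (not grub's actual code):

```shell
#!/bin/sh
# Sketch: pick the "latest" kernel image by version-aware sorting,
# loosely mimicking version_find_latest from /etc/grub.d/10_linux.
latest=$(printf '%s\n' vmlinuz-4.18 vmlinuz-4.9 | sort -rV | head -n1)
echo "$latest"   # vmlinuz-4.18 (a plain lexical sort would wrongly pick 4.9)
```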
Grub's default kernel priority
1,649,323,123,000
I saw this error in dmesg: abrt-hook-ccpp: Failed to create core_backtrace: dwfl_getthread_frames failed: No DWARF information found Is that a config problem? Do I need to install something to fix this error?
As far as I know, DWARF (= Debugging With Attributed Record Formats, see this link for more details) is a type of debugging information embedded into an executable program. Sometimes, to minimize disk space use and/or to avoid disclosure of the program's internal workings, this debugging information may be stripped away, or not added in the program in compilation time in the first place. So the message probably means that a program was crashing but the abrt subsystem failed to create a call backtrace listing from the crash state because the necessary debug information was not available. If the program that is crashing is commercial software, the debugging information might be available for the software vendor's own development team only. In that case, the best you can do is to find the core dump information collected by abrt and send it to the software vendor for further analysis. But if it's open-source software, it might be possible to install a separate debug information package corresponding to the software package that contains the failing program. That might allow abrt to generate the backtrace (i.e. a sort of "how did we get here?" information) if the program crashes again. You might even be able to use the debug information to generate backtraces for older crashes from which abrt has saved the core dump information, if the core dump is still available. But if you are not a programmer and have no interest in trying to understand the internal workings of whichever program seems to be failing in your system, you can certainly ignore the message.
No DWARF information found
1,649,323,123,000
I have an internal SSD (NVMe) on which I've installed Ubuntu 18.04 with Full Encryption using LUKS. Recently, I replaced the motherboard of my laptop which caused the signature verification of the kernel to get failed during boot. error: /boot/vmlinuz-****-generic has invalid signature error: you need to load the kernel first If I tried to boot without secure boot then it gets past the signature verification, but later during boot process, I get an error that says "cryptsetup: lvm is not available". So, in order to fix the signature verification issue that occurs with secure boot, I read that I need to add a newly signed kernel into the boot partition using a live USB. However, after booting into the Live OS (Ubuntu) I couldn't find the NVMe drive at all. I checked the /dev location and used several tools such as gparted, fdisk, lsblk. I just couldn't get the drive listed with any of these tools. May I know why the drive is not getting detected? And how to get this drive mounted? I am also not sure why the "cryptsetup: lvm is not available" issue occurs. PS: I tried booting up WindowsToGo (Windows 10) and from it, I was able to find the SSD drive being listed under "Disk Management" utility. So, I don't think it is an issue with the SSD. It is functional. I am just not able to get this drive listed with Live Linux.
I finally fixed the issue. It occurred because the BIOS SATA Operation setting defaulted to RAID On mode instead of AHCI mode on the newly replaced motherboard. Changing SATA Operation to AHCI and then disabling Secure Boot (to get rid of the invalid kernel signature error) fixed the issue.
LUKS encrypted drive is missing. "cryptsetup: lvm is not available"
1,649,323,123,000
I'm currently developing a Linux Security Module which is stored in the security directory of the kernel source tree. When I compile and install the kernel using the following commands, the module is loaded and everything is working fine: fakeroot make -j9 -f debian/rules.gen binary-arch_amd64_none_amd64 apt remove linux-image-4.19.0-9-amd64-unsigned dpkg -i linux-image-4.19.0-9-amd64-unsigned_4.19.118-2_amd64.deb If I make the changes to the module and rebuild the kernel using the commands above however, they won't be included in the new image, unless I delete all build output and recompile the whole kernel. Is there a way to only rebuild a specific part of the kernel i.e. only the security directory?
I found it out thanks to the help of a university professor. You have to delete the file debian/stamps/build_amd64_none_amd64. # The next line makes sure only the required parts are rebuilt
rm debian/stamps/build_amd64_none_amd64
# Rebuild the kernel
fakeroot debian/rules source
fakeroot make -j9 -f debian/rules.gen binary-arch_amd64_none_amd64
How can I recompile only a specific part of the Linux kernel on Debian Buster?
1,649,323,123,000
We have the following in dmesg: [37785.390633] XFS (dm-2): Metadata corruption detected at xfs_dir3_block_read_verify+0x5e/0x110 [xfs], block 0x7f8af18 [37785.390634] XFS (dm-2): Unmount and run xfs_repair dm-2 is /var, so we started to perform xfs_repair according to the document https://access.redhat.com/solutions/1194613. First we forced an unmount of /var: umount -l /var and then we started the procedure according to the same document: xfs_repair -v /dev/mapper/vg_var 2>&1 |tee /tmp/xfs_repair.out xfs_repair: /dev/mapper/vg_var contains a mounted filesystem xfs_repair: /dev/mapper/vg_var contains a mounted and writable filesystem fatal error -- couldn't initialize XFS library As seen above, xfs_repair complains that /dev/mapper/vg_var contains a mounted filesystem, even though we forced the unmount.
You have asked for a lazy unmount (umount -l). This will only unmount the filesystem when there are no more processes accessing it. The documentation (man umount) itself says -l Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. In this scenario you shouldn't use the -l flag because you need to be sure the filesystem really is unmounted. Verify that the filesystem really is unmounted before proceeding. If you have lsof that may help identify the unexpected processes.
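A sketch of the pre-repair checks. Note that after umount -l the entry disappears from the kernel mount table even while processes keep the filesystem busy, which is exactly why checking for open files (fuser/lsof) is the check that matters here; the device path below is the one from the question.

```shell
#!/bin/sh
# 1. Is the mount point still in the kernel mount table?
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}
is_mounted /    && echo "/ is mounted"              # always true on Linux
is_mounted /var || echo "/var not in mount table"
# 2. After a lazy unmount the table entry is gone but users may remain,
#    so also check for processes holding the device open, e.g.:
#    fuser -vm /dev/mapper/vg_var
```

Only when both checks come back clean is it safe to run xfs_repair.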
xfs_repair , complain about contains a mounted and writable filesystem
1,649,323,123,000
I'm cross-compiling a kernel, configuring it with $ make sunxi_arm64_defconfig ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- which happens to be what I need to do, and then want to make a few adjustments. But running make menuconfig, not making any changes, saving and exiting produces a completely different configuration (for starters, the architecture is x86), so I can't use it. Why could that be? Surely, this isn't the expected behaviour?
The default behaviour is to apply configuration settings for the current architecture (which isn’t the architecture used for the last configuration). When configuring for cross-building, you need to specify the architecture again: make menuconfig ARCH=arm64
`make menuconfig` overrides configuration with architecture defaults
1,649,323,123,000
man 7 netdevice states: SIOCGIFMTU, SIOCSIFMTU Get or set the MTU (Maximum Transfer Unit) of a device using ifr_mtu. Setting the MTU is a privileged operation. Setting the MTU to too small values may cause kernel crashes. I don't see any connection between MTU and a kernel crash. Under what circumstances could a small MTU value lead to a kernel crash?
A small MTU leads to more packets, many more packets, and correspondingly more per-packet work for the kernel. Beyond the sheer load, drivers and protocol code generally assume the MTU is at least large enough to hold protocol headers; an MTU below that can expose buffer-handling bugs in the driver or the stack, hence the warning about crashes. Is this intellectual curiosity at work, or do you have a problem which involves this?
How can a small MTU cause a Linux kernel crash?
1,649,323,123,000
I am running Jenkins with lots of jobs which require lots of open files so I have increased file-max limit to 3 million. It still hits 3 million sometimes so I am wondering how far I can go. Can I set /proc/sys/fs/file-max to 10 million? How do I know what the hard limit of file-max is? I am running CentOS 7.7 (3.10.X kernel)
The kernel itself doesn’t impose any limitation on the value of file-max, beyond that imposed by its type (unsigned long, so 4,294,967,295 on typical 32-bit systems, and 18,446,744,073,709,551,615 on typical 64-bit systems). However each open file consumes around one kilobyte of memory, so you’ll be limited by the amount of physical RAM installed; ten million open files would consume approximately ten gigabytes of memory. The kernel initialises file-max to 10% of the usable memory at boot, which means the “hard” limit on any given system is approximately ten times the default value.
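The memory cost mentioned above is easy to estimate; the ~1 KiB per open file is an approximation, not a fixed kernel constant:

```shell
#!/bin/sh
# Rough kernel-side memory footprint of N open files at ~1 KiB each
files=10000000        # the 10 million from the question
kib=$(( files * 1 ))  # ~1 KiB per open file (approximation)
echo "$(( kib / 1024 )) MiB, about $(( kib / 1024 / 1024 )) GiB"
```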
How to find max limit of /proc/sys/fs/file-max
1,649,323,123,000
There are two groups of LSM hooks under Security hooks for inode operations: inode_* and path_*. Many of them look identical. For example, inode_link and path_link. What is the difference between the inode and path hooks? When each should be used?
Path hooks were added by the TOMOYO maintainers to allow file-path calculation in an LSM module. These hooks receive a pointer to a path struct. inode hooks reside at a lower level and receive a pointer to an inode struct; the file path cannot be retrieved from this struct. Generally speaking, if you don't need the file path you should use the inode hooks, since they are called at a lower level, which means your hook will be called less frequently. Note that path hooks are compiled only if the kernel is compiled with CONFIG_SECURITY_PATH.
LSM Hooks - What is the difference between inode hooks to path hooks
1,649,323,123,000
The x86_64 architecture allows 32-bit code to run natively while operating in long mode; for this, a submode called "compatibility mode" was added. Now memory management goes through the following tables to calculate a physical address: PML4 (Linux: PGD) -> PDPT (Linux: PUD) -> PD (Linux: PMD) -> PT -> physical page Each of the tables mentioned above consists of 512 entries of 64 bits each, so you need 9 bits as an index into each table plus 12 bits as an offset that is added to the last address retrieved from the PT. This sums up to 48 bits. Now it seems pretty clear that you can't achieve the same with a 32-bit address. Others have already tried to explain how this is done (here or here), but in my opinion this can't be right. There it is explained that the PDPT and the PD have only one entry each, but that behavior would cause trouble the way I understand it. The MMU would take the 9 most significant bits from the address to index the PDPT; the PDPT has only one entry, but the MMU follows a strict procedure and will take the next 9 bits of the address as the index, so in only one out of 512 cases would the first entry be selected. The same problem exists for the PD. And that is not the only problem I see: as already stated, 32 bits are not enough for a complete translation, so some tables would have to be skipped somehow. Hopefully I have explained my problem clearly and someone can help me.
The Compatibility sub-mode of the Long Mode in x86-64 does not affect paging, which functions as in 64-bit Long Mode. 32 implicit zero bits are inserted in bit positions 63-32 of the virtual address that is fed into the paging unit. You only need to set up page tables that map these zero bits to something valid, like the answer in the first question you linked to explained. See 2.1.4.1 Long-Mode Memory Management (PDF) in AMD64 Architecture Programmer’s Manual Volume 1: Application Programming
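The 48-bit split discussed here (four 9-bit indices plus a 12-bit offset) can be reproduced with shell arithmetic; the address below is an arbitrary example:

```shell
#!/bin/sh
# Split a canonical 48-bit virtual address into page-table indices.
va=$(( 0x00007f8af1801234 ))       # example user-space address
off=$((  va         & 0xfff ))     # 12-bit page offset
pt=$((  (va >> 12) & 0x1ff ))      # PT   index (Linux: PTE level)
pmd=$(( (va >> 21) & 0x1ff ))      # PD   index (Linux: PMD)
pud=$(( (va >> 30) & 0x1ff ))      # PDPT index (Linux: PUD)
pgd=$(( (va >> 39) & 0x1ff ))      # PML4 index (Linux: PGD)
echo "pgd=$pgd pud=$pud pmd=$pmd pt=$pt off=$off"
# A 32-bit address has zeros in bits 63-32, so pgd is always 0 and pud
# can only be 0-3: compatibility-mode code is confined to the low
# entries the kernel maps, as the answer describes.
```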
How does the Linux kernel arrange the Page tables when running 32-bit code in x86_64 Long mode?
1,549,043,928,000
I have an application that sends lot of traffic over an UDP socket, every packet is sent on 2 interfaces: enp2s0 (1Gbit ethernet device) and enx00808a8eba78 (100Mbit usb-ethernet device). The maximum socket send buffer is the default (212992 bytes), and it is full most of the time when the traffic is running: root@punk:~# netstat -a Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State udp 0 211968 0.0.0.0:x11-2 0.0.0.0:* Data in qdisc queue of the two interfaces is about 40k: root@punk:~# tc -s qdisc show dev enp2s0 qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 Sent 1697909529 bytes 1136534 pkt (dropped 0, overlimits 0 requeues 12) backlog 0b 0p requeues 12 root@punk:~# tc -s qdisc show dev enx00808a8eba78 qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 Sent 1675952337 bytes 1121840 pkt (dropped 0, overlimits 0 requeues 55) backlog 43326b 29p requeues 55 Since 200k of data is pending in the socket but only 40k is queued in the second qdisc, I assume that the remaining 160k are pending inside the slow interface driver (enx00808a8eba78). Is there a way to check how many packets (or data) is pending for transmission in a USB device or, more generically, in a network device? Something like the number of DMA buffers ready for TX but not sent yet.
It seems there isn't a way to retrieve the device queue length from userspace. BTW, some details if someone is interested: usbnet devices keep track of the queued TX packets using the field txq.qlen of struct usbnet. The maximum TX queue length is defined by the field tx_qlen of struct usbnet. In my example I have 60 (tx_qlen) packets queued in the USB driver and (more or less) 30 packets in the qdisc, each one carrying 1500 bytes of data. Since the socket buffer is accounted using skb->truesize (i.e. skb data plus skb structure size), each packet counts as about 2.3k: 2.3k * (60 + 30) ~= 200k This confirms that 138k of the socket buffer is consumed by packets queued in the network driver, while 69k is in the qdisc queue: there are no other packets queued elsewhere in the kernel.
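The accounting works out as a quick back-of-the-envelope check; the truesize value is approximate:

```shell
#!/bin/sh
# Socket send-buffer accounting: skb truesize (~2.3 kB per packet here)
# times the packets parked in the driver (60) and in the qdisc (30).
truesize=2300
driver_q=60
qdisc_q=30
used=$(( truesize * (driver_q + qdisc_q) ))
echo "$used of 212992 bytes"   # close to the full default send buffer
```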
How many packets are pending inside a network interface?
1,549,043,928,000
What qdisc is controlled via the tc command versus sysctl net.core.default_qdisc? Consider $ tc qdisc show dev eth2 qdisc mq 0: dev eth2 root $ sysctl net.core.default_qdisc net.core.default_qdisc = pfifo_fast On this system, the default qdisc is set to pfifo_fast but the qdisc in use is mq (Multi-queue) after a reboot. It's rather obvious that they are not directly related, or at the least, not in a manner which makes sense "out of the box". This link about queuing in the Linux Network Stack makes it clear that tc qdisc ... applies to the Queue which sits between the IP Stack and the Driver Queue. Can anyone disambiguate these two for me?
The multiqueue ("mq") scheduler lets the Linux kernel spread packet processing across the multiple hardware transmit queues of a NIC, attaching one child qdisc (here the default, pfifo_fast) to each queue; the receive-side counterpart of this load spreading across CPU cores is Receive Side Scaling (RSS). On my Ubuntu 18.04.1 desktop system with net.core.default_qdisc set to pfifo_fast, when I execute the following command: $ tc qdisc show dev eth0 This is what is output: qdisc mq 0: root qdisc pfifo_fast 0: parent :2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 The mq scheduler has configured two pfifo_fast child queues on device eth0 by default, one per hardware transmit queue. Hope this clears things up.
Changing qdisc algorithm sysctl and tc
1,549,043,928,000
I have dual boot configuration with Windows 10 and Debian. For Debian I have multiple kernels installed and as a "main" grub suggest the newest one and lists olders in "Advanced options for Debian GNU/Linux". Now, I'd like to have Windows as default option selected on computer start but also some older kernel under the main Debian in menu. How can I achieve that? I found information with editing GRUB_DEFAULT in /etc/default/grub but this is set as following: GRUB_DEFAULT="Windows Boot Manager (on /dev/nvme0n1p1)" That's the part I don't want to edit but as a result after update-grub I want to have menuentry pointing to version 4.9 instead of 4.18. How can I achieve it?
You need to add a new script to the /etc/grub.d/ directory or edit an existing one (10_linux, most probably, in your case). However, please be aware that those automatic tools have some limitations, so I'm pretty sure that if you start playing with the grub configuration, sooner or later you will end up editing /boot/grub/grub.cfg directly, as the most flexible method. If you are fine with adding a new menu entry for a particular kernel, then probably the easiest is to edit /etc/grub.d/40_custom and add something like menuentry "Kernel 4.9" { set root='hd0,gpt2' linux /vmlinuz-4.9 root=/dev/sda1 ro init=/usr/lib/systemd/systemd } Change the settings and kernel parameters to yours; look at your current grub.cfg to check them. You may also need to add other lines, like initrd, or get rid of systemd if you don't use it.
Change Kernel version under option in GRUB
1,549,043,928,000
If you try to run a random perf binary that does not match your currently running Linux kernel, it says: $ perf WARNING: perf not found for kernel 4.13.0-45 Of course, if I get the perf for this version, it works. Looking at some popular resources like Linux perf Examples and the Perf wiki I couldn't find the answer for this specific question: why perf strictly needs to be in the same version as the kernel?
perf and the kernel are tied fairly closely together, in fact perf is part of the kernel source code. At heart you should think of it as a kernel-specific tool; but packaging practice and requirements in Linux distributions means users end up thinking of it as a “standard” tool. There is no special perf-private interface between perf and the kernel, so the perf-supporting parts of the kernel have to follow the usual userspace-facing rules — i.e. maintain backward-compatibility; so in theory, it would be possible to run an older version of perf with a newer kernel, since the newer kernel is supposed to support whatever interface the older version of perf uses to communicate with it. However, in practice it turns out that if you need to use perf to investigate performance of a workload on a given kernel, you also need to be able to investigate all the performance-affecting features of that particular kernel; an older version of perf can’t support features which were added after it was released, so you would typically end up needing the matching version of perf anyway. As a result of all this, the pragmatic option is to require a version of perf matching the running kernel. Depending on your distribution’s packaging choices, perf may be a front-end which checks for an implementation of perf matching your running kernel, and fails if it can’t find one; or it may be some version of perf itself. I haven’t tested going back too far, but current versions of perf work fine on older kernels, for example perf 5.6.14 and 5.7.7 work fine with a 4.19 kernel.
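The front-end behaviour described above can be sketched as follows; the search paths are illustrative examples, modelled loosely on Debian/Ubuntu packaging, not a real package layout:

```shell
#!/bin/sh
# Sketch of a perf wrapper: pick a perf build matching the running
# kernel, or print the familiar warning. Paths are hypothetical.
krel=$(uname -r)
perf_bin=
for candidate in "/usr/lib/linux-tools/$krel/perf" \
                 "/usr/lib/linux-tools-$krel/perf"; do
    if [ -x "$candidate" ]; then
        perf_bin=$candidate
        break
    fi
done
if [ -n "$perf_bin" ]; then
    echo "would run: $perf_bin"
else
    echo "WARNING: perf not found for kernel $krel" >&2
fi
```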
Why 'perf' needs to match the exact running Linux kernel version?
1,549,043,928,000
I have "page allocation failure" reported on my system: [some_app]: page allocation failure: order:4, mode:0x2040d0 Could someone please explain what that mode exactly stands for? Am I right that this is for the following GFP flags: GFP_NOTRACK | GFP_COMP | GFP_WAIT | GFP_IO | GFP_FS ? Kernel version is 3.10.0-693.21.1.el7.AV1.x86_64.
The flags seem to be defined in the file <kernel source directory>/include/linux/gfp.h, and at least on kernel 4.9.105, mode 0x2040d0 would seem to map to: GFP_NOTRACK | GFP_COMP | GFP_FS | GFP_IO | GFP_RECLAIMABLE But if I Google for flag definitions, I see in some sources value 0x10 defined as GFP_WAIT instead of GFP_RECLAIMABLE, which seems to match your source. This LWN discussion might be useful reading, but the best description I can find is in the comments in the include/linux/gfp.h file. In general, these mode flags modify the working of the page allocator. GFP_NOTRACK: avoids tracking with kmemcheck. GFP_COMP: add compound page metadata. GFP_FS: indicates that the allocator can call down to the low-level filesystem to reclaim pages if necessary; if this is cleared, I think it would indicate the allocation is for some filesystem code that may be holding locks... which might be important when using a swap file, for example. GFP_IO: indicates that the allocator can start physical I/O to reclaim pages to satisfy this request. GFP_RECLAIMABLE: "[this] is used for slab allocations that specify SLAB_RECLAIM_ACCOUNT and whose pages can be freed via shrinkers." This flag is apparently used by memory allocations for filesystems. Basically it seems to mean that there is a kernel function (a shrinker) that can be called to free or minimize this allocation if necessary.
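A small decoder makes such a mode readable. The bit values below are taken from 3.10-era include/linux/gfp.h and are not stable across kernel versions, so always check the gfp.h that matches the running kernel:

```shell
#!/bin/sh
# Decode a gfp_mask against 3.10-era include/linux/gfp.h bit values
# (these values change between kernel versions).
decode_gfp() {
    m=$(( $1 ))
    [ $(( m & 0x10 ))     -ne 0 ] && echo __GFP_WAIT
    [ $(( m & 0x40 ))     -ne 0 ] && echo __GFP_IO
    [ $(( m & 0x80 ))     -ne 0 ] && echo __GFP_FS
    [ $(( m & 0x4000 ))   -ne 0 ] && echo __GFP_COMP
    [ $(( m & 0x200000 )) -ne 0 ] && echo __GFP_NOTRACK
    return 0
}

decode_gfp 0x2040d0
```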
Allocation mode explanation
1,549,043,928,000
I have 3 types of network connection and I set the metric manually for each connection: LAN cable, metric = 1 - enp0s31f6; WiFi, metric = 100 - wlp2s0; 4G/LTE modem, metric = 1000 - wwp0s20f0u8c3. The reason for setting the metric is that I want to prioritise my connectivity based on the list above (the modem connection is metered). I also set the autoconnect priority for each connection, but according to KDE#394364 this will not work as I expected, which is why I tried setting the metric instead. When each connection is activated, the system adds a strange value of 20000 to the metric. Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 100.87.7.97 0.0.0.0 UG 1000 0 0 wwp0s20f0u8c3 0.0.0.0 192.168.88.1 0.0.0.0 UG 20200 0 0 wlp2s0 100.87.7.96 0.0.0.0 255.255.255.224 U 1000 0 0 wwp0s20f0u8c3 192.168.88.0 0.0.0.0 255.255.255.0 U 200 0 0 wlp2s0 . Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.254 0.0.0.0 UG 20001 0 0 enp0s31f6 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 enp0s31f6 This value of 20000 is not always added; or, to be more precise, it gets changed after some time to the value I set for the particular connection. Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.254 0.0.0.0 UG 1 0 0 enp0s31f6 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 enp0s31f6 Any idea why this is happening? My system is Manjaro KDE running on kernel 4.16.
This is the behavior of Network Manager from version 1.8, see commit: https://github.com/NetworkManager/NetworkManager/commit/6b7e9f9b225e81d365fd95901a88a7bc59c1eb39 It says: This makes it possible to retain Internet connectivity when multiple devices have a default route, but one with the link type of a higher priority can not reach the Internet.
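For completeness, the per-connection metric can be pinned explicitly from NetworkManager itself via the ipv4.route-metric property; the connection names below are examples, and NetworkManager may still apply the penalty temporarily while it considers a device to have no connectivity:

```
# pin explicit route metrics per connection (names are examples)
nmcli connection modify "Wired" ipv4.route-metric 1
nmcli connection modify "WiFi"  ipv4.route-metric 100
nmcli connection modify "LTE"   ipv4.route-metric 1000
# re-activate a connection for the change to take effect
nmcli connection up "Wired"
```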
Metric on connections and magic 20000 value added
1,549,043,928,000
The prototype of ioctl in Linux driver modules is int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg); or long ioctl(struct file *f, unsigned int cmd, unsigned long arg); but in sys/ioctl.h it is int ioctl(int fd, int request, void *argp); The type of the first argument is different. Is there some layer between the program calling ioctl and the driver that converts this argument (from file descriptor to file structure pointer)? How does this mapping from file descriptor to file work?
In ${kernel_root}/fs/ioctl.c (in 4.13) there's:

SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd, unsigned long, arg)

That SYSCALL_DEFINE3 is a macro that takes those parameters and expands to the appropriate signature for the system call. That function is the logical entry point for the ioctl system call from user space. It, in turn, looks up the struct fd corresponding to the given file descriptor and calls do_vfs_ioctl, passing the struct file associated with the struct fd. The call will wind through the VFS layer before it reaches a driver, but that should give you a place to start looking.
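You can observe the end of that dispatch chain from user space on Linux: if the struct file behind a descriptor has no handler for the request, the VFS layer fails the call with ENOTTY. A small demo using a pipe, whose file_operations handle only FIONREAD (the helper name is mine):

```python
import errno
import fcntl
import os
import termios

def ioctl_errno(fd, request):
    """Issue an ioctl and return the errno it fails with (0 on success)."""
    try:
        fcntl.ioctl(fd, request, b"\0" * 64)
        return 0
    except OSError as e:
        return e.errno

r, w = os.pipe()
# The kernel maps fd -> struct file -> f_op; a pipe has no TCGETS
# handler, so do_vfs_ioctl falls through and the call fails with ENOTTY.
assert ioctl_errno(r, termios.TCGETS) == errno.ENOTTY
os.close(r)
os.close(w)
```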
mapping of ioctl to its definition
1,549,043,928,000
In (or after?) 2.4 the sys_call_table symbol was no longer exported to make it harder to hook system calls. Ignoring that you can still obtain this information from the IDT or by reading /boot/System.map-<kernel-version>, I was wondering why this address seems to be constant across reboots and machines (with the same kernel version)? Is it just not worth making it dynamic? Is there a reason requiring it to be static? What I am aiming at is this: sys_call_table is no longer exported to modules in order to make hooking syscalls a little harder, but modules (i.e. kernel-space programs) can still easily get this info from reading System.map or simply guessing based on the kernel release information as the address seems to be identical for all machines running the same version of the kernel.
Since at least version 4.8 of the kernel, at least on x86, the kernel address space is randomised, including the system call table’s address. See RANDOMIZE_BASE in the kernel configuration for the basic details. This means that the address given in System.map is useless, and the address of the system call table changes at every boot. If you need to debug something and want to use System.map, you need to boot with the nokaslr kernel command-line parameter to disable KASLR.
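A quick way to confirm whether a given boot disabled KASLR is to look for nokaslr among the parameters in /proc/cmdline; a minimal helper (names are mine):

```python
from pathlib import Path

def kaslr_disabled(cmdline: str) -> bool:
    """True if this kernel command line contains the nokaslr parameter."""
    return "nokaslr" in cmdline.split()

# On a live Linux system you would feed it the real command line:
#   kaslr_disabled(Path("/proc/cmdline").read_text())
assert kaslr_disabled("BOOT_IMAGE=/vmlinuz root=/dev/sda1 nokaslr quiet")
assert not kaslr_disabled("BOOT_IMAGE=/vmlinuz root=/dev/sda1 quiet")
```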
Why is sys_call_table predictable?
1,549,043,928,000
I am trying to compile the Linux kernel on an Odroid C2 and install DVB-T drivers using media_build. I followed the steps described on the official wiki and forum. According to the forum, it is necessary to make Device Drivers -> Amlogic Device Drivers -> Multimedia Support compile as modules if I want to use the backported media_build DVB-T drivers. First, I obtained the Linux source:

git clone --depth 1 https://github.com/hardkernel/linux.git -b odroidc2-3.14.y

Then I set the configuration for Odroid C2:

make odroidc2_defconfig

And finally, I followed the steps for kernel compilation and disabled the V4L dependencies as follows:

$ make menuconfig

Device Drivers
  Amlogic Device Drivers
    ION Support
      ION memory management support = no
      Amlogic ion video support
        videobuf2-ion video device support = no
        Amlogic ion video devic support = no
    V4L2 Video Support
      Amlogic v4l video device support = no
      Amlogic v4l video2 device support = no
    Amlogic Camera Support
      Amlogic Platform Capture Driver = no
  Multimedia support = m

This configuration compiles without errors, and then I can compile and install the media_build drivers. The problem is that with this configuration, the Odroid C2 is not capable of playing video files in Kodi with hardware decoding. My impression is that I disabled a dependency which is necessary for Amlogic hardware decoding. I tried to modify the config and marked as modules all the dependencies which I had previously disabled.
Unfortunately, with this configuration the kernel could not be compiled, and the compilation failed with the following error:

drivers/built-in.o: In function `v4l2_device_release':
odroid-battery.c:(.text+0x1731c0): undefined reference to `media_device_unregister_entity'
odroid-battery.c:(.text+0x1731c0): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_device_unregister_entity'
drivers/built-in.o: In function `__video_register_device':
odroid-battery.c:(.text+0x173c4c): undefined reference to `media_device_register_entity'
odroid-battery.c:(.text+0x173c4c): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_device_register_entity'
drivers/built-in.o: In function `v4l2_device_register_subdev':
odroid-battery.c:(.text+0x1797b0): undefined reference to `media_device_register_entity'
odroid-battery.c:(.text+0x1797b0): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_device_register_entity'
drivers/built-in.o: In function `v4l2_device_unregister_subdev':
odroid-battery.c:(.text+0x179a58): undefined reference to `media_entity_remove_links'
odroid-battery.c:(.text+0x179a58): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_entity_remove_links'
odroid-battery.c:(.text+0x179a60): undefined reference to `media_device_unregister_entity'
odroid-battery.c:(.text+0x179a60): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_device_unregister_entity'
drivers/built-in.o: In function `subdev_close':
odroid-battery.c:(.text+0x180c10): undefined reference to `media_entity_put'
odroid-battery.c:(.text+0x180c10): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_entity_put'
drivers/built-in.o: In function `subdev_open':
odroid-battery.c:(.text+0x1814f4): undefined reference to `media_entity_get'
odroid-battery.c:(.text+0x1814f4): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_entity_get'
odroid-battery.c:(.text+0x181540): undefined reference to `media_entity_put'
odroid-battery.c:(.text+0x181540): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `media_entity_put'
Makefile:831: recipe for target 'vmlinux' failed
make: *** [vmlinux] Error 1

How can I compile the kernel with Multimedia Support as modules and still be able to use hardware decoding for video files?
I finally made it work. I created a git repository with a script, patches, and instructions. If someone is also dealing with this issue, please clone this repository and do the following steps (these are also described in the README in the repository):

Linux

Clone the Hardkernel Linux repository:

git clone --depth 1 https://github.com/hardkernel/linux.git -b odroidc2-3.14.y
cd linux

Apply the patch which allows you to compile the aml video driver as a module (I took this step from the LibreELEC media_build edition):

patch -p1 < ../odroidC2-kernel/allow_amlvideodri_as_module.patch

Apply the default Odroid C2 config:

make odroidc2_defconfig

Now modify the config:

make menuconfig

And set the following values (press Y to select, N to remove and M to select as a module):

Device Drivers
  Amlogic Device Drivers
    ION Support
      ION memory management support = Yes
      Amlogic ion video support
        videobuf2-ion video device support = M
        Amlogic ion video devic support = no
    V4L2 Video Support
      Amlogic v4l video device support = M
      Amlogic v4l video2 device support = no
    Amlogic Camera Support
      Amlogic Platform Capture Driver = no
  Multimedia support = M

Compile the kernel:

make -j5 LOCALVERSION=""

The LOCALVERSION parameter is only there to avoid a "+" sign in the name of the kernel. After the successful compilation, install the modules and the kernel, and reboot the system:

sudo make modules_install
sudo cp -f arch/arm64/boot/Image arch/arm64/boot/dts/meson64_odroidc2.dtb /media/boot/
sudo sync
sudo reboot

Media build

Clone the media_build repository and try to build it:

git clone https://git.linuxtv.org/media_build.git
cd media_build
./build

The build command probably fails. Ignore this error and continue with the following steps. The following script is also inspired by the LibreELEC media_build edition; it just includes the video driver in the media module:

../odroidC2-kernel/add_video_driver_module.sh

To avoid potential issues with compilation, try to disable Remote controller support and all the USB adapters you don't need.
Try to run:

make menuconfig

It will probably result in an error similar to the following one:

./Kconfig:694: syntax error
./Kconfig:693: unknown option "Enable"
./Kconfig:694: unknown option "which"

You need to edit the file v4l/Kconfig and align with spaces the lines printed in the error. The lines need to be aligned with the previous ones. Then, run make menuconfig again. You will probably need to do this step multiple times. If you see a menu instead of the error, you can modify the config in the following way:

Remote Controller support = no
Multimedia support
  Media USB Adapters
    ## Disable all drivers you don't need ##

Apply the following patch:

patch -p1 < ../odroidC2-kernel/warning.patch

Make the following change to avoid an error, and compile the kernel:

sed -i 's/#define NEED_PM_RUNTIME_GET 1/\/\/#define NEED_PM_RUNTIME_GET 1/g' v4l/config-compat.h
make -j5

Possibly, you need to run the previous step (both sed and make) multiple times before it succeeds. After the compilation, install the modules and reboot the system:

sudo make install
sudo reboot

The final step is to add the amlvideodri module to /etc/modules to make it load on boot (note the redirection has to happen with root privileges, hence tee):

echo "amlvideodri" | sudo tee -a /etc/modules

That's all. You can now enjoy your DVB-T TV and HW accelerated videos in Kodi.
How to compile Linux kernel with multimedia as module on Odroid C2
1,549,043,928,000
We got a few new machines: x3850 x6. All could PXE boot fine, except one machine, which gives the following kernel panic; it looks like an exciting issue. We cannot even scroll up after the kernel panic occurs, after 30-40 seconds. It hangs so badly that I cannot even type anything. Does anybody have any clue what the problem could possibly be? If it is a HW bug, then what should be replaced? CPUs? Motherboard?

the BIOS settings are exactly the same as on the working ones
the firmware/BIOS versions are exactly the same as on the working ones
tried cold boot, the same kernel panic
tried to boot with kernel parameter "acpi=off" - it just did the same kernel panic at ~18 sec, instead of the usual panic at 30-40 sec.
tried: "noapic nomodeset xforcevesa" - panic after 30-40 sec.
tried: "acpi=off noapic nomodeset xforcevesa" - panic after 30-40 sec.
tried: "isolcpus=0" boot param, same kernel panic after 30-40 sec.
tried to boot a slacko-5.6-PAE.iso - it booted normally! 3.10.5 SMP PAE. But we have to use SLES. The PAE kernel only sees ~65 GByte RAM, if that is useful info.
tried https://www.memtest86.com/downloads/memtest86-iso.zip to run a simple memtest, but after 59 seconds of running without a memory error, the machine froze. -> UPDATE: The Memtest86+ from http://www.memtest.org/#downiso doesn't freeze.
Once I saw: "Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 18" - there are 4 CPUs in the machine, each with 18 cores, so I don't know which physical CPU that is.

UPDATE: with the "maxcpus=0" kernel boot parameter, it finally booted, but we are still investigating, because it still said: "A start job is running for Activation of LVM2 logical volumes (Xmin Xs / no limit)" - so maybe a CPU HW issue?
After an Emulex card driver upgrade (from version 11.0.270.24 to 11.4.1186.3), it doesn't kernel panic any more.
Fixing recursive fault but reboot is needed on x3850 x6 SLES12 [closed]
1,549,043,928,000
I'm totally new to the Linux kernel. In "Understanding the Linux Kernel", page 279, the author says the following, where prev refers to the process that called schedule():

schedule() examines the state of prev. If it is not runnable and it has not been preempted in Kernel Mode (see the section “Returning from Interrupts and Exceptions” in Chapter 4), then it should be removed from the runqueue. However, if it has nonblocked pending signals and its state is TASK_INTERRUPTIBLE, the function sets the process state to TASK_RUNNING and leaves it into the runqueue.

Why must prev stay in the run queue instead of going to sleep? What if prev is not runnable and preempted in kernel mode?
From what you said above: ... if it has nonblocked pending signals and its state is TASK_INTERRUPTIBLE ... By leaving it in the run queue, it'll give the process an opportunity to handle its pending signals. A process ought to handle pending signals before it sleeps.
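The decision path quoted from the book can be sketched as a toy function (the states and names here are simplified stand-ins for illustration, not the real kernel code):

```python
TASK_RUNNING = "TASK_RUNNING"
TASK_INTERRUPTIBLE = "TASK_INTERRUPTIBLE"
TASK_UNINTERRUPTIBLE = "TASK_UNINTERRUPTIBLE"

def handle_prev(state, preempted_in_kernel, pending_signal):
    """Return (new_state, stays_on_runqueue) for the process that
    called schedule(), per the logic quoted in the question."""
    if state == TASK_RUNNING or preempted_in_kernel:
        return state, True                    # still runnable: leave it queued
    if state == TASK_INTERRUPTIBLE and pending_signal:
        return TASK_RUNNING, True             # wake it so it can handle the signal
    return state, False                       # genuinely blocked: remove from runqueue

# Nonblocked pending signal while TASK_INTERRUPTIBLE: kept runnable
assert handle_prev(TASK_INTERRUPTIBLE, False, True) == (TASK_RUNNING, True)
# No signal: it sleeps and is dequeued
assert handle_prev(TASK_INTERRUPTIBLE, False, False) == (TASK_INTERRUPTIBLE, False)
# TASK_UNINTERRUPTIBLE ignores signals entirely
assert handle_prev(TASK_UNINTERRUPTIBLE, False, True) == (TASK_UNINTERRUPTIBLE, False)
```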
Why must a process in TASK_INTERRUPTIBLE state that is preempted stay in run queue?
1,549,043,928,000
Some months ago I realized that when I am writing a new post on my blog (with Hugo), the feature to reload the contents as files change stopped working. I waited in order to see if it was a problem with Hugo, but the problem is with my Gentoo. For example, if I rename a file, the file manager does not see it immediately; I have to press F5 to be able to see the renamed file. The same happens if I download a file while the folder into which it is being downloaded is open in the file manager. I thought the problem might be that I didn't have inotify-tools installed, but it is installed. In my kernel configuration I have inotify enabled:

grep inot .config
CONFIG_INOTIFY_USER=y

Any idea which package I could have accidentally removed?
I found the problem! I am working with ensime, and in its logs I saw this exception:

java.lang.Exception: java.io.IOException: User limit of inotify watches reached

So I just needed to increase the number of files that can be watched, via sysctl:

fs.inotify.max_user_watches=32768

Now everything is working right.
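For anyone hitting the same wall: the current per-user limit is exposed under /proc, so you can check it before raising it. A small Linux-only helper (the path is the standard procfs location for this sysctl):

```python
from pathlib import Path

def inotify_max_user_watches():
    """Read the per-user inotify watch limit from procfs (Linux only).
    Watch-based tools stop seeing changes, often silently, once a
    process exhausts this budget."""
    return int(Path("/proc/sys/fs/inotify/max_user_watches").read_text())

limit = inotify_max_user_watches()
print(f"current per-user inotify watch limit: {limit}")
assert limit > 0
```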
System unable to detect renamed/new files
1,549,043,928,000
blockdev has this option --getmaxsect to "get max sectors per request". The BLOCKDEV(8) manual page, however, doesn't state what max sectors per request means. I ran this command on my system and got the following result:

# blockdev --getmaxsect /dev/sda
2560
blockdev is a basic interface to the block device ioctls; in --getmaxsect’s case, it uses BLKSECTGET, which returns the maximum number of sectors for a request in the queue associated with the block device. There doesn’t seem to be much documentation on this; include/linux/blkdev.h is relevant. It is mentioned briefly in chapter 12 of Linux Device Drivers, 2nd edition: BLKSECTGET BLKSECTSET These commands retrieve and set the maximum number of sectors per request (as stored in max_sectors). In summary, blockdev --getmaxsect returns the maximum number of sectors which can be used in a single request to the block device.
blockdev command: what is maximum sectors per request?
1,549,043,928,000
I have downloaded the latest Linux kernel and the Next tree. I want to run sparse on the 'drivers/staging' tree, so I tried enabling all the drivers via make menuconfig and then did:

make C=1 M=drivers/staging

But the above command only builds some of the drivers, not all. How do I enable more staging drivers to be built?
There is a special symbol in the Kconfig files called BROKEN. Code which doesn't work correctly at all (usually catastrophically fails) is marked in the Kconfig files with a dependency on this symbol, which is not defined anywhere by Kconfig itself, and therefore is not set by allyesconfig or any other automatic config targets. A reasonable percentage of the drivers in the staging tree fall into this category, and thus make allyesconfig will not include many of them. I'm not 100% certain, but I believe that you can manually add BROKEN=y at the end of the .config file in your build directory, then manually enable the Kconfig symbols either by adding them by hand in a similar manner to BROKEN, or through make menuconfig. You may also need to enable the COMPILE_TEST symbol, but that one has an entry in the menuconfig UI (it's in the first sub-menu, near the top), and even then there is a possibility that some of the drivers may be architecture dependent.
How to compile all drivers in the staging tree of the Linux Kernel?
1,549,043,928,000
Recently on a few machines, mount points have started to disappear (one mountpoint per server, at random intervals and on random machines) and I can find nothing in the logs. I have five mountpoints and randomly any of them will go away. There is no relation between the disappearance and the mountpoint protocol (both TCP and UDP mounts will disappear).

What I have not tried:

Run tcpdump continuously (and I am reluctant to do so, since this issue happens once every 2-3 days...)

Info about the machines:

NFS booted, the boot server is FreeBSD 11.0 (nothing in its logs, btw). rootfs options are:

(rw,noatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=ADDRESS,mountvers=3,mountport=677,mountproto=udp,local_lock=all,addr=ADDRESS)

OS is CentOS 7, running the 4.11.0-1 ML kernel. Example mount options:

(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=ADDRESS,mountvers=3,mountport=4002,mountproto=udp,local_lock=none,addr=ADDRESS)
(rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=udp,timeo=11,retrans=3,sec=sys,mountaddr=ADDRESS,mountvers=3,mountport=4002,mountproto=udp,local_lock=none,addr=ADDRESS,_netdev)

Info about the NFS shares and server:

I have in total 5 distinct NFS servers, load balancing is done over DNS, and all export the same mountpoints (a few shares UDP, a few TCP). The server running the NFS server is RHEL 6.7 (Santiago), kernel version 2.6.32-573.el6; nfs-common/server/client is 4.7.3. As you have also guessed, nothing in the server logs relates to this problem.
Example of export options:

(rw,async,root_squash,no_wdelay,no_subtree_check,fsid=ID)

Things I have tried so far:

My first assumption was that I have a process which, for some bizarre reason, calls umount or umount2 on a random NFS share. After tracing with sysdig for the unlinkat, unlink, unmount and remove system calls, I can only see systemd-logind doing umount2 when a user session is destroyed, but not on the mountpoints. The sysdig filter I used in a chisel is posted below:

function on_init()
    local filename = path
    for i in string.gmatch(path, "[^/]+") do
        filename = i
    end
    print("PID\tPROC_NAME\tPROC_EXEC\tPROC_SID\tPROC_PNAME\tPROC_PPID\tPROC_EXELINE\tPROC_PCMDLINE")
    chisel.set_event_formatter("%proc.pid\t%proc.name\t%proc.exec\t%proc.sid\t%proc.pname\t%proc.ppid\t%proc.exeline\t%proc.pcmdline ")
    chisel.set_filter(
        "(evt.type=unlinkat and evt.arg.name=" .. path .. ") or \
         (evt.type=unlink and evt.arg.path=" .. path .. ") or \
         (evt.type=umount) or \
         (evt.type=remove and evt.arg.path=" .. path .. ")")
    return true
end

The unmount happened again randomly, but the filter was unable to see it. Thinking the filter was flawed, I created a program that unmounts a share both with umount and umount2 (I tried both the lazy and the force umount flags), and the filter detected them correctly, so this leads me to believe that the kernel is unmounting things. I have nothing in my logs, not even the usual "nfs not responding" message that appears when there is a problem with the share. If I log in on a machine and remount, the remount is successful without any problem. I have numerous clients running from the same setup and this does not happen there. The only thing this group of machines has in common is their network segment and the NFS boot server. But I fail to see why absolutely nothing would be reported if communication between the server and the client died.
If anybody cares, this was an NFS bug in the kernel. It should be fixed by commit cc89684c9a265828ce061037f1f79f4a68ccd3f7:

NFS: only invalidate dentrys that are clearly invalid

Since commit bafc9b754f75 ("vfs: More precise tests in d_invalidate") in v3.18, a return of '0' from ->d_revalidate() will cause the dentry to be invalidated even if it has filesystems mounted on or it or on a descendant. The mounted filesystem is unmounted. ...
NFS mounts getting unmounted, possibly by the kernel
1,549,043,928,000
I'm trying to set up a device tree source file for the first time on my custom platform. On the board is an NXP PCA9555 GPIO expander. I'm attempting to set up a node for the device and am a bit confused. Here is where I'm at with the node in the dts file:

ioexp0: gpio-exp@21 {
    compatible = "nxp,pca9555";
    reg = <21>;
    interrupt-parent = <&gpio>;
    interrupts = <8 0>;
    gpio-controller;
    #gpio-cells = <2>;

    /* I don't understand the following two lines */
    interrupt-controller;
    #interrupt-cells = <2>;
};

I got to this point by using the armada-388-gp.dts source as a guide. My confusion is about what code processes the #interrupt-cells property. The bindings documentation is not very helpful at all for this chip, as it doesn't say anything regarding interrupt cell interpretation. Looking at the pca953x_irq_setup function in the source code for the pca9555 driver, I don't see anywhere that the #interrupt-cells property is handled. Is this handled in the Linux interrupt handling code? I'm just confused as to how I'm supposed to know the meaning of the two interrupt cells.

pca953x_irq_setup for your convenience:

static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)
{
	struct i2c_client *client = chip->client;
	int ret, i;

	if (client->irq && irq_base != -1 && (chip->driver_data & PCA_INT)) {
		ret = pca953x_read_regs(chip, chip->regs->input, chip->irq_stat);
		if (ret)
			return ret;

		/*
		 * There is no way to know which GPIO line generated the
		 * interrupt. We have to rely on the previous read for
		 * this purpose.
		 */
		for (i = 0; i < NBANK(chip); i++)
			chip->irq_stat[i] &= chip->reg_direction[i];
		mutex_init(&chip->irq_lock);

		ret = devm_request_threaded_irq(&client->dev, client->irq,
					NULL, pca953x_irq_handler,
					IRQF_TRIGGER_LOW | IRQF_ONESHOT |
						IRQF_SHARED,
					dev_name(&client->dev), chip);
		if (ret) {
			dev_err(&client->dev, "failed to request irq %d\n",
				client->irq);
			return ret;
		}

		ret = gpiochip_irqchip_add_nested(&chip->gpio_chip,
						  &pca953x_irq_chip,
						  irq_base,
						  handle_simple_irq,
						  IRQ_TYPE_NONE);
		if (ret) {
			dev_err(&client->dev,
				"could not connect irqchip to gpiochip\n");
			return ret;
		}

		gpiochip_set_nested_irqchip(&chip->gpio_chip,
					    &pca953x_irq_chip,
					    client->irq);
	}

	return 0;
}

This is my first time working with device tree, so I'm hoping it's something obvious that I'm just missing.

UPDATE: As a clarification, I am working with kernel version 4.12-rc4 at the moment. I now understand that I was misinterpreting some properties of the device tree. I was previously under the impression that the driver had to specify how all properties were handled. I now see that Linux will actually handle many of the generic properties, such as gpios or interrupts (which makes a lot of sense). Here is a bit more detail on how the translation from intspec to IRQ_TYPE* happens:

The function of_irq_parse_one copies the interrupt specifier integers to a struct of_phandle_args. This arg is then passed to irq_create_of_mapping via a consumer function (e.g. of_irq_get). This function then maps these args to a struct irq_fwspec via of_phandle_args_to_fwspec and passes its fwspec data to irq_create_fwspec_mapping. These functions are all found in irqdomain.c. At this point the irq will belong to an irq_domain or use the irq_default_domain. As far as I can tell, the pca953x driver uses the default domain. This domain is often set up by platform-specific code. I found mine by searching for irq_domain_ops on cross reference.
A lot of these seem to do simple copying of intspec[1] & IRQ_TYPE_SENSE_MASK to the type variable in irq_create_fwspec_mapping via irq_domain_translate. From here the type is set to the irq's irq_data via irqd_set_trigger_type.
Read section 2 in this: Specifying interrupt information for devices

...

2) Interrupt controller nodes

A device is marked as an interrupt controller with the "interrupt-controller" property. This is an empty, boolean property. An additional "#interrupt-cells" property defines the number of cells needed to specify a single interrupt.

It is the responsibility of the interrupt controller's binding to define the length and format of the interrupt specifier. The following two variants are commonly used:

a) one cell

The #interrupt-cells property is set to 1 and the single cell defines the index of the interrupt within the controller.

Example:

vic: intc@10140000 {
    compatible = "arm,versatile-vic";
    interrupt-controller;
    #interrupt-cells = <1>;
    reg = <0x10140000 0x1000>;
};

sic: intc@10003000 {
    compatible = "arm,versatile-sic";
    interrupt-controller;
    #interrupt-cells = <1>;
    reg = <0x10003000 0x1000>;
    interrupt-parent = <&vic>;
    interrupts = <31>; /* Cascaded to vic */
};

b) two cells

The #interrupt-cells property is set to 2 and the first cell defines the index of the interrupt within the controller, while the second cell is used to specify any of the following flags:

bits[3:0] trigger type and level flags
    1 = low-to-high edge triggered
    2 = high-to-low edge triggered
    4 = active high level-sensitive
    8 = active low level-sensitive

Example:

i2c@7000c000 {
    gpioext: gpio-adnp@41 {
        compatible = "ad,gpio-adnp";
        reg = <0x41>;

        interrupt-parent = <&gpio>;
        interrupts = <160 1>;

        gpio-controller;
        #gpio-cells = <1>;

        interrupt-controller;
        #interrupt-cells = <2>;

        nr-gpios = <64>;
    };

    sx8634@2b {
        compatible = "smtc,sx8634";
        reg = <0x2b>;

        interrupt-parent = <&gpioext>;
        interrupts = <3 0x8>;

        #address-cells = <1>;
        #size-cells = <0>;

        threshold = <0x40>;
        sensitivity = <7>;
    };
};

So for the two-cell variant, the first number is an index and the second is a bit mask defining the type of the interrupt input. This part of the device tree is handled by code in drivers/of/irq.c (e.g. of_irq_parse_one()).
The two lines you refer to in the quoted example declare the device (gpio-exp@21) to be an interrupt controller; any other device that wants to use it must provide two cells per interrupt. Just above those lines is an example of the device itself specifying an interrupt in another interrupt controller (not this one, but the device with alias gpio), via the two properties interrupt-parent and interrupts (or you could use the newer interrupts-extended, which allows different interrupt controllers for each interrupt by specifying the parent as the first cell of the property).
Confusion regarding #interrupt-cells configuration on PCA9555 expander
1,549,043,928,000
My system contains 2 Nvidia cards. What I'm trying to achieve is one card driven by the nouveau driver while the other is driven by the official nvidia blob driver. Both drivers successfully cohabit if the nvidia one is launched automatically on boot, using a specific nvidia driver option, "nvidia_340.NVreg_AssignGpus=0:02:00.", that makes the driver probe only a specific device, while the nouveau driver is launched manually with modprobe, probing the other, unused device. I would like to automate things by making both modules load on boot, but I have not managed to tell the nouveau driver to probe only one of the two graphic cards. The loading order of the modules seems nondeterministic, and when the nouveau module is loaded before the nvidia module it probes both cards and prevents the official nvidia driver from accessing the other one. I know I could create a systemd service to execute modprobe nouveau during the boot phase (which runs well after the load of the nvidia module), but I guess there is a better way to do that. I think of udev, but as I don't know it much I'm not sure it is the way to go. What is the proper way to handle this?
So, the path to the solution was not easy, but the solution itself is surprisingly straightforward: the idea is to use the install directive in an /etc/modprobe.d/ configuration file to redefine the way the nvidia driver is run through modprobe. I set the following inside a file /etc/modprobe.d/nvidia-with-nouveau.conf:

install nvidia_340 /sbin/modprobe --ignore-install nvidia_340; /sbin/modprobe nouveau

which instructs the kernel how to start the nvidia module (mine is version 340). Through this instruction I tell it to start nvidia first, then nouveau. --ignore-install is needed to prevent the kernel from reusing the install directive to launch the nvidia module, which could result in some kind of infinite loop, I presume. install and the other directives available in /etc/modprobe.d configuration files are well explained in man modprobe.d.

It is important to keep the nouveau driver blacklisted to prevent it from being started on its own. On Ubuntu, the Nvidia drivers, when installed through deb packages from the official Ubuntu repositories, blacklist the nouveau module by installing the file /etc/modprobe.d/nvidia-340_hybrid.conf (that applies to me; it can be different on other OSes and driver versions). This file contains the following:

blacklist nouveau
blacklist lbm-nouveau
alias nouveau off
alias lbm-nouveau off

The following lines create an alias for nouveau to off and must be commented out:

#alias nouveau off
#alias lbm-nouveau off

Finally, I guess, updating the initramfs is required for these changes to be taken into account:

sudo update-initramfs -u

I can now enjoy a multi-seat config with one seat on nouveau and the other on the nvidia driver.
How to prevent a kernel module video driver to probe a specific graphic card device
1,549,043,928,000
I use Debian jessie on a Xen server and now I am concerned about the issue CVE-2016-10229:

udp.c in the Linux kernel before 4.5 allows remote attackers to execute arbitrary code via UDP traffic that triggers an unsafe second checksum calculation during execution of a recv system call with the MSG_PEEK flag.

I want to check if the issue is solved on my server and VMs. On the Dom0:

$ uname -a
Linux xen-eclabs 4.5.0-1-amd64 #1 SMP Debian 4.5.1-1 (2016-04-14) x86_64 GNU/Linux
$ dpkg -l | grep linux-
ii  linux-base                      3.5               all    Linux image base package
ii  linux-image-3.16.0-4-amd64      3.16.39-1+deb8u2  amd64  Linux 3.16 for 64-bit PCs
ii  linux-image-4.3.0-1-amd64       4.3.3-7           amd64  Linux 4.3 for 64-bit PCs
ii  linux-image-4.5.0-1-amd64       4.5.1-1           amd64  Linux 4.5 for 64-bit PCs
ii  linux-image-amd64               3.16+63           amd64  Linux for 64-bit PCs (meta-package)
ii  xen-linux-system-4.3.0-1-amd64  4.3.3-7           amd64  Xen system with Linux 4.3 on 64-bit PCs (meta-package)
ii  xen-linux-system-4.5.0-1-amd64  4.5.1-1           amd64  Xen system with Linux 4.5 on 64-bit PCs (meta-package)
ii  xen-linux-system-amd64          4.5+72            amd64  Xen system with Linux for 64-bit PCs (meta-package)

The site says it is fixed in the package called "linux", in jessie 3.16.39-1. But which package is this "linux"? I don't have a package installed that is simply called "linux". How do I understand this connection?
You have two Xen-enabled Linux kernel versions installed, namely 4.3.3-7 and 4.5.1-1, alongside the regular non-Xen production kernels 3.16.0-4, 4.3.3-7 and 4.5.1-1. The regular kernel packages for amd64 (64-bit PCs) are linux-image*-amd64; the Xen-enabled ones are xen-linux-system*-amd64. The corresponding Xen packages from your listing are:

xen-linux-system-4.3.0-1-amd64, version 4.3.3-7
xen-linux-system-4.5.0-1-amd64, version 4.5.1-1

It seems from the output of your uname that the 4.5 version is active, which means you are not vulnerable. Nonetheless, while the kernel logs say the issue was fixed by v4.5-rc1, if the Debian logs say it is fixed in 3.16.39-1, it means the fixes were backported to the older version's source code, as they usually are. You can always deinstall the older kernel version with the command:

sudo dpkg --purge xen-linux-system-4.3.0-1-amd64
Check if CVE-2016-10229 is fixed in my XEN Debian Linux Server
1,549,043,928,000
According to Wikipedia and many other sources,

Since PCB contains the critical information for the process, it must be kept in an area of memory protected from normal user access. In some operating systems the PCB is placed in the beginning of the kernel stack of the process since that is a convenient protected location.

It makes a lot of sense: when a switch occurs, the current context has to be saved somewhere, and a (kernel) stack looks like a good place to do that. However, Tanenbaum states that

To implement the process model, the operating system maintains a table (an array of structures), called the process table, with one entry per process. (Some authors call these entries process control blocks.)

Later, Tanenbaum mentions that the process context is saved onto a stack. Clearly, the process table and the stack are different beasts, and now I am confused: what is the relationship between the stack and the process table?
Tanenbaum is just saying that there are two common ways of storing information about a process. How a particular OS chooses to do that — on some kernel stack or in a table/array — is just one of the myriad freedoms available to the OS designer. The OS designer doesn't even have to call them process control blocks.
What is the relationship between the stack and the process table?
1,549,043,928,000
One day I chose to install kernel 4.8.0-39, but the installation failed with an error; I didn't pay much attention to it at the time. Now I want to install updates and the terminal shows errors related to that kernel. Checking the /var/lib/dkms/ndiswrapper/1.59/build/make.log file, I found the following:
/var/lib/dkms/ndiswrapper/1.59/build/wrapndis.c: In function ‘tx_worker’:
/var/lib/dkms/ndiswrapper/1.59/build/wrapndis.c:707:16: error: ‘struct net_device’ has no member named ‘trans_start’
 wnd->net_dev->trans_start = jiffies;
                ^
scripts/Makefile.build:289: recipe for target «/var/lib/dkms/ndiswrapper/1.59/build/wrapndis.o» failed
make[1]: *** [/var/lib/dkms/ndiswrapper/1.59/build/wrapndis.o] Error 1
Makefile:1491: recipe for target «_module_/var/lib/dkms/ndiswrapper/1.59/build» failed
make: *** [_module_/var/lib/dkms/ndiswrapper/1.59/build] Error 2
make: exit from directory «/usr/src/linux-headers-4.8.0-39-generic»
If I understand correctly, the module cannot be compiled against this kernel and returns an error, which makes the whole upgrade fail. What do I need to do to remove all traces of the 4.8.0-39 kernel? I've already tried running:
sudo apt-get install --reinstall linux-headers-4.8.0-39-generic
sudo apt autoremove
sudo dpkg --configure -a
sudo apt-get install -f
sudo apt remove linux-headers-4.8.0-39
and every time I got this:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'linux-headers-4.8.0-39' is not installed, so not removed
The following packages will be REMOVED:
  linux-image-extra-4.8.0-39-generic
0 upgraded, 0 newly installed, 1 to remove and 43 not upgraded.
2 not fully installed or removed.
After this operation, 162 MB disk space will be freed.
Do you want to continue? [Y/n]
(Reading database ... 383195 files and directories currently installed.)
Removing linux-image-extra-4.8.0-39-generic (4.8.0-39.42~16.04.1) ...
depmod: FATAL: could not load /boot/System.map-4.8.0-39-generic: No such file or directory run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.8.0-39-generic /boot/vmlinuz-4.8.0-39-generic run-parts: executing /etc/kernel/postinst.d/dkms 4.8.0-39-generic /boot/vmlinuz-4.8.0-39-generic Error! echo Your kernel headers for kernel 4.8.0-39-generic cannot be found at /lib/modules/4.8.0-39-generic/build or /lib/modules/4.8.0-39-generic/source. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.8.0-39-generic /boot/vmlinuz-4.8.0-39-generic update-initramfs: Generating /boot/initrd.img-4.8.0-39-generic Warning: No support for locale: ru_RU.utf8 depmod: WARNING: could not open /var/tmp/mkinitramfs_jTYeTT/lib/modules/4.8.0-39-generic/modules.order: No such file or directory depmod: WARNING: could not open /var/tmp/mkinitramfs_jTYeTT/lib/modules/4.8.0-39-generic/modules.builtin: No such file or directory gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-4.8.0-39-generic with 1. 
run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
dpkg: error processing package linux-image-extra-4.8.0-39-generic (--remove):
 subprocess installed post-removal script returned error exit status 1
Errors were encountered while processing:
 linux-image-extra-4.8.0-39-generic
E: Sub-process /usr/bin/dpkg returned an error code (1)
My system:
Linux PCNAME 4.4.0-63-generic #84-Ubuntu SMP Wed Feb 1 17:20:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Distributor ID: LinuxMint
Description: Linux Mint 18.1 Serena
Release: 18.1
Codename: serena
EDITED:
~$ ll -d /var/tmp
drwxrwxrwt 42 root root 4096 мар 2 02:12 /var/tmp/
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,7G 0 7,7G 0% /dev
tmpfs 1,6G 9,6M 1,6G 1% /run
/dev/sda2 48G 13G 34G 28% /
tmpfs 7,7G 207M 7,5G 3% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 7,7G 0 7,7G 0% /sys/fs/cgroup
/dev/sda3 268M 261M 0 100% /boot
/dev/sda4 149G 138G 8,1G 95% /home
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 1,6G 44K 1,6G 1% /run/user/1000
@Bruno9779 Yeah, it seems you are absolutely right: my /boot doesn't have enough space. I forgot that /boot is on a separate partition. I can't delete anything, though, because apt-get tries to remove the 4.8.0-39 kernel first and crashes. I wanted to delete some kernels manually, but decided not to touch anything while the system still works. Right now I have the kernels 4.4.0-53, 4.4.0-59, 4.4.0-62, 4.4.0-63 and 4.4.0-64 installed, and I boot from 4.4.0-64.
The real problem is: gzip: stdout: No space left on device Confirm the issue with: df -h The error messages generated as a result of "out of disk space" are often misleading. EDIT: Apparently your boot partition is full. /dev/sda3 268M 261M 0 100% /boot You need to make some space there before you can install/reinstall any kernel. Get a list of installed kernels: dpkg --list | grep linux-image Get the version number of the running kernel: uname -r Now remove some unused kernels with the package manager
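For the last step, a rough sketch of how the cleanup usually looks. The package version numbers below are hypothetical examples, not a statement about which kernels this particular machine has; always keep the running kernel plus at least one fallback.

```shell
# Identify the running kernel first, so it is never removed by mistake
running=$(uname -r)
echo "running: $running"

# On the affected box you would then list installed kernel images:
#   dpkg --list | grep linux-image
# and purge old ones (hypothetical versions), e.g.:
#   sudo apt-get purge linux-image-4.4.0-53-generic linux-image-extra-4.4.0-53-generic
#   sudo update-grub

# Portable illustration: filter the running kernel out of a candidate list
# (package names here are made up for the example)
installed="linux-image-4.4.0-53-generic
linux-image-4.4.0-59-generic
linux-image-${running}"
old=$(printf '%s\n' "$installed" | grep -v -F "$running")
echo "$old"
```

Purging one or two of the oldest images usually frees enough space in /boot for the failed postinst scripts (initramfs generation) to complete, after which the broken 4.8.0-39 packages can be removed normally.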
Error when removing kernel 4.8.0-39
1,549,043,928,000
My Microsoft Designer Bluetooth Mouse stops working after a short time.
UPDATE: It is a general Bluetooth problem. I tried sending files from my mobile to my laptop and received only 1 file; the Bluetooth connection stalled afterwards.
I am running the latest openSUSE Tumbleweed on a Lenovo T460s. Previously I was running openSUSE Leap 42.2 with kernel 4.0.36 and the Bluetooth mouse worked flawlessly, but on older kernels there is an issue with Skylake processors that made my system freeze - https://forums.opensuse.org/showthread.php/521718-Frequent-lockups-freezes .
I was able to pin the problem down to the following:
sudo systemctl stop NetworkManager
The mouse will work flawlessly. As soon as I start NetworkManager, the mouse stops working and the Bluetooth icon in GNOME Shell signals a Bluetooth connection forever. I can then run:
sudo systemctl restart bluetooth
which makes the mouse work for only a few seconds. Here comes the even stranger part. If I exclude the wlan0 interface in /etc/NetworkManager/NetworkManager.conf
[keyfile]
unmanaged-devices=interface-name:wlan0
the mouse works flawlessly again, but of course WiFi is then not managed by NetworkManager, which is undesirable. So something in NetworkManager is interfering with Bluetooth as long as the wlan0 device is managed. Specs:
mike@think:~> cat /etc/issue
Welcome to openSUSE Tumbleweed 20161226 - Kernel \r (\l).
mike@think:~> uname -a
Linux think.suse 4.9.0-2-default #1 SMP PREEMPT Fri Dec 16 19:51:27 UTC 2016 (6fbc0c0) x86_64 x86_64 x86_64 GNU/Linux
mike@think:~> sudo dmidecode -t bios
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.
Handle 0x000C, DMI type 0, 24 bytes
BIOS Information
Vendor: LENOVO
Version: N1CET52W (1.20 ) <-- latest
...
mike@think:~> sudo systemctl status bluetooth ● bluetooth.service - Bluetooth service Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2017-01-06 11:27:31 CET; 17min ago Docs: man:bluetoothd(8) Main PID: 1191 (bluetoothd) Status: "Running" Tasks: 1 (limit: 512) CGroup: /system.slice/bluetooth.service └─1191 /usr/lib/bluetooth/bluetoothd Jan 06 11:27:31 think systemd[1]: Starting Bluetooth service... Jan 06 11:27:31 think bluetoothd[1191]: Bluetooth daemon 5.43 Jan 06 11:27:31 think bluetoothd[1191]: Starting SDP server Jan 06 11:27:31 think systemd[1]: Started Bluetooth service. Jan 06 11:27:31 think bluetoothd[1191]: Bluetooth management interface 1.14 initialized Jan 06 11:27:32 think.suse bluetoothd[1191]: Failed to obtain handles for "Service Changed" characteristic Jan 06 11:27:32 think.suse bluetoothd[1191]: Sap driver initialization failed. Jan 06 11:27:32 think.suse bluetoothd[1191]: sap-server: Operation not permitted (1) Jan 06 11:27:34 think.suse bluetoothd[1191]: Endpoint registered: sender=:1.26 path=/MediaEndpoint/A2DPSource Jan 06 11:27:34 think.suse bluetoothd[1191]: Endpoint registered: sender=:1.26 path=/MediaEndpoint/A2DPSink mike@think:~> nmcli -v nmcli tool, version 1.4.4 I have also tried udev rules using vendor and productid for my mouse and bluetooth hub leveraging NM_UNMANAGED (man NetworkManager) without success. I tried turning on DEBUG logging in NetworkManager.conf (man NetworkManager.conf) but can see nothing interesting when the mouse failure occurs. The same applies if I start usr/lib/bluetooth/bluetoothd -n --debug 2>&1 debugging. Nothing to see. I am out of options. 
Any help is appreciated, because I would like to have a mouse and internet access at the same time :)
UPDATE lspci
mike@think:~> sudo lspci -nnk | grep -iA2 net
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection I219-LM [8086:156f] (rev 21)
Subsystem: Lenovo Device [17aa:2233]
Kernel driver in use: e1000e
--
04:00.0 Network controller [0280]: Intel Corporation Wireless 8260 [8086:24f3] (rev 3a)
Subsystem: Intel Corporation Device [8086:0130]
Kernel driver in use: iwlwifi
I was able to improve the situation by disabling bt_coex in the iwlwifi module:
cat /etc/modprobe.d/50-iwlwifi.conf
options iwlwifi bt_coex_active=0
The only issue left now is that after waking from suspend I have to restart the bluetooth service to make the mouse work again. Restarting NetworkManager still kills Bluetooth, but when I connect my mouse after WiFi is established, the connection no longer stalls, and disconnects/reconnects (turning the mouse off and on) are handled without errors.
Since I deactivated bt_coex, the problem is solved for me. If I don't connect the mouse too soon after waking from suspend (i.e. I wait for WiFi to be established first), everything works fine.
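For reference, the workaround from the question can be applied like this. This is only a sketch of the steps already described above; it requires root, and a reboot works just as well as reloading the module.

```shell
# Write the modprobe option that disables Bluetooth/WiFi coexistence in
# iwlwifi (the file path and option are the ones shown in the question)
echo "options iwlwifi bt_coex_active=0" | sudo tee /etc/modprobe.d/50-iwlwifi.conf

# Reload the driver so the option takes effect (or simply reboot)
sudo modprobe -r iwlwifi && sudo modprobe iwlwifi
```

Note that disabling coexistence trades away the driver's WiFi/Bluetooth arbitration on a shared radio, so it is a workaround rather than a proper fix; test whether WiFi throughput is still acceptable afterwards.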
Bluetooth Mouse stops working after a few seconds (NetworkManager issue)
1,549,043,928,000
I would like to know what order kbuild follows when configuring the kernel, and what order is most convenient when writing CONFIG_ options in the .config file. I have read the kbuild docs, but so far I have found no specification of the order of operations.
You should strive not to have order dependencies! The system starts at the first line of the top-level Kconfig file and processes each line in turn. When it sees a 'source' line, it suspends reading the current file and processes the specified file instead. When it reaches the end of a file, it resumes where it left off in the previous file.
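The depth-first 'source' handling described above can be illustrated with a toy parser. The file names are made up for the example, and real kconfig of course does far more (menus, dependencies, defaults); this only models the traversal order.

```shell
# Toy model of kconfig's file traversal: lines are processed in order, and a
# 'source "file"' line is expanded in place by recursing into that file.
process() {
    while IFS= read -r line; do
        case $line in
            source\ *)
                # strip the 'source ' keyword and the surrounding quotes
                f=${line#source }; f=${f%\"}; f=${f#\"}
                process < "$f"
                ;;
            *)
                echo "$line"
                ;;
        esac
    done
}

# Two tiny Kconfig-like files (hypothetical names)
printf 'config B\n' > sub.Kconfig
printf 'config A\nsource "sub.Kconfig"\nconfig C\n' > top.Kconfig

out=$(process < top.Kconfig)
echo "$out"    # config A, then config B (from sub.Kconfig), then config C
rm -f top.Kconfig sub.Kconfig
```

This is why a symbol's position in the generated .config follows the Kconfig tree's traversal order, and why hand-ordering CONFIG_ lines in .config buys you nothing: the tools rewrite the file in tree order anyway.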
Scanning order of the kbuild / kconfig kernel build system?