1,394,457,423,000
For the last three days I have been experiencing random freezes. If I am watching YouTube when this happens, the audio keeps playing but the screen is frozen and the keyboard and cursor do not do anything. I tried to look in sudo journalctl and this is what I found: led 04 10:44:02 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=113031 end=113032) time 340 us, min 1073, max 1079, scanline start 1062, end 1085 led 04 11:09:15 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=203838 end=203839) time 273 us, min 1073, max 1079, scanline start 1072, end 1090 led 04 11:15:47 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=227329 end=227330) time 278 us, min 1073, max 1079, scanline start 1066, end 1085 uname -a returns: Linux arch-thinkpad 5.10.4-arch2-1 #1 SMP PREEMPT Fri, 01 Jan 2021 05:29:53 +0000 x86_64 GNU/Linux I use i3wm, picom, and pulseaudio. I have a Lenovo X390 Yoga with an Intel processor. How can I diagnose and solve this problem? EDIT: Upgrading the Linux kernel to 5.10.16 solved my problem. Still, I will accept the answer of @Sylvain POULAIN for its comprehensive view of the problem and for offering an alternative solution.
5.10.15 doesn't solve this problem; I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!). As a workaround, the i915 GuC needs to be enabled, as mentioned in the Arch Linux wiki: https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading and loaded before other modules. To summarize: Add the GuC parameter to the kernel parameters by editing /etc/default/grub: GRUB_CMDLINE_LINUX="i915.enable_guc=2" Add the GuC option to the i915 module by adding an /etc/modprobe.d/i915.conf file with: options i915 enable_guc=2 Add i915 to /etc/mkinitcpio.conf: MODULES=(i915) Rebuild the kernel initramfs (reboot after a successful build): # mkinitcpio -P Remove xf86-video-intel (the driver is already in the kernel): # pacman -Rscn xf86-video-intel
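The configuration steps above can be sketched as a script. Paths are Arch Linux defaults; as a precaution this sketch writes everything into a scratch directory instead of the real /, so it only demonstrates the file contents and is safe to dry-run anywhere.

```shell
# Dry-run sketch of the GuC workaround above (Arch Linux default paths,
# redirected into a scratch directory so the real system is untouched).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/default" "$ROOT/etc/modprobe.d"

# 1. Kernel command-line parameter for GRUB:
echo 'GRUB_CMDLINE_LINUX="i915.enable_guc=2"' > "$ROOT/etc/default/grub"
# 2. Module option, so the setting also applies when i915 loads early:
echo 'options i915 enable_guc=2' > "$ROOT/etc/modprobe.d/i915.conf"
# 3. Force i915 into the initramfs:
echo 'MODULES=(i915)' > "$ROOT/etc/mkinitcpio.conf"

conf_line=$(cat "$ROOT/etc/modprobe.d/i915.conf")
echo "wrote: $conf_line"
# On the real system you would then rebuild the initramfs and reboot:
#   mkinitcpio -P
rm -rf "$ROOT"
```

On a real system the same echoes would target / directly (as root), followed by mkinitcpio -P and a reboot.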
Arch Linux randomly freezes after updating to kernel 5.10
1,394,457,423,000
I'm developing for a specific TI ARM processor with custom drivers that made it into the kernel. I'm trying to migrate from 2.6.32 to 2.6.37, but the structure changed so much that I will have weeks of work to upgrade my code. For example, my chip is the DM365, which comes with video processing drivers. Now most of the old drivers which were directly exposed to me go through v4l2, which might make more sense. TI provides very little information for those upgrades. How am I supposed to keep up with the changes? When I google for specific file names, I seldom get more than a few patches, with few comments on what changed, why, and how the old relates to the new.
If you select a kernel to track, be sure to select one that is tagged for long-term support. But sooner or later you will have to move on...
How am I supposed to keep up with kernels as a developer?
1,394,457,423,000
My dmesg output contains the following line: [ 0.265021] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Having gone to the above-mentioned site and having read up on MDS a little, I ran/received the following: $ cat /sys/devices/system/cpu/vulnerabilities/mds Mitigation: Clear CPU buffers; SMT vulnerable According to the site, this translates to: 'Mitigation: Clear CPU buffers' ... The processor is vulnerable and the CPU buffer clearing mitigation is enabled. 'SMT vulnerable' ... SMT is enabled I don't have a lot of experience in computing, but from what I can tell (and please correct me if I'm wrong), my system is doing what it can to protect against MDS. My question is: Can I do anything further to protect my system, and if so, what should my next steps be?
Can I do anything further to protect my system, and if so, what should my next steps be? You can do something further to protect your system: you can disable SMT (hyperthreading). This is usually possible in your system’s firmware setup. Do I need to take action regarding my Microarchitectural Data Sampling (MDS) status? That depends on what you use your system for. As a general rule, if you only run trusted applications with trusted content, you don’t need to take further action. (The jury is still out regarding web browsers’ vulnerability to MDS with SMT.) If you run VMs or containers with unvetted contents, you might be at risk.
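As a concrete starting point for the SMT suggestion above, Linux 4.19 and later expose a runtime SMT switch in sysfs; a hedged sketch (the control file may be absent on older kernels or on hardware without SMT, so the script falls back gracefully):

```shell
# Query the runtime SMT state (Linux >= 4.19; the file may not exist on
# older kernels or on hardware without SMT).
if [ -r /sys/devices/system/cpu/smt/control ]; then
    smt_state=$(cat /sys/devices/system/cpu/smt/control)
else
    smt_state="unknown (no SMT control interface)"
fi
echo "SMT state: $smt_state"
# To disable SMT until the next boot (as root):
#   echo off > /sys/devices/system/cpu/smt/control
```

Disabling SMT in the firmware setup, as the answer suggests, is the persistent equivalent of the echo shown in the comment.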
Do I need to take action regarding my Microarchitectural Data Sampling (MDS) status?
1,394,457,423,000
After 30 minutes of uptime using Ubuntu 14.04 with a hybrid SSD, I see many processes blocking on IO in iotop. This happens during disk writes; for example, if I open and close an empty file in gedit it can take 2 seconds to close down due to dconf writing settings. This affects other apps in a similar way, slowing the whole system down quite severely. Using strace I managed to trace this back to an fsync call, and from there managed to reproduce it using the sync command. So to recap: simply running sync from the terminal repeatedly can take on the order of 1 - 2 seconds, but ONLY after 30 minutes of uptime. To prove this I made a script that outputs uptime in seconds against the time taken to execute sync, and ran it every second: while true; do cat /proc/uptime | awk '{printf "%f ",$1}'; /usr/bin/time -f '%e' sync; sleep 1; done; I ran the above script, waited around an hour (the system was left idle) and then plotted the results in gnuplot (y = time in seconds to execute sync, x = uptime in seconds): The point in time where the graph spikes is around 1780 (1780/60 = roughly 30 minutes). Nothing should be writing to the disk at this time apart from the script, so there should be next to nothing in the page cache; after the first sync, each subsequent sync will be writing only what the script itself writes, which will be roughly 100 bytes or so. This issue persists after reboots; for example, if I wait 30 minutes for the slowdown and then reboot, the slowdown will still be there. If I power down and then boot, the issue disappears until 30 minutes later. Another curiosity is that when I examined the above graph and zoomed in on an area where the slowdown is occurring I got this: The peaks and troughs repeat - this occurs almost exactly every 10 seconds from trough to trough, and the peak kinks as it comes down.
I've also run hdparm tests (hdparm -t /dev/sda and hdparm -T /dev/sda) before the slowdown: /dev/sda: Timing cached reads: 23778 MB in 2.00 seconds = 11900.64 MB/sec /dev/sda: Timing buffered disk reads: 318 MB in 3.01 seconds = 105.63 MB/sec and during the slowdown: /dev/sda: Timing cached reads: 2 MB in 2.24 seconds = 915.50 kB/sec /dev/sda: Timing buffered disk reads: 300 MB in 3.01 seconds = 99.54 MB/sec This shows that actual disk reads aren't being affected but cached reads are; could that mean that this has to do with the system bus and not the HD after all? Here are the solutions I've tried: Changed the spindown settings of the HD, in case the HD was going into power-saving mode: hdparm /dev/sda -S252 #(set it to 5 hours before spindown) Changed the filesystem's journalling type to writeback rather than ordered to get performance improvements - this isn't solving the problem though, as it doesn't explain the 30 minutes of slowdown-free uptime. Disabled cron, as the issue seems to occur after around 30 minutes. CPU usage is fine and the system is completely idle, so no processes can be blamed; however, I've tried shutting down every service including the session manager (lightdm) and this does nothing, as I believe the issue is lower level. Analysing any new processes coming in at 30 minutes indicates no changes - I've diffed the output of ps before and after and there's no difference. This only started occurring about 2 weeks ago; nothing was installed and no updates were done around that time. I'm thinking this issue is much lower level, so I would really appreciate some help here as I'm clueless; even pointing me in the right direction would be helpful - for example, is there a way to examine what's being flushed out of the page cache? Write caching is enabled on the disk in question; I've also tried disabling write barriers. SMART data on the HD indicates no problems with the HD itself, however I have my suspicions it's the HD doing something mysterious, as the issue persists after reboots.
EDIT: I've done: watch -n 1 cat /proc/meminfo ... to see how the memory changes, particularly looking at the Dirty row and the Writeback row, which I believe reflect the HD's disk buffer. They all stay at zero for the most part, the highest being probably 300 kB. Calling sync flushes these back to 0 as expected, but during the slowdown, calling sync when there are zero dirty pages and zero kB in the disk buffer still locks IO. What else could sync be doing if there's nothing to flush out of the page cache and write cache?
The symptoms are very consistent with a mostly saturated IO system; however, having for the most part ruled out IO load from the OS/userspace side, another possibility is the drive running self-tests on itself, which may include reading from all the sectors. This should be queryable/tunable from smartctl (at least one place being smartctl -c for querying). As for why it's coming and going and started suddenly now: The drive has passed a certain stage in its life (number of sectors written, time spun up, etc.) and the firmware on the drive has triggered one of these scans. I believe this can also be triggered via smartctl, so it's possible some automated process triggered it. Once one of these scans has been triggered and flagged as either in progress or started, when the drive has spent a certain amount of time powered on, it's re-triggered either from the beginning or to resume where it left off.
Calls to sync/fsync slow down after 30 minutes uptime
1,394,457,423,000
I have a small "rescue" system (16 MB) that I boot into RAM as a ramdisk. The initrd disk that I am preparing needs to be formatted. I think ext4 will do fine, but obviously it doesn't make any sense to use a journal or other advanced ext4 features. How can I create the most minimal ext4 filesystem? Without a journal, without any lazy_init, without any extended attributes, without ACLs, without large files, without resizing support, without any unnecessary metadata. The most bare-minimum filesystem possible?
Or you could simply use ext2 For ext4: mke2fs -t ext4 -O ^has_journal,^uninit_bg,^ext_attr,^huge_file,^64bit [/dev/device or /path/to/file] man ext4 contains a whole lot of features you can disable (using ^).
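A safe way to experiment with the feature flags above is to format a scratch image file rather than a real device; mke2fs will format a regular file when forced with -F. Feature names are from current e2fsprogs and may vary between versions, so the sketch tolerates a refusal:

```shell
# Try a minimal ext4 feature set against a scratch image file instead of
# a real device (mke2fs formats a regular file when given -F).
img=$(mktemp)
truncate -s 16M "$img"
opts='^has_journal,^uninit_bg,^ext_attr,^huge_file,^64bit,^resize_inode'
if command -v mke2fs >/dev/null 2>&1; then
    mke2fs -q -F -t ext4 -O "$opts" "$img" 2>/dev/null \
        || echo "(this e2fsprogs version refused some of the feature flags)"
fi
echo "requested features: -O $opts"
rm -f "$img"
```

Running dumpe2fs on the image afterwards (before the rm) would confirm which features actually ended up disabled.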
Minimalistic ext4 filesystem without journal and other advanced features
1,394,457,423,000
May I know the maximum partition size supported by a Linux system? And how many logical and primary partitions can we create on a disk with a Linux system installed?
How Many Partitions I believe other, faster and better people have already answered this perfectly. :) There Is Always One More Limit For the following discussion, always remember that limits are theoretical. Actual limitations are often less than the theoretical limits, because either other theoretical limits constrain things (PCs are very, very complex things indeed these days), or there are always more bugs (this answer not excluded). When Limits are Violated What happens when these limits are violated isn't simple, either. For instance, back in the days of 10GB disks, you could have multi-gigabyte partitions, but some machines couldn't boot code stored after the 1,024th cylinder. This is why so many Linux installers still insist on a separate, small /boot partition at the beginning of the disk. Once you managed to boot, things were just fine. Size of partitions: MS-DOS Partition Table (MBR) MS-DOS stores partitions in a (start,size) format, each of which is 32 bits wide. Each number used to encode cylinder-head-sector co-ordinates in the olden days. Now it simply holds an arbitrary sector number (the disk manages the translation from that to medium-specific co-ordinates). The kernel source for the ‘MS-DOS’ partition type suggests partition sizes are 32 bits wide, in sectors, which gives us 2^32 * 512, or 2^41 bytes, or 2^21 binary megabytes, or 2,097,152 megabytes, or 2,048 gigabytes, or 2 terabytes (minus one sector). GUID Partition Table (GPT) If you're using the GUID Partition Table (GPT) disk label, your partition table is stored as a (start,end) pair. Both are 8 bytes long (64 bits), which allows for quite a lot more than you're likely to ever use: 2^64 512-byte sectors, or 2^73 bytes (8 binary zettabytes), or 2^33 terabytes. If you're booting off a UEFI ROM rather than the traditional CP/M-era BIOS, you've already got GPT. If not, you can always choose to use GPT as your disk label. If you have a newish disk, you really should.
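The 32-bit MBR arithmetic above can be checked directly in the shell (assuming the traditional 512-byte sector size):

```shell
# MBR partition start/size fields are 32-bit sector counts; with
# 512-byte sectors the addressable limit works out as follows:
sectors=$(( 1 << 32 ))            # 2^32 addressable sectors
bytes=$(( sectors * 512 ))        # 2^41 bytes
tib=$(( bytes >> 40 ))            # binary terabytes (TiB)
echo "max MBR partition: $bytes bytes = $tib TiB"
```

This prints 2,199,023,255,552 bytes, i.e. the 2 binary terabytes (minus one sector, in practice) quoted above.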
Sector Sizes A sector has been 512 bytes for a long while. This is set to change to 4,096 bytes. Many disks already have this, but emulate 512-byte sectors. When the change comes to the foreground and the allocation unit becomes 4,096-byte sectors, and LBAs address 4,096-byte sectors, all the sizes above will change by 3 binary orders of magnitude: multiply them all by 8 to get the new, scary values. Logical Volume Manager If you use LVM, whatever volume you make must also be supported by LVM, since it sits between your partitions and filesystems. According to the LVM2 FAQ, LVM2 supports up to 8EB (exabytes) on Linux 2.6 on 64-bit architectures; 16TB (terabytes) on Linux 2.6 running on 32-bit architectures; and 2TB on Linux 2.4. Filesystem Limits Of course, these are the size limits per partition (or LVM volume), which is what you're asking about. But the point of having partitions is usually to store filesystems, and filesystems have their own limits. In fact, what types of limits a filesystem has depends on the filesystem itself! The only universal limits are the maximum size of the filesystem and the maximum size of each file in it. ext4 allows files up to 16TB and volumes up to 1EB (exabyte). However, it uses 32-bit block numbers, so to go further you'd need to increase the default 4,096-byte block size. This may not be possible on your kernel and architecture, so 16TB per volume may be more realistic on a PC. ZFS allows 16EB files and 16EB volumes, but doubtless it has its own other, unforeseen limits too. Wikipedia has a very nice table of these limits for most filesystems known to man. In Practice If you're using Linux 2.6 or newer on 64-bit machines and GPT partitions, it looks like you should only worry about the choice of filesystem and its limits. Even then, it really shouldn't worry you that much. You probably shouldn't be creating single files of 16TB anyway, and 1 exabyte (1,048,576 TB) will be a surreal limitation for a while.
If you're using MBR and need more than 2 binary terabytes, you should switch to UEFI and GPT, because you're operating under a 2TB-per-partition limit (this may be less than trivial on an already-deployed computer). Please note that I'm an old fart, and I use binary units when I'm calculating multiples of powers of two. Disk manufacturers like to cheat (and have convinced us they always did this, even though we know they didn't) by using decimal units. So the largest ‘2TB’ disk is still smaller than 2 binary terabytes, and you won't have trouble. Unless you use Logical Volume Manager or RAID-0.
What is the max partition supported in linux?
1,394,457,423,000
I tried to make an operating system with my own custom-built kernel. It didn't work out too well. I am using Ubuntu and have downloaded Linux 3.2.7 from kernel.org. I am not trying to change the kernel on my Ubuntu system. I want to make my own OS with GRUB and the Linux kernel, and I want to be able to have this homebrew OS in a file format (such as an ISO) that I can put on a CD and boot on another computer. My question is: what exactly do I need to make this OS? Any comments or tutorials would be helpful.
Here's what you're looking for: http://www.linuxfromscratch.org/
How do I begin with building a Linux system from scratch?
1,394,457,423,000
I am reading a 550MB file into /dev/null and I am getting: dd: writing '/dev/null': No space left on device I was surprised; I thought /dev/null is a black hole where you can send as much as you want (because it's a virtual filesystem). Yes, my disk is almost full when I get this error. What can I do other than deleting content from the disk? ls -l /dev/null gives: -rw-r--r-- 1 root root 0 July 7 21:58 /dev/null instead of: crw-rw-rw- 1 root root 1, 3 July 7 02:58 /dev/null The command I am using: time sh -c "dd if=$filename of=/dev/null"
/dev/null is a special file, of type character device. The driver for that character device ignores whatever you try to write to the device, and writes are always successful. If a write to /dev/null fails, it means that you've somehow managed to remove the proper /dev/null and replace it with a regular file. You might have accidentally removed /dev/null; then the next … >/dev/null would have recreated it as a regular file. Run ls -l /dev/null and check that the line looks something like crw-rw-rw- 1 root root 1, 3 Sep 13 2011 /dev/null It must begin with crw-rw-rw-: c for a character device, and permissions that allow everyone to read and write. The file should be owned by root, though that isn't very important. The two numbers after the owner and group identify the device (major and minor device number). Above I show the values under Linux; different unix variants have different values. The date is typically either the date when the system was installed or the date of the last reboot, and doesn't matter. If you need to recreate the file, some systems provide a MAKEDEV command, either in root's PATH or in /dev. Run cd /dev; ./MAKEDEV std or something like this to recreate the standard basic devices such as /dev/null. Or create the device manually, supplying the correct device numbers; on Linux, that's mknod -m 666 /dev/null c 1 3
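The ls -l check described above can be scripted; on Linux a healthy /dev/null reports as a character device with major 1 and minor 3 (the mknod in the broken branch is only shown, since it must be run as root):

```shell
# Verify that /dev/null is still a character device (Linux: major 1, minor 3).
if [ -c /dev/null ]; then
    devtype=ok
    echo "/dev/null is a character device (major,minor = $(stat -c '%t,%T' /dev/null))"
else
    devtype=broken
    echo "/dev/null has been replaced by a regular file; as root, recreate it:"
    echo "  rm -f /dev/null && mknod -m 666 /dev/null c 1 3"
fi
```

The stat -c format shown is GNU-specific; on other unices, parse ls -l instead.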
dd: writing '/dev/null': No space left on device
1,394,457,423,000
Having been directed to initramfs by an answer to my earlier question (thanks!), I've been working on getting initramfs working. I can now boot the kernel and drop to a shell prompt, where I can execute busybox commands, which is awesome. Here's where I'm stuck-- there are (at least) two methods of generating initramfs images: By passing the kernel a path to a prebuilt directory hierarchy to be compressed By passing the kernel the name of a file that lists the files to be included. The second method seemed a little cleaner, so I've been using that. Just for reference, here's my file list so far: dir /dev 755 0 0 nod /dev/console 644 0 0 c 5 1 nod /dev/loop0 644 0 0 b 7 0 dir /bin 755 1000 1000 slink /bin/sh busybox 777 0 0 file /bin/busybox /home/brandon/rascal-initramfs/bin/busybox 755 0 0 dir /proc 755 0 0 dir /sys 755 0 0 dir /mnt 755 0 0 file /init /home/brandon/rascal-initramfs/init.sh 755 0 0 Unfortunately, I have learned that busybox requires a long list of links to serve as aliases to all of its different commands. Is there a way to generate the list of all these commands so I can add it to my file list? Alternatively, I could switch to method 1, using the prebuilt directory hierarchy, but I'm not sure how to make the /dev nodes in that case. Both of these paths seem messy. Is there an elegant solution to this?
It's not the kernel that's generating the initramfs, it's cpio. So what you're really looking for is a way to build a cpio archive that contains devices, symbolic links, etc. Your method 2 uses usr/gen_init_cpio in the kernel source tree to build the cpio archive during the kernel build. That's indeed a good way of building a cpio archive without having to populate the local filesystem first (which would require being root to create all the devices, or using fakeroot or a FUSE filesystem which I'm not sure has been written already). All you're missing is generating the input file to gen_init_cpio as a build step. E.g. in shell: INITRAMFS_SOURCE_DIR=/home/brandon/rascal-initramfs exec >initramfs_source.txt echo "dir /bin 755 0 0" echo "file /bin/busybox $INITRAMFS_SOURCE_DIR/bin/busybox 755 0 0" for x in sh ls cp …; do echo "slink /bin/$x busybox 777 0 0" done # etc … If you want to reflect the symbolic links to busybox that are present in your build tree, here's a way (I assume you're building on Linux): ( cd "$INITRAMFS_SOURCE_DIR/bin" && for x in *; do if [ "$(readlink "$x")" = busybox ]; then echo "slink /bin/$x busybox 777 0 0" fi done ) Here's a way to copy all your symbolic links: find "$INITRAMFS_SOURCE_DIR" -type l -printf 'slink %p %l 777 0 0\n' For busybox, maybe your build tree doesn't have the symlinks, and instead you want to create one for every utility that you've compiled in. The simplest way I can think of is to look through your busybox build tree for .*.o.cmd files: there's one per generated command. find /path/to/busybox/build/tree -name '.*.cmd' -exec sh -c ' for x; do x=${x##*/.} echo "slink /bin/${x%%.*} busybox 777 0 0" done ' _ {} +
How to generate initramfs image with busybox links?
1,394,457,423,000
Does the Linux kernel make use of virtual memory for its data structures (page tables, descriptors, etc.)? More specifically: Are kernel-space addresses translated by the MMU (page-table walking)? Could kernel memory get swapped out? Could a memory access to a kernel data structure cause a page fault? Are there differences between Linux and other Unixes in this respect?
Are kernel space addresses translated in the MMU (pagetable walking)? Yes, all addresses are translated in the MMU; see Is the MMU inside of Unix/Linux kernel? or just in a hardware device with its own memory? for details. Could kernel memory get swapped out? A kernel could theoretically be designed so that it can be swapped out. In practice it’s difficult; the Linux kernel in particular can’t be swapped out. Some code paths in the kernel do have to deal with page-ins however; see Why are `copy_from_user()` and `copy_to_user()` needed, when the kernel is mapped into the same virtual address space as the process itself? for example. Could a memory access to a kernel data structure cause a page fault? In most if not all cases, if this were to happen, it would lead to a kernel panic. So yes, it could happen, but it would be a bug. Are there differences between linux and other unix in this respect? As far as I’m aware other (current) Unix-style implementations are similar. Early Unix didn’t support virtual-memory-based swapping (i.e. paging out arbitrary pages) anyway so it wasn’t a concern there.
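One visible trace of the kernel's own use of virtual addressing is the vmalloc range reported in /proc/meminfo; the field names here are Linux-specific, offered only as a quick illustration:

```shell
# The kernel's vmalloc area is a virtually-mapped region used for some
# kernel allocations; its total size shows up in /proc/meminfo on Linux.
vmalloc_total=$(awk '/^VmallocTotal:/ {print $2}' /proc/meminfo)
echo "VmallocTotal: $vmalloc_total kB"
```

On x86-64 this range is enormous (a slice of the kernel half of the address space), which underlines that kernel addresses go through the same MMU translation as everything else.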
Does linux kernel use virtual memory (for its data)?
1,394,457,423,000
When I'm on my Linux box I use bash as a shell. Now I wondered how bash handles the execution of an ELF file, that is, when I type ./program and program is an ELF file. I grepped through bash-4.3.tar.gz; there does not seem to be any sort of magic-number parser to find out whether the file is an ELF, nor did I find an exec() syscall. How does the process work? How does bash pass the execution of the ELF to the OS?
Bash knows nothing about ELF. It simply sees that you asked it to run an external program, so it passes the name you gave it as-is to execve(2). Knowledge of things like executable file formats, shebang lines, and execute permissions lives behind that syscall, in the kernel. (It is the same for other shells, though they may choose to use another function in the exec(3) family instead.) In Bash 4.3, this happens on line 5195 of execute_cmd.c in the shell_execve() function. If you want to understand Linux at the source code level, I recommend downloading a copy of Research Unix V6 or V7, and going through that rather than all the complexity that is in the modern Linux systems. The Lions Book is a good guide to the code. V7 is where the Bourne shell made its debut. Its entire C source code is just a bit over half the size of just that one C file in Bash. The Thompson shell in V6 is nearly half the size of the original Bourne shell. Yet, both of these simpler shells do the same sort of thing as Bash, and for the same reason. (It appears to be an execv(2) call from texec() in the Thompson shell and an execve() call from execs() in the Bourne shell's service.c module.)
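You can see the magic bytes the kernel's ELF loader keys off by dumping the first four bytes of any ELF executable; /bin/ls is used here on the assumption that it is a native ELF binary, as on virtually every Linux system:

```shell
# The first four bytes of an ELF file are 0x7f 'E' 'L' 'F'. It is the
# kernel's binfmt_elf handler, not the shell, that inspects these.
magic=$(od -An -c -N4 /bin/ls | tr -s ' ')
echo "magic:$magic"
```

od prints the 0x7f byte in octal as 177, so the output reads "177 E L F". A script with a #! shebang line is dispatched by the same in-kernel mechanism (binfmt_script), which is why bash never needs to look at the file's contents.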
How does bash execute an ELF file?
1,394,457,423,000
I just want to know the flow of activities that happen after the Linux kernel image is loaded into RAM during the boot process.
As of Linux 2.6: Kernel After being loaded into RAM, the kernel executes the following functions. setup(): Build a table in RAM describing the layout of the physical memory. Set keyboard repeat delay and rate. Initialize the video adapter card. Initialize the disk controller with hard disk parameters. Check for an IBM Micro Channel bus. Check for PS/2 pointing devices (bus mouse). Check for Advanced Power Management (APM) support. If supported, build a table in RAM describing the hard disks available. If the kernel image was loaded low in RAM, move it high. Set the A20 pin (a compatibility hack for ancient 8088 microprocessors). Set up a provisional Interrupt Descriptor Table (IDT) and a provisional Global Descriptor Table (GDT). Reset the floating-point unit (FPU). Reprogram the Programmable Interrupt Controllers (PIC). Switch from Real to Protected Mode. startup_32(): Initialize segmentation registers and a provisional stack. Clear all bits in the eflags register. Fill the area of uninitialized data with zeros. Invoke decompress_kernel() to decompress the kernel image. startup_32() (same name, different function): Initialize the final segmentation registers. Fill the bss segment with zeros. Initialize the provisional kernel Page Tables. Enable paging. Set up the Kernel Mode stack for process 0. Again, clear all bits in the eflags register. Fill the IDT with null interrupt handlers. Initialize the first page frame with system parameters. Identify the model of the processor. Initialize registers with the addresses of the GDT and IDT. start_kernel(): Nearly every kernel component gets initialized by this function; these are only a few: Scheduler Memory zones Buddy system allocator IDT SoftIRQs Date and time Slab allocator Create process 1 (/sbin/init) The complete "list" is available in the sources at linux/init/main.c Init Init starts all the necessary user processes to bring the system into the desired state; this routine depends highly on the distribution and the runlevel invoked.
Type runlevel into the console; this gives you the current runlevel of your system. Take a look into /etc/rcX.d/ (or /etc/rc.d/rcX.d/), replacing the X with your runlevel. These are symlinks ordered by execution priority. S01.... means this script gets started very early, while S99.... runs at the very end of the boot process. The KXX.... symlinks do the same but for the shutdown sequence. Generally, these scripts handle disks, networking, logging, device control, special drivers, the environment and many other required sequences.
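The SNN ordering convention can be demonstrated with a mock rc directory (a stand-in for a real /etc/rc5.d, which a modern systemd machine may not have):

```shell
# Mock rcX.d directory: start scripts run in lexical order of their SNN
# prefix, while KNN links belong to the shutdown sequence.
rcdir=$(mktemp -d)
touch "$rcdir/S01sysklogd" "$rcdir/S20network" "$rcdir/S99local" "$rcdir/K80network"
boot_order=$(cd "$rcdir" && ls S* | sort | tr '\n' ' ')
echo "boot order: $boot_order"
rm -rf "$rcdir"
```

This prints S01sysklogd first and S99local last, exactly the early-to-late ordering described above; the K80network link is ignored at boot.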
What happens after loading the linux kernel image into RAM
1,394,457,423,000
I have an old laptop here with only 512 MB of RAM. For the past few kernel releases, I have been using zram to convert 256 MB of it into a compressed ramdisk which is then used as swap. This has proved to be very successful and the system is much more responsive; (hard-disk-backed) swap usage, which slowed the system down before, has gone down considerably. Since Linux 3.0, the kernel also includes cleancache which, using something like zram as a backend, is supposed to transparently compress pages from the page cache. As far as I can see this is different from zram. Should I enable both on my laptop? Or does cleancache actually supersede the zram solution? Edit: I have found this Gentoo forum link, where it seems that I also have to enable CONFIG_ZCACHE, which then makes cleancache use zram to obtain something similar to what I had before. So it seems that I enable all of this and do not use zram explicitly afterwards. Can anybody confirm this?
Zram creates a block device backed by compressed ram. You can use that block device for swap. Normally memory pressure first results in the cache being discarded, and only after most of the cache has been freed up and memory is still tight does the system start swapping. CleanCache allows pages from the page cache to be migrated to a back end, such as xen tmem, which is memory managed by the hypervisor and shared between multiple VM guests. The goal of this is to allow multiple VM guests caching the same data to do so using the same ram, instead of each having their own cache with their own copy of the same data. ZCache is another CleanCache back end. Instead of passing the memory to the hypervisor to hold ( which only applies if you are using a Xen VM environment ), it stores the cache pages compressed in ram, similar to Zram. The difference is that ZCache transparently stores cache pages, but Zram creates a block device that you can use for swap. If you have memory hungry applications, then you will need swap space to support them, so you will still want to use zram ( likely with a very high swappiness value ). This is because CleanCache only compresses cache pages; application memory has to be sent to swap. If you aren't using all of your memory on applications, then you can use CleanCache with the ZCache backend to make more effective use of the remaining memory for disk caching by compressing the disk cache. You might even use a mix of the two techniques.
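When swap lives on zram, a high vm.swappiness is commonly recommended so the kernel prefers cheap compressed-RAM swapping over dropping cache; the current value is readable through /proc (the specific value 100 in the comment is just a common choice, not a requirement):

```shell
# Read the current swappiness; with zram-backed swap a high value is
# commonly used so the kernel swaps eagerly instead of shrinking cache.
swappiness=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness = $swappiness"
# To raise it (as root): sysctl vm.swappiness=100
```

This complements the answer's point that application memory can only go to swap (zram), while cleancache/zcache only ever sees page-cache pages.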
Cleancache vs zram?
1,394,457,423,000
I know those two mechanisms (let's call them A and B) limit the resources for a process. I want to know how the two cooperate: if A limits a resource for a process, what happens when B also limits the same resource?
All limits apply independently. When a process makes a request that would require going over some limit, the request is denied. This holds whether the limit is for a cgroup, per process, or per user. Since cgroup sets limits per groups of processes, and setrlimit sets limits per user or per process, the mechanisms are generally not redundant. It's possible for a given request to exceed both cgroup and setrlimit limits, or only one of them. Keep in mind that all limits are maximum allowed values, not guaranteed minimums. For example, if there's a limit to 1GB of memory per process, a process with 200MB of memory may still get its request to allocate 100MB denied if there's no more available memory in the system, regardless of any applicable limits. If a setrlimit and a cgroup limit both apply, then that's at least three maximums that can be exceeded: the setrlimit maximum, the cgroup maximum, and the currently available resource maximum.
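The per-process side (setrlimit) is easy to observe from the shell, where the ulimit builtin wraps it; the reduced limit below applies only inside the subshell, independently of whatever cgroup limits may also be in force:

```shell
# setrlimit via the shell's ulimit builtin: lower the open-files limit
# in a subshell; the parent shell's own limit is unaffected.
outer=$(ulimit -n)
inner=$( (ulimit -n 64; ulimit -n) )
echo "outer fd limit: $outer"
echo "inner fd limit: $inner"
```

A process in a cgroup with, say, a memory.max of 100MB and an RLIMIT_AS of 1GB hits whichever ceiling it reaches first, as the answer explains.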
About ulimit/setrlimit and cgroup
1,394,457,423,000
Where can I find a technical description of the kernel parameters listed in /proc/sys (on Linux)?
The directory /proc/sys gives easy access to sysctl settings through the shell. You can read and write these settings either by reading and writing these files, or by calling the sysctl utility or the underlying sysctl system call. The various settings are described in the kernel documentation, in Documentation/sysctl/*. Start with README. This is fairly low-level stuff, so sometimes the documentation isn't completely precise and you'll need to turn to the source. Each sysctl setting usually corresponds to a variable with a resembling name inside the kernel (but this is a convention, not a rule). Many settings are declared in kernel/sysctl.c, but additional kernel components and modules can define their own. In the source (on a local copy or online at LXR), search for the name of the sysctl setting between quotes (e.g. "xfrm_larval_drop") to find its declaration.
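The naming correspondence is mechanical: dots in a sysctl name become slashes under /proc/sys. A small illustration using kernel.ostype, a safe read-only setting:

```shell
# sysctl name <-> /proc/sys path: kernel.ostype == /proc/sys/kernel/ostype
ostype=$(cat /proc/sys/kernel/ostype)
echo "kernel.ostype = $ostype"
# Equivalent, via the utility: sysctl -n kernel.ostype
```

The same mapping lets you find the documentation file: a net.ipv4.* setting, for instance, is described under the networking part of Documentation/sysctl/ (or Documentation/admin-guide/sysctl/ in newer kernel trees).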
Where are the Linux kernel parameters present in /proc/sys documented?
1,394,457,423,000
I ask this question because I'm curious as to whether there is some sort of performance advantage offered by the binary blobs that are in the Linux kernel. Since many of these blobs have been replaced with code in linux-libre, why has that same code not been incorporated into the Linux kernel at kernel.org?
The Linux-libre project is an extension of efforts by distributions aimed at people who wish to use completely free operating systems, as defined by the Free Software Foundation. Currently it is maintained by FSFLA, the Latin American Free Software Foundation. According to the about page for the project: Linux-libre is a project to maintain and publish 100% Free distributions of Linux, suitable for use in Free System Distributions, removing software that is included without source code, with obfuscated or obscured source code, under non-Free Software licenses, that do not permit you to change the software so that it does what you wish, and that induces or requires you to install additional pieces of non-Free Software. A quick reading of the latest version of the "deblobbing" script shows that it mostly removes the binary blobs and some documentation. In many of the cases the binary blobs are either hardware drivers or firmware for hardware. Firmware is code that needs to be loaded onto the device itself and is often needed even when a free software driver exists. As far as I understand, there is no clear performance benefit from these blobs (although, without them, many people would have no performance) and most kernel developers would love to replace them with well-written, Free code. In your question you claim that "many of these blobs have been replaced with code in linux-libre" and ask why this code hasn't been accepted. In my reading of the scripts I could see very little code that was replaced. Rather, the majority of the script is removing code. The code that is added is intended to "replace the requests for non-Free firmware with messages that inform users that the hardware in question is a trap." (Linux-libre Release Announcement) If you have specific code in mind, please mention it in your question. Most patches for Linux are discussed either on the Linux Kernel Mailing List or one of the many subsystem-specific lists.
Often the reasons for non-inclusion can be found by searching through these lists.
Why does the linux kernel use linux-libre code to get rid of binary blobs?
1,394,457,423,000
I have written one driver for one device in Linux. How can I create (using gcc) a .ko file so that I can insert it into the kernel?
Create a Makefile like this (recipe lines must be indented with a tab; modern kernels use M= where older ones used the SUBDIRS= variable, which has since been removed):

ifneq ($(KERNELRELEASE),)
obj-m := mymodule.o
else
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

install:
	$(MAKE) -C $(KDIR) M=$(PWD) modules_install

%:
	$(MAKE) -C $(KDIR) M=$(PWD) $@
endif

Assuming your module's source is in mymodule.c, running make will create mymodule.ko.
How to create .ko files in Linux
1,394,457,423,000
I was playing a game on Steam and all of a sudden I got a kernel panic. I manually shut down the computer and booted back into Linux Mint 17.1 (Cinnamon) 64-bit, and went to check through my log files in /var/log/, but I couldn't find any references or any kind of messages relating to the kernel panic that happened. It's strange that it never dumped core or even made any note of it in the log files. How can I make sure that a core is always dumped in case a kernel panic happens again? It doesn't make any sense why nothing was logged when a kernel panic happened. Looking around on Google, people suggest reading through /var/log/dmesg, /var/log/syslog, /var/log/kern.log, /var/log/Xorg.log etc… but nothing. Not even in the .Xsession-errors file either. Here are some photographs of the screen: I could always take a photo of the screen when and if it happens again, but I just want to make sure that I can get it to dump core and create a log file on a kernel panic.
To be sure that your machine generates a "core" file when a kernel failure occurs, you should confirm the sysctl settings of your machine. IMO, the following should be the (minimal) settings in /etc/sysctl.conf:

kernel.core_pattern = /var/crash/core.%t.%p
kernel.panic=10
kernel.unknown_nmi_panic=1

Execute sysctl -p after making changes to the /etc/sysctl.conf file. You should probably also mkdir /var/crash if it doesn't already exist. You can test the above by generating a manual dump using the SysRq key (the key combination to dump core is Alt+SysRq+C).
Kernel Panic dumps no log files
1,394,457,423,000
It sounds quite counter-productive to me to cache pages that are swapped out. If you swap pages in, what is the advantage of first caching them in memory, only to have to then move them to the right place? Even if pages are swapped in proactively, doesn't it make more sense to "just" swap them in? Isn't caching swap in fact just a waste of resources?
After some more research, I have found that the term SwapCached in /proc/meminfo is misleading. In fact, it refers to the number of bytes that are simultaneously in memory and in swap, such that if these pages are not dirty, they do not need to be written to swap again when they are evicted.
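You can watch this counter yourself; a small sketch that pulls the SwapCached line out of /proc/meminfo (Linux-specific):

```python
# Print the SwapCached counter from /proc/meminfo.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("SwapCached:"):
            print(line.strip())
            break
```

On a machine with plenty of free RAM this will usually read 0 kB, and it grows as pages get swapped out and faulted back in.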
Why does it makes sense to cache swap?
1,394,457,423,000
According to http://www.linfo.org/kernel_mode.html in paragraph 7: When a user process runs a portion of the kernel code via a system call, the process temporarily becomes a kernel process and is in kernel mode. While in kernel mode, the process will have root (i.e., administrative) privileges and access to key system resources. The entire kernel, which is not a process but a controller of processes, executes only in kernel mode. When the kernel has satisfied the request by a process, it returns the process to user mode. It is quite unclear to me about the line: While in kernel mode, the process will have root (i.e., administrative) privileges and access to key system resources. How come a userspace process not running as root will have root privileges? How does it differ from a userspace process running as root?
(I'll try to be brief.) In theory, there are two dimensions of privileges: The computer's instruction set architecture (ISA), which protects certain information and/or functions of the machine. The operating system (OS) creating an eco-system for applications and communication. At its core is the kernel, a program that can run on the ISA with no dependencies of any kind. Today's operating systems perform a lot of very different tasks so that we can use computers as we do today. In a very(, very, very) simplified view you can imagine the kernel as the only program that is executed by the computer. Applications, processes and users are all artefacts of the eco-system created by the OS and especially the kernel. When we talk about user(space) privileges with respect to the operating system, we talk about privileges managed, granted and enforced by the operating system. For instance, file permissions restricting fetching data from a specific directory are enforced by the kernel. It looks at some IDs associated with the file, interprets some bits which represent privileges and then either fetches the data or refuses to do so. The privilege hierarchy within the ISA provides the tools the kernel uses for its purposes. The specific details vary a lot, but in general there is the kernel mode, in which programs executed by the CPU are very free to perform I/O and use the instructions offered by the ISA, and the user mode where I/O and instructions are constrained. For instance, when reading the instruction to write data to a specific memory address, a CPU in kernel mode could simply write the data, while in user mode it first performs a few checks to see if the memory address is in a range of allowed addresses to which data may be written.
If it is determined that the address may not be written to, usually, the ISA will switch into kernel mode and start executing another instruction stream, which is a part of the kernel and it will do the right thing(TM). That is one example of an enforcement strategy to ensure that one program does not interfere with another program ... so that the javascript on the webpage you are currently visiting cannot make your online banking application perform dubious transactions ... Notice, in kernel mode nothing else is triggered to enforce the right thing; it is assumed that the program running in kernel mode is doing the right thing. That's why in kernel mode nothing can force a program to adhere to the abstract rules and concepts of the OS's eco-system. That's why programs running in kernel mode are comparably powerful to the root user. Technically, kernel mode is much more powerful than just being the root user on your OS.
Process in user mode switch to kernel mode. Then the process will have root privileges?
1,394,457,423,000
I am using Crunchbang 64 bit O.S. with a ASUS N150 wireless adapter. Every time I close my laptop and it enters sleep mode, when I "wake it up" I am unable to connect back using the wireless adapter; I have to restart. My questions are: Is there a way to find the specific driver name? I know it's an ASUS N150 adapter with a Realtek chipset. How can I reload the driver for the adapter without resetting the system? How can I find my current kernel version via terminal (sidenote)?
way to find the specific driver name

lspci | grep -i network

I am not sure whether that device is on the PCI or USB bus, but you can try the following:
- Use lsusb or lspci to find information about the device
- Look up that device for the corresponding module ("driver")
- Make sure that module is loaded and available with lsmod and modprobe

Another idea would be to use lsmod and diff to find out which modules go missing when your laptop enters sleep mode. It could be more than one module that has a problem:
- Restart the machine
- Make sure that the wifi adapter is working
- Use lsmod to get all loaded modules: lsmod > loaded-modules-before-sleep.txt
- Put the computer into sleep mode
- Wake the machine up
- Confirm that the wifi adapter ISN'T working
- Use lsmod to get all loaded modules again: lsmod > loaded-modules-after-sleep.txt
- Use diff to see what has changed: diff loaded-modules-before-sleep.txt loaded-modules-after-sleep.txt

reload driver without resetting system

Once you know the module to load, simply use modprobe as root:

modprobe wifi_module_name

find current kernel version via terminal

uname to the rescue! uname should tell you what you want to know:

uname -a
How to Find and reload specific driver from kernel?
1,394,457,423,000
A simple example. I'm running a process that serves http request using TCP sockets. It might A) calculate something which means CPU will be the bottleneck B) Send a large file which may cause the network to be the bottleneck or C) Complex database query with semi-random access causing a disk bottleneck Should I try to categorize each page/API call as one or more of the above types and try to balance how much of each I should have? Or will the OS do that for me? How do I decide how many threads I want? I'll use 2 numbers for hardware threads 12 and 48 (intel xeon has that many). I was thinking of having at 2/3rds of the threads be for heavy CPU (8/32), 1 thread for heavy disk (or 1 heavy thread per disk) and the remaining 3/15 be for anything else which means no trying to balance the network. Should I have more than 12/48 threads on hardware that only supports 12/48 threads? Do I want less so I don't cause the CPU to go into a slower throttling mode (I forget what it's called but I heard it happens if too much of the chip is active at once). If I have to load and resource balance my threads how would I do it?
Linux: The Linux kernel has a great implementation for this and many features/settings intended to manage the resources of running processes (via CPU governors, sysctl or cgroups). In such a situation, tuning those settings along with swap adjustment (if required) is recommended; basically you will be adapting the default functioning mode to your appliance. Benchmarks, stress tests and situation analysis after applying the changes are a must, especially on production servers. The performance gain can be very important when the kernel settings are adjusted to the needed usage; on the other hand, this requires testing and a good understanding of the different settings, which is time-consuming for an admin. Linux uses governors to load-balance CPU resources between the running applications; many governors are available, and depending on your distro's kernel some governors may not be available (rebuilding the kernel can be done to add missing or non-upstream governors). You can check what the current governor is, change it and, more importantly in this case, tune its settings. Additional documentation: reading, guide, similar question, frequency scaling, choice of governor, the performance governor and cpufreq. Sysctl: Sysctl is a tool for examining and changing kernel parameters at runtime; adjustments can be made permanent with the config file /etc/sysctl.conf. This is an important part of this answer, as many kernel settings can be changed with sysctl; a full list of available settings can be displayed with the command sysctl -a, and details are available in this and this article. Cgroups: The kernel provides the control groups feature, called by the shorter name cgroups in this guide. Cgroups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system.
You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig (control group config) service can be configured to start up at boot time and reestablish your predefined cgroups, thus making them persistent across reboots. Source, further reading and a question on the matter. RAM: This can be useful if the system has a limited amount of RAM; otherwise you can disable the swap to mainly use RAM. The swap system can be adjusted per process or with the swappiness settings. If needed, the resources (RAM) can be limited per process with ulimit (also used to limit other resources). Disk: Disk I/O settings (I/O scheduler) may be changed, as well as the cluster size. Alternatives: Other tools like nice, cpulimit, cpuset, taskset or ulimit can be used as alternatives for the matter.
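As an alternative to the taskset tool mentioned above, a process can pin itself to a CPU set from code; a minimal Linux-only sketch using Python's os.sched_setaffinity (pinning to CPU 0, which always exists):

```python
import os

before = os.sched_getaffinity(0)   # CPUs this process may currently run on
os.sched_setaffinity(0, {0})       # pin the calling process to CPU 0 only
print(os.sched_getaffinity(0))     # now restricted to {0}
os.sched_setaffinity(0, before)    # restore the original affinity mask
```

The first argument is a PID; 0 means "the calling process", so the same call can also pin other processes you own.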
Should I attempt to 'balance' my threads or does linux do this?
1,394,457,423,000
At this page you can download a configuration file that lets you target a particular notebook architecture during the compilation of a new 32-bit Linux kernel. I need a 64 bit version. What do I have to do? I compiled a kernel 2-3 times in my life but I never touched a config file, I always have used an interactive menu.
The recommended answer, as the comment suggests, is to save it as .config in the top-level source directory, and then run make xconfig (GUI, easier) or make menuconfig (TUI) on a 64-bit system. That said, to simply switch from 32-bit to 64-bit without changing anything else, a little editing at the beginning is all that's needed. Compare: Original (32-bit) # CONFIG_64BIT is not set CONFIG_X86_32=y # CONFIG_X86_64 is not set CONFIG_OUTPUT_FORMAT="elf32-i386" CONFIG_ARCH_DEFCONFIG="arch/x86/configs/i386_defconfig" "Converted" 64-bit CONFIG_64BIT=y # CONFIG_X86_32 is not set CONFIG_X86_64=y CONFIG_OUTPUT_FORMAT="elf64-x86-64" CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig" Note that CONFIG_X86=y is not touched.
How do I convert a kernel .config file from 32-bit to 64-bit?
1,394,457,423,000
I'm experimenting with generating some custom kernels using genkernel. However, each iteration leaves a file in /boot called System.map-genkernel-<arch>-<version>. Is it safe to rename and/or delete the System.map-* files?
The System.map file is mainly used to debug kernel crashes. It's not actually necessary, but it's best to keep it around if you're going to use that kernel. If you've decided you don't need that kernel, then it's safe to delete the corresponding map file. If you're really low on disk space, you could compress the map files. They aren't that big, so this won't save much space, but bzip2 will squeeze them down to about 25% of the original size. Then you can uncompress one if you discover that you need it.
Safe to delete System.map-* files in /boot?
1,394,457,423,000
Without initramfs/initrd support, the following kernel command line won't work: linux /bzImage root=UUID=666c2eee-193d-42db-a490-4c444342bd4e ro How can I identify my root partition via UUID without the need for an initramfs/initrd? I can't use a device name like /dev/sda1 either, because the partition resides on a USB-Stick and needs to work on different machines.
I found the answer buried in another thread: A UUID identifies a filesystem, whereas a PARTUUID identifies a partition (i.e. it remains intact after reformatting). Without an initramfs/initrd the kernel only supports PARTUUID. To find the PARTUUID of the block devices in your machine use

sudo blkid

This will print, for example:

/dev/sda1: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="ext2" PARTUUID="f3f4g3f4-02"

You can now modify your linux command line as follows:

linux /bzImage root=PARTUUID=f3f4g3f4-02 ro

This will boot from the partition with PARTUUID f3f4g3f4-02, which in this case is /dev/sda1.
How to identify root partition via UUID without initramfs/initrd
1,394,457,423,000
I'm not even sure what the problem is, but I'm talking about the kernel attack described here. Down the list of comments somebody asked about renicing the process. The trick didn't improve the situation (the machine still runs in a very sluggish fashion) and the replying comment says something about kernel space vs user space. First off, is the replying comment correct? If so, why does renice work for things in user space and not for things in kernel space? Also, according to what I read, all programs that a user starts themselves should be in user space; what did I miss? If it is incorrect, then why doesn't renice improve the situation?
There are services a kernel provides to user-space (such as opening sockets). There is a well-defined interface (API) that user-space programs can interact with the kernel through. In this case, the user-space program is repeatedly opening sockets and sending file descriptors through them, then closing the sockets. These actions are performed by the kernel. It will hold the file descriptor in a buffer until the other end of the socket reads it. The particular bug is that a garbage collector should eventually free the file descriptor, but it doesn't - the fd gets leaked. The leaked fds add up and sit there consuming resources. Killing the program doesn't free the resources because they are not owned by the program.
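The fd-passing mechanism the bug abused is SCM_RIGHTS over a Unix-domain socket; a minimal sketch of it in Python (3.9+, using the socket.send_fds/recv_fds helpers):

```python
import os
import socket

parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
with open("/dev/null") as f:
    # The kernel duplicates the descriptor into the receiver's fd table.
    socket.send_fds(parent, [b"hi"], [f.fileno()])
msg, fds, flags, addr = socket.recv_fds(child, 1024, 1)
print(msg, fds)     # the received fd refers to the same open file
os.close(fds[0])    # if the receiver never closes it, the fd is leaked
parent.close()
child.close()
```

The key point for the question: the descriptor lives in kernel buffers between send and receive, so resources tied up this way are held by the kernel, not by the sending process.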
Why should a user program mess with kernel space?
1,323,741,085,000
Process IDs are strictly increasing, but if your system runs long enough and there is a lot of processes terminating and starting, you could at some point reach the limit of the underlying integral type (on my system it seems to be a signed int) where no larger pid would exists. Would this cause old unused ids (of processes that finished) to be recycled (i.e. handed out a second time)? What if somebody is waiting for that pid to terminate but didn't check in a looong time?
Process IDs are not strictly increasing on any UNIX-like operating system I know of. Your question is based on a false assumption. The only requirements on process IDs are: A process ID shall not be reused by the system until the process lifetime ends. In addition, if there exists a process group whose process group ID is equal to that process ID, the process ID shall not be reused by the system until the process group lifetime ends. A process that is not a system process shall not have a process ID of 1.
Will process ids be recycled? What if you reach the maximal id?
1,323,741,085,000
[Disclaimer: I was initially a little nervous about posting this here, so I asked on Meta if discussing homebrew / modding was acceptable. Based on the response I've gotten from several veteran members, I've gone ahead and posted this thread. Here is the link on Meta.] I'm currently trying to mod my original Xbox using xboxhdm and ndure 3.0. xboxhdm is built around a small bootable Linux distro, and it's giving me fits, so I figured that I'd ask here and see if anybody could give me a hand. (Note: Before anybody suggests a different board, xboxhdm boots from CD on a PC - the Xbox hardware is completely uninvolved in the process, so that's why I'm asking here.) The PC I'm using is relatively old - it's an old Compaq desktop with about 512mb RAM and a 2.5ghz processor (likely a P IV). I'm using it because it has 2 IDE ports on the motherboard. The age of the computer shouldn't be an issue, performance-wise - the xboxhdm + ndure hack has been around for years - it was designed to run on such hardware. Anyway - at one point in the process, I have to copy some files from the CD to the Xbox hard drive (which is a standard Seagate IDE drive, powered by a Molex). About halfway through the copy, everything just dies... I get an unable to handle kernel paging request error, and eventually a kernel panic. I couldn't find anything about this error and how it specifically relates to Xbox modding, but what information I could find suggested that I might have a bad stick of RAM. I've not been able to test this yet, but I'm going to run MEMTEST as soon as I get home. I don't have the setup with me - I'm at work, and it's at home - but if anybody's interested in lending a hand, I'll take pictures tonight and post them up. The only reason that I'm asking here is because I'm still a fairly new *nix convert, and I'm not quite sure how it all works. I'm assuming that unable to handle kernel paging request is a fairly standard error message, too... correct me if I'm wrong.
Well. How's that for fried RAM? Guess that was the culprit, after all. I'm pleased to report that, after removing the defective stick, everything is going quite smoothly.
Unable to handle kernel paging request?
1,323,741,085,000
I'm interested in the way Linux mmaps files into main memory (in my context it's for executing, but I guess the mmap process is the same for writing and reading as well) and which size it uses. So I know Linux uses paging with usually 4kB page size (where in the kernel can I find this size?). But what exactly does this mean for the memory allocated: Assume you have a binary of a size of a few thousand bytes, let's just say 5812B, and you execute it. What happens in the kernel: Does it allocate 2*4kB and then copy the 5812B into this space, wasting >3kB of main memory in the 2nd page? It would be great if anyone knew the file in the kernel source where the page size is defined. My 2nd question is also very simple I guess: I assumed 5812B as a filesize. Is it right that this size is simply taken from the inode?
There is no direct relationship between the size of the executable and the size in memory. Here's a very quick overview of what happens when a binary is executed: The kernel parses the file and breaks it into sections. Some sections are directly loaded into memory, in separate pages. Some sections aren't loaded at all (e.g. debugging symbols). If the executable is dynamically linked, the kernel calls the dynamic loader, and it loads the required shared libraries and performs link editing as required. The program starts executing its code, and usually it will request more memory to store data. For more information about executable formats, linking, and executable loading, you can read Linkers and Loaders by John R. Levine. In a 5kB executable, it's likely that everything is code or data that needs to be loaded into memory, except for the header. The executable code will be at least one page, perhaps two, and then there will be at least one page for the stack, probably one or more pages for the heap (other data), plus memory used by shared libraries. Under Linux, you can inspect the memory mappings for an executable with cat /proc/$pid/maps. The format is documented in the proc(5) man page; see also Understanding Linux /proc/id/maps.
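The page-rounding arithmetic from the question can be checked directly; Python exposes the kernel's page size as mmap.PAGESIZE:

```python
import math
import mmap

size = 5812                      # the hypothetical file size from the question
page = mmap.PAGESIZE             # typically 4096 on x86 Linux
pages = math.ceil(size / page)   # whole pages needed to hold the mapping
slack = pages * page - size      # bytes "wasted" in the last page
print(page, pages, slack)
```

With a 4096-byte page this gives 2 pages and 2380 slack bytes, matching the ">3kB wasted" estimate in the question only roughly: the slack is whatever remains of the final page.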
Memory size for kernel mmap operation
1,323,741,085,000
The memory resource controller for cgroups v1 allows for setting limits on memory usage on a particular cgroup using the memory.limit_in_bytes file. What is the Linux kernel's behavior when this limit is reached? In particular: Does the kernel OOM kill the process and if so is the oom_score of the process taken into account, or is it the process that asked for the memory that caused the limit to be hit that gets killed? Or would the request for memory just be rejected in which case the process would only die if it didn't deal with such an event?
By default, the OOM killer oversees cgroups. memory.oom_control contains a flag (0 or 1) that enables or disables the Out of Memory killer for a cgroup. If enabled (0), tasks that attempt to consume more memory than they are allowed are immediately killed by the OOM killer. The OOM killer is enabled by default in every cgroup using the memory subsystem; to disable it, write 1 to the memory.oom_control file:

~]# echo 1 > /cgroup/memory/lab1/memory.oom_control

When the OOM killer is disabled, tasks that attempt to use more memory than they are allowed are paused until additional memory is freed. References: Red Hat docs - 3.7. MEMORY
What's the Linux kernel's behaviour when processes in a cgroup hit their memory limit?
1,323,741,085,000
I have a 3TB drive which I have partitioned using GPT:

$ sudo sgdisk -p /dev/sdg
Disk /dev/sdg: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2BC92531-AFE3-407F-AC81-ACB0CDF41295
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2932 sectors (1.4 MiB)

Number  Start (sector)    End (sector)  Size      Code  Name
   1            2048           10239    4.0 MiB   8300
   2           10240      5860532216    2.7 TiB   8300

However, when I connect it via a USB adapter, it reports a logical sector size of 4096 and the kernel no longer recognizes the partition table (since it's looking for the GPT at sector 1, which is now at offset 4096 instead of 512):

$ sudo sgdisk -p /dev/sdg
Creating new GPT entries.
Disk /dev/sdg: 732566646 sectors, 2.7 TiB
Logical sector size: 4096 bytes
Disk identifier (GUID): 2DE535B3-96B0-4BE0-879C-F0E353341DF7
Partition table holds up to 128 entries
First usable sector is 6, last usable sector is 732566640
Partitions will be aligned on 256-sector boundaries
Total free space is 732566635 sectors (2.7 TiB)

Number  Start (sector)    End (sector)  Size      Code  Name

Is there any way to force Linux to recognize the GPT at offset 512? Alternatively, is there a way to create two GPT headers, one at 512 and one at 4096, or will they overlap?

EDIT: I have found a few workarounds, none of which are very good:

I can use a loopback device to partition the disk:

$ losetup /dev/loop0 /dev/sdg

Loopback devices always have a sector size of 512, so this allows me to partition the device how I want. However, the kernel does not recognize partition tables on loopback devices, so I have to create another loopback device and manually specify the partition size and offset:

$ losetup /dev/loop1 /dev/sdg -o $((10240*512)) --sizelimit $(((5860532216-10240)*512))

I can write a script to automate this, but it would be nice to be able to do it automatically.

I can run nbd-server and nbd-client; NBD devices have 512-byte sectors by default, and NBD devices are partitionable. However, the NBD documentation warns against running the nbd server and client on the same system; when testing, the in-kernel nbd client hung and I had to kill the server.

I can run istgt (user-space iSCSI target), using the same setup. This presents another SCSI device to the system with 512-byte sectors. However, when testing, this failed and caused a kernel NULL pointer dereference in the ext4 code.

I haven't investigated devmapper yet, but it might work.
I found a solution: a program called kpartx, a userspace program that uses devmapper to create partitions from loopback devices, which works great:

$ loop_device=`losetup --show -f /dev/sdg`
$ kpartx -a $loop_device
$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Mar  2 17:59 control
brw-rw---- 1 root disk 252,   0 Mar  2 18:30 loop0p1
brw-rw---- 1 root disk 252,   1 Mar  2 18:30 loop0p2
$
$ # delete device
$ kpartx -d $loop_device
$ losetup -d $loop_device

This essentially does what I was planning to do in option 1, but much more cleanly.
Recognizing GPT partition table created with different logical sector size
1,323,741,085,000
I'm still confused about the concepts of kernel and filesystem. Filesystems contain a table of inodes used to retrieve the different files and directories on different storage devices. Is this inode table part of the kernel? I mean, is the inode table updated when the kernel mounts another filesystem? Or is it part of the filesystem itself, which the kernel reads by somehow using a driver and the inode table's address?
There is some confusion here because kernel source and documentation is sloppy with how it uses the term 'inode'. The filesystem can be considered as having two parts -- the filesystem code and data in memory, and the filesystem on disk. The filesystem on disk is self contained and has all the non-volatile data and metadata for your files. For most linux filesystems, this includes the inodes on disk along with other metadata and data for the files. But when the filesystem is mounted, the filesystem code also keeps in memory a cached copy of the inodes of files being used. All file activity uses and updates this in memory copy of the inode, so the kernel code really only thinks about this in memory copy, and most kernel documentation doesn't distinguish between the on disk inode and the in memory inode. Also, the in memory inode contains additional ephemeral metadata (like where the cache pages for the file are in memory and which processes have the file open) that is not contained in the on disk copy of the inode. The in memory inode is periodically synchronized and written back to disk. The kernel does not have all the inodes in memory -- just the ones of files in use and files that recently were in use. Eventually inodes in memory get flushed and the memory is released. The inodes on disk are always there. Because file activity in unix is so tightly tied to inodes, filesystems (like vfat) that do not use inodes still have virtual inodes in kernel memory that the filesystem code constructs on the fly. These in memory virtual inodes still hold file metadata that is synchronized to the filesystem on disk as needed. In a traditional unix filesystem, the inode is the key data structure for a file. The filename is just a pointer to the inode, and an inode can have multiple filenames linked to it. In other filesystems that don't use inodes, a file can typically only have one name and the metadata is tied to the filename rather than an inode.
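The "filename is just a pointer to the inode" point is easy to observe from user space; a small sketch using a hard link in a temporary directory:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    open(a, "w").close()
    os.link(a, b)                  # a second name for the same inode
    sa, sb = os.stat(a), os.stat(b)
    print(sa.st_ino == sb.st_ino)  # both names resolve to one inode
    print(sa.st_nlink)             # the inode tracks its own link count (2)
```

st_ino here comes from the in-memory inode the kernel maintains for the file, which is synchronized with the on-disk inode as the answer describes.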
How Linux kernel sees the filesystems
1,323,741,085,000
I am trying to learn operating system concepts. Here are two simple Python programs:

while True:
    pass

and this one:

from time import sleep
while True:
    sleep(0.00000001)

Question: Why, when running the first program, is CPU usage 100%, but when running the second one it is about 1% to 2%? I know it may sound stupid, but why can't we implement something like sleep in user space mode without using the sleep system call? NOTE: I have tried to understand the sleep system call in the Linux kernel but TBH I didn't understand what happens there. I also searched about the NOP assembly instruction and it turns out that it is not really doing nothing but rather doing something useless (like xchg eax, eax), and maybe this is the cause of the 100% CPU usage, but I am not sure. What exactly is the assembly code for the sleep system call that we can't do in user space mode? Is it something like HLT? I also tried to use the HLT instruction in code like this:

section .text
global _start
_start:
    hlt
halter:
    jmp _start
section .data
msg db 'Hello world',0xa
len equ $ - msg

but after running this code I see a kernel general protection fault like this:

[15499.991751] traps: hello[22512] general protection fault ip:401000 sp:7ffda4e83980 error:0 in hello[401000+1000]

I don't know, maybe this is related to protection rings or my code is wrong? The other question here is whether the OS uses HLT or other privileged assembly instructions underneath the sleep system call.
Why when running first code CPU usage is 100% but when running the second one it is about 1% to 2%?

Because the first is a "busy loop": you are always executing code. The second tells the OS that this particular process wants to pause (sleep), so the OS deschedules the process, and if nothing else is using the CPU, the CPU becomes idle.

I also search about NOP assembly code and turns out that it is not really doing nothing

Well, NOP = no operation: it is actively executing code that has no effect. Which you can use to pad code, but not to put the CPU into a low-power state.

What exactly assembly code for sleep system call that we can't do it in user space mode?

Modern OSes on x86 CPUs use mwait. Other CPU architectures use other instructions.

but after running this code I see kernel general protection fault like this

That's because the OS is supposed to do this in supervisor mode. As I wrote above, the OS needs to be able to keep scheduling processes, so a process itself isn't allowed to put the CPU into idle mode.

The other question here is that OS is using HLT or other protected assembly commands under beneath sleep system call

Yes, it does. Though it's not executed during the sleep call, but inside the scheduler loop, when the scheduler detects that there are no processes that want to run.

One question for the first part: if I use a very small slot of time, e.g. sleep(0.0000000000000001), does the scheduler still go to the next process?

For the actual OS syscalls, see man 3 sleep (resolution in seconds), man usleep (resolution in microseconds), and man nanosleep (resolution in nanoseconds). No matter what floating point number you use in your Python code, you won't get a better resolution than the syscall used by Python (whichever variant it is). The manpages say "suspends execution of the calling thread for (at least) usec microseconds." etc., so I'd assume it gets descheduled even if the delay is zero (and then immediately rescheduled), but I didn't test that, nor did I read the kernel code.
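The busy-loop-versus-sleep difference shows up directly in the process's CPU time, which the OS accounts separately from wall-clock time. A minimal Python sketch (my own illustration, not from the original post):

```python
import time

def busy(seconds):
    # Busy loop: the process executes instructions for the whole interval.
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def idle(seconds):
    # sleep() asks the kernel to deschedule the process until a timer fires.
    time.sleep(seconds)

results = {}
for fn in (busy, idle):
    cpu0, wall0 = time.process_time(), time.monotonic()
    fn(0.2)
    cpu = time.process_time() - cpu0
    wall = time.monotonic() - wall0
    results[fn.__name__] = (wall, cpu)
    print(f"{fn.__name__}: wall={wall:.2f}s cpu={cpu:.3f}s")
```

For busy() the CPU time is roughly equal to the wall time (the 100% case); for idle() the CPU time is close to zero even though the same wall time elapsed, because the process was descheduled.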
What is difference between sleep and NOP in depth?
1,323,741,085,000
Not so much asking what books (although if you know of any guides/tutorials, that'd be helpful), but what is the best way to start doing kernel programming, and is there a particular distribution that would be best to learn on? I'm mostly interested in the device drivers portion, but I want to learn how the kernel is set up as well (modules and such). I have around 4-5 years of experience with C/C++, but it's mostly knowledge from college (so it's not like 4-5 years of work experience, if you know what I mean).
Firstly: for the baby stages, writing various variations on "hello world" modules and virtual hardware drivers is the best way to start (real hardware introduces real-world problems best faced when you have more of an idea what you are doing). "Linux Device Drivers" is an excellent book and well worth starting with: http://lwn.net/Kernel/LDD3/ LDD (used to, at least) have exercises where you wrote virtual drivers, e.g. RAM disks and virtual network devices.

Secondly: subscribe to https://lkml.org/ or to the mailing list of a sub-system you will be hacking in. Lurk for a bit, scanning over threads and reading code review (replies to patches) to see what kind of things people stumble on or pick up on.

See if you can obtain (cheap) hardware for a device that is not yet supported, or not yet supported well. Good candidates are cheap-ish USB NICs or similar low-cost USB peripherals. Something with an out-of-date or out-of-tree driver, perhaps vendor-written, perhaps for 2.4.x, is ideal, since you can start with something that works (sort of) and gradually adapt/rewrite it, testing as you go. My first driver attempt was for a Davicom DM9601 USB NIC. There was a 2.4-series vendor-written kernel driver that I slowly adapted to 2.6. (Note: the driver in mainline is not my driver; in the end someone else wrote one from scratch.)

Another good way in is to look at the Kernel Newbies site, specifically the "kernel janitors" todo: http://kernelnewbies.org/KernelJanitors/Todo This is a list of tasks that a beginner should be able to tackle.
Best way to get into Kernel programming?
1,323,741,085,000
If a program is not allowed to handle or ignore SIGKILL and SIGSTOP, and must immediately terminate, why does the kernel even send the signal to the program? Can't the kernel simply evict the program from the CPU and memory? I assume the kernel would have the ability to do this directly.
This answer is partly correct; there is more to do to terminate a process than to free the memory. However, a SIGKILL is not a tap on the shoulder and a request to do something: it is one of the few signals that a process can't ignore or handle. That means that a SIGKILL is always handled by the kernel's default handler, and this default action, as with most signals, is to terminate the process receiving the signal.

The user space part of the program won't even see the signal, so there is no request to do something and no cooperation required, and therefore a program can't misbehave upon receiving SIGKILL, whether by malicious intent or by some programming error. Instead, the kernel side of the process will handle the signal and terminate the process. So in a way the kernel is directly terminating the process, by telling another part of the kernel that the process shall be terminated.

From a programming point of view, when the kernel wants to kill a program (which mostly happens because of missing resources, especially not enough free RAM), there are two possibilities: duplicate the code that runs when a process has to be terminated, or just call a single function to deliver the signal and know that everything necessary to terminate the process will be handled. The second approach is not only less initial work, it means much less work in the long run, because the duplicated code doesn't have to be maintained.
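Both halves of this are observable from user space: the kernel refuses to change SIGKILL's disposition, and a process dies to SIGKILL without ever running any of its own code. A minimal Python sketch (my own illustration):

```python
import errno
import signal
import subprocess
import sys

# 1. The kernel refuses to install a handler for SIGKILL: this raises EINVAL.
try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
except OSError as e:
    print("cannot handle SIGKILL:", errno.errorcode[e.errno])  # EINVAL

# 2. Even a process that never cooperates is terminated by SIGKILL;
#    wait() reports death-by-signal as a negative return code (-9 here).
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
child.send_signal(signal.SIGKILL)
print("exit status:", child.wait())  # -9
```

The child never gets a chance to run cleanup code; the kernel's default action terminates it directly, which is exactly the behaviour described above.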
Why does the kernel even bother to send SIGKILL? [duplicate]
1,323,741,085,000
I've heard that some lines of code distributed with the Linux kernel aren't open. Maybe some drivers or something like that. I'd like to know how much of that is true. Are there lines of code distributed with the kernel (as when you download it from kernel.org) that aren't open at all? And how much of the total is that (if there's a way to know it, in number of lines or percentage)? And where can I find more information about this? Maybe some articles to read... Thank you very much!
The Linux kernel itself is all free software, distributed under the GNU General Public License. Third parties may distribute closed-source drivers in the form of loadable kernel modules. There's some debate as to whether the GPL allows them; Linus Torvalds has decreed that proprietary modules are allowed.

Many devices in today's computers contain a processor and a small amount of volatile memory, and need some code to be loaded into that volatile memory in order to be fully operational. This code is called firmware. Note that the difference between a driver and firmware is that the firmware runs on a different processor. Firmware makers often release only a binary blob with no source code. Many Linux distributions package non-free firmware separately (or in extreme cases not at all), e.g. Debian.
Proprietary or Closed Parts of the Kernel
1,323,741,085,000
Since Intel, AMD and ARM are affected by the Spectre and Meltdown CPU kernel memory leak bugs/flaws, can we say that the Power architecture is safe from these?
No, you could not say it's safe. https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/

Complete mitigation of this vulnerability for Power Systems clients involves installing patches to both system firmware and operating systems. The firmware patch provides partial remediation to these vulnerabilities and is a pre-requisite for the OS patch to be effective. [...] Firmware patches for POWER7+, POWER8, and POWER9 platforms are now available via FixCentral. POWER7 patches will be available beginning February 7. [...] AIX patches will be available beginning January 26 and will continue to be rolled out through February 12.

Update: patches are available: http://aix.software.ibm.com/aix/efixes/security/spectre_meltdown_advisory.asc
Is AIX/Power safe from Spectre / Meltdown?
1,323,741,085,000
I am aware that this is a simplified/generalized explanation, but the top(1) utility divides memory in FreeBSD into six pools: Active, Inactive, Wired, Cache, Buffers and Free. Example from top(1) output:

Mem: 130M Active, 42M Inact, 51M Wired, 14M Cache, 34M Buf, 648K Free
Swap: 512M Total, 512M Free

Active is used by running processes and Wired is used mainly by the kernel. Inactive is memory from closed processes which is still cached in case it needs to be reused, Cache is cached data, Buffers is disk buffers (I guess it is similar to "cached" in Linux free(1) output(?)) and Free is completely unused memory. Am I correct that the FreeBSD kernel automatically allocates space from the Inactive, Cache and Buffers pools to Active or Wired if needed?
To make it short: active and wired is used memory that shouldn't or cannot be swapped out to free memory, while inactive memory can be swapped out but is still owned (not freed) by a process or the kernel; so it is not heavily used memory, but still used. New is laundry, a list of dirty memory pages which might need to be written to the swap device. Whether or not a dirty page actually needed to be swapped out, it is added back into the inactive queue.

Wired memory is not supposed to be swapped, for safety (in the case of the kernel) or for userland process optimisation (like ZFS). Wired memory is also used for filesystem caches, which the kernel might free; at least for ZFS this can be seen as mostly free memory. Free memory is definitely free. Cached (now deprecated, I guess) is ready to be freed, since it is already swapped out and only kept around for possible reallocation. Buffer is used as a cache by most filesystems (UFS, FAT, ...) and is the amount of memory used by the filesystems; it can be active, inactive or wired. ARC (Adaptive Replacement Cache) is the cache used by ZFS, and it is memory that can be freed when needed.
From the FreeBSD Wiki on Memory:

Memory Classes

Active:
- Contains pages "actively" (recently) referenced by userland
- Contains a mix of clean and dirty pages
- Pages are regularly scanned by the page daemon (each page is visited once every vm.pageout_update_period seconds)
- Scans check to see if the page has been referenced since the last scan; if enough scans complete without seeing a reference, the page is moved to the inactive queue
- Implements pseudo-LRU

Inactive:
- Contains pages aged out of the active queue
- Contains pages evicted from the buffer cache
- Contains a mix of clean and dirty pages
- Pages are scanned by the page daemon (starting from the head of the queue) when there is a memory shortage:
  - Pages which have been referenced are moved back to the active queue or the tail of the inactive queue
  - Pages which are dirty are moved to the tail of the laundry queue
  - Unreferenced, clean pages may be freed and reused immediately
- Implements second-chance LRU

Laundry:
- Queue for managing dirty inactive pages, which must be cleaned ("laundered") before they can be reused
- Managed by a separate thread, the laundry thread, instead of the page daemon
- The laundry thread launders a small number of pages to balance the inactive and laundry queues
- Frequency of laundering depends on:
  - How many clean pages the page daemon is freeing; more frees contributes to a higher frequency of laundering
  - The size of the laundry queue relative to the inactive queue; if the laundry queue is growing, we will launder more frequently
- Pages are scanned by the laundry thread (starting from the head of the queue):
  - Pages which have been referenced are moved back to the active queue or the tail of the laundry queue
  - Dirty pages are laundered and then moved close to the head of the inactive queue

Free:
- Memory available for use by the rest of the system

Wired:
- Non-pageable memory: cannot be freed until explicitly released by the owner
- Userland memory can be wired by mlock(2) (subject to system and per-user limits)
- Kernel memory allocators return wired memory
- Contents of the ARC and the buffer cache are wired
- Some memory is permanently wired and is never freed (e.g., the kernel file itself)

From The Design and Implementation of the FreeBSD Operating System, chapter 6.12, Page Replacement (no longer fully accurate, but kept here for reference regarding the old question):

The kernel divides the main memory into five lists:

Wired: Wired pages are locked in memory and cannot be paged out. Typically these pages are being used by the kernel or the physical-memory pager, or they have been locked down with mlock. In addition, all the pages being used to hold the thread stacks of loaded (i.e. not swapped-out) processes are also wired.

Active: Active pages are being used by one or more regions of virtual memory. Although the kernel can page them out, doing so is likely to cause an active process to fault them back again.

Inactive: Inactive pages may be dirty and have contents that are still known, but they are not usually part of any active region. If the contents of the page are dirty, the contents must be written to backing store before the page can be reused. Once the page has been cleaned, it is moved to the cache list. If the system becomes short of memory, the pageout daemon may try to move active pages to the inactive list in the hope of finding pages that are not really in use. The selection criteria used by the pageout daemon to select pages to move from the active list to the inactive list are described later in this section. When the free-memory and cache lists drop too low, the pageout daemon traverses the inactive list to create more cache and free pages.

Cache: Cache pages have contents that are still known, but they are not part of any mapping. If they are faulted into an active region, they will be moved from the cache list to the active list. If they are used for a read or a write, they will be moved from the cache list first to the buffer cache and eventually released to the inactive list. An mlock system call can reclaim a page from the cache list and wire it. Pages on the cache list are similar to inactive pages except that they are not dirty, either because they are unmodified since they were paged in or because they have been written to their backing store. They can be claimed for a new use when a page is needed.

Free: Free pages have no useful contents and will be used to fulfill new page-fault requests.

To answer your original question:

Am I correct that FreeBSD kernel automatically allocates space from Inactive, Cache and Buffers pools to Active or Wired if needed?

Active pages can become inactive if they were not used for some time. If the kernel swaps out an inactive page, this page is moved to the cache list. Pages in the cache list are not part of the virtual mapping of any process but can easily be reclaimed, as active or wired, or used for I/O as a buffer cache. Wired memory cannot be swapped out of main memory; if it was wired by a process, it needs to be unwired with the munlock call to become active memory again. Active, inactive and wired memory can be freed by the process or kernel and added to the free list.
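The "second-chance LRU" used for the inactive queue can be illustrated with a toy clock-style replacement sketch. This is a deliberate simplification of my own, not FreeBSD's actual implementation:

```python
from collections import deque

def second_chance(pages, frames):
    """Toy second-chance page replacement: a page referenced since its
    last scan gets one more trip through the queue; an unreferenced
    page is evicted. Returns the number of page faults."""
    queue = deque()    # FIFO of resident pages, oldest at the left
    resident = {}      # page -> referenced bit
    faults = 0
    for p in pages:
        if p in resident:
            resident[p] = True       # access: set the referenced bit
            continue
        faults += 1
        while len(resident) >= frames:
            victim = queue.popleft()
            if resident[victim]:
                resident[victim] = False  # second chance: clear bit, requeue
                queue.append(victim)
            else:
                del resident[victim]      # evict the unreferenced page
        resident[p] = False
        queue.append(p)
    return faults

print(second_chance([1, 2, 3, 1, 4, 5], frames=3))  # 5
```

With the reference string above, page 1 is re-referenced after it is loaded, so when memory pressure hits it is given a second chance and page 2 is evicted instead; pages 4 and 5 each fault, for 5 faults total.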
How does FreeBSD allocate memory?
1,323,741,085,000
I am writing code that relies on the output of /proc/meminfo, /proc/cpuinfo etc. Are the file contents always in English? For example, will MemTotal in /proc/meminfo always be MemTotal in all locales?
Yes, usually that is the case, as those messages are provided by the kernel itself, and including a hundred translations into the kernel image itself would serve no purpose other than increasing the kernel size dramatically. For many things there are front-ends, user space programs which read the kernel info and present it in a translated fashion.
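Since the keys are stable and untranslated, parsing them by name is safe. A minimal sketch (my own, Linux only; the read_meminfo helper is hypothetical, not a standard API):

```python
# Parse /proc/meminfo into a dict keyed by the (untranslated) field names.
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if fields:
                info[key] = int(fields[0])  # value; most rows append a "kB" unit
    return info

mem = read_meminfo()
print("MemTotal:", mem["MemTotal"], "kB")
```

Note that while the keys are stable within a kernel series, new kernels do occasionally add or drop fields, so code should look keys up by name rather than by line position.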
Are the outputs of /proc/meminfo, /proc/cpuinfo etc always in English?
1,323,741,085,000
First off, the details.

BEFORE: kernel: 3.2.0-2-amd64, nvidia driver: 295.59
AFTER: kernel: 3.2.0-3-amd64, nvidia driver: 302.17-3

My Debian wheezy is kept recent at all times. Actually, doing a daily apt-get upgrade got me into this trouble in the first place. Evidently, after an apt-get upgrade, something "broke" on my Debian -- something related to the build ecosystem and/or DKMS itself. The NVIDIA driver cannot get built by ANY method recommended in the official wikis, including the official NVIDIA binary (log snippet from that in one of the updates below). Here's the output of dpkg-reconfigure nvidia-kernel-dkms:

# dpkg-reconfigure nvidia-kernel-dkms
------------------------------
Deleting module version: 302.17
completely from the DKMS tree.
------------------------------
Done.
Loading new nvidia-302.17 DKMS files...
Building only for 3.2.0-3-amd64
Building initial module for 3.2.0-3-amd64
Error! Build of nvidia.ko failed for: 3.2.0-3-amd64 (x86_64)
Consult the make.log in the build directory /var/lib/dkms/nvidia/302.17/build/ for more information.

A relevant snippet from /var/lib/dkms/nvidia/302.17/build/make.log follows. The problem is not in the compilation, I can guarantee that.

LD [M] /var/lib/dkms/nvidia/302.17/build/nvidia.o
Building modules, stage 2.
MODPOST 0 modules
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-3-amd64'
make: Leaving directory `/var/lib/dkms/nvidia/302.17/build'

And that's it. No explanation of any kind in any other files in the same directory (at least as far as I checked).

Before I ask my questions: I am using the nouveau driver now (it's not like I have any choice anyway), but it doesn't work too well for me. I have 3 desktops, constantly playing movies on 1 of them while being a very busy developer on the other 2. The nouveau driver falls a little short there (the movies on the second screen get horizontal stripes all the time, the XFCE consoles lag a bit when scrolling, etc.)

Questions:

Should I change my kernel version?
Tried 3.2.0-2-amd64 and 3.2.0-3-amd64, to no avail. Trying 3.2.0-3-rt-amd64 makes my machine freeze after few minutes of operation, thus I don't dare to install it again. Should I change a version of something in my build environment? (As pointed in the updates, it's not just NVIDIA problem, as it turns out). Should I assume that my linker is at fault (I am not using gold, I am using ld from the binutils package) and if so, what could I do do make the DKMS method finally work? Since the problem does seem to manifest itself on the linkage phase (and MODPOST shows 0 modules). On a personal note, this disturbs me on a lot deeper level I care to usually admit. I had a big respect to Debian, which at the moment is shattered. C'mon, a simple apt-get upgrade breaks all open-source kernel drivers compilations / linkages? Extremely disappointing. UPDATE #1: I did in fact try to install the official 304.22 NVIDIA drivers, here's the log file. Looks like the linking does indeed fail, does it? Also, if I try to also enable DKMS integration, I get a message of the sorts that the script cannot determine the current kernel version (text in the 3rd update). nvidia-installer log file '/var/log/nvidia-installer.log' creation time: Sat Jul 21 22:59:30 2012 installer version: 304.22 PATH: /usr/local/rvm/gems/ruby-1.9.3-p194/bin:/usr/local/rvm/gems/ruby-1.9.3-p194@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin nvidia-installer command line: ./nvidia-installer Using: nvidia-installer ncurses user interface -> License accepted. -> Installing NVIDIA driver version 304.22. -> There appears to already be a driver installed on your system (version: 304.22). As part of installing this driver (version: 304.22), the existing driver will be uninstalled. Are you sure you want to continue? ('no' will abort installation) (Answer: Yes) -> Would you like to register the kernel module sources with DKMS? 
This will allow DKMS to automatically build a new module, if you install a different kernel later. (Answer: No) -> Performing CC sanity check with CC="gcc-4.6". -> Performing CC version check with CC="gcc-4.6". -> Kernel source path: '/lib/modules/3.2.0-3-amd64/source' -> Kernel output path: '/lib/modules/3.2.0-3-amd64/build' -> Performing rivafb check. -> Performing nvidiafb check. -> Performing Xen check. -> Cleaning kernel module build directory. executing: 'cd ./kernel; make clean'... -> Building kernel module: executing: 'cd ./kernel; make module SYSSRC=/lib/modules/3.2.0-3-amd64/source SYSOUT=/lib/modules/3.2.0-3-amd64/build'... NVIDIA: calling KBUILD... make -C /lib/modules/3.2.0-3-amd64/build \ KBUILD_SRC=/usr/src/linux-headers-3.2.0-3-common \ KBUILD_EXTMOD="/tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel" -f /usr/src/linux-headers-3.2.0-3-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/.tmp_versions ; rm -f /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/.tmp_versions/* make -f /usr/src/linux-headers-3.2.0-3-common/scripts/Makefile.build obj=/tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel gcc-4.6 -Wp,-MD,/tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/.nv.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.2.0-3-common/arch/x86/include -Iarch/x86/include/generated -Iinclude -I/usr/src/linux-headers-3.2.0-3-common/include -include /usr/src/linux-headers-3.2.0-3-common/include/linux/kconfig.h -I/tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common 
-Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -Os -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -I/tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel -Wall -MD -Wsign-compare -Wno-cast-qual -Wno-error -D__KERNEL__ -DMODULE -DNVRM -DNV_VERSION_STRING=\"304.22\" -Wno-unused-function -Wuninitialized -mno-red-zone -mcmodel=kernel -UDEBUG -U_DEBUG -DNDEBUG -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(nv)" -D"KBUILD_MODNAME=KBUILD_STR(nvidia)" -c -o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/.tmp_nv.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv.c In file included from /usr/src/linux-headers-3.2.0-3-common/include/linux/kernel.h:17:0, from /usr/src/linux-headers-3.2.0-3-common/include/linux/sched.h:55, from /usr/src/linux-headers-3.2.0-3-common/include/linux/utsname.h:35, from /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-linux.h:38, from /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv.c:13: /usr/src/linux-headers-3.2.0-3-common/include/linux/bitops.h: In function ‘hweight_long’: /usr/src/linux-headers-3.2.0-3-common/include/linux/bitops.h:49:41: warning: signed and unsigned type in conditional expression [-Wsign-compare] In file included from /usr/src/linux-headers-3.2.0-3-common/arch/x86/include/asm/uaccess.h:575:0, from /usr/src/linux-headers-3.2.0-3-common/include/linux/poll.h:14, from /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-linux.h:97, from /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv.c:13:
/usr/src/linux-headers-3.2.0-3-common/arch/x86/include/asm/uaccess_64.h: In function ‘copy_from_user’: /usr/src/linux-headers-3.2.0-3-common/arch/x86/include/asm/uaccess_64.h:53:6: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] ...snipped lots of compile output with the same warning... ld -m elf_x86_64 -r -o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nvidia.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-kernel.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-acpi.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-chrdev.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-cray.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-gvi.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-i2c.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-mempool.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-mlock.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-mmap.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-p2p.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-pat.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-procfs.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-usermap.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-vm.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nv-vtophys.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-agp.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-interface.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-mtrr.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-registry.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-smp.o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/os-usermap.o (cat /dev/null; echo kernel//tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/nvidia.ko;) > /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/modules.order make -f /usr/src/linux-headers-3.2.0-3-common/scripts/Makefile.modpost
scripts/mod/modpost -m -i /usr/src/linux-headers-3.2.0-3-amd64/Module.symvers -I /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/Module.symvers -o /tmp/selfgz10141/NVIDIA-Linux-x86_64-304.22/kernel/Module.symvers -S -w -s NVIDIA: left KBUILD. nvidia.ko failed to build! make[1]: *** [module] Error 1 make: *** [module] Error 2 -> Error. ERROR: Unable to build the NVIDIA kernel module. ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com. UPDATE #2: As per the suggestion of StarNamer, I did reinstall linux-headers-3.2.0-3-amd64. After that was done, DKMS kicked in and tried again to compile the NVIDIA driver. Here's the contents of the file /var/lib/dkms/nvidia/304.22/build/make.log: DKMS make.log for nvidia-304.22 for kernel 3.2.0-3-amd64 (x86_64) Sun Jul 22 14:50:58 EEST 2012 If you are using a Linux 2.4 kernel, please make sure you either have configured kernel sources matching your kernel or the correct set of kernel headers installed on your system. If you are using a Linux 2.6 kernel, please make sure you have configured kernel sources matching your kernel installed on your system. If you specified a separate output directory using either the "KBUILD_OUTPUT" or the "O" KBUILD parameter, make sure to specify this directory with the SYSOUT environment variable or with the equivalent nvidia-installer command line option. Depending on where and how the kernel sources (or the kernel headers) were installed, you may need to specify their location with the SYSSRC environment variable or the equivalent nvidia-installer command line option. *** Unable to determine the target kernel version. *** make: *** [select_makefile] Error 1 UPDATE #3: After days and days of googling, I started to wonder if that's NVIDIA's fault at all. Turns out, it's not. 
I tried to install Virtual Box 4.1 (from the testing repo), and I stumbled upon this again: # cat /var/lib/dkms/virtualbox/4.1.18/build/make.log DKMS make.log for virtualbox-4.1.18 for kernel 3.2.0-3-amd64 (x86_64) Tue Jul 24 17:58:57 EEST 2012 make: Entering directory `/usr/src/linux-headers-3.2.0-3-amd64' LD /var/lib/dkms/virtualbox/4.1.18/build/built-in.o LD /var/lib/dkms/virtualbox/4.1.18/build/vboxdrv/built-in.o CC [M] /var/lib/dkms/virtualbox/4.1.18/build/vboxdrv/linux/SUPDrv-linux.o ... snipped ... CC [M] /var/lib/dkms/virtualbox/4.1.18/build/vboxpci/SUPR0IdcClientComponent.o CC [M] /var/lib/dkms/virtualbox/4.1.18/build/vboxpci/linux/SUPR0IdcClient-linux.o LD [M] /var/lib/dkms/virtualbox/4.1.18/build/vboxpci/vboxpci.o Building modules, stage 2. MODPOST 0 modules make: Leaving directory `/usr/src/linux-headers-3.2.0-3-amd64' And of course, no more details (as already have been said, it does seem like a linker problem, but I cannot be sure yet). So this must be more of a Debian / DKMS problem or misconfiguration of some kind. However, I swear I didn't touch anything. I was simply doing daily apt-get upgrade-s. Then something went not so well, obviously. UPDATE #4: I did try create a small module as described here: https://stackoverflow.com/questions/4715259/linux-modpost-does-not-build-anything. Indeed I am still seeing MODPOST 0 modules. 
Here's the output when I put V=1 in the Makefile: # make make -C /lib/modules/3.2.0-3-amd64/build M=/home/dimi/code/hello V=1 modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-3-amd64' make -C /usr/src/linux-headers-3.2.0-3-amd64 \ KBUILD_SRC=/usr/src/linux-headers-3.2.0-3-common \ KBUILD_EXTMOD="/home/dimi/code/hello" -f /usr/src/linux-headers-3.2.0-3-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /home/dimi/code/hello/.tmp_versions ; rm -f /home/dimi/code/hello/.tmp_versions/* make -f /usr/src/linux-headers-3.2.0-3-common/scripts/Makefile.build obj=/home/dimi/code/hello gcc-4.6 -Wp,-MD,/home/dimi/code/hello/.hello.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.2.0-3-common/arch/x86/include -Iarch/x86/include/generated -Iinclude -I/usr/src/linux-headers-3.2.0-3-common/include -include /usr/src/linux-headers-3.2.0-3-common/include/linux/kconfig.h -I/home/dimi/code/hello -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -Os -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(hello)" 
-D"KBUILD_MODNAME=KBUILD_STR(hello)" -c -o /home/dimi/code/hello/.tmp_hello.o /home/dimi/code/hello/hello.c (cat /dev/null; echo kernel//home/dimi/code/hello/hello.ko;) > /home/dimi/code/hello/modules.order make -f /usr/src/linux-headers-3.2.0-3-common/scripts/Makefile.modpost scripts/mod/modpost -m -i /usr/src/linux-headers-3.2.0-3-amd64/Module.symvers -I /home/dimi/code/hello/Module.symvers -o /home/dimi/code/hello/Module.symvers -S -w -c -s make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-3-amd64' And here is what I see when I remove V=1: # make make -C /lib/modules/3.2.0-3-amd64/build M=/home/dimi/code/hello modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-3-amd64' CC [M] /home/dimi/code/hello/hello.o Building modules, stage 2. MODPOST 0 modules make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-3-amd64'
SOLVED! Simple as that: /root/.bashrc had this inside:

export GREP_OPTIONS='--color=always'

Changed it to:

export GREP_OPTIONS='--color=never'

...and restarted the root shell (of course; do not omit this step). Everything started working again. Both NVIDIA and VirtualBox kernel modules built on the first try. I am so happy! :-)

Then again, I am slightly disappointed by the kernel build tools. They should know better and pass --color=never everywhere they use grep; or rather, store the old value of GREP_OPTIONS, override it for the lifetime of the build process, then restore it.

I am hopeful that my epic one-week battle with this problem will prove valuable both to the community and to the kernel build tools' developers. A very warm thanks to the people who were with me and tried to help. (All credits go here: http://forums.gentoo.org/viewtopic-p-4156366.html#4156366)
Cannot create "Hello World" module (and NVIDIA, and VirtualBox)
1,323,741,085,000
To remount a mounted filesystem read-only, I can use the following command:

mount -o remount,ro /foo

This is used, for example, in the shutdown sequence, where the root filesystem (/) is remounted read-only right before halt/reboot is called.

What does remounting read-only actually do? Does it change some "flag" in the kernel, so that writes are denied? How difficult would it be to write my own program which does nothing else but remount a given filesystem read-only?
Mounting or remounting a filesystem is done using the mount(2) syscall. When remounting, this takes the target location (the mountpoint), the flags to be used in the mount operation, and any extra data used for the specific filesystem involved. When remounting read-only, the flags used are MS_RDONLY and MS_REMOUNT; you're also supposed to provide any other flags which were used when the filesystem was first mounted.

Remounting a filesystem read-only does indeed set a flag in the kernel's filesystem data structures, after performing some clean-up (basically finishing any outstanding writes). You can see how it's handled in the ext4 source code: if an ext4 filesystem is mounted read-write and then remounted read-only, the filesystem is synced, quotas are suspended, and s_flags in the superblock structure is updated to indicate the filesystem is read-only. This is then used throughout the kernel to deny writes; see for example sb_permission, which prevents write access on a read-only filesystem.

If you want to do this yourself, you can try just calling mount() with the appropriate options as per the manpage linked above. For a complete solution I believe you'd need to determine the current mount flags and update them, but you could hard-code a simple program to match how your filesystems are currently mounted...
what does mount -o remount,ro / actually do (under the hood)
1,323,741,085,000
I'm using Debian 'Jessie'. Sometimes my computer freezes, and then I can't use Ctrl+Alt+Del to reboot, Ctrl+Alt+Backspace to kill the X Window System, or Ctrl+Alt+F1 to open a new shell.

I've read on several sites that during a freeze you can still use the basic kernel commands invoked with Alt+SysRq (holding Alt+SysRq and pressing the R E I S U B keys one at a time). But on my computer that trick isn't working when it's frozen. Has the kernel frozen as well?

I heard that one of the best things about Linux was that you never had to turn off the computer by holding the power button, but it's not proving true for me :/
Magic keys tend to be disabled in Debian these days, so you can't just hard-reboot your machine or kill all your X processes by pressing a few keys accidentally.

The X Ctrl+Alt+Backspace key sequence is controlled by the "DontZap" option in /etc/X11/xorg.conf -- man xorg.conf for more details. I think you want this, though:

Section "ServerFlags"
    Option "DontZap" "false"
EndSection

The SysRq keys are controlled by kernel options at compile time, at boot time, and also by sysctl options. To enable them on Debian, put kernel.sysrq=1 into /etc/sysctl.conf, and either reload that file (sysctl -p /etc/sysctl.conf; man sysctl for more), or just edit the file and reboot.
Why isn't "REISUB" working on Debian?
1,323,741,085,000
I want to download the Linux kernel source to learn how to modify it and how to compile it. I am using Debian, and I'm interested in the Debian-modified Linux kernel rather than the vanilla kernel from kernel.org. Doing some research I found out there are mainly two ways of achieving this:

Install the source package (i.e. apt-get install linux-source-3.19)
Download the source from the binary package (i.e. apt-get source linux-image-3.19.0-trunk-amd64)

The first option will download the source tarball to /usr/src/linux-source-3.19.tar.xz, and the latter will download a source tarball (linux_3.19.1.orig.tar.xz), a patch (linux_3.19.1-1~exp1.debian.tar.xz) and a description file (linux_3.19.1-1~exp1.dsc). The latter will also unpack and extract everything into a 'linux-3.19.1' directory.

At first I thought both versions would result in the same code, as they have the same kernel version and patch level (based on the output of apt-cache). However, diff reported differences when comparing the unpacked source from apt-get install with the unpacked source from apt-get source (for both patched and non-patched code).
When comparing apt-get install with apt-get source:

$ diff -rq apt-get-install/ apt-get-source/ | wc -l
253
$ diff -rq apt-get-install/ apt-get-source/ | grep "Only in"
Only in apt-get-install/arch/arm/boot/dts: sun7i-a20-bananapro.dts
Only in apt-get-install/arch/s390/include/asm: cmb.h.1
Only in apt-get-install/drivers/dma-buf: reservation.c.1
Only in apt-get-install/drivers/dma-buf: seqno-fence.c.1
Only in apt-get-install/drivers/gpu/drm/i915: i915_irq.c.1
Only in apt-get-install/drivers/scsi: constants.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_acm.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_ecm.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_obex.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_serial.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_subset.c.1
Only in apt-get-install/include/linux: reservation.h.1
Only in apt-get-install/kernel: sys.c.1
Only in apt-get-install/lib: crc32.c.1
Only in apt-get-install/sound/soc: soc-cache.c.1

And when comparing apt-get install with apt-get source (+ patch):

$ diff -rq apt-get-install/ apt-get-source+patch/
Only in apt-get-install/arch/s390/include/asm: cmb.h.1
Only in apt-get-source+patch/: debian
Only in apt-get-install/drivers/dma-buf: reservation.c.1
Only in apt-get-install/drivers/dma-buf: seqno-fence.c.1
Only in apt-get-install/drivers/gpu/drm/i915: i915_irq.c.1
Only in apt-get-install/drivers/scsi: constants.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_acm.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_ecm.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_obex.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_serial.c.1
Only in apt-get-install/drivers/usb/gadget/function: f_subset.c.1
Only in apt-get-install/include/linux: reservation.h.1
Only in apt-get-install/kernel: sys.c.1
Only in apt-get-install/lib: crc32.c.1
Only in apt-get-source+patch/: .pc
Only in apt-get-install/sound/soc: soc-cache.c.1

I've found some links where both methods are mentioned, but I couldn't get anything clear from those:

https://kernel-handbook.alioth.debian.org/ch-common-tasks.html#s-common-official
https://help.ubuntu.com/community/Kernel/Compile (Option B vs Alternate option B)

I would really appreciate it if someone could tell me the differences and advise me on which is the preferred option. Thank you.
In Debian terminology, when you run apt-get source linux-image-3.19.0-trunk-amd64 (or the equivalent apt-get source linux), you're actually downloading and extracting the source package. This contains the upstream code (the kernel source code downloaded from kernel.org) and all the Debian packaging, including patches added to the kernel by the Debian kernel team.

When you run apt-get install linux-source-3.19 you're actually installing a binary package which happens to contain the source code of the Linux kernel with the Debian patches applied, and none of the Debian packaging infrastructure. The source package's name is just linux; apt-get source will convert any binary package name it is given into the corresponding source package name.

By the way, since experimental packages aren't upgraded automatically, you should make sure you've updated your copy of linux-source-3.19 and re-extracted it before comparing; the .dts file you're seeing in your diff was introduced in the latest update. The packages currently in the archive all contain this file.

The remaining differences are pretty much normal: as has been indicated in the comments, debian contains all the packaging and is only in the source package; .pc is used by quilt to keep track of the original files modified by patches, and is also only in the source package; and the .1 files are generated manpages, probably a side-effect of the kernel build, and therefore only appear in the binary package (but they shouldn't really be there).

The reference package is the source package, as obtained by apt-get source. This builds all the kernel binary packages, including linux-source-3.19, which you install with apt-get install. The latter is provided as a convenience for other packages which may need the kernel source; it's guaranteed to be in the same place all the time, unlike the source package, which is just downloaded into the current directory at the time apt-get source is run.
As far as documentation goes, I'd follow the Debian documentation in the kernel handbook (section 4.5). Rebuilding the full Debian kernel as documented in section 4.2, which you linked to, takes a very long time because it builds a number of variants.
Get kernel source: apt-get install vs apt-get source
1,323,741,085,000
I'm working on a curses GUI that is supposed to start up automatically on boot-up in the default linux terminal (I have no X server installed). I have this working great, but I have a problem where shortly after my curses application starts, the OS will dump some information to the terminal, which messes up my GUI. Something about "read-ahead cache" pops up every time. I have also seen messages displayed when I insert a USB flash drive or some other device. Is there a way to prevent these messages from being sent to /dev/tty1?
You can use the command dmesg -n1 to prevent all messages, except panic messages, from appearing on the console.

To make this change permanent, modify your /etc/sysctl.conf file to include the following setting (the first 3 is the important part):

kernel.printk = 3 4 1 3

See this post for information on the kernel.printk values.
How do I prevent system information from being displayed on a terminal?
1,323,741,085,000
My sound and wireless hardware are not working under my current 3.16.x kernel on my Debian 8 system. I performed:

apt-cache search linux-image

with the intention of getting a 4.x version Linux kernel to try to fix this (as the hardware works fine under Ubuntu 16.04). However, it seems the choice of kernel is limited to:

linux-image-3.16.0-4-amd64 - Linux 3.16 for 64-bit PCs

I would like to install a 4.x version and have the option to switch between the current kernel and the 4.x version. How can I do this using apt-get, or in a simple way that does not require manual compilation?
Add something like

deb http://mirror.one.com/debian/ jessie-backports main contrib non-free

to your sources.list. To install the 4.6 kernel, run:

apt-get update
apt-get install -t jessie-backports linux-image linux-image-amd64

It might depend on a few other things that can also be found in backports; you might have to add those package names to the command line explicitly. Apt will automatically track the versions in backports for the packages you install from backports, and not install anything from there unless you explicitly ask for it.

And after reading the entire question: it should be possible to leave the old kernel installed, and grub should then be configured to offer you a choice.
Upgrade linux kernel 3 to 4 in Debian 8
1,323,741,085,000
I'm quite interested in the size of the kernel ring buffer: how much information can it hold, and what data types?
Regarding the size, it's recorded in your kernel's config file. For example, on Amazon EC2 here, it's 256 KiB.

# grep CONFIG_LOG_BUF_SHIFT /boot/config-`uname -r`
CONFIG_LOG_BUF_SHIFT=18
# perl -e 'printf "%d KiB\n",(1<<18)/1024'
256 KiB
#

Referenced in /kernel/printk/printk.c:

#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)

More information in /kernel/trace/ring_buffer.c.

Note that if you've passed a kernel boot param "log_buf_len=N" (check using cat /proc/cmdline) then that overrides the value in the config file.
How to find out a linux kernel ring buffer size?
1,323,741,085,000
I'm trying to upgrade my kernel to 4.19, because I need to run some benchmarks that require it (with some kernel options turned on). I'm completely stuck as to why this isn't working.

I did two Ubuntu 18 clean installs already, downloaded the 4.19 kernel, ran make oldconfig (or olddefconfig), and installed the modules and the kernel itself. After a reboot, the output just says

Loading Linux 4.19.237
Loading initial ramdisk
error: out of memory
Press any key to continue

After a key press it just shows the initial boot messages and an error stack. If I reboot, the older (4.15) kernel still boots and works perfectly. Both entries in grub.cfg are similar:

menuentry 'Ubuntu, with Linux 4.19.237' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.237-advanced-cbc1623f-b651-454a-87dd-da2056dd5505' {
    recordfail
    load_video
    gfxmode $linux_gfx_mode
    insmod gzio
    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    if [ x$feature_platform_search_hint = xy ]; then
        search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 cbc1623f-b651-454a-87dd-da2056dd5505
    else
        search --no-floppy --fs-uuid --set=root cbc1623f-b651-454a-87dd-da2056dd5505
    fi
    echo 'Loading Linux 4.19.237 ...'
    linux /boot/vmlinuz-4.19.237 root=UUID=cbc1623f-b651-454a-87dd-da2056dd5505 ro
    echo 'Loading initial ramdisk ...'
    initrd /boot/initrd.img-4.19.237
}

menuentry 'Ubuntu, with Linux 4.15.0-20-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.15.0-20-generic-advanced-cbc1623f-b651-454a-87dd-da2056dd5505' {
    recordfail
    load_video
    gfxmode $linux_gfx_mode
    insmod gzio
    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    if [ x$feature_platform_search_hint = xy ]; then
        search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 cbc1623f-b651-454a-87dd-da2056dd5505
    else
        search --no-floppy --fs-uuid --set=root cbc1623f-b651-454a-87dd-da2056dd5505
    fi
    echo 'Loading Linux 4.15.0-20-generic ...'
    linux /boot/vmlinuz-4.15.0-20-generic root=UUID=cbc1623f-b651-454a-87dd-da2056dd5505 ro
    echo 'Loading initial ramdisk ...'
    initrd /boot/initrd.img-4.15.0-20-generic
}

I'm totally out of ideas. I have recompiled the kernel many times, disabled TPM in the BIOS, updated the BIOS, reinstalled Ubuntu without LVM, tried recovery mode, and changed from RAID to AHCI (there's only one disk). Any ideas?

Edit: After searching around and changing configs everywhere I made it boot, but I still run into the out of memory error, which requires me to press a key. I've enabled a bunch of VIRTIO, AHCI and other BLK parameters in the kernel. Then, when I pressed a button after the out of memory error, it showed a completely different list of UUIDs, which made me try not using UUIDs. So I changed grub to use root=/dev/sda2. One out of memory error and a key press later, I'm greeted by a long boot log and a login prompt.

For some reason the network device changed names (from enp4s0 to enp3s0) with the new kernel, so I had to edit the netplan file, and then I had network. Now, I still suffer from the out of memory error and I have no idea why. I need to fix this because I use this machine remotely and can't go to it whenever I need to reboot it.
Still open to any ideas as to why this is happening in such a simple kernel upgrade.
Turns out the initrd image was huge: 500MB, compared to the default's 50MB. The key to reducing the size was here: How to reduce the size of the initrd when compiling your kernel?

Basically:

In /etc/initramfs-tools/initramfs.conf change MODULES to MODULES=dep.
When installing modules, pass a variable to strip debug symbols: make INSTALL_MOD_STRIP=1 modules_install

After this the out of memory error is gone, and so are the vfs errors for some reason. The initrd is less than 50MB now. Before applying these changes I did upgrade all packages, including the kernel, but that might not have had any impact.
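For reference, this is roughly what the relevant fragment of /etc/initramfs-tools/initramfs.conf looks like after the first change (a sketch; the rest of the file stays as shipped):

```
# /etc/initramfs-tools/initramfs.conf (fragment)
#
# "dep" includes only the modules this machine's hardware actually
# needs, instead of the default "most", which copies in nearly
# every module and bloats the initrd.
MODULES=dep
```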
Out of memory on "Loading initial ramdisk" after kernel upgrade (4.15 to 4.19) on Ubuntu 18
1,323,741,085,000
Why does "page allocation failure" occur when there are still "58*4096kB (C)" blocks that could be used?

You see, the kernel complains when allocating memory of order:10 (i.e. page allocation failure: order:10). But there are indeed free blocks (i.e. "58*4096kB (C)"). So I think it should not complain, since there is enough free memory.

Here is the related log:

[ 2161.623563] xxxx: page allocation failure: order:10, mode:0x2084020(GFP_ATOMIC|__GFP_COMP)
[ 2161.632085] CPU: 0 PID: 179 Comm: AiApp Not tainted 4.9.56 #53
[ 2161.637947] Call Trace:
[<802f63f2>] dump_stack+0x1e/0x3c
[<800f6cf4>] warn_alloc+0x100/0x148
[<800f709c>] __alloc_pages_nodemask+0x2bc/0xb5c
[<801120fe>] kmalloc_order+0x26/0x48
[<80112158>] kmalloc_order_trace+0x38/0x98
[<8012c5d8>] __kmalloc+0xf4/0x12c
[<8048ac78>] alloc_ep_req+0x5c/0x98
[<8048f232>] source_sink_recv+0x2a/0xe0
[<8048f35e>] usb_sourcesink_bulk_read+0x76/0x1c8
[<8048f770>] usb_sourcesink_read+0xfc/0x2c8
[<80134d58>] __vfs_read+0x30/0x108
[<80135c14>] vfs_read+0x94/0x128
[<80136d12>] SyS_read+0x52/0xd4
[<8004a246>] csky_systemcall+0x96/0xe0
[ 2161.689204] Mem-Info:
[ 2161.691518] active_anon:3268 inactive_anon:2 isolated_anon:0
[ 2161.691518]  active_file:1271 inactive_file:89286 isolated_file:0
[ 2161.691518]  unevictable:0 dirty:343 writeback:0 unstable:0
[ 2161.691518]  slab_reclaimable:2019 slab_unreclaimable:644
[ 2161.691518]  mapped:4282 shmem:4 pagetables:59 bounce:0
[ 2161.691518]  free:62086 free_pcp:199 free_cma:60234
[ 2161.724334] Node 0 active_anon:13072kB inactive_anon:8kB active_file:5084kB inactive_file:357144kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:17128kB dirty:1372kB writeback:0kB shmem:16kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
[ 2161.748626] Normal free:248344kB min:2444kB low:3052kB high:3660kB active_anon:13072kB inactive_anon:8kB active_file:5084kB inactive_file:357144kB unevictable:0kB writepending:1372kB present:1048572kB managed:734568kB mlocked:0kB slab_reclaimable:8076kB slab_unreclaimable:2576kB kernel_stack:608kB pagetables:236kB bounce:0kB free_pcp:796kB local_pcp:796kB free_cma:240936kB
[ 2161.781670] lowmem_reserve[]: 0 0 0
[ 2161.785225] Normal: 4*4kB (UEC) 3*8kB (EC) 3*16kB (UEC) 2*32kB (UE) 2*64kB (UE) 2*128kB (UE) 2*256kB (EC) 1*512kB (E) 3*1024kB (UEC) 3*2048kB (UEC) 58*4096kB (C) = 248344kB
90573 total pagecache pages
[ 2161.803526] 262143 pages RAM
[ 2161.806410] 0 pages HighMem/MovableOnly
[ 2161.810264] 78501 pages reserved
[ 2161.813509] 90112 pages cma reserved
You did not provide much information, like the condition(s) under which this occurs, which system (Linux, Android, ...) you are running, etc. Anyway, you can start fine-tuning your kernel.

You could play around with vm.min_free_kbytes, which tells the kernel to keep that much memory free (the unit is KiB) under all circumstances. From the kernel.org documentation ("Documentation for /proc/sys/vm/*"):

min_free_kbytes: This is used to force the Linux VM to keep a minimum number of kilobytes* free. The VM uses this number to compute a watermark [WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size. Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024 KB**, your system will become subtly broken, and prone to deadlock under high loads. Setting this too high will OOM your machine instantly.

To change this setting permanently you can do (lowering to 16 MiB):

echo "vm.min_free_kbytes=16384" >> /etc/sysctl.conf

To play around and test that everything works, you can change it just for the current session:

sysctl -w vm.min_free_kbytes=16384

The source of the information was the kernel.org documentation.

Your question is why you have a page allocation failure when you have free memory – even more free memory than specified above? If you have free memory above the limit specified by vm.min_free_kbytes, then the answer is most likely memory fragmentation (there are other possible causes, like faulty memory modules). Here are the details:

The order:10 bit tells you indirectly how many pages were requested. Such an order is considered high-order, as it actually requests 2^10 pages, which is 1024 pages or 4096 KiB of contiguous memory!

The mode is the set of flags passed to the kernel memory allocator. You have mode:0x2084020 (GFP_ATOMIC|__GFP_COMP) – kernel mode allocator (flags). For this one you need to have some kernel source knowledge.
To explain your flags in detail.

The GFP_ATOMIC flag:

The GFP_ATOMIC flag instructs the memory allocator never to block. Use this flag in situations where it cannot sleep — where it must remain atomic — such as interrupt handlers, bottom halves and process context code that is holding a lock. Because the kernel cannot block the allocation and try to free up sufficient memory to satisfy the request, an allocation specifying GFP_ATOMIC has a lesser chance of succeeding than one that does not. Nonetheless, if your current context is incapable of sleeping, it is your only choice.

…

The __GFP_COMP flag: compound page metadata. From include/linux/gfp.h (see source 1, source 2):

The page frame belongs to an extended page.

Which brings us back to the size of memory required: extended pages allow you to have 4 MiB page frames instead of just 4 KiB. Recommended reading: the Linux Kernel book and the excellent article Kernel Korner - Allocating Memory in the Kernel for more information.

As you can see, you are requesting 4096 KiB in a non-blocking allocation; the allocation must remain atomic, so you can't block and try to free up the (contiguous) memory. The allocation thus fails.

Flags can be found in include/linux/gfp.h (see source 1, source 2).

Edit – kernel version 4.9

The kernel version, 4.9, is important information. There was a regression at exactly this kernel version (4.9) which caused swap not to be used at all – OOM but no swap used (kernel.org). The recommended fix is to upgrade the kernel to at least 4.10.8. That version and above should have this bug fixed – more at OOM but no swap used (Red Hat).

__________________
*   Presumably they mean kibibytes.
** Presumably they mean KiB.
Why does "page allocation failure" occur whereas there are still enough memory(i.e. "58*4096kB (C)") that could be used?
1,323,741,085,000
I failed to find the kernel binary in the standard location in /boot. I've also searched the whole file system for vmlinux or bzImage:

find / -iname vmlin*
find / -iname bzimage

However, this is an embedded device, not a standard desktop. Is it possible that the kernel binary is located in a different storage location which isn't mounted? For example: / is mounted on the SD card and the kernel is written to flash? If not, what are the options for locating the kernel binary?
/boot is the standard location for the kernel in desktop/server distributions, but embedded systems vary greatly. Where the kernel is stored entirely depends on your bootloader, and it may not be a file as embedded bootloaders are often not capable of reading Linux filesystems. For example, with U-Boot (a popular embedded bootloader), you create an image with mkimage, which may then be written to a separate FAT partition or written in some other system-specific format. If the kernel image is on a FAT partition, that partition is often not mounted under Linux, since Linux never needs to access it (except during upgrades, but most embedded systems don't upgrade their kernel separately from the bootloader). The upshot is that you have to look for it. If you need help, you need to describe your system very precisely, and even then we may or may not be able to help depending on how popular your embedded system is. If you can't find it on your own, consider asking for support from the providers of the embedded system.
Location of the kernel binary (when not in /boot)?
1,323,741,085,000
Sorry - I don't remember the exact name. I know there is a mechanism to patch the kernel at runtime by loading modules, without need of a reboot, as long as the structures involved are not affected. It is used by servers for security patches and recently by Ubuntu & Fedora.

What is the name of this mechanism?
Is there any how-to for hand-compiled kernels?
Is it possible to automatically check whether the change x.y.z.a -> x.y.z.a+1 changed any structure or not?
I think you are looking for Ksplice. I haven't really followed the technology so I'm not sure how freely available the how-to information is but they certainly have freely available support for some Fedora and Ubuntu versions.
Patching Linux kernel on-line (i.e. without rebooting)
1,323,741,085,000
All my partitions are encrypted (/ and /home), but the /boot partition has to remain unencrypted and is open for manipulation. I was thinking about hashing the kernel on bootup and checking the result against a stored value (generated on compile, saved on my encrypted drive) to see if someone, somehow manipulated the kernel since the last boot (maybe even physically). Is there a problem with writing such a script? Are there programs that do this already?
What you're looking for — verifying that the operating system running on the computer is one you trust — is called trusted boot. (It's one of several things that are sometimes called trusted boot.)

Your proposed method does not achieve this objective. Encryption does not provide data integrity or authenticity. In other words, it does not prevent an attacker from modifying the contents of your disk and replacing it by a malicious operating system. This malicious operating system could easily be programmed to show the checksum that you expect for the loaded kernel.

The easiest path of attack is a man-in-the-middle where the attacker runs your normal operating system under some kind of virtual machine. The virtual machine layer transmits your input to your desired operating system and transmits output back. But it also records your keystrokes (mmmm, passwords) on the side, snoops private keys from the OS's memory and so on.

In order to avoid this form of attack, you need to have a root of trust: a component of the system that you trust for a reason other than because some other component of the system says so. In other words, you have to start somewhere. Starting with hardware in your possession is a good start; you could keep your operating system on a USB key that doesn't leave your sight, and plug that only in hardware that you have sufficient confidence in (hardware can have malware!). Mind, if you're willing to trust the computer, you might trust its hard disk too.

There is a technical solution to bridge the gap between trusting a small chip and trusting a whole desktop or laptop computer. Some PCs have a TPM (trusted platform module) which can, amongst others, verify that only a known operating system can be booted. Trusted Grub supports TPMs, so with a TPM plus Trusted Grub, you can have the assurance that the kernel you're running is one that you have approved.

Note that the adoption of the TPM can work for or against you. It all hinges on who has the keys.
If you have the private key for your TPM, then you can control exactly what runs on your computer. If only the manufacturer has the private key, it's a way to turn a general-purpose platform into a locked-in appliance.
Signing/Checksumming the kernel to prevent/detect manipulation
1,323,741,085,000
A block is an abstraction provided by the filesystem; the block size is an integer multiple of the disk sector size. Suppose a filesystem uses 4K as its block size and the disk sector size is 512B. When the filesystem issues a write request to the disk driver, how is the entire 4K block written to disk atomically (avoiding partial writes)? I want to know how a modern kernel addresses this problem, but I don't want to dive into the Linux codebase to find the answer. Any help will be appreciated.
A disk should guarantee that a sector is written atomically. The sector size used to be 512 bytes and today is typically 4096 bytes for larger disks. To avoid problems from partially written "blocks", it is important to write everything in a specific order. Note that the only reason there could be a partially written part in the filesystem is a power outage or something similar.

The method is:

First write all file content and verify that this worked.
Then write the metadata, making sure that all data structures in the metadata fit in a single disk sector and do not span a sector boundary. This is important, e.g., for variable-length file names as directory content.
How filesystem atomically writes a block to disk?
1,323,741,085,000
I've learned that the firmware subsystem uses udevd to copy firmware to the created sysfs 'data' entry. But how does this work in the case of a built-in driver module, where udevd hasn't started yet? I'm using a 3.14 kernel. TIA!
I read through the kernel sources, especially drivers/base/firmware_class.c, and discovered that CONFIG_FW_LOADER_USER_HELPER would activate the udev firmware loading variant (obviously only usable for loadable modules when udev is running). But as mentioned on LKML, this seems to be an obsolete method.

Furthermore, firmware required by built-in modules is loaded from the initramfs by fw_get_filesystem_firmware() (through a kernel_read(), to be precise).
How does Linux load firmware for built-in driver modules [duplicate]
1,323,741,085,000
I am using Fedora 16 on my Dell N4110. I recently upgraded the kernel from 3.2 to 3.3. Contrary to the official claim, my system still drains the battery heavily: it only provides 1:30 to 2 hrs of runtime under normal stress, as before, whereas Windows provides 3+ hrs under similar stress. Below are some screenshots from powertop (Overview, Idle stats, Frequency stats, Device stats, Tunables), the status of the services running on my box, and a few lines from grub.cfg.

Services:

/etc/init.d/ceph: ceph conf /etc/ceph/ceph.conf not found; system is not configured.

dc_client.service - SYSV: Distcache is a Distributed SSL Session Cache Client Proxy.
	Loaded: loaded (/etc/rc.d/init.d/dc_client)
	Active: inactive (dead)
	CGroup: name=systemd:/system/dc_client.service

dc_server.service - SYSV: Distcache is a Distributed SSL Session Cache server.
	Loaded: loaded (/etc/rc.d/init.d/dc_server)
	Active: inactive (dead)
	CGroup: name=systemd:/system/dc_server.service

# Generated by ebtables-save v1.0 on Sat Apr 21 09:35:32 NPT 2012
*nat
:PREROUTING ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT

httpd.service - The Apache HTTP Server (prefork MPM)
	Loaded: loaded (/lib/systemd/system/httpd.service; disabled)
	Active: inactive (dead)
	CGroup: name=systemd:/system/httpd.service

No active sessions

iscsid.service - LSB: Starts and stops login iSCSI daemon.
	Loaded: loaded (/etc/rc.d/init.d/iscsid)
	Active: active (running) since Sat, 21 Apr 2012 08:11:58 +0545; 1h 23min ago
	Process: 1011 ExecStart=/etc/rc.d/init.d/iscsid start (code=exited, status=0/SUCCESS)
	Main PID: 1069 (iscsid)
	CGroup: name=systemd:/system/iscsid.service
		├ 1056 iscsiuio
		├ 1068 iscsid
		└ 1069 iscsid

libvirtd.service - LSB: daemon for libvirt virtualization API
	Loaded: loaded (/etc/rc.d/init.d/libvirtd)
	Active: active (running) since Sat, 21 Apr 2012 08:11:58 +0545; 1h 23min ago
	Process: 1086 ExecStart=/etc/rc.d/init.d/libvirtd start (code=exited, status=0/SUCCESS)
	Main PID: 1111 (libvirtd)
	CGroup: name=systemd:/system/libvirtd.service
		├ 1111 libvirtd --daemon
		└ 1183 /usr/sbin/dnsmasq --strict-order --bind-interfaces...

started
No open transaction
netconsole module not loaded
Configured devices: lo Auto_ADW-4401 Auto_PROLiNK_H5004N Auto_korky p4p1
Currently active devices: lo p4p1 virbr0

radvd.service - router advertisement daemon for IPv6
	Loaded: loaded (/lib/systemd/system/radvd.service; disabled)
	Active: inactive (dead)
	CGroup: name=systemd:/system/radvd.service

sandbox is running

svnserve.service - LSB: start and stop the svnserve daemon
	Loaded: loaded (/etc/rc.d/init.d/svnserve)
	Active: inactive (dead)
	CGroup: name=systemd:/system/svnserve.service

grub.cfg:

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Fedora (3.3.1-5.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
	load_video
	set gfxpayload=keep
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='(hd0,msdos6)'
	search --no-floppy --fs-uuid --set=root 2260640d-2901-49e4-b14f-bf9addb04eb7
	echo 'Loading Fedora (3.3.1-5.fc16.x86_64)'
	linux /vmlinuz-3.3.1-5.fc16.x86_64 root=/dev/mapper/vg_machine-lv_root ro pcie_aspm=force i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 rd.lvm.lv=vg_machine/lv_root rd.md=0 rd.dm=0 KEYTABLE=us quiet SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 rd.lvm.lv=vg_machine/lv_swap LANG=en_US.UTF-8
	echo 'Loading initial ramdisk ...'
	initrd /initramfs-3.3.1-5.fc16.x86_64.img
}
menuentry 'Fedora (3.3.1-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
	load_video
	set gfxpayload=keep
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='(hd0,msdos6)'
	search --no-floppy --fs-uuid --set=root 2260640d-2901-49e4-b14f-bf9addb04eb7
	echo 'Loading Fedora (3.3.1-3.fc16.x86_64)'
	linux /vmlinuz-3.3.1-3.fc16.x86_64 root=/dev/mapper/vg_machine-lv_root ro pcie_aspm=force i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 rd.lvm.lv=vg_machine/lv_root rd.md=0 rd.dm=0 KEYTABLE=us quiet SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 rd.lvm.lv=vg_machine/lv_swap LANG=en_US.UTF-8
	echo 'Loading initial ramdisk ...'
	initrd /initramfs-3.3.1-3.fc16.x86_64.img
}

Is this normal? Are there still problems with power consumption in 3.3? Is there any way to report this problem to the official kernel developers?
The problem is gone with newer versions of the Linux kernel. I have not seen the power regression since Ubuntu 14.
Linux kernel 3.3 power regression
1,323,741,085,000
I just compiled a new kernel and asked myself: what decides, during the compilation process, which kernel modules are built into the kernel statically? I then deleted /lib/modules, rebooted, and found that my system works fine, so it appears all essential modules are statically built into the kernel. Without /lib/modules, the kernel loads 22 modules. With the directory present, it loads 67.
You do this as part of the configuration process, usually when you run make config, make menuconfig or similar. You can set each option as built-in (marked with *) or as a loadable module (marked with M); make menuconfig shows these markers next to every tristate option.
What decides which kernel modules are built into the kernel statically during compilation?
1,323,741,085,000
I have been provided with a vendor-supplied minimal Linux installation. From an answer to a previous question I discovered that it is possible to build a kernel with or without module support. I have a CANbus device that I need to attach, which comes with drivers in the form of .ko files. I would like to install these with the provided install scripts, but first I need to know whether my kernel was built with module support. Is it possible to detect this from the command line? When I run lsmod it returns nothing, so I know that no .ko files are loaded at the moment, but does this mean that the kernel won't allow me to install a .ko file?
If you have a /proc filesystem, the file /proc/modules exists if and only if the kernel is compiled with module support. If the file exists but is empty, your kernel supports modules but none are loaded at the moment. If the file doesn't exist, your kernel cannot load any module. It's technically possible to have loadable module support without /proc. You can check for the presence of the init_module and delete_module system calls in the kernel binary. This may not be easy if you only have a compressed binary (e.g. vmlinuz or uImage). See How do I uncompress vmlinuz to vmlinux? for vmlinuz. Once you've managed to decompress the bulk of the kernel, search for the string sys_init_module. Note that if modules are supported, you'll need additional files to compile your own modules anyway: kernel headers. These are C header files (*.h), some of which are generated when the kernel is compiled (so you can't just take them from the kernel source). See What does a kernel source tree contain? Is this related to Linux kernel headers?
Can I detect if my custom made kernel was built with module support?
1,323,741,085,000
In the 2.6.18 Linux kernel (Red Hat) server below, there is a lot of free memory, but I see some swap is used. I always thought of swap as an overflow for when memory has been depleted. Why would it swap with about 7 GB (50%) free memory? Swappiness is 60 (the default). meminfo output:

MemTotal: 16436132 kB
MemFree: 7507008 kB
Buffers: 534804 kB
Cached: 2642652 kB
SwapCached: 39084 kB
Active: 6001828 kB
Inactive: 2532028 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 16436132 kB
LowFree: 7507008 kB
SwapTotal: 2097144 kB
SwapFree: 1990096 kB
Dirty: 236 kB
Writeback: 0 kB
AnonPages: 5353644 kB
Mapped: 45764 kB
Slab: 330660 kB
PageTables: 34020 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 10315208 kB
Committed_AS: 14836360 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 264660 kB
VmallocChunk: 34359472735 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
Swapping only when there is no free memory is only the case if you set swappiness to 0. Otherwise, during idle time, the kernel will swap out memory. In doing this the data is not removed from memory; rather, a copy is made in the swap partition. This means that, should memory become depleted, the kernel does not have to write to disk then and there: it can simply overwrite memory pages that have already been swapped, since it knows it has a copy of the data on disk. The swappiness parameter basically controls how aggressively it does this.
Why is swap used when a lot of memory is still free? [duplicate]
1,323,741,085,000
I need a small distro that is stable. I don't need a full X server or window manager; I only need it to run one single application with a basic UI that consists of a viewport. I would like the distro to be as small as possible; 700 MB or less would be ideal. Is there a base distro of Ubuntu or similar to which I can add whatever I need from the command line? Basically just the kernel and some way of producing graphical output. I was thinking of putting DirectFB on it to render the application. Even a live distro would work.
Have a look at TinyCore Linux. It comes in two variants, one CLI and one including X. The X version including a window manager and a simple desktop is about 12MiB. If you don't need a window manager, you can just start X and your application. A window manager is not required.
Need small distro without a desktop or windows manager, just to run a single graphical app [duplicate]
1,639,666,599,000
One of our devices froze today with the following kernel messages: [79648.067306] BUG: unable to handle page fault for address: 0000000004000034 [79648.067315] #PF: supervisor read access in kernel mode [79648.067318] #PF: error_code(0x0000) - not-present page From the call trace (see below) it appears that this error was caused by the graphics driver (i915). Presumably, a kernel update would fix the problem, however, I'm interested in the background of this problem so I have 3 questions: What do these 3 lines mean exactly, or where can I find a description to these errors? If I enable the hardware watchdog, would it reboot the system when this error occurs? Can this error occur due to faulty hardware (Memory)? System: 5.4.0-91-generic, Ubuntu 20.04.1 LTS Full dump of the kernel ringbuffer (dmesg): [79648.067306] BUG: unable to handle page fault for address: 0000000004000034 [79648.067315] #PF: supervisor read access in kernel mode [79648.067318] #PF: error_code(0x0000) - not-present page [79648.067322] PGD 0 P4D 0 [79648.067328] Oops: 0000 [#1] SMP PTI [79648.067335] CPU: 3 PID: 668 Comm: Xorg Not tainted 5.4.0-91-generic #102-Ubuntu [79648.067338] Hardware name: Shuttle Inc. 
DH310S/DH310S, BIOS 1.06 03/23/2020 [79648.067349] RIP: 0010:find_get_entry+0x7a/0x170 [79648.067355] Code: b8 48 c7 45 d0 03 00 00 00 e8 d2 ff 85 00 49 89 c4 48 3d 02 04 00 00 74 e4 48 3d 06 04 00 00 74 dc 48 85 c0 74 3d a8 01 75 39 <8b> 40 34 85 c0 74 cc 8d 50 01 f0 41 0f b1 54 24 34 75 f0 48 8b 45 [79648.067359] RSP: 0018:ffffb80a8093f728 EFLAGS: 00010246 [79648.067364] RAX: 0000000004000000 RBX: 00000000000004a6 RCX: 0000000000000000 [79648.067367] RDX: 0000000000000026 RSI: ffff9a369e5ff6c0 RDI: ffffb80a8093f728 [79648.067370] RBP: ffffb80a8093f770 R08: 00000000001120d2 R09: 0000000000000000 [79648.067373] R10: ffff9a3714c8eaa0 R11: 0000000000003c64 R12: 0000000004000000 [79648.067376] R13: 00000000000004a6 R14: 0000000000000001 R15: ffff9a371bf261c0 [79648.067381] FS: 00007f5b0d819a40(0000) GS:ffff9a372ed80000(0000) knlGS:0000000000000000 [79648.067384] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [79648.067387] CR2: 0000000004000034 CR3: 000000025bf12003 CR4: 00000000003606e0 [79648.067390] Call Trace: [79648.067401] find_lock_entry+0x1f/0xe0 [79648.067408] shmem_getpage_gfp+0xef/0x940 [79648.067417] ? __kmalloc+0x194/0x290 [79648.067424] shmem_read_mapping_page_gfp+0x44/0x80 [79648.067520] shmem_get_pages+0x250/0x650 [i915] [79648.067530] ? __update_load_avg_se+0x23b/0x320 [79648.067538] ? update_load_avg+0x7c/0x670 [79648.067619] ____i915_gem_object_get_pages+0x22/0x40 [i915] [79648.067692] __i915_gem_object_get_pages+0x5b/0x70 [i915] [79648.067774] __i915_vma_do_pin+0x3ee/0x470 [i915] [79648.067845] eb_lookup_vmas+0x68a/0xb70 [i915] [79648.067930] ? eb_pin_engine+0x255/0x410 [i915] [79648.067990] i915_gem_do_execbuffer+0x38f/0xc20 [i915] [79648.067997] ? security_file_alloc+0x29/0x90 [79648.068004] ? _cond_resched+0x19/0x30 [79648.068010] ? apparmor_file_alloc_security+0x3e/0x160 [79648.068016] ? __radix_tree_replace+0x6d/0x120 [79648.068020] ? radix_tree_iter_tag_clear+0x12/0x20 [79648.068027] ? kmem_cache_alloc_trace+0x177/0x240 [79648.068035] ? 
__pm_runtime_resume+0x60/0x80 [79648.068040] ? recalibrate_cpu_khz+0x10/0x10 [79648.068044] ? ktime_get_mono_fast_ns+0x4e/0xa0 [79648.068048] ? __kmalloc_node+0x213/0x330 [79648.068107] i915_gem_execbuffer2_ioctl+0x1eb/0x3d0 [i915] [79648.068112] ? radix_tree_lookup+0xd/0x10 [79648.068167] ? i915_gem_execbuffer_ioctl+0x2d0/0x2d0 [i915] [79648.068196] drm_ioctl_kernel+0xae/0xf0 [drm] [79648.068218] drm_ioctl+0x24a/0x3f0 [drm] [79648.068278] ? i915_gem_execbuffer_ioctl+0x2d0/0x2d0 [i915] [79648.068288] do_vfs_ioctl+0x407/0x670 [79648.068293] ? fput+0x13/0x20 [79648.068299] ? __sys_recvmsg+0x88/0xa0 [79648.068305] ksys_ioctl+0x67/0x90 [79648.068311] __x64_sys_ioctl+0x1a/0x20 [79648.068317] do_syscall_64+0x57/0x190 [79648.068323] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [79648.068327] RIP: 0033:0x7f5b0db7937b [79648.068332] Code: 0f 1e fa 48 8b 05 15 3b 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e5 3a 0d 00 f7 d8 64 89 01 48 [79648.068335] RSP: 002b:00007fff24ca5d88 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [79648.068339] RAX: ffffffffffffffda RBX: 000055eaa18c2290 RCX: 00007f5b0db7937b [79648.068342] RDX: 00007fff24ca5db0 RSI: 0000000040406469 RDI: 000000000000000c [79648.068345] RBP: 00007f5b0ba31000 R08: 0000000000000002 R09: 0000000000000001 [79648.068347] R10: 00007f5b0d4156a0 R11: 0000000000000246 R12: 00007fff24ca5db0 [79648.068350] R13: 000000000000000c R14: 000000000000001a R15: 0000000000000068 [79648.068354] Modules linked in: wdat_wdt nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_seq_midi intel_rapl_msr snd_seq_midi_event intel_rapl_common snd_rawmidi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_seq kvm rtsx_pci_ms rapl snd_seq_device intel_cstate memstick snd_timer 
mei_me mei snd soundcore mac_hid acpi_pad sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear i915 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd i2c_algo_bit rtsx_pci_sdmmc glue_helper drm_kms_helper syscopyarea sysfillrect sysimgblt i2c_i801 fb_sys_fops r8169 rtsx_pci drm realtek ahci libahci video [79648.068413] CR2: 0000000004000034 [79648.068418] ---[ end trace 447ad409d057183e ]--- [79648.068425] RIP: 0010:find_get_entry+0x7a/0x170 [79648.068429] Code: b8 48 c7 45 d0 03 00 00 00 e8 d2 ff 85 00 49 89 c4 48 3d 02 04 00 00 74 e4 48 3d 06 04 00 00 74 dc 48 85 c0 74 3d a8 01 75 39 <8b> 40 34 85 c0 74 cc 8d 50 01 f0 41 0f b1 54 24 34 75 f0 48 8b 45 [79648.068432] RSP: 0018:ffffb80a8093f728 EFLAGS: 00010246 [79648.068435] RAX: 0000000004000000 RBX: 00000000000004a6 RCX: 0000000000000000 [79648.068438] RDX: 0000000000000026 RSI: ffff9a369e5ff6c0 RDI: ffffb80a8093f728 [79648.068441] RBP: ffffb80a8093f770 R08: 00000000001120d2 R09: 0000000000000000 [79648.068443] R10: ffff9a3714c8eaa0 R11: 0000000000003c64 R12: 0000000004000000 [79648.068446] R13: 00000000000004a6 R14: 0000000000000001 R15: ffff9a371bf261c0 [79648.068449] FS: 00007f5b0d819a40(0000) GS:ffff9a372ed80000(0000) knlGS:0000000000000000 [79648.068452] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [79648.068455] CR2: 0000000004000034 CR3: 000000025bf12003 CR4: 00000000003606e0
[79648.067306] BUG: unable to handle page fault for address: 0000000004000034
[79648.067315] #PF: supervisor read access in kernel mode
[79648.067318] #PF: error_code(0x0000) - not-present page

These lines indicate that kernel code tried to access an invalid pointer: it dereferenced the virtual memory address 0x0000000004000034, but that address doesn't correspond to any real memory page (the page could not be faulted in). The second and third lines give context: 1) the code was running in kernel mode (supervisor mode); 2) the access was a read; and 3) the problem was that the page was missing, rather than incompatible page protections (such as writing to a read-only page). This is likely a bug in kernel/driver code.
Kernel: BUG: unable to handle page fault for address
1,639,666,599,000
I have two questions about the Linux kernel. Specifically: does anybody know exactly what Linux does in the timer interrupt? Is there some documentation about this? And what is affected when changing the CONFIG_HZ setting when building the kernel? Thanks in advance!
The Linux timer interrupt handler doesn’t do all that much directly. For x86, you’ll find the default PIT/HPET timer interrupt handler in arch/x86/kernel/time.c:

static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
	global_clock_event->event_handler(global_clock_event);
	return IRQ_HANDLED;
}

This calls the event handler for global clock events, tick_handle_periodic by default, which updates the jiffies counter, calculates the global load, and updates a few other places where time is tracked. As a side-effect of an interrupt occurring, __schedule might end up being called, so a timer interrupt can also lead to a task switch (like any other interrupt). Changing CONFIG_HZ changes the timer interrupt’s periodicity. Increasing HZ means that it fires more often, so there’s more timer-related overhead, but less opportunity for task scheduling to wait for a while (so interactivity is improved); decreasing HZ means that it fires less often, so there’s less timer-related overhead, but a higher risk that tasks will wait to be scheduled (so throughput is improved at the expense of interactive responsiveness). As always, the best compromise depends on your specific workload. Nowadays CONFIG_HZ is less relevant for scheduling aspects anyway; see How to change the length of time-slices used by the Linux CPU scheduler? See also How is an Interrupt handled in Linux?
Linux timer interrupt
1,639,666,599,000
There's a binary that I need to run which uses bind with a port argument of zero, to get a random free port from the system. Is there a way I can constrain the range of ports the kernel is allowed to pick from?
On Linux, you'd do something like

sudo sysctl -w net.ipv4.ip_local_port_range="60000 61000"

Instructions for changing the ephemeral port range on other Unices can be found, for example, at http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
How to limit range of random port sockets?
1,639,666,599,000
I am trying to install a Linux kernel (3.8.1) from source on a Fedora distribution. The kernel is a vanilla one. I follow the kernel's build instructions closely, that is:

make menuconfig
make
sudo make modules_install install
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Everything in /boot seems fine. I can see System.map, initramfs, and vmlinuz for the newly compiled kernel. The vmlinuz link points to vmlinuz-3.8.1. There are multiple other kernels installed, including an Ubuntu one. grub2 recognises all of them and I can boot each one. When I reboot I see all kernels as menu entries and choose 3.8.1. Then I see this message:

early console in decompress_kernel
decompressing Linux... parsing ELF ... done
Booting the kernel.
[1.687084] systemd [1]: failed to mount /dev: no such device
[1.687524] systemd [1]: failed to mount /dev: no such device

Solution: all three posted responses provide the solution. CONFIG_DEVTMPFS was in fact causing the issue. I copied a working kernel's /boot/config-… into the root of the source tree as .config and executed the standard commands for building the kernel, also shown above.
The easiest way to get a working kernel configuration is to just copy Fedora's .config over and then run make oldconfig to update it. The configuration is found at /boot/config-*.
Self-built kernel: failed to mount /dev: No such device
1,639,666,599,000
I am trying to understand the disadvantages of using Linux kernel modules. I understand the benefits of using them: the ability to dynamically insert code into a running system without having to recompile and reboot the base system. Given this strong advantage, I was guessing most kernel code should be in kernel modules rather than part of the base kernel, but that does not seem to be the case: a good number of core subsystems (like memory management) still go into the base kernel. One reason I can think of is that kernel modules are loaded very late in the boot process, so core functionality has to go in the base kernel. Another reason I read about was fragmentation. I didn't really understand why kernel modules cause memory fragmentation; can someone please explain? Are there any other downsides of using kernel modules?
Yes, the reason that essential components (such as mm) cannot be loadable modules is because they are essential -- the kernel will not work without them. I can't find any references claiming the effects of memory fragmentation with regard to loadable modules is significant, but this part of the LLKM how-to might be interesting reading for you. I think the question is really part and parcel of the issue of memory fragmentation generally, which happens on two levels: the fragmentation of real memory, which the kernel mm subsystem manages, and the fragmentation of virtual address space which may occur with very large applications (which I'd presume is mostly the result of how they are designed and compiled). With regard to the fragmentation of real memory, I do not think this is possible at finer than page size (4 KB) granularity. So if you were reading 1 MB of virtually contiguous space that is actually 100% fragmented into 1024 pages, there may be 1000 extra minor operations involved. In that bit of the how-to we read: The base kernel contains within its prized contiguous domain a large expanse of reusable memory -- the kmalloc pool. In some versions of Linux, the module loader tries first to get contiguous memory from that pool into which to load an LKM and only if a large enough space was not available, go to the vmalloc space. Andi Kleen submitted code to do that in Linux 2.5 in October 2002. He claims the difference is in the several per cent range. Here the vmalloc space, which is where userspace applications reside, would be that which is potentially prone to fragment into pages. This is simply the reality of contemporary operating systems (they all manage memory via virtual addressing). We might infer from this that virtual addressing could represent a performance penalty of "several percent" in userland as well, but in so far as virtual addressing is necessary and inescapable in userland, it is only in relation to something completely theoretical. 
There is the possibility for further compounding fragmentation by the fragmentation of a process's virtual address space (as opposed to the real memory behind it), but this would never apply to kernel modules (whereas the last paragraph apparently could). If you want my opinion, it is not worth much contemplation. Keep in mind that even with a highly modular kernel, the most used components (fs, networking, etc) will tend to be loaded very early and remain loaded, hence they will certainly be in a contiguous region of real memory, for what it is worth (which might be a reason to not pointlessly load and unload modules).
Disadvantages of Linux kernel modules?
1,639,666,599,000
Is it possible to change this value at runtime, without rebooting? I don't always have this problem; right now, when I suspend, I'm getting a failure and

Suspending console(s) (use no_console_suspend to debug)

I would like to debug now, without having to reboot and recreate the problem.
Yes: echo N | sudo tee /sys/module/printk/parameters/console_suspend
Can no_console_suspend be set in runtime?
1,639,666,599,000
In particular, I'd like to add an fd flag and a branch in a couple of fd-handling syscalls that should be used if the flag is set, instead of the current code. I think the only thing that matters here for the purposes of this question is that this should be a generic rather than hardware-specific modification. How do I set things up so that I can rebuild the modified kernel and test the new feature quickly? I figure I need a basic setup that'll boot in a virtual machine and run my test code, which I guess could simply live in the initramfs, and the boot might not need to go any further (?). Are there any good guides on this, or can you explain it in a single answer here?
eudyptula-boot is quite handy for this; its introductory blog post has more details, but basically it allows you to boot a VM using the kernel you wish to test, and your existing filesystems (using overlayfs). That way you can quickly check a kernel without rebooting, and you still have access to all your files. The only requirement on the kernel being tested is that it support overlayfs and 9p; these are easy to activate in the kernel configuration before building it.
How do I quickly build and test the kernel if I want to modify a system call?
1,639,666,599,000
I'm currently attempting to remove the usbserial module in order to install a new driver module. When I attempt to remove the module I get the following:

[root@localhost xr21v141x-lnx-3.0-pak]# modprobe -r usbserial
FATAL: Module usbserial is builtin

How can I remove the usbserial module?
That means the module was compiled into the kernel. If you want to be able to unload it, you will have to compile a new kernel and have it built as a dynamically (un)loadable module instead.
Removing builtin modules in Linux
1,639,666,599,000
What is the convention for numbering Linux kernels? AFAIK, the numbers never seem to decrease. However, I think I've seen three kinds of schemes:

2.6.32-29
2.6.32-29.58
2.6.11.10

Can anybody explain the interpretations of these numbers and formats?
2.6.32-29: 2.6.32 is the base kernel, -29 is the release number added by Ubuntu.
2.6.32-29.58: 2.6.32 is the base kernel, -29.58 is an ongoing Ubuntu release of -29.
2.6.11.10: 2.6.11 is the base kernel, .10 is the tenth stable patch release of it. (2.6.11 was chosen by volunteers (read: Greg KH) to be a "long term maintenance" release.)
Numbering convention for the Linux kernel?
1,639,666,599,000
What exactly are shmpages in the grand scheme of kernel and memory terminology? If I'm hitting a shmpages limit, what does that mean? I'm also curious whether this applies to more than Linux.
User-mode processes can use interprocess communication (IPC) to communicate with each other; the fastest method of achieving this is using shared memory pages (shmpages). This happens, for example, if Banshee plays music and VLC plays a video: both processes have to access PulseAudio to output some sound. Try to find out more about shared memory configuration and usage with some of the following commands.

Display the shared memory configuration:

sysctl kernel.shm{max,all,mni}

By default (Linux 2.6) this should output:

kernel.shmmax = 33554432
kernel.shmall = 2097152
kernel.shmmni = 4096

shmmni is the maximum number of allowed shared memory segments, shmmax is the maximum allowed size of a single segment (32 MB) and shmall is the maximum total size of all segments (displayed in pages; this translates to 8 GB).

The currently used shared memory:

grep Shmem /proc/meminfo

If enabled by the distribution:

ls -l /dev/shm

ipcs is a great tool to find out more about IPC usage. ipcs -m will output the shared memory usage; you can see the allocated segments with the corresponding sizes. ipcs -m -i <shmid> shows more information about a specific segment, including the PID of the process that created it (cpid) and the last one that used it (lpid). ipcrm can remove shared memory segments (but be aware that a segment is only removed once no other processes are attached to it; see the nattch column in ipcs -m):

ipcrm -m <shmid>

Running out of shared memory could be caused by a program heavily using shared memory, a program which does not detach its allocated segments properly, modified sysctl values, ... This is not Linux-specific and also applies to (most) UNIX systems (shared memory first appeared in CB UNIX).
What are shmpages in layman's terms?
1,639,666,599,000
I'm interested in getting the number of context switches that two processes in a KVM VM take on a single CPU over some time. Earlier I have used perf; is this best practice? And how much time is spent on a context switch per CPU?
About 1.2 microseconds, which is on the order of a thousand cycles; see https://eli.thegreenplace.net/2018/measuring-context-switching-and-memory-overheads-for-linux-threads/ for measurements.
How long does a context switch take in Linux (Ubuntu 18.04)?
1,639,666,599,000
I want to make a very minimal Linux OS which only has a terminal interface and basic commands/applications (BusyBox is my choice for the commands/apps). I don't want an installation option in my OS; I just want it to be booted and run completely from RAM. I'm planning to use ISOLINUX as the bootloader. No networking, no virtualization support, no unnecessary drivers, etc. I want it to be a very, very basic OS. I've downloaded the latest stable kernel (v4.5) source code from kernel.org and have the build environment ready. One more point of confusion: does a kernel by default have any user interface (shell, terminal, ...) where I can type commands and see output?
Technically you can achieve this, though the kernel does not have any built-in user interface. You need to follow steps like these:

1. Create an initramfs with a static busybox and nothing else. This initramfs will have a few necessary directories, like proc, sys, tmp, bin, usr, etc.
2. Write a /init script, whose main job will be to:
   a. mount procfs, tmpfs and sysfs;
   b. call BusyBox's udev replacement, i.e. mdev;
   c. install the BusyBox commands onto the virtual system by executing busybox install -s;
   d. call /bin/sh.
3. Point the kernel build at the initramfs directory. You can do so with the flag CONFIG_INITRAMFS_SOURCE.
4. Compile your kernel.
5. Boot off this kernel and you will get a shell prompt with minimal things.

Though I wrote the notes above in a very formal way, you can fine-tune them the way you desire. UPDATE: Follow this link for some guidelines.
How to make a minimal bootable linux (only with terminal) from kernel source code? [duplicate]
1,639,666,599,000
I updated my Buildroot to version 2014.08 (the stable version) and updated the kernel version (3.12.26) of my project. When Buildroot tries to build the linux-headers-3.12.26 package, the following error occurs:

/output/host/usr/arm-buildroot-linux-gnueabi/sysroot 2.6; then exit 1; fi Incorrect selection of kernel headers: expected 2.6.x, got 3.12.x

How can I fix it? Do I have to change the script check-kernel-headers.sh?
No, you don't have to change any script. It seems like your Buildroot configuration is incorrect, but since you didn't provide your config, there's no real way to give a precise answer. Can you run make savedefconfig and post the output of this file here? Basically, what Buildroot is complaining about here is a mismatch between the kernel headers version it is finding, and the kernel headers version you have specified in the configuration. Most likely, you need to go in make menuconfig, and change the option in which you declare the version of the kernel headers (under the Toolchain menu).
Error compiling buildroot
1,639,666,599,000
I want to build my Linux kernel on my host and use it in my VMware virtual machine. They both run the same Ubuntu kernel now. On my host, I do make config and make. Then, what files should I copy to the target machine before I do make modules_install and make install? What other things do I need to do?
The 'best' way to do this, is building it as a package. You can then distribute and install it to any Ubuntu machine running the same (major) version. For building vanilla kernels from source, there's a tool make-kpkg which can build the kernel as packages. Other major advantages: easy reverting by just removing the package, automatic triggers by the package management such as rebuilding DKMS, etc. The Ubuntu community wiki on Kernel/Compile Alternate Build Method provides a few steps on how to do that. Basically, it's just the same as building the kernel from upstream documentation, but instead of having make blindly installing it on your system, have it build in a 'fake root' environment and make a package out of it, using fakeroot make-kpkg --initrd --append-to-version=-some-string-here \ kernel-image kernel-headers This should produce binary .deb files which you will be able to transfer to other machines and install it using dpkg -i mykernelfile-image.deb mykernelfile-headers.deb ...
Build kernel in one machine, install in another
1,639,666,599,000
/usr/src/linux-3.2.1 # make install
scripts/kconfig/conf --silentoldconfig Kconfig
sh /usr/src/linux-3.2.1/arch/x86/boot/install.sh 3.2.1-12-desktop arch/x86/boot/bzImage \
System.map "/boot"
You may need to create an initial ramdisk now.

/boot # mkinitrd initrd-3.2.1-12-desktop.img 3.2.1-12-desktop
Kernel image: /boot/vmlinuz-2.6.34-12-desktop
Initrd image: /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1-12-desktop.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)

See the error during the mkinitrd command. What am I missing, and what does this mean?

Kernel Modules: <not available>

OpenSuse 11.3, 64-bit.

EDIT 1: I ran make modules. I copied the System.map file from the /usr/src/linux-3.2.1 directory to /boot; now running the mkinitrd command gives the following error:

linux-dopx:/boot # mkinitrd initrd-3.2.1.img 3.2.1-desktop
Kernel image: /boot/vmlinuz-2.6.34-12-desktop
Initrd image: /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image: /boot/vmlinuz-3.2.1-12-desktop
Initrd image: /boot/initrd-3.2.1-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image: /boot/vmlinuz-3.2.1-12-desktop.old
Initrd image: /boot/initrd-3.2.1-12-desktop.old
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)
You should be using mkinitramfs, not mkinitrd. The actual initrd format is obsolete and initramfs is used instead these days, even though it is still called an initrd. Better yet, just use update-initramfs. Also you need to run make modules_install to install the modules.
How to create an initrd image on OpenSuSE linux?
1,639,666,599,000
There are three spin_lock functions in the kernel that I am currently busy with: spin_lock, spin_lock_irq and spin_lock_irqsave. I only find contributions covering two of them (including the Linux documentation). The answers and explanations are formulated ambiguously, contradict each other, or even contain comments saying the explanation is wrong. This makes it hard to get an overview. Some basics are clear to me; for example, in interrupt context a plain spin_lock() can result in a deadlock. But I'd really appreciate a complete picture of this subject. I need to understand: When should we use which version, and when shouldn't we? When is it unnecessary to use a safer version, but using it doesn't hurt (except for performance)? What is the reason to use a particular version in a particular situation?
A brief description is given in Chapter 5. Concurrency and Race Conditions of Linux Device Drivers, Third Edition void spin_lock(spinlock_t *lock); void spin_lock_irqsave(spinlock_t *lock, unsigned long flags); void spin_lock_irq(spinlock_t *lock); spin_lock_irqsave disables interrupts (on the local processor only) before taking the spinlock; the previous interrupt state is stored in flags. If you are absolutely sure nothing else might have already disabled interrupts on your processor (or, in other words, you are sure that you should enable interrupts when you release your spinlock), you can use spin_lock_irq instead and not have to keep track of the flags. The spin_lock_irq* functions are important if you expect that the spinlock could be held in interrupt context. Reason being that if the spinlock is held by the local CPU and then the local CPU services an interrupt, which also attempts to lock the spinlock, then you have a deadlock.
spin_lock vs. spin_lock_irq vs. spin_lock_irqsave
1,639,666,599,000
I have acquired a new wireless keyboard, and I've tested it out on both a Windows and a Linux box. It worked on both, but with an initial difference: Windows took a minute or two to look up the keyboard's (Logitech's) drivers on the Internet and install them. It visually notified me of doing so and displayed its progress. However, when I plugged it into my Debian computer, I did not notice any such process. Also, I was almost immediately able to use it, and I'm not sure how it got working so fast. Is Linux using a combination of a generic Bluetooth dongle driver and a generic keyboard driver?
Linux hardware drivers are kernel modules. Because of the open source model and licensing of the kernel, very few of these are written by hardware manufacturers; most of them are reverse engineered or based on standardized public protocols. Bluetooth is pretty surely in the latter realm, and things like mice and keyboards are in most cases totally generic. The modules are part and parcel of the kernel source tree; i.e., if you download the Linux kernel source, it comes with the code for all the available modules. You do not have to include all of them when you build it, of course. Linux distros (generally) are a collection of pre-built binaries, and this includes the kernel. The kernel itself is one binary; modules may either be built into this, or be separate binaries which the kernel can load and unload. Since building all the available modules into the one binary would result in a massive and ridiculous kernel, and the distros want to cover as much hardware as possible, distro kernel packages include a broad array of individual binary modules. You can see these in /lib/modules. Driver modules are registered with the kernel, having been built at the same time, so the kernel is aware of what is available on the system. When you plug in some new hardware, it identifies itself to the system and the kernel chooses an appropriate driver from /lib/modules to load. You can see all your currently loaded modules with lsmod.
How are drivers for peripheral hardware installed in Linux?
1,639,666,599,000
The question stated below might not be technically correct (it may rest on a misconception), so I would appreciate it if any misconception is addressed as well. Which ring level do the different *nix runlevels operate in? (A "ring" tag is not available.)
Unix runlevels are orthogonal (in the sense "unrelated", "independent of" - see comments) to protection rings. Runlevels are basically run-time configurations/states of the operating system as a whole; they describe what services are available ("to the user"), like SSH access, an MTA, a file server, a GUI. Rings are a hardware-aided concept which allows finer-grained control over the hardware (as mentioned in the wikipedia page you link to). For example, code running in a higher ring may not be able to execute some CPU instructions. Linux on the x86 architecture usually uses Ring0 for the kernel (including device drivers) and Ring3 for userspace applications (regardless of whether they are run by root or another ordinary or privileged user). Hence you can't really say that a runlevel is running in some specific ring: there are always[1] userspace applications (at least PID 1, the init) running in Ring3, plus the kernel in Ring0. [1] As always, the "always" really means "almost always", since you can run "normal" programs in Ring0, but you are unlikely to see that in real life (unless you work on HPC).
Rings and run levels
1,639,666,599,000
I formatted an external hard drive (sdc) to ntfs using parted, creating one primary partition (sdc1). Before formatting the device SystemRescueCd was installed on the external hard drive using the command dd in order to be used as a bootable USB. However when listing devices with lsblk -f I am still getting the old FSTYPE (iso9660) and LABEL (sysrcd-5.2.2) for the formatted device (sdc): NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc iso9660 sysrcd-5.2.2 └─sdc1 ntfs sysrcd-5.2.2 /run/media/user/sysrcd-5.2.2 As shown in the output of lsblk -f only the FSTYPE of the partition sdc1 is correct, the sdc1 partition LABEL, sdc block device FSTYPE and LABEL are wrong. The nautilus GUI app is also showing the old device label (sysrcd-5.2.2). After creating a new partition table, parted suggested I reboot the system before formatting the device to ntfs, but I decided to unmount sdc instead of rebooting. Could it be that the kernel is still using the old FSTYPE and LABEL because I haven't rebooted the system? Do I have to reboot the system to get rid of the old FSTYPE and LABEL? As an alternative to rebooting is there a way to rename the FSTYPE and LABEL of a block device manually so that I can change them to the original FSTYPE and LABEL that shipped with the external hard drive?
From the output of lsblk -f in the original post I suspected that the signature of the installed SystemRescueCd was still present in the external hard drive. So I ran the command wipefs /dev/sdc and wipefs /dev/sdc1 which printed information about sdc and all partitions on sdc: [root@fedora user]# wipefs /dev/sdc DEVICE OFFSET TYPE UUID LABEL sdc 0x8001 iso9660 sysrcd-5.2.2 sdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1 DEVICE OFFSET TYPE UUID LABEL sdc1 0x3 ntfs sdc1 0x1fe dos The above printout confirmed that the iso9660 partition table created by SystemRescueCd was still present. lsblk was using the TYPE and LABEL of the iso9660 partition table instead of the dos (Master Boot Record) partition table. To get lsblk to display the correct partition table the iso9660 partition table had to be deleted. Note that dd can also be used to wipe out a partition-table signature from a block (disk) device but dd could also wipe out other partition tables. Because we want to target only a particular partition-table signature for wiping, wipefs was preferred since unlike dd, with wipefs we would not have to recreate the partition table again. The -a option of the command wipefs erases all available signatures on the device but the -t option of the command wipefs when used together with the -a option restricts the erasing of signatures to only a certain type of partition table. Below we wipe the iso9660 partition table. The -f (--force) option is required when erasing a partition-table signature on a block device. 
[root@fedora user]# wipefs -a -t iso9660 -f /dev/sdc /dev/sdc: 5 bytes were erased at offset 0x00008001 (iso9660): 43 44 30 30 31 After erasing the iso9660 partition table we check the partition table again to confirm that the iso9660 partition table was erased: [root@fedora user]# wipefs /dev/sdc DEVICE OFFSET TYPE UUID LABEL sdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1 DEVICE OFFSET TYPE UUID LABEL sdc1 0x3 ntfs 34435675G36Y4776 sdc1 0x1fe dos Now that the problematic iso9660 partition table has been erased, lsblk uses the UUID of the partition as the mountpoint directory name, since the previously used label of the iso9660 partition table no longer exists: NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs 34435675G36Y4776 /run/media/user/34435675G36Y4776 We can check which volumes (i.e. partitions) have labels in the directory /dev/disk/by-label, which lists all the partitions that have a label: [root@fedora user]# ls -l /dev/disk/by-label total 0 lrwxrwxrwx. 1 root root 10 Apr 30 19:47 'System\x20Reserved' -> ../../sda1 The ntfs file system on the partition sda1 is the only one that has a label. To make the directory name of the mountpoint more human-readable, we change the label of the ntfs file system on the partition sdc1 from nothing (an empty string) to a new label. The commands for changing the label of a file system depend on the file system. For an ntfs file system, changing the label is done with the command ntfslabel: ntfslabel /dev/sdc1 "new-label" Now, after changing the label on the ntfs file system, lsblk uses "new-label" as the name of the directory of the mountpoint: NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs new-label /run/media/user/new-label Notice also that the device sdc no longer has a file system type and label, just like all the other block devices (e.g. sda).
Only partitions should have a file system type, since the file system lives on the partition, not on the device; the same goes for the label, since the LABEL column shows the file system label, not a device label.
Why is lsblk showing the old FSTYPE and LABEL of a device that was formatted?
1,639,666,599,000
I want to try my hand at writing a Linux driver. I am trying to set up my environment. My current kernel: $ uname -r 4.10.0-37-generic I then download the source code: $ apt-get source linux-image-$(uname -r) Reading package lists... Done Picking 'linux' as source package instead of 'linux-image-4.10.0-37-generic' ... When I compile and modprobe my driver, it fails. Looking in dmesg, it shows: version magic '4.10.17 SMP mod_unload ' should be '4.10.0-37-generic SMP mod_unload ' At this point, I'm confused. I go back to the source tree I downloaded, and when I run $ make kernelversion 4.10.17 Ok, try two. Download kernel 4.10.17 and install it. $ uname -r 4.10.17-041017-generic Still the error: version magic '4.10.17 SMP mod_unload ' should be '4.10.17-041017-generic SMP mod_unload So maybe someone can help: what is the best and correct way for me to get a working kernel and matching source on Ubuntu (well, Xubuntu, but I don't think it should matter)? Do I need to get the code from kernel.org and build it from scratch? I kinda want to match the shipping Ubuntu kernel.
There are a number of approaches... If you’re trying to build an external module (including one you’re developing), you only need the kernel headers: apt install linux-headers-$(uname -r) This will provide the necessary files so that the /lib/modules/$(uname -r)/{build,source} symlinks point to something meaningful. Then you can build a module in another directory by running make -C /lib/modules/$(uname -r)/build SUBDIRS="/path/to/your/module" modules This will ensure that the module is built for the kernel you’re running. If you want to base your development on the Ubuntu kernel, use the appropriate linux-source package; for your release of Ubuntu, that’s currently linux-source-4.10.0: apt install linux-source-4.10.0 cd /usr/src tar xf linux-source-4.10.0.tar.bz2 This will include the Ubuntu kernel patches, allowing you to build a kernel with the same features as your current kernel. Note however the caveat from the package description: This package is mainly meant for other packages to use, in order to build custom flavours. If you wish to use this package to create a custom Linux kernel, then it is suggested that you investigate the package kernel-package, which has been designed to ease the task of creating kernel image packages. If you are simply trying to build third-party modules for your kernel, you do not want this package. Install the appropriate linux-headers package instead. If you want to base your development on the upstream kernel (which is what I’d recommend), you should clone Linus’ tree and work there. To test your module, you’ll need to either build a full upstream kernel, or build your module using the approach given in point 1 above. In any case it’s not a good idea to use the linux source package itself (as obtained using apt-get source), since that’s really designed for building all the kernels used in Ubuntu. If you blindly debuild using that source package, you’ll wait for many hours before the build finishes...
(There are circumstances where this is appropriate, and the Ubuntu kernel documentation will explain what to do; but this is very likely not one of them.)
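As a quick sanity check for whichever route you pick, a minimal module (a sketch; hello.c and the "hello:" log prefix are my own choices) can be built against the running kernel's headers with the make -C command above, given a one-line kbuild Makefile containing obj-m += hello.o:

```c
/* hello.c - minimal module for verifying the build setup */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal test module");

static int __init hello_init(void)
{
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

If the headers match the running kernel, insmod hello.ko should load without the "version magic" mismatch from the question, and dmesg should show the pr_info line.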
Getting kernel source code (ubuntu)
1,639,666,599,000
I've noticed that some procs, such as bash, have their entire /proc/<pid>/ resources readable by the user who created that proc. However other procs, such as chrome or gnome-keyring-daemon, have most of their /proc/<pid>/ resources only accessible by root, despite the process itself being owned by the normal user and no suid being called. I dug through the kernel a bit and found that the /proc/ stuff gets limited if a task lacks a 'dumpable' flag, however I'm having a hard time understanding under what scenarios a task becomes undumpable (other than the setuid case, which doesn't apply to chrome or gnome-keyring): https://github.com/torvalds/linux/blob/164c09978cebebd8b5fc198e9243777dbaecdfa0/fs/proc/base.c#L1532 Anyone care to help me understand the underlying mechanism and the reasons for it? Thanks! Edit: Found a good doc on why you wouldn't want to have your SSH agent (such as gnome-keyring-daemon) dumpable by your user. Still not sure how gnome-keyring-daemon is making itself undumpable. https://github.com/torvalds/linux/blob/164c09978cebebd8b5fc198e9243777dbaecdfa0/Documentation/security/Yama.txt#L30
Linux has a system call, prctl(), which can change the dumpable flag. Here is some example code, which I wrote several years ago: #include <sys/prctl.h> ... /* The last three arguments are just padding, because the * system call takes five arguments; unused ones should be 0. */ prctl(PR_SET_DUMPABLE, 1, 0, 0, 0); It may be that gnome-keyring-daemon deliberately sets the dumpable flag to zero for security reasons.
What causes /proc/<pid>/* resources to become owned by root, despite the procs being launched as a normal user?
1,639,666,599,000
I wanted to check my current Debian version, so I typed uname -a and it gave me some output including Debian 3.2.73. But then I found the command cat /etc/debian_version and it gave me Debian 7.9. 1. What is the difference between the two commands? 2. Which version is installed?
uname prints the name, version, and other details about the current machine and the operating system kernel running on it; 3.2.73 is the kernel version of your operating system. Running uname -a shows the kernel name, version, build date, and so on. cat /etc/debian_version shows the version of the Debian distribution you are running. So both outputs are correct: the installed distribution is Debian 7.9 (wheezy), and it is running kernel version 3.2.73.
Different Debian versions from two different commands
1,639,666,599,000
I was trying to study kernel debugging using QEMU. My initial attempt failed because there was no virtual file system. The answers to this post suggest that there should be a virtual file system, but they don't explain how to create one for kernel debugging and how to pass it to QEMU. Can you help me out?
Depending on the distribution you'd like to use, there are various ways to create a file system image, e.g. this article walks you through the laborious way to a "Linux from Scratch" system. In general, you'd either create a QEMU image using qemu-img, fetch some distribution's installation media and use QEMU with the installation medium to prepare the image (this page explains the process for Debian GNU/Linux) or use an image prepared by someone else. This section of the QEMU Wikibook contains all the information you need. Edit: As Gilles' answer to the linked question suggests, you don't need a full-blown root file system for testing, you could just use an initrd image (say, Arch Linux's initrd like here)
Debugging Linux Kernel with QEMU
1,639,666,599,000
Directly related: Prevent claiming of novelty usb device by usbhid so I can control it with libusb? I want to access an RFID reader (works as HID device) from a program that uses libusb-0.1. In the code, the kernel driver is correctly detached with usb_detach_kernel_driver_np (no errors), but is seems that whenever my program tries to access the USB device, the usbhid module reclaims it. The following error always appears in dmesg: usb 1-1.3: usbfs: interface 0 claimed by usbhid while 'MyProgram' sets config #1 I've added the following udev rule, restarted udevd and replugged the device, but without effect. It is supposed to blacklist the device from being used by usbhid. # I anonymized the vendor/product IDs here ATTRS{idVendor}=="dead", ATTRS{idProduct}=="beef", OPTIONS=="ignore_device" Apart from dmesg output, I can see in /sys/bus/usb/drivers/usbhid/ that the device 1-1.3:1.0 is recreated every time, so the blacklisting doesn't seem to work. Anything else I could try? The operating system is Raspbian (on a Raspberry Pi) with kernel 3.2.27.
I've solved this part of the problem: OPTIONS=="ignore_device" was removed from udev (commit). blacklist usbhid didn't do anything; it didn't even block my keyboard. A configuration file in /etc/modprobe.d with options usbhid quirks=0xdead:0xbeef:0x0004 did not work because usbhid was not compiled as a module. So I added usbhid.quirks=0xdead:0xbeef:0x4 to the boot command line (on Raspbian, that's in /boot/cmdline.txt) and usbhid does not bind the device anymore. My original problem, however, still remains. I always get a read/timeout error when accessing the RFID reader the first time.
Prevent usbhid from claiming USB device
1,639,666,599,000
I have compiled and installed the 2.6 kernel on an ARM board. I am using the ARM mini2440 board. I would like to know if there is already a way to access the General Purpose I/O port pins? Or will I have to do ioctl() and access them directly from the memory?
Use the sysfs control files in /sys/class/gpio. The following links will hopefully be useful to helping you get started: http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/GPIO Have seen reports of this article on the Beagle Board also working with the mini2440: http://blog.makezine.com/archive/2009/02/blinking_leds_with_the_beagle_board.html In your Linux kernel documentation, look at Documentation/gpio.txt too.
Linux kernel 2.6 on ARM
1,639,666,599,000
I get the following errors in my logs: kernel: snd_hda_intel 0000:00:1b.0: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj Google found some old posts here and here which deal with the same problem. The offered solution is to change the value for the kernel module: options snd-hda-intel enable_msi=1 bdl_pos_adj=1,48 However, nowhere is it explained what the numbers mean. Moreover, the current (default) value that I have has multiple numbers: # cat /sys/module/snd_hda_intel/parameters/bdl_pos_adj -1,1,-1,-1,-1,-1,-1,-1 Can somebody please explain what all these numbers mean, and how to change them to get rid of the errors?
The kernel documentation describes bdl_pos_adj as follows (see the ALSA driver configuration guide and More Notes on HD-Audio Driver): bdl_pos_adj - Specifies the DMA IRQ timing delay in samples. Passing -1 will make the driver to choose the appropriate value based on the controller chip. (sic). On Intel controllers, the default is 1 (which is what you can see in your own /sys/module/snd_hda_intel/parameters/bdl_pos_adj). The multiple numbers are there because the module supports multiple HDA devices (eight by default, it's SNDRV_CARDS in the kernel source). I'm not sure off-hand what the correspondence is; I'd have hoped it would match the card number, but you're getting the error for card #0 while your bdl_pos_adj suggests it's taking its value in second position... As far as fixing the problem, there isn't much documentation and the code doesn't say much either. The only suggestion I have is to follow the instructions, and try increasing the value until you get something that works: options snd-hda-intel enable_msi=1 bdl_pos_adj=2,2 (I'm using 2,2 here because I'm not sure which of the first two will be used for your device.)
bdl_pos_adj: set IRQ timing workaround for hda-intel
1,639,666,599,000
I am trying to run a distro in a virtual disk image with a custom kernel, so that I can experiment with and debug the kernel. I followed this to make a disk image and then install Debian to it. Then I tried running the distro with the following command: qemu-system-i386 -hda debian.img -kernel ../linux-3.6.11/arch/i386/boot/bzImage -append "root=/dev/sda1" To my disappointment it simply gives: Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(8,1). How can I fix the problem? Am I on the right path as far as kernel debugging is concerned?
I don't think you would have to start debugging the kernel right away. This error message means that the kernel is unable to mount the partition you requested as /. This would happen, for example, if you gave it an empty disk image (my hunch is this is your case): the kernel in the VM sees an unpartitioned drive; there is no /dev/sda1, just /dev/sda. To overcome this, follow the instructions in the guide you have used: download a bootable ISO image and use it to install the system into the VM image. When a raw disk image is used, it can be directly partitioned with utilities like gdisk, fdisk or parted. Another possibility is that you are trying to mount a filesystem for which the kernel doesn't have a driver. This usually happens when one uses a kernel that has most drivers in loadable modules on the initrd and the initrd isn't loaded (hence the kernel lacks the ability to understand the particular filesystem).
Kernel and QEMU : Unable to mount root fs error
1,639,666,599,000
Is there any user space tool that can retrieve and dump the list of bad blocks in a NAND flash device? I've checked the mtdinfo command line utility, and also searched /proc and /sys, but couldn't find anything. I am looking for something suitable for use from a shell script. I could parse dmesg as the kernel prints bad block information on init, but I am hoping there will be a better way.
I have not been able to find any user space utility doing what I need. The closest I have found is the nanddump utility from mtd-utils, which can dump NAND contents, including bad blocks.
Print list of bad blocks in NAND flash from user space
1,639,666,599,000
I was going to post this on Server Fault originally, but I thought this might be a better place. Let me know if you think there is a better place to post this question. I have a user-space application which performs networking through Java NIO's API (i.e. epoll on Linux). For demonstration and diagnostic purposes, I have a line-testing utility. It's basically the same thing as iperf. Some information about the environment and how the test is run: Ubuntu 16.04 Desktop updated today (4.4.0-34-generic); irqbalance is off; Intel X540T1 10GbE (ixgbe) receiver <-> Solarflare 10GbE (sfc) sender; uses 10,000 TCP sockets; sockets use the OS default configurations; the user-space read buffer is 32KB; reading occurs at no more than 40hz. The line test consists of a single client that transmits as much information as possible over the TCP sockets. read() is allowed to be called more than once per socket to obtain up to 98KB per tick (the 32KB buffer would have to be read 3 times to hit the ceiling). This means that at 40hz and the 98KB ceiling, read() can be called up to 120 times per second per connection, reading a total of 3,840KB. The line tester shows that read() is called a total of about 110,000 times a second. The line test totally saturates the 10GbE adapter easily, using about 8% softirq: top - 22:04:29 up 51 min, 1 user, load average: 1.31, 1.02, 0.66 Tasks: 258 total, 1 running, 257 sleeping, 0 stopped, 0 zombie %Cpu(s): 2.2 us, 3.6 sy, 0.0 ni, 85.6 id, 1.1 wa, 0.0 hi, 7.4 si, 0.0 st KiB Mem : 16378912 total, 12909832 free, 2383088 used, 1085992 buff/cache KiB Swap: 16721916 total, 16721916 free, 0 used.
13746736 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4922 jon 20 0 1553556 492552 127160 S 125.0 3.0 0:54.61 firefox 5099 jon 20 0 7212040 218396 16872 S 75.0 1.3 2:59.88 java 3194 root 20 0 722144 163812 134052 S 18.8 1.0 1:25.63 Xorg 4149 jon 20 0 1588648 147848 75344 S 6.2 0.9 0:28.63 compiz 4197 jon 20 0 544660 40600 26804 S 6.2 0.2 0:01.20 indicator-+ 5186 jon 20 0 41948 3696 3084 R 6.2 0.0 0:00.01 top 1 root 20 0 119744 5884 3964 S 0.0 0.0 0:00.84 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 5:01.01 ksoftirqd/0 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+ 7 root 20 0 0 0 0 S 0.0 0.0 0:01.06 rcu_sched 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh 9 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 10 root rt 0 0 0 0 S 0.0 0.0 0:00.04 watchdog/0 11 root rt 0 0 0 0 S 0.0 0.0 0:00.01 watchdog/1 12 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/1 13 root 20 0 0 0 0 S 0.0 0.0 0:08.16 ksoftirqd/1 cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer 1: 0 1 0 0 1 0 0 0 IR-IO-APIC 1-edge i8042 5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0 8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0 9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi 12: 2 0 1 0 1 0 0 0 IR-IO-APIC 12-edge i8042 16: 50 6 2 6 10 0 0 3 IR-IO-APIC 16-fasteoi ehci_hcd:usb1 17: 1138 35 14 24 227 25 35 24 IR-IO-APIC 17-fasteoi snd_hda_intel 19: 0 1 0 0 0 1 0 0 IR-IO-APIC 19-fasteoi firewire_ohci 23: 11 4 10 1 7 0 0 0 IR-IO-APIC 23-fasteoi ehci_hcd:usb2 24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0 27: 4571 1431 1142 812 1286 1442 985 730 IR-PCI-MSI 327680-edge xhci_hcd 28: 26230 3078 1744 1325 6297 2715 1703 1258 IR-PCI-MSI 512000-edge 0000:00:1f.2 29: 754 43 28 30 215 176 129 76 IR-PCI-MSI 2097152-edge eth0-rx-0 30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0 31: 0 0 0 0 1 0 0 0 IR-PCI-MSI 2097154-edge eth0 32: 757 64 28 33 205 169 129 66 IR-PCI-MSI 2621440-edge eth1-rx-0 33: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge 
eth1-tx-0 34: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1 35: 1042128 233608 58916 16705 1612687 1484813 1121118 630363 IR-PCI-MSI 1048576-edge enp2s0-TxRx-0 36: 858271 736510 372134 165262 1704892 1127381 1265752 767377 IR-PCI-MSI 1048577-edge enp2s0-TxRx-1 37: 816359 711664 426719 192686 1475309 1307882 807216 712562 IR-PCI-MSI 1048578-edge enp2s0-TxRx-2 38: 934786 714007 432100 217627 1905295 1622682 1150693 517990 IR-PCI-MSI 1048579-edge enp2s0-TxRx-3 39: 0 0 0 0 14185366 0 0 0 IR-PCI-MSI 1048580-edge enp2s0-TxRx-4 40: 0 0 0 0 0 14332864 0 0 IR-PCI-MSI 1048581-edge enp2s0-TxRx-5 41: 0 0 0 0 0 0 14617282 0 IR-PCI-MSI 1048582-edge enp2s0-TxRx-6 42: 0 0 0 0 0 0 0 14840029 IR-PCI-MSI 1048583-edge enp2s0-TxRx-7 43: 57 88 47 34 77 64 75 58 IR-PCI-MSI 1048584-edge enp2s0 44: 0 0 0 0 0 13 1 1 IR-PCI-MSI 360448-edge mei_me 45: 246 20 30 4 345 132 128 142 IR-PCI-MSI 442368-edge snd_hda_intel 46: 63933 9794 7233 4753 28843 19323 17678 11191 IR-PCI-MSI 524288-edge nvidia NMI: 57 43 35 42 103 98 83 76 Non-maskable interrupts LOC: 300755 258293 257168 289802 373725 262211 218677 196510 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 57 43 35 42 103 98 83 76 Performance monitoring interrupts IWI: 0 0 0 0 1 0 0 0 IRQ work interrupts RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries RES: 7721466 2192716 1958606 3095012 1106115 1189666 309133 169884 Rescheduling interrupts CAL: 2598 2206 2194 1751 1976 2255 2130 2211 Function call interrupts TLB: 5450 6659 6103 5640 4352 5128 4535 4470 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 11 11 11 11 11 11 11 11 Machine check polls ERR: 0 MIS: 0 PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event Now, lets apply rate control to the socket reader. 
Inbound rate control is set to 50KB per connection, which is about 500MB/s since we have 10,000 connections. Rate control sets the reading frequency to 5hz, down from 40hz in the previous example. Rate control's frequency is not aligned, meaning that not all connections tick using the same starting reference; however, they are all governed by a single clock. The clock is 40hz, meaning there are 40 opportunities per second for scheduled rate-control reads to occur. During each of those 5hz rate-control reads, the socket is only allowed to read up to 10KB. So, 5 times a second it reads 10KB out of the socket buffer. The line tester shows that read() is called a total of about 47,000 times a second. The amount of softirq jumps from 8% to 50-65%; the number of interrupts almost triples, and there are 26-58 million RES interrupts (per core) compared to 1-7 million before. top - 22:31:50 up 1:19, 1 user, load average: 2.30, 2.30, 1.96 Tasks: 259 total, 2 running, 257 sleeping, 0 stopped, 0 zombie %Cpu(s): 3.3 us, 5.5 sy, 0.0 ni, 41.2 id, 0.0 wa, 0.0 hi, 50.0 si, 0.0 st KiB Mem : 16378912 total, 11752520 free, 2189080 used, 2437312 buff/cache KiB Swap: 16721916 total, 16721916 free, 0 used.
12590400 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3 root 20 0 0 0 0 S 82.1 0.0 26:57.43 ksoftirqd/0 5194 jon 20 0 7212040 233488 16720 S 46.2 1.4 12:08.73 java 28 root 20 0 0 0 0 S 40.2 0.0 9:04.84 ksoftirqd/4 33 root 20 0 0 0 0 S 30.9 0.0 7:26.84 ksoftirqd/5 43 root 20 0 0 0 0 R 21.6 0.0 4:26.41 ksoftirqd/7 38 root 20 0 0 0 0 S 21.3 0.0 5:37.16 ksoftirqd/6 4922 jon 20 0 1533388 475124 127784 S 5.6 2.9 2:41.82 firefox 3194 root 20 0 722448 163872 134052 S 5.3 1.0 2:50.84 Xorg 5154 jon 20 0 589896 83876 53964 S 1.7 0.5 0:26.08 plugin-con+ 13 root 20 0 0 0 0 S 1.3 0.0 0:42.60 ksoftirqd/1 4548 jon 20 0 5492168 634252 43104 S 1.3 3.9 2:18.86 java 4149 jon 20 0 1604016 169732 75348 S 1.0 1.0 0:52.62 compiz 18 root 20 0 0 0 0 S 0.7 0.0 0:35.31 ksoftirqd/2 23 root 20 0 0 0 0 S 0.3 0.0 0:22.65 ksoftirqd/3 cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer 1: 0 1 0 0 1 0 0 0 IR-IO-APIC 1-edge i8042 5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0 8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0 9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi 12: 2 0 1 0 1 0 0 0 IR-IO-APIC 12-edge i8042 16: 50 6 2 6 10 0 0 3 IR-IO-APIC 16-fasteoi ehci_hcd:usb1 17: 1138 35 14 24 227 25 35 24 IR-IO-APIC 17-fasteoi snd_hda_intel 19: 0 1 0 0 0 1 0 0 IR-IO-APIC 19-fasteoi firewire_ohci 23: 11 4 10 1 7 0 0 0 IR-IO-APIC 23-fasteoi ehci_hcd:usb2 24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0 27: 6518 1966 1471 1031 4361 3847 2501 1673 IR-PCI-MSI 327680-edge xhci_hcd 28: 26732 3381 1957 1447 6687 3367 2112 1502 IR-PCI-MSI 512000-edge 0000:00:1f.2 29: 930 184 150 114 283 344 232 142 IR-PCI-MSI 2097152-edge eth0-rx-0 30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0 31: 0 0 0 0 1 0 0 0 IR-PCI-MSI 2097154-edge eth0 32: 899 234 138 104 277 348 236 143 IR-PCI-MSI 2621440-edge eth1-rx-0 33: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge eth1-tx-0 34: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1 35: 1339704 330929 97391 31445 2023348 1859243 1369358 
782238 IR-PCI-MSI 1048576-edge enp2s0-TxRx-0
36: 1863223 3328011 1764431 788048 2411300 2677922 2540016 1742062 IR-PCI-MSI 1048577-edge enp2s0-TxRx-1
37: 1911973 3426913 2084294 955668 2216702 2894499 2008907 1723010 IR-PCI-MSI 1048578-edge enp2s0-TxRx-2
38: 2064515 3379490 2155421 1093171 2652077 3162801 2369659 1442568 IR-PCI-MSI 1048579-edge enp2s0-TxRx-3
39: 0 0 0 0 23079493 0 0 0 IR-PCI-MSI 1048580-edge enp2s0-TxRx-4
40: 0 0 0 0 0 23379687 0 0 IR-PCI-MSI 1048581-edge enp2s0-TxRx-5
41: 0 0 0 0 0 0 24721093 0 IR-PCI-MSI 1048582-edge enp2s0-TxRx-6
42: 0 0 0 0 0 0 0 25752073 IR-PCI-MSI 1048583-edge enp2s0-TxRx-7
43: 211 430 277 179 142 219 240 197 IR-PCI-MSI 1048584-edge enp2s0
44: 0 0 0 0 0 13 1 1 IR-PCI-MSI 360448-edge mei_me
45: 246 20 30 4 345 132 128 142 IR-PCI-MSI 442368-edge snd_hda_intel
46: 87961 29805 21965 14718 43334 42053 34617 23830 IR-PCI-MSI 524288-edge nvidia
NMI: 218 130 107 105 252 247 225 214 Non-maskable interrupts
LOC: 716630 636798 640606 679852 641275 555921 488433 446196 Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
PMI: 218 130 107 105 252 247 225 214 Performance monitoring interrupts
IWI: 0 0 0 0 3 0 0 0 IRQ work interrupts
RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries
RES: 38554509 4165414 4123561 5839087 2680226 2883656 1297965 812274 Rescheduling interrupts
CAL: 3292 2356 2373 2014 2215 2496 2375 2474 Function call interrupts
TLB: 10997 21211 21364 22716 11757 23899 28023 27646 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts
MCE: 0 0 0 0 0 0 0 0 Machine check exceptions
MCP: 17 17 17 17 17 17 17 17 Machine check polls
ERR: 0
MIS: 0
PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event
PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event

Can anyone explain why this is happening and possibly how to avoid it?
For reference, here is top when using outbound rate control at 500MB/s:

top - 01:26:15 up 4:13, 1 user, load average: 0.38, 0.31, 1.00
Tasks: 254 total, 1 running, 253 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.7 us, 3.7 sy, 0.0 ni, 93.3 id, 0.1 wa, 0.0 hi, 1.2 si, 0.0 st
KiB Mem : 16378912 total, 12912528 free, 2209912 used, 1256472 buff/cache
KiB Swap: 16721916 total, 16721916 free, 0 used. 13873312 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6560 jon 20 0 7212040 204656 16836 S 38.9 1.2 0:21.37 java
3194 root 20 0 871176 206844 175404 S 1.0 1.3 12:11.62 Xorg
4149 jon 20 0 1909092 221972 99348 S 0.7 1.4 3:21.75 compiz
4548 jon 20 0 5879804 662312 45948 S 0.7 4.0 6:48.86 java
3940 jon 20 0 350840 13196 5468 S 0.3 0.1 0:20.41 ibus-daemon
4922 jon 20 0 1779380 686992 145824 S 0.3 4.2 20:38.42 firefox
5827 root 20 0 0 0 0 S 0.3 0.0 0:00.64 kworker/4:1
6341 root 20 0 0 0 0 S 0.3 0.0 0:00.93 kworker/1:2
6539 root 20 0 0 0 0 S 0.3 0.0 0:00.31 kworker/0:2
1 root 20 0 185280 5896 3964 S 0.0 0.0 0:01.01 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 107:56.20 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+

Attaching 2,500 TCP connections while using rate control shows an internal TCP outbound packet rate of 20K pps; at 5,000 TCP connections that number jumps to 105K pps; at 7,500 TCP connections outbound jumps to 190K pps (these are just the packets acknowledging reads, I assume).

EDIT 2: Putting the Solarflare card in the server and the Intel X540T1 in the client, I see IRQ pinning with ksoftirqd/0 using 100% and the total si at 12.5%, which is about one core. With the Solarflare card the RES interrupts don't exceed 10,000 per core. The following is the server when using the Solarflare card,
but only about 360-400MB/s is being received instead of the target 500MB/s:

top - 11:07:55 up 16 min, 1 user, load average: 1.49, 1.09, 0.62
Tasks: 259 total, 3 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.5 us, 2.5 sy, 0.0 ni, 83.5 id, 0.0 wa, 0.0 hi, 12.5 si, 0.0 st
KiB Mem : 16378912 total, 12294300 free, 2356136 used, 1728476 buff/cache
KiB Swap: 16721916 total, 16721916 free, 0 used. 13067464 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3 root 20 0 0 0 0 R 99.7 0.0 5:20.82 ksoftirqd/0
4620 jon 20 0 7212040 246176 16712 S 25.6 1.5 1:24.67 java
3241 root 20 0 716936 161772 133628 R 3.3 1.0 0:15.42 Xorg
4659 jon 20 0 654928 36356 27820 S 1.0 0.2 0:00.63 gnome-term+
4103 jon 20 0 1567768 141048 75340 S 0.7 0.9 0:06.44 compiz
4542 jon 20 0 5688204 601804 43040 S 0.7 3.7 1:03.91 java
7 root 20 0 0 0 0 S 0.3 0.0 0:00.93 rcu_sched
4538 root 20 0 0 0 0 S 0.3 0.0 0:00.68 kworker/4:2
1 root 20 0 119844 5980 4028 S 0.0 0.0 0:00.84 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
10 root rt 0 0 0 0 S 0.0 0.0 0:00.02 watchdog/0
11 root rt 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
12 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/1
13 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/1

cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
0: 17 0 0 0 0 0 0 0 IR-IO-APIC 2-edge timer
1: 1 0 0 1 0 0 0 0 IR-IO-APIC 1-edge i8042
5: 0 0 0 0 0 0 0 0 IR-IO-APIC 5-edge parport0
8: 0 0 0 0 0 1 0 0 IR-IO-APIC 8-edge rtc0
9: 0 0 0 0 0 0 0 0 IR-IO-APIC 9-fasteoi acpi
12: 1 0 1 0 1 0 1 0 IR-IO-APIC 12-edge i8042
16: 61 2 1 3 7 2 1 0 IR-IO-APIC 16-fasteoi ehci_hcd:usb1
17: 1166 55 10 19 245 45 13 19 IR-IO-APIC 17-fasteoi snd_hda_intel
19: 0 0 0 0 2 0 0 0 IR-IO-APIC 19-fasteoi firewire_ohci
23: 26 1 2 0 1 2 0 1 IR-IO-APIC 23-fasteoi ehci_hcd:usb2
24: 0 0 0 0 0 0 0 0 DMAR-MSI 0-edge dmar0
27: 1723 170 168 126 1603 166 135 47
IR-PCI-MSI 327680-edge xhci_hcd
28: 24980 1714 933 754 7492 1546 1202 936 IR-PCI-MSI 512000-edge 0000:00:1f.2
29: 298 2 1 7 159 4 6 1 IR-PCI-MSI 2097152-edge eth0-rx-0
30: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2097153-edge eth0-tx-0
31: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2097154-edge eth0
32: 16878 5179 2952 3044 18575 7842 3822 3939 IR-PCI-MSI 1048576-edge enp2s0f0-0
33: 16174 4967 2787 2583 19305 7883 3507 3862 IR-PCI-MSI 1048577-edge enp2s0f0-1
34: 16707 5192 2952 2659 18031 8588 3496 4393 IR-PCI-MSI 1048578-edge enp2s0f0-2
35: 17726 5431 2951 2746 17183 8105 3529 4238 IR-PCI-MSI 1048579-edge enp2s0f0-3
36: 6 1 0 3 6 3 0 1 IR-PCI-MSI 1050624-edge enp2s0f1-0
37: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050625-edge enp2s0f1-1
38: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050626-edge enp2s0f1-2
39: 1 1 0 0 0 0 0 0 IR-PCI-MSI 1050627-edge enp2s0f1-3
40: 414 12 9 3 0 14 18 8 IR-PCI-MSI 2621440-edge eth1-rx-0
41: 0 0 0 0 0 0 0 0 IR-PCI-MSI 2621441-edge eth1-tx-0
42: 1 0 0 0 0 0 0 0 IR-PCI-MSI 2621442-edge eth1
43: 0 0 0 0 10 0 5 0 IR-PCI-MSI 360448-edge mei_me
44: 95 26 8 33 398 384 51 16 IR-PCI-MSI 442368-edge snd_hda_intel
45: 17400 1413 1135 806 17781 1714 1401 988 IR-PCI-MSI 524288-edge nvidia
NMI: 37 3 5 3 2 1 1 1 Non-maskable interrupts
LOC: 112894 53399 87350 46718 43552 19663 25436 19705 Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
PMI: 37 3 5 3 2 1 1 1 Performance monitoring interrupts
IWI: 0 0 0 0 0 0 0 0 IRQ work interrupts
RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries
RES: 1808 7668 9364 1244 4161 2554 9171 954 Rescheduling interrupts
CAL: 1900 2028 1497 1984 1862 1931 2118 2004 Function call interrupts
TLB: 1991 2539 3176 2985 3176 2458 1612 2087 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts
MCE: 0 0 0 0 0 0 0 0 Machine check exceptions
MCP: 5 5 5 5 5 5 5 5 Machine check polls
ERR: 0
MIS: 0
PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event
PIW: 0 0 0 0 0 0 0 0
Posted-interrupt wakeup event
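For concreteness, the per-tick read pacing described above amounts to something like the following sketch (the quota, tick rate, and buffer size are illustrative, not the actual product's values or internals):

```c
#include <time.h>
#include <unistd.h>

/* Read at most `quota` bytes from `fd` on each tick, `hz` ticks per
 * second, for `ticks` ticks.  Returns the total number of bytes read.
 * With quota = 10KB and hz = 5 this gives the 50KB/s per-connection
 * inbound rate described in the question. */
size_t paced_read(int fd, size_t quota, unsigned hz, unsigned ticks,
                  char *buf, size_t buflen)
{
    struct timespec tick = { 0, (long)(1000000000L / hz) };
    size_t total = 0;

    for (unsigned i = 0; i < ticks; i++) {
        size_t budget = quota;
        while (budget > 0) {
            size_t want = budget < buflen ? budget : buflen;
            ssize_t n = read(fd, buf, want);
            if (n <= 0)            /* EOF or error: give up this tick */
                break;
            total += (size_t)n;
            budget -= (size_t)n;
        }
        nanosleep(&tick, NULL);    /* wait for the next scheduled read */
    }
    return total;
}
```

Note that each tick may need several read() calls to drain its quota, which is how 10,000 connections at 5Hz can add up to roughly 47,000 read() calls per second.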
The problem ended up being that using rate control with the default socket configuration created a situation where the internal TCP buffer size was automatically adjusting to a larger and larger size because of the slow read-out times (the default maximum size is about 6MB). As the buffer grew, the TCP collapse/compact processing would start to churn like crazy and eat up all the softirq time. The way to fix this is to set an explicit TCP buffer size when using rate control, which prevents this aberrant behavior.
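A sketch of that fix in C (the 64KB figure is illustrative; in the Java application from the question the equivalent would be Socket.setReceiveBufferSize). Setting SO_RCVBUF explicitly disables the kernel's receive-buffer autotuning for that socket:

```c
#include <sys/socket.h>

/* Pin the kernel receive buffer for `fd` to `bytes`, which disables
 * receive-buffer autotuning for that socket.  Returns the size the
 * kernel actually granted (it doubles the request to leave room for
 * bookkeeping overhead), or -1 on error. */
int pin_rcvbuf(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0)
        return -1;

    int granted = 0;
    socklen_t len = sizeof granted;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) < 0)
        return -1;
    return granted;
}
```

This has to be done before the connection is established for it to reliably override autotuning.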
High softirq when using rate control networking
1,639,666,599,000
One of my servers keeps becoming unreachable because of a kernel bug. I tried all the kernel versions mentioned below, but unfortunately none of them fixed this issue. What can I do to resolve it?

Ubuntu Version: Ubuntu 16.04.3 LTS

Kernel Versions:
4.13.0
4.14.17
4.15.2
4.15.3

NIC:
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
Subsystem: Fujitsu Technology Solutions Ethernet Connection (2) I219-LM
Kernel driver in use: e1000e
Kernel modules: e1000e

syslog:
Feb 16 09:26:19 foxtrot kernel: [ 6315.103309] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Feb 16 09:26:46 foxtrot kernel: [ 6341.860523] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
Feb 16 09:26:46 foxtrot kernel: [ 6341.880459] ------------[ cut here ]------------
Feb 16 09:26:46 foxtrot kernel: [ 6341.880461] kernel BUG at /home/kernel/COD/linux/drivers/net/ethernet/intel/e1000e/netdev.c:3836!
Feb 16 09:26:46 foxtrot kernel: [ 6341.880609] invalid opcode: 0000 [#1] SMP PTI
Feb 16 09:26:46 foxtrot kernel: [ 6341.880702] Modules linked in: ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat_ipv4 xt_addrtype nf_nat br_netfilter bridge stp llc xt_tcpudp overlay nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack iptable_filter ip_tables x_tables intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd glue_helper cryptd intel_cstate intel_rapl_perf serio_raw intel_pch_thermal mac_hid acpi_pad autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid0 multipath linear raid1 e1000e psmouse ptp ahci pps_core libahci wmi video
Feb 16 09:26:46 foxtrot kernel: [ 6341.881046] CPU: 7 PID: 72 Comm: kworker/7:1 Tainted: G W 4.15.3-041503-generic #201802120730
Feb 16 09:26:46 foxtrot kernel: [
6341.881156] Hardware name: FUJITSU /D3401-H2, BIOS V5.0.0.12 R1.8.0 for D3401-H2x 05/15/2017
Feb 16 09:26:46 foxtrot kernel: [ 6341.881275] Workqueue: events e1000_reset_task [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.881373] RIP: 0010:e1000_flush_desc_rings+0x2cb/0x2e0 [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.881465] RSP: 0018:ffff9ff6033f3d88 EFLAGS: 00010202
Feb 16 09:26:46 foxtrot kernel: [ 6341.881555] RAX: 00000000000000d3 RBX: ffff8f0d2ee048c0 RCX: 00000000000000e9
Feb 16 09:26:46 foxtrot kernel: [ 6341.881648] RDX: 00000000000000d3 RSI: 0000000000000246 RDI: 0000000000000246
Feb 16 09:26:46 foxtrot kernel: [ 6341.881742] RBP: ffff9ff6033f3dc0 R08: 0000000000000002 R09: ffff9ff6033f3d54
Feb 16 09:26:46 foxtrot kernel: [ 6341.881835] R10: 00000000000000fe R11: 0000000000000000 R12: 000000003103f0fa
Feb 16 09:26:46 foxtrot kernel: [ 6341.881946] R13: ffff8f0d2ee04d78 R14: ffff8f0d39ca9480 R15: 0000000004008000
Feb 16 09:26:46 foxtrot kernel: [ 6341.882071] FS: 0000000000000000(0000) GS:ffff8f0d5e5c0000(0000) knlGS:0000000000000000
Feb 16 09:26:46 foxtrot kernel: [ 6341.882263] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 16 09:26:46 foxtrot kernel: [ 6341.882387] CR2: 00007fd08b9f7fd7 CR3: 0000000700a0a001 CR4: 00000000003606e0
Feb 16 09:26:46 foxtrot kernel: [ 6341.882481] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Feb 16 09:26:46 foxtrot kernel: [ 6341.882661] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Feb 16 09:26:46 foxtrot kernel: [ 6341.882787] Call Trace:
Feb 16 09:26:46 foxtrot kernel: [ 6341.882878] e1000e_reset+0x516/0x760 [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.882968] e1000e_down+0x1db/0x210 [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.883064] e1000e_reinit_locked+0x4c/0x70 [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.883156] e1000_reset_task+0x59/0x60 [e1000e]
Feb 16 09:26:46 foxtrot kernel: [ 6341.883250] process_one_work+0x1ef/0x410
Feb 16 09:26:46 foxtrot kernel: [
6341.883338] worker_thread+0x32/0x410
Feb 16 09:26:46 foxtrot kernel: [ 6341.883419] kthread+0x121/0x140
Feb 16 09:26:46 foxtrot kernel: [ 6341.883506] ? process_one_work+0x410/0x410
Feb 16 09:26:46 foxtrot kernel: [ 6341.883594] ? kthread_create_worker_on_cpu+0x70/0x70
Feb 16 09:26:46 foxtrot kernel: [ 6341.883685] ret_from_fork+0x35/0x40
Feb 16 09:26:46 foxtrot kernel: [ 6341.883772] Code: e8 fb fc ff ff eb d6 4c 89 ef e8 f1 fc ff ff eb 95 4c 89 ef e8 e7 fc ff ff e9 66 ff ff ff 4c 89 ef e8 da fc ff ff e9 02 ff ff ff <0f> 0b e8 5e fb 13 d8 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00
Feb 16 09:26:46 foxtrot kernel: [ 6341.883949] RIP: e1000_flush_desc_rings+0x2cb/0x2e0 [e1000e] RSP: ffff9ff6033f3d88
Feb 16 09:26:46 foxtrot kernel: [ 6341.884056] ---[ end trace abbf45ab36b73ab9 ]---
Feb 16 09:28:38 foxtrot autossh[1513]: ssh exited with error status 255; restarting ssh
Feb 16 09:28:38 foxtrot autossh[1513]: starting ssh (count 2)
Feb 16 09:28:38 foxtrot autossh[1513]: ssh child pid is 20383
Feb 16 09:28:40 foxtrot autossh[1513]: ssh exited with error status 255; restarting ssh
Feb 16 09:28:40 foxtrot autossh[1513]: starting ssh (count 3)
I was able to resolve this issue by disabling TSO, GSO and GRO with the following command (it needs to be run again after a server reboot, so it can also be added to rc.local):

ethtool -K eth0 gso off gro off tso off

It has been over 6 months now and the issue hasn't occurred again since I disabled these offloads.
Kernel bug causes ethernet driver to stop working
1,639,666,599,000
I need to use a custom kernel option at compile time (ACPI_REV_OVERRIDE_POSSIBLE) in order for my graphics card to work correctly with bumblebeed and the nvidia drivers on my Dell XPS 15 9560. I'm using Arch Linux. Every few days there is a new kernel release (4.11.5, 4.11.6, ...). How should I handle those kernel updates? Do I need to recompile the kernel manually each time? (I made a small script to accelerate the process, but some steps still need to be done manually, and it takes a really long time to compile.) Is it possible to automate the process so that each time a kernel update shows up, the package manager compiles the kernel itself with the option I specified? Or with a script?
That config line should exist in the /proc/config.gz file of any kernel you previously configured it in. You could do what I do, in a two-liner, on my Gentoo systems:

su -
cd /usr/src && cp -a linux-<new version> /dev/shm/ && ln -s /dev/shm/linux-<new version> linux && cd linux && zcat /proc/config.gz > .config && make olddefconfig && make -j<numcpus+1> bzImage modules && mount /boot && make modules_install install && grub-mkconfig > /boot/grub/grub.cfg && sync && reboot

Hi, I'm typing this from memory on my mobile right now, and I always goof on the order of 'ln', and it might be "defoldconfig". But, basically, that's what I do every time. Works for me. :) YMMV. I'll edit later with corrections once I get a good terminal and shell. :)

I always compile on tmpfs, because nothing on a system is faster and more resilient to write-rot than RAM. Check out the 'make help' output when run in the kernel source directory for references, and the yummy Gentoo Wiki for even more good info:

https://wiki.gentoo.org/wiki/Kernel/Upgrade/
https://wiki.gentoo.org/wiki/GRUB2
How to handle linux kernel updates when using a custom kernel?
1,425,951,077,000
I am running VirtualBox (using the QIIME image, http://qiime.org/install/virtual_box.html). The physical hardware is a 32-core machine. The virtual machine in VirtualBox has been given 16 cores. When booting I get:

Ubuntu 10.04.1 LTS
Linux 2.6.38-15-server

# grep . /sys/devices/system/cpu/*
/sys/devices/system/cpu/kernel_max:255
/sys/devices/system/cpu/offline:1-15
/sys/devices/system/cpu/online:0
/sys/devices/system/cpu/possible:0-15
/sys/devices/system/cpu/present:0
/sys/devices/system/cpu/sched_mc_power_savings:0

# ls /sys/kernel/debug/tracing/per_cpu/
cpu0 cpu1 cpu10 cpu11 cpu12 cpu13 cpu14 cpu15 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9

# ls /sys/devices/system/cpu/
cpu0 cpufreq cpuidle kernel_max offline online possible present probe release sched_mc_power_savings

# echo 1 > /sys/devices/system/cpu/cpu6/online
-su: /sys/devices/system/cpu/cpu6/online: No such file or directory

So it seems the kernel detects the resources for 16 CPUs, but it only brings one online. I have tested with another image that the VirtualBox host can run a guest with 16 cores; that works. So the problem is to troubleshoot the QIIME image to figure out why this guest image only detects 1 CPU.
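The same configured-versus-online distinction that the sysfs files above expose can be queried from C with sysconf; a minimal sketch:

```c
#include <unistd.h>

/* CPUs the kernel knows about (cf. /sys/devices/system/cpu/possible). */
long cpus_configured(void)
{
    return sysconf(_SC_NPROCESSORS_CONF);
}

/* CPUs currently online (cf. /sys/devices/system/cpu/online).  On the
 * guest described above this returns 1 even though 16 are configured. */
long cpus_online(void)
{
    return sysconf(_SC_NPROCESSORS_ONLN);
}
```

A gap between the two values is exactly the symptom shown in the question.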
QIIME came out with a new virtualbox image (version 1.5), which works. If no one finds the answer to the problem above I will close the question in a week.
VirtualBox guest: 16 CPUs detected but only 1 online
1,425,951,077,000
Is there a command, like

vi > out
vi | out

that I could use to cause a watchdog reset of my embedded Linux device?
If you have a watchdog on your system and a driver that uses /dev/watchdog, all you have to do is kill the process that is feeding it; if there is no such process, then you can touch /dev/watchdog once to turn it on, and if you don't touch it again, it will reset.

You also might be interested in resetting the device the "magic sysrq" way. If you have a kernel with the CONFIG_MAGIC_SYSRQ feature compiled in, then you can echo 1 > /proc/sys/kernel/sysrq to enable it, then echo b > /proc/sysrq-trigger to reboot. When you do this, it reboots immediately, without unmounting or syncing filesystems.
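A sketch of what such a feeding process looks like. The device path is a parameter so the code can be exercised against an ordinary file; on a real system it would be /dev/watchdog and require root, and simply stopping the feed is what triggers the reset the question asks for. The 'V' magic-close convention is honored by many, but not all, watchdog drivers (and never when the kernel is built with CONFIG_WATCHDOG_NOWAYOUT):

```c
#include <fcntl.h>
#include <unistd.h>

/* "Pet" the watchdog device once.  Returns 0 on success, -1 on error.
 * Stop calling this within the timeout and the hardware resets. */
int watchdog_pet(const char *dev)
{
    int fd = open(dev, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, "\0", 1);
    close(fd);
    return n == 1 ? 0 : -1;
}

/* Disarm cleanly: write the magic 'V' before closing so the driver
 * does not treat the close as "feeder died" and reset the board. */
int watchdog_disarm(const char *dev)
{
    int fd = open(dev, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, "V", 1);
    close(fd);
    return n == 1 ? 0 : -1;
}
```

In practice a feeder keeps the device open and writes periodically; reopening per pet, as above, keeps the sketch short.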
How do I cause a watchdog reset of my embedded Linux device
1,425,951,077,000
I set up zram and ran extensive tests on my Linux machines to confirm that it really helps in my scenario. However, I'm very confused that zram seems to use up memory equal to the whole uncompressed data size. When I type zramctl I see this:

NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 2G 853,6M 355,1M 367,1M 4 [SWAP]

According to the help output of zramctl, DATA is the uncompressed size and TOTAL the compressed memory including metadata. Yet when I type swapon -s, I see this output:

Filename Type size used Priority
/dev/sda2 partition 1463292 0 4
/dev/zram0 partition 2024224 906240 5

906240 is the used memory in kilobytes, which translates to the 853,6M DATA value of zramctl. This leaves the impression that the compressed zram device needs more memory than it saves. Once DATA is full, it actually starts swapping to the disk drive, so it must indeed be full.

Why does zram seemingly occupy memory equal to the original data size? Why is it not the size of COMPR or TOTAL? It seems there is no source about this on the Internet yet, because I haven't found any information about it. Thank you!
So after more testing and observations, I made a few very interesting discoveries. DATA is indeed the uncompressed amount of memory that takes up the swap space. But at first glance it's very deceiving and confusing. When you setup zram and use it as swap, disksize does not stand for the total amount of memory that zram will consume for compressed data. Instead, it stands for the total amount of uncompressed data that zram will compress. So you could create a zram device with a size of 2 GB, but in practice zram will stop after the total compressed memory is around 500 - 1000 MB (depends on your scenario of course). Commands like swapon -s or Gnome's system monitor show the uncompressed data size for the zram device, just like the DATA of zramctl. Thankfully, in reality, zram does not actually use up the reported amount of memory. But this means that in practice, you actually have to create a zram disk size that equals the RAM you have + 50% to take real advantage of it and not a disk size that equals half of the RAM size, like zram-config incorrectly does. But read on to find out more. Here is the deeper background: Why am I so sure? Because I tested this with zswap as well. I have compiled an own kernel where I lowered the file_prio value inside mm/vmscan.c compared to anon_prio (in newer Linux 5.6 kernels the variables have been renamed to fp and ap respectively). The reduced file_prio value will make the kernel not discard valuable cache memory as much anymore. By default, even with vm.swappiness at 100, the kernel discards an insane amount of cached RAM data, both in standby memory and for active programs. The performance hit with the default configuration is extreme in memory pressure situations when you actually want to make use of zram, because then you absolutely want the kernel to swap rarely used and highly compressible memory way more often. With more free memory, you have more space for cached data. 
Then cached data won't be thrown away at a ridiculously high rate, and Linux won't have to reread certain purged program file cache repeatedly. When testing this on classic hard drives, you can easily verify the performance impact.

Back to my zswap test: with my custom kernel, zswap got plenty of memory to compress once I hit the 50-70% memory mark. Gnome's System Monitor immediately shows high swap usage for the swap partition, but oddly enough, there was no hard drive paging at all! This is of course by design of zswap: it swaps least recently used memory on its own. But the interesting part is that the system reports such high swap usage for the swap partition anyway, so ultimately you are limited by the size of your swap partition or swap file. Even though all memory is compressed, you have to have at least the swap size of the uncompressed data. Therefore, even if in practice 4 GB of swapped memory in zswap uses up only 1-2 GB, your swap needs to have the size of the uncompressed data. The same goes for zram, but here the memory is at least not actually reserved. Well, unless you use zswap with a dynamically growing swap file, of course.

As for zram again, there is also a very interesting detail that backs up the observation I made. The zram documentation says:

There is little point creating a zram of greater than twice the size of memory since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the size of the disk when not in use so a huge zram is wasteful.

This means that to make effective use of zram, you have to create a disk size that at least equals your installed RAM. Due to the high compression ratios, I would suggest using your GB of RAM + 50%, but the quote above implies that it does not make much sense to go above +100%. Additionally, since we have to specify a disk size that matches the uncompressed data size, it is much harder to control and predict the actual real memory usage.
From the helpful official source above, we can limit the actual memory usage (which equals the TOTAL value of zramctl) with this command:

echo 1G > /sys/block/zram0/mem_limit

But in reality, doing this will lock up the machine, because the system still tries to swap to the device while zram enforces the limit, and the machine locks up with very high CPU usage. This behavior can't be intentional at all, which strengthens my impression that something about the whole story is very wonky.

To sum this up:

The disksize you set during zram device creation is basically a virtual disk size; it does not stand for the real RAM usage. You have to predict the actual RAM usage (compression ratio) for your scenario, or make sure that you never create a zram disk size that is too large. Your current RAM size + 50% should nearly always be fine in practice.

The default configuration of the Linux kernel is unfortunately unsuited for zram compression, even when setting vm.swappiness to 100. You need to build your own custom kernel to get real use out of this handy feature, since Linux purges far too many file caches instead of freeing up memory by swapping the most compressible data much earlier. Ironically, a helpful patch to fix this situation was never accepted.

Using the zram limit (echo 1G > /sys/block/zram0/mem_limit) will lock up your system once the compressed data reaches that threshold. You are better off limiting zram usage with a well-predicted zram disksize, as it seems there is no alternative way to impose a limit.
Why does zram occupy much more memory compared to its "compressed" value?
1,425,951,077,000
I have a hypothetical situation: let us say we have two strace processes, S1 and S2, which are simply monitoring each other. How can this be possible? Well, among the command-line options for strace, -p PID is the way to pass the required PID, which (in our case) is not yet known when we issue the strace command. We could change the strace source code such that -P 0 means: ask the user for the PID, e.g. read() it from STDIN. Then we can run two strace processes in two shell sessions, find their PIDs in a third shell, provide that input to S1 and S2, and let them monitor each other.

Would S1 and S2 get stuck? Or go into infinite loops, or crash immediately, or...?

Again, let us say we have another strace process, S3, invoked with -p -1, which (by modifying the source code) we use to tell S3 to monitor itself, e.g. via getpid() without using STDIN. Would S3 crash? Or would it hang with no further processing possible? Would it wait for some event to happen, but, because it is waiting, no event would happen?

The strace man page says that we cannot monitor an init process. Is there any other limitation enforced by strace, or by the kernel, to avoid a circular dependency or loop?

Some special cases:
S4 monitors S5, S5 monitors S6, S6 monitors S4.
S7 and S8 monitor each other, where S7 is the parent of S8.
More special cases are possible.

EDIT (after comments by @Ralph Rönnquist and @pfnuesel): https://github.com/bnoordhuis/strace/blob/master/strace.c#L941

if (pid <= 0) {
        error_msg_and_die("Invalid process id: '%s'", opt);
}
if (pid == strace_tracer_pid) {
        error_msg_and_die("I'm sorry, I can't let you do that, Dave.");
}

Specifically, what will happen if strace.c does not check for pid == strace_tracer_pid or any other special cases? Is there any technical limitation (in the kernel) to one process monitoring itself? How about a group of 2 (or 3 or more) processes monitoring each other? Will the system crash or hang?
I will answer for Linux only. Surprisingly, in newer kernels, the ptrace system call, which strace uses in order to actually perform the tracing, is allowed to trace the init process. The manual page says:

EPERM The specified process cannot be traced. This could be because the tracer has insufficient privileges (the required capability is CAP_SYS_PTRACE); unprivileged processes cannot trace processes that they cannot send signals to or those running set-user-ID/set-group-ID programs, for obvious reasons. Alternatively, the process may already be being traced, or (on kernels before 2.6.26) be init(8) (PID 1).

implying that starting in version 2.6.26 you can trace init, although of course you must still be root in order to do so. The strace binary on my system allows me to trace init, and in fact I can even use gdb to attach to init and kill it. (When I did this, the system immediately came to a halt.)

ptrace cannot be used by a process to trace itself, so if strace did not check, it would nevertheless fail at tracing itself. The following program:

#include <sys/ptrace.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    if (ptrace(PTRACE_ATTACH, getpid(), 0, 0) == -1) {
        perror(NULL);
    }
}

prints Operation not permitted (i.e., the result is EPERM). The kernel performs this check in ptrace.c:

	retval = -EPERM;
	if (unlikely(task->flags & PF_KTHREAD))
		goto out;
	if (same_thread_group(task, current))  // <-- this is the one
		goto out;

Now, it is possible for two strace processes to trace each other; the kernel will not prevent this, and you can observe the result yourself. For me, the last thing that the first strace process (PID = 5882) prints is:

ptrace(PTRACE_SEIZE, 5882, 0, 0x11

whereas the second strace process (PID = 5890) prints nothing at all. ps shows both processes in the state t, which, according to the proc(5) manual page, means trace-stopped.
This occurs because a tracee stops whenever it enters or exits a system call and whenever a signal is about to be delivered to it (other than SIGKILL). Assume process 5882 is already tracing process 5890. Then we can deduce the following sequence of events:

1. Process 5890 enters the ptrace system call, attempting to trace process 5882. Process 5890 enters trace-stop.
2. Process 5882 receives SIGCHLD to inform it that its tracee, process 5890, has stopped. (A trace-stopped process appears as though it received the SIGTRAP signal.)
3. Process 5882, seeing that its tracee has made a system call, dutifully prints out the information about the syscall that process 5890 is about to make, and its arguments. This is the last output you see.
4. Process 5882 calls ptrace(PTRACE_SYSCALL, 5890, ...) to allow process 5890 to continue.
5. Process 5890 leaves trace-stop and performs its ptrace(PTRACE_SEIZE, 5882, ...). When the latter returns, process 5890 enters trace-stop.
6. Process 5882 is sent SIGCHLD since its tracee has just stopped again. Since it is itself being traced, the receipt of the signal causes it to enter trace-stop.

Now both processes are stopped. The end.

As you can see from this example, the situation of two processes tracing each other does not create any inherent logical difficulties for the kernel, which is probably why the kernel code does not contain a check to prevent this situation from happening. It just happens to not be very useful for two processes to trace each other.
How can strace monitor itself?
1,425,951,077,000
I'd like to upgrade my CentOS 7.1 kernel from 3.10 to 4.0. I did this successfully in Ubuntu 15.04 using Linux 4.0.1-040001-generic; even though I updated it manually, it works fine. So could anyone please tell me how to upgrade the CentOS 7.1 kernel to 4.0?
You could upgrade the kernel via elrepo:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install --enablerepo=elrepo-kernel kernel-ml

You can also install the updated firmware and headers:

yum install --enablerepo=elrepo-kernel kernel-ml-{firmware,headers,devel}

You'll probably need to remove the old kernel firmware and headers first:

yum remove kernel-{firmware,headers,devel}
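After rebooting into the new kernel, you can confirm what is actually running with uname -r; the same check from code uses uname(2) (the prefix argument is whatever major version you installed, e.g. "4."):

```c
#include <string.h>
#include <sys/utsname.h>

/* Return 1 if the running kernel release starts with `prefix`
 * (e.g. "4." after booting an elrepo kernel-ml), 0 if not,
 * -1 on error. */
int kernel_release_has_prefix(const char *prefix)
{
    struct utsname u;
    if (uname(&u) < 0)
        return -1;
    return strncmp(u.release, prefix, strlen(prefix)) == 0;
}
```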
CentOS 7.1 still using outdated kernel 3.10: how to upgrade to kernel 4.0
1,425,951,077,000
Is it possible to boot Linux without an initrd.img? I am planning to build the default drivers into the kernel itself and avoid the initrd completely. Which modules should be made part of the kernel instead of loadable modules?
It is, unless your root volume is on LVM, on a dmcrypt partition, or otherwise requires commands to be run before it can be accessed. I haven't used an initrd on my server in years. You need at a minimum these modules built in:

the drivers of whatever controller your root volume's disk lives on
the drivers necessary to "get to" that controller, like PCI, PCIe support, USB support, etc.
the modules for the filesystem mounted on it

It's also a very good idea to build in your network card drivers as well. I've found that lspci/lsmod can help you here: from your currently running kernel, look at what's there and use the make menuconfig search option before compiling to find where to enable the modules.
Booting without initrd
1,425,951,077,000
My kernel command line looks like this: root=31:0 ro noinitrd console=ttyS0,115200 root=/dev/mtdblock2 rootfstype=squashfs I think the first root entry identifies a disk by its major and minor device number and the second entry identifies it by its name. I can confirm that the rootfs is indeed on /dev/mtdblock2 but I don't know how to interpret 31:0.
Different modules behave differently when you provide the same option multiple times. I know you can say console= multiple times and get multiple consoles (we use it for machines with main consoles on both their framebuffers and serial port). However, you can only have one root partition, so root= almost certainly overwrites the previous value seen, almost certainly in a left-to-right fashion. This is corroborated by the kernel source: in init/do_mounts.c, the function root_dev_setup() is responsible for acting on the root= option, and all it does is store the parameter value in a variable. So the boot parameter root=31:0 is overridden by root=/dev/mtdblock2, or at least that's the case in the 2.6.25 source tree I just checked. By the way, if you're competent with C, the function name_to_dev_t() in the same file is responsible for parsing the value of root=, and is very enlightening!

The x:y notation is a standard Unixism for major:minor, which is the way Unices identify devices. Traditionally, major was an 8-bit number identifying the driver for the hardware, and minor was an 8-bit number identifying the device itself. There are two namespaces for the major numbers: character devices and block devices. You can see both by typing cat /proc/devices, and you can see what's currently active by saying ls -la /dev. Here's an example:

ls -la /dev/zero /dev/sda
brw-rw---- 1 root disk 8, 0 Jan 12 22:01 /dev/sda
crw-rw-rw- 1 root root 1, 5 Jan 12 22:01 /dev/zero

The first column identifies the driver type (b for block, c for character). The two columns to the left of Jan are the major and minor numbers, in "major, minor" format. You can give root= any block device, independent of its name, using the major:minor notation. The full list of device numbers is in your kernel source tree, under Documentation/devices.txt. 31:0 seems to refer to /dev/rom0, the first ROM card on the system.
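For illustration, here is a userspace sketch of the major:minor half of that parsing, using the standard makedev()/major()/minor() macros (the kernel's name_to_dev_t() additionally handles /dev paths, hex device numbers, and several other formats):

```c
#include <stdlib.h>
#include <sys/sysmacros.h>
#include <sys/types.h>

/* Parse a "major:minor" string the way the kernel accepts it for
 * root= values.  Returns 0 for a malformed string (0:0 is never a
 * valid root device, so 0 can double as the error value). */
dev_t parse_root_dev(const char *s)
{
    char *end;
    unsigned long maj = strtoul(s, &end, 10);
    if (*end != ':')
        return 0;
    unsigned long min = strtoul(end + 1, &end, 10);
    if (*end != '\0')
        return 0;
    return makedev(maj, min);
}
```

Running it on "31:0" yields a dev_t whose major() is 31 and minor() is 0, i.e. the device the first root= option on the command line names.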
Multiple root options in Linux command line
1,425,951,077,000
I would like to change my kernel's page size from 4KB to 4MB, since a large addition of RAM to my computer means I am never running out of it anymore. The idea is that programs requiring large amounts of memory will spend less time allocating pages. I suppose it would improve performance, and I would like to try. However, I can't find the option anywhere when running make menuconfig. Is there a way to do that?
You probably want to look at Transparent Hugepages. The .config item is CONFIG_TRANSPARENT_HUGEPAGE. Note that enabling this won't give you huge pages automatically: you'll also need to select CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS (rather than CONFIG_TRANSPARENT_HUGEPAGE_MADVISE) to make huge pages the default for all mappings. Also note that this doesn't allow you to choose an arbitrary page size; it only lets you use the huge page sizes supported by the architecture. For x86_64 these are 2MB and 1GB, see https://en.wikipedia.org/wiki/Page_(computer_memory)#Huge_pages for the full table.
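The active THP policy can also be inspected and changed at runtime, without rebuilding the kernel — the active choice is the bracketed word in the sysfs file. A small sketch (the sysfs path is standard when THP is compiled in; the thp_active helper name is mine):

```shell
# The active policy is the [bracketed] entry in:
#   /sys/kernel/mm/transparent_hugepage/enabled
# e.g. "always [madvise] never" means madvise is active.
thp_active() {
    printf '%s\n' "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

thp_active "always [madvise] never"   # -> madvise

# To make THP the default at runtime (as root):
#   echo always > /sys/kernel/mm/transparent_hugepage/enabled
```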
Change the size of my memory pages?
1,425,951,077,000
I have a custom application running on an embedded x86 setup (built using buildroot and uClibc). The application has been running fine, but this morning when I returned to work I discovered my process had been killed and the following output on my terminal: SAK: killed process 1008 (CX_SC3): fd#4 opened to the tty SAK: killed process 1009 (CX_SC3): fd#4 opened to the tty SAK: killed process 1011 (CX_SC3): fd#4 opened to the tty SAK: killed process 1012 (CX_SC3): fd#4 opened to the tty Now CX_SC3 is my process - it has multiple threads, one of which opens /dev/ttyS0 to send messages over a radio modem. The fd number is 4 for the serial port. What I don't understand is: What does SAK mean? The PIDs listed above must refer to processes that were killed by my application, as there is only ever one instance of my application running at a time. Is it possible that these PIDs are actually my thread IDs (as my application always runs 4 threads)? If my application killed other processes, why was my application also killed? What does the "opened to the tty" part mean? From some research, this suggests it has something to do with an interrupt character sent to the tty that I used to start the program. What events could have led to the above output? My embedded setup is very small, uses busybox and runs vsftpd and very little else other than my custom application. It is vital that my application is robust. EDIT: In response to the comment below, if this is due to a SAK being detected, is there anything that can accidentally trigger it? Is it possible that something read on the serial port has triggered it? Also, how can I find the SAK combination for my system - I do not have an rc.sysinit or rc.local file anywhere in my root file system. UPDATE: I have managed to pin this event down to the point at which my host machine shuts down. I have a serial cable between my host machine and my target device which I use to send serial data to the embedded target.
When I leave the target running but shut down the host, my application is killed as described above. When I disconnect the serial cable prior to shutting down my host machine, my application does not get killed and runs as normal. This behaviour happens even after I have performed echo 0 > /proc/sys/kernel/sysrq as advised.
SAK in this case really means Secure Attention Key. The message you are seeing is a kernel message defined in drivers/tty/tty_io.c. SAK is a key combination which ensures a secure login for a user on the console. On Linux, SAK ensures this by killing all processes attached to the terminal on which SAK is invoked. It is expected that init will then restart a trusted login process, like getty followed by login, or an X server with a display manager. The listed PIDs are indeed the PIDs of threads of your application CX_SC3 which were killed by SAK. "fd#n opened to the tty" means that the process/thread which was killed had file descriptor n opened to the terminal on which SAK was invoked.

In Linux there are two ways of invoking SAK:

Through the magic SysRq key - typically Alt+SysRq+K (virtual terminal) or Break followed by K (serial console). This is not your case, as you already tried to disable the magic SysRq with echo 0 > /proc/sys/kernel/sysrq, and sending the Break+K sequence by accident is improbable.

Through a defined key sequence (virtual terminal) or the break signal (serial console). SAK availability on a serial console is controlled by setserial. A break signal on a serial line is the continuous sending of spacing values over a time longer than the character sending time (including start, stop and parity bits). In your case it is highly probable that a break condition appears while your host machine is shutting down. Please try to turn SAK off on the serial port of the target device with setserial: setserial /dev/ttyS0 ^sak You can check the SAK status on the serial port with setserial -g /dev/ttyS0; when turned on, it will show SAK after Flags:. For automatic setting of the option after boot, see the startup scripts, which on BusyBox systems are usually /etc/init.d/rcS and /etc/rc.d/S*, or check /etc/inittab for other possibilities.
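The Flags: check can be scripted if you want the target to verify its own configuration at boot — a hedged sketch; the sak_listed helper name and the sample output format are my assumptions based on typical setserial -g output:

```shell
# setserial -g /dev/ttyS0 prints a line roughly like:
#   /dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4, Flags: spd_normal sak
sak_listed() {
    case "$1" in
        *Flags:*sak*) return 0 ;;   # SAK listed among the flags
        *)            return 1 ;;   # SAK not listed
    esac
}

# Intended use on the target (as root):
#   setserial /dev/ttyS0 ^sak          # turn SAK off
#   sak_listed "$(setserial -g /dev/ttyS0)" && echo "SAK still enabled"
```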
My process was killed but I cannot understand the kernel notice
1,425,951,077,000
I'm studying the Linux kernel right now with O'Reilly's Understanding the Linux Kernel, and lately covered the signal and interrupt handling chapters, sticking to a basic 2.4 Linux version and diving into the code as far as I can understand it. Yet I couldn't explain to myself, nor find an answer elsewhere, what instruction flow occurs when, let's say, Ctrl+C is pressed for a process running in the shell. What I have figured out so far: once a key is pressed, the APIC raises an IRQ line to the CPU; if the interrupt is not masked, the CPU loads the corresponding interrupt handler from the IDT; then some critical interrupt-handler code is invoked, moving the character pressed from the keyboard device's register to other registers. From here it's vague for me. I do understand, though, that interrupt handling is not in process context while exception handling is, so it was easy to figure out how an exception updates current->thread.error_code and current->thread.trap_no, finally invoking force_sig. Yet once an interrupt handler is executed, as in the example above, how does it finally get into context with the desired process and generate the signal?
The keypress generates an interrupt, just like you figured out. The interrupt is processed by an interrupt handler; which handler depends on the type of hardware, e.g. USB keyboard or PS/2 keyboard. The interrupt handler reads the key code from the hardware and buffers it. From the buffer the character is picked up by the tty driver, which, in the case of Ctrl-C recognizes it as the interrupt character and sends a SIGINT to the foreground process group of the terminal. See n_tty.c. Note that the tty driver is only involved in "terminal"-type (command line) interfaces, like the Linux console, serial terminals (/dev/ttyS*), and pseudo ttys. GUI systems (X11, Wayland implementations) handle input devices differently.
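The last step — the tty driver sending SIGINT to the foreground process group — can be observed from a shell: install a handler for SIGINT and deliver the signal by hand, which is the same signal n_tty.c sends on Ctrl-C (here we signal only our own PID rather than a whole process group):

```shell
# Observe SIGINT delivery: install a handler, then send the signal to
# ourselves, just as the tty driver does for the foreground process group.
trap 'echo "got SIGINT"' INT
kill -INT $$
# prints: got SIGINT

# The INTR character itself is a tty setting, not hardcoded:
#   stty -a | grep -o 'intr = [^;]*'    # usually "intr = ^C"
```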
How does a keyboard interrupt end up as a process signal
1,425,951,077,000
I am trying to install an NVidia CUDA driver on an Amazon EC2 GPU instance (Amazon Linux AMI (HVM) 2013.09.2 - ami-e9a18d80), following the instructions laid out in a blog. It worked for the last two weeks, but today it fails. The instructions state: sudo yum -y groupinstall "Development Tools" sudo yum -y install git libcurl-devel python-devel screen rsync yasm numpy openssl-devel wget http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_64.run sudo sh cuda_5.5.22_linux_64.run The error is: Installing the NVIDIA display driver... The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly. If you know that the kernel source packages are installed and set up correctly, you may pass the location of the kernel source with the '--kernel-source-path' flag. There is a comment in the instructions on how to possibly fix it, but I do not understand the commands. I can't seem to navigate to the paths specified. If someone could explain it to me like I am 5, I think it would be helpful. For people having trouble with installing CUDA (fails with some complaint about the kernel source), here's the fix I found… The kernel source in /usr/src/kernels wasn't the same version as the kernel I was running (which you can find with uname -r). I went into /boot/grub/menu.lst and made sure that the only enabled kernel version was the one I had the source for.
You simply need to install your kernel source tree (and make sure its build is identical to your running kernel). So: sudo yum -y install kernel-devel kernel-headers Also, uname -r will tell you the specific kernel build; it is important to make sure it matches the devel packages!
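Putting the version check together — a sketch assuming the yum-based Amazon Linux AMI from the question; pinning the package version to uname -r guards against exactly the mismatch the questioner hit:

```shell
# Build package names that match the *running* kernel, so the devel tree
# in /usr/src/kernels agrees with what uname -r reports.
kver=$(uname -r)
echo "need: kernel-devel-$kver kernel-headers-$kver"

# Then (as root):
#   yum -y install "kernel-devel-$kver" "kernel-headers-$kver"
#   ls /usr/src/kernels/"$kver"     # should now exist
```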
Driver install, kernel source not found