I'm having an incredibly tough time making sense of this excerpt from the Linux Device Drivers book (sorry for the text-heavy post):

"The kernel (on the x86 architecture, in the default configuration) splits the 4-GB virtual address space between user-space and the kernel; the same set of mappings is used in both contexts. A typical split dedicates 3 GB to user space, and 1 GB for kernel space."

Ok, got it.

"The kernel's code and data structures must fit into that space, but the biggest consumer of kernel address space is virtual mappings for physical memory."

What does this mean? Aren't the kernel's code and data structures also in "virtual memory that's mapped to physical address space"? Otherwise, where are these code and data structures even stored? Or is this saying that the kernel needs virtual address space to map random non-kernel-related data that it's operating on via drivers, IPC or whatever?

"The kernel cannot directly manipulate memory that is not mapped into the kernel's address space. The kernel, in other words, needs its own virtual address for any memory it must touch directly."

Is this even true? If the kernel is running in the context of a process (handling a syscall), the process' page tables will still be loaded, so why can't the kernel read usermode process memory directly?

"Thus, for many years, the maximum amount of physical memory that could be handled by the kernel was the amount that could be mapped into the kernel's portion of the virtual address space, minus the space needed for the kernel code itself."

Ok, if my understanding in quote #2 is correct, this makes sense.

"As a result, x86-based Linux systems could work with a maximum of a little under 1 GB of physical memory."

???? This seems like a complete non sequitur. Why can't it work with 4 GB of memory and just map different stuff into the 1 GB space available for the kernel as needed? How does the kernel space only being ~1 GB mean the system can't run with 4 GB?
It doesn't have to all be mapped at once.
"Why can't it work with 4GB of memory and just map different stuff into the 1GB space available for the kernel as needed?"

It can; that's what the HIGHMEM config options do for memory that can't be mapped directly. But when you need to access an arbitrary location in memory, it's much easier to do that if you can point to it directly, without setting up a mapping every time. For that, you need an area of virtual memory that's always mapped to all of the physical memory, and that can't be done if the virtual address space is smaller than the physical one.

Direct access is also faster; vm/highmem.txt in the kernel docs says:

"The cost of creating temporary mappings can be quite high. The arch has to manipulate the kernel's page tables, the data TLB and/or the MMU's registers."

Sure, you can access the running process's memory through the user-space mapping, and perhaps you can avoid the need to access the memory of other processes. But if there are any large in-kernel data structures (like the page cache), it would be nice to be able to use all the memory for them.

The whole thing is a sort of bank switching, which was used in 16-bit machines, and in 386/486 systems in the DOS era (HIMEM.SYS). I don't think anybody particularly liked accessing memory like that even then, since it makes things rather difficult if you need to have multiple areas of physical memory "open" at the same time. Evolving to 32-bit and then to 64-bit systems has removed that problem.
Linux Kernel memory management quote
First, the issue I'm having is being unable to run VirtualBox on Kali 2.0. I set up a USB live system with persistence running Kali 2.0, which at the time had the 4.6.0-kali1-amd64 kernel. I have since updated/upgraded/dist-upgraded etc. with all of the recommended sources. As part of this, the new headers/kernels that have been installed are 4.9.0-kali4-amd64. However, even after boot, the kernel is 4.6.0, as confirmed by uname -r and the error thrown by VirtualBox. I know GRUB normally needs to be configured, though there is no GRUB bootloader in the USB live boot. The error thrown by VirtualBox says that no suitable driver was found for the 4.6.0 kernel, and also that the system is not set up to dynamically create drivers (though I believe this is because it is building the driver for 4.9.0, which is not the running kernel).
Due to a bug in either the way my live system was installed or the way live-tools handles the mounted partition, live-update-initramfs does not work in this particular case, as it looks to /lib/live/mount/medium/ as the root of the USB live device, though this was not the mountpoint (and there are 3 partitions needed from the USB device). Instead of messing with mounting/unmounting etc., I was able to simply create an initrd.img file (it was missing) using update-initramfs, and move it to the live folder manually from my non-live Linux distribution:

/usr/sbin/update-initramfs.orig.initramfs-tools -c -k 4.9.0-kali4-amd64

This creates the image. The vmlinuz-4.9.0-kali4-amd64 was already available. From within my non-live distribution, with my USB drive inserted: I first moved the initrd.img and vmlinuz from the /live folder on my USB drive to my desktop (for backup). I then copied the initrd.img-4.9.0-kali4-amd64 and vmlinuz from my USB drive's persistence rw root folder to the /live folder. I renamed these to initrd.img and vmlinuz and rebooted. Voilà. Big thank you to Jeff S. for your contribution.
How to change the boot kernel of a usb live w/ persistent running Kali
I am monitoring file operation events (VFS). I have a problem with the Btrfs filesystem: Btrfs uses subvolumes, and all top-level directories of subvolumes on a Btrfs volume have the same inode number (256/512).

Short story: when I receive a file operation event, I receive the path and then resolve it to an inode. By resolving I mean: given a path, I get its dentry (user_path() call), and from the dentry I pull:

dEntry->d_inode->i_ino

The problem is that I receive the same inode for different directories on the same device. I guess Btrfs has some sort of abstraction layer that creates a "virtual" inode number (the identical ones are virtual) - two distinct directories cannot otherwise share an inode number under the same device ID.

Proof of the device ID issue: from the kernel I receive device ID 29. Device ID resolving: for a given path (/home), get the dentry with user_path, then:

dEntry->d_inode->i_sb->s_dev

Or I run the command grep btrfs /proc/self/mountinfo | less, whose output also shows device 0:29:

34 18 0:29 /home /home rw,noatime,nodiratime shared:19 - btrfs /dev/md127 rw,nospace_cache,subvolid=257,subvol=/home

From user space I receive device ID 33:

root@nas-B9-43-AA:/# stat /home
  File: `/home'
  Size: 90    Blocks: 0    IO Block: 4096    directory
Device: 21h/33d    Inode: 256    Links: 1
root@nas-B9-43-AA:/# mountpoint -d /home
0:33

So I get 29 and 33 as device IDs. Let's call device ID 29 the "actual ID", and 33 the "virtual ID". Is there a way to obtain the same ID as user mode sees from kernel code? I am looking for a replacement for dEntry->d_inode->i_sb->s_dev that yields the same ID as we receive from user mode. I am on Debian 7.
Instead of going dentry -> inode -> superblock -> device ID, I get the device ID using getattr() on the dentry. My solution is taken from the SUSE patch in the subject (after a lot of Google digging): https://patchwork.kernel.org/patch/2825842/
BTRFS how to get real device Id
After switching from Debian stable to Debian testing I have a problem mounting my FreeBSD root partition on my system: how do I mount a UFS file system under Debian testing?

I found out which filesystems my Linux kernel supports with the command cat /proc/filesystems:

nodev	sysfs
nodev	rootfs
nodev	ramfs
nodev	bdev
nodev	proc
nodev	cpuset
nodev	cgroup
nodev	cgroup2
nodev	tmpfs
nodev	devtmpfs
nodev	debugfs
nodev	tracefs
nodev	securityfs
nodev	sockfs
nodev	bpf
nodev	pipefs
nodev	hugetlbfs
nodev	devpts
nodev	pstore
nodev	mqueue
	ext3
	ext2
	ext4
nodev	autofs
	btrfs

By default the Linux kernel can't read/write a FreeBSD UFS partition. How do I enable Unix file system (UFS) support in the Linux kernel?

Update: the output of

modprobe ufs
mount -t ufs -o ufstype=ufs2 /dev/sda4 /mnt/ufs_mount

is:

mount: /dev/sda4 is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/sda4,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

The output of dmesg | tail:

[ 1136.965142] ufs: ufs_fill_super(): bad magic number
[ 1255.758946] ufs: ufs_fill_super(): bad magic number
[ 2098.945757] ufs: ufs was compiled with read-only support, can't be mounted as read-write
[ 2098.946045] ufs: You didn't specify the type of your ufs filesystem

mount -t ufs -o ufstype=sun|sunx86|44bsd|ufs2|5xbsd|old|hp|nextstep|nextstep-cd|openstep ...

>>>WARNING<<< Wrong ufstype may corrupt your filesystem, default is ufstype=old

[ 2098.967212] ufs: ufs_fill_super(): bad magic number
[ 2927.982112] perf: interrupt took too long (2504 > 2500), lowering kernel.perf_event_max_sample_rate to 79750

My sources.list:

deb http://httpredir.debian.org/debian/ stretch main
deb-src http://httpredir.debian.org/debian/ stretch main
deb http://security.debian.org/debian-security stretch/updates main
deb-src http://security.debian.org/debian-security stretch/updates main

And uname -a:

Linux debian 4.6.0-1-amd64 #1 SMP Debian 4.6.4-1 (2016-07-18) x86_64 GNU/Linux
Using a standard Debian kernel? Try modprobe ufs as root to load UFS filesystem support.
How to enable Unix file system support on Linux kernel? [duplicate]
I'd like to modify the security module (specifically security/IMA) of the Linux kernel. (This module has to be compiled into the kernel.) I have to use functions from a shared library (an .so file) in this module, but I don't know how to compile it. Is there a way to put the shared library file into the Linux kernel? And if there is no way to do it, can you tell me the reason?
You practically cannot do that (linking a shared library into the kernel or into some kernel module). The kernel is conceptually a freestanding program (so it does not know about low-level standard C functions like malloc or fprintf that your shared library is very likely to use). Kernel modules (*.ko files) are specially built and are loaded by some special kernel code. Also, user-land code (including libraries) is based upon system calls (listed in syscalls(2)), which are not available in kernel code (since the kernel is providing the system calls to processes running in user mode through specific machinery). You might consider having a user-land helper program communicating with the kernel (i.e. your kernel module), e.g. using netlink(7). Perhaps look also into systemd. You probably should read more about operating systems and read Advanced Linux Programming to understand the user-land aspects. BTW, as a rule of thumb, you should limit the amount of kernel code added to the system (and prefer working in user-land).
Linking shared library in linux kernel
Question intended for system administrators. Consider a system running an old but working kernel where all the required functionality is available (Ubuntu 12.04 LTS specifically, with kernel 3.2). Then a new version of the kernel is released (Ubuntu 16.04 LTS with kernel 4.4). The above is just an example case. I have been advised that I should not immediately update to the most recent release on a production system, even if the release has support. Why is that? How long should I wait?
This question has a big chance of getting closed because the answers you will get are going to be mostly opinion-based. But here are my 2 cents. Production systems rely on stable releases of operating systems. If you install the latest and greatest kernel/patch/update on your system, you don't know what deficiencies lurk in the short time ahead. I am not underestimating kernel developers' abilities, but in the end they are also human. There are things they forget to check in their code, and one of those "forgotten" things may come back to bite you. Then they say, "Oooohhh, we forgot it" and issue a patch. The repercussions of this event are twofold. One, when you experience this unchecked condition in real life, your system may experience an outage. Not good if you are talking about production server(s). Management will not take it lightly. Two, you will probably need to take the system down to fix/upgrade it. There goes another outage; even if it is a weekend, night or whatever, an outage is an outage, and production server owners do not like it. Think about a corporation like Walmart. Their stores never close fully. So, a minute of downtime of their production servers means incredible gobs of money. And the fresher the kernel you install, the more likely you are to find yourself patching more and more. Hence people say: wait at least 6-12 months of real-world testing of any new OS release before you put it on your production servers.
Why shouldn't I update to the most recent kernel immediately after release? [closed]
I'm reading about Unix in the book "The Design of the UNIX Operating System". But in the text, in the middle of the discussion of process states (e.g. wakeup, interrupt), there is the term 'raise processor execution level'. What does "raise processor execution level" mean?
"Raise processor execution level" means to temporarily block all interrupts so that the kernel can get some critical (by definition, non-interruptible) task completed. Reference: Operating Systems by I. A. Dhotre I hope you enjoy reading The Design of the UNIX Operating System. I haven't read that one since back when books were printed on thin slices of processed dead trees. (Sadly, it appears that this remains the only form in which this otherwise excellent book is offered. How quaint!) Do keep in mind that the theoretical design, while valuable to understand, isn't always exactly what gets implemented.
What does 'processor execution level' mean?
I would like to enable my Wi-Fi card automatically at startup every time. However, the command below requires sudo. I see "Hardware Disabled" on the wireless connection, which blocks internet access through the wireless connection without the sudo command. There is no permanent solution to this problem, since those drivers are buggy, so that feature was deprecated according to this thread. The temporary solution from here, which I run on a Lenovo G50-30, is:

sudo modprobe -r ideapad-laptop

which solves the problem until the next shutdown. The wireless card is disabled again at the next startup. The command loads some module into the Linux kernel. I do not understand the origin of the bug. The command rfkill list gives:

2: phy0: Wireless LAN
	Soft blocked: no
	Hard blocked: no

I cannot use cronjob's @reboot command because it does not run from a cold boot, as described here.

Unsuccessful attempts: modprobe and /etc/init.d/rc.local. One attempt from here: adding @reboot modprobe -r ideapad-laptop to your crontab would work, but apparently only when the machine is rebooted, not when it comes up cold. So do

sudo echo '@reboot modprobe -r ideapad-laptop' >> /etc/crontab

That will add the line to your cron jobs; alternatively, call it at login by adding it as a line to /etc/init.d/rc.local. How can you run the modprobe command at login?
Your question is stated in a rather chaotic way, but this is what I understand: you're talking about enabling your wireless network adapter using a command that requires root privileges (hence run using sudo). The command you're executing actually removes a module (see man modprobe under the -r option). It was probably suggested that you remove the ideapad-laptop module because it conflicts with another module; that module is disabled because the defunct ideapad-laptop module has priority over it. You were probably unable to find the answer you were looking for because you were searching for the wrong terms. You'll want to disable the ideapad-laptop module in a process called blacklisting. Simply create a new file under /etc/modprobe.d/ that ends with .conf, e.g. /etc/modprobe.d/disable_deprecated.conf, and add blacklist ideapad-laptop to it. This prevents the module from ever loading. You won't have to run the modprobe command anymore.
How to run modprobe for Wi-Fi at login?
In the Linux kernel configuration, there is a section "Library routines" with a snippet shown below:

Library routines --->
  <M> CRC-CCITT functions
  <M> CRC ITU-T V.41 functions
  <M> CRC7 functions
  <M> CRC32c (Castagnoli, et al) Cyclic Redundancy-Check
  <M> CRC8 function
  ...

I have most of the options compiled as modules ("M"), but these modules never get loaded. I'm curious to know what these modules are used for and in which situations I would need them. The kernel config help is not very illuminating:

"This option is provided for the case where no in-kernel-tree modules require <XYZ> functions, but a module built outside the kernel tree does. Such modules that use library <XYZ> functions require M here."
CCITT stands for "Comité Consultatif International Téléphonique et Télégraphique" and ITU for "International Telecommunication Union". These modules have to do with error correction for telephone-modem connections. Since even old-style high-end modems (with which you would normally communicate via a real, hardware serial port) do things like CRC themselves, my guess is that those modules are for low-end hardware where a large part of the handling was done by the CPU, so-called softmodems. So unless you have, and use, that kind of simple modem hardware, your kernel is unlikely to load those modules.
Library routines in linux kernel
I want to download EVERY version of the Linux kernel as source code, Debian packages, and RPM packages. Is there a single site where I can download them all at once? If that is not possible, I know I can get the source code for every kernel here (https://www.kernel.org/pub/linux/kernel/), but I need the .deb and .rpm files as well.
I'd be very surprised if you found every version as a .deb and .rpm on a single site; you'll be lucky if you find every version even of the .rpms. You can reach back to Fedora Core 1 (FC1) through FC6 on the Fedora Project Archive. Fedora 7 through 18 (plus the latest) are available on the same site in a different directory here. The .deb files are available through the Debian Distributions Archive; you can search through the archive here.
Download the Complete Linux Kernel Collection
I am currently in the process of migrating my website (big and complex, with a MySQL backend) from a Debian server to an Ubuntu server. The kernels of Debian and Ubuntu are different. My concern: is it possible to migrate the database and website (with the etc and home folders) from one server to another if the kernel version is different? My understanding is that it may cause errors, because the kernel architectures are different, which may cause some dependency errors. Is my understanding wrong?
Do not copy the binary database files. Instead, do a dump and restore the dump. That's the most portable way.

Dump:

mysqldump -u [username] -p [password] [databasename] > backupfile.sql

Restore:

mysql -u [username] -p [password] [database_to_restore] < backupfile.sql

More detailed information can be found here: Backup and Restore MySQL Database Using mysqldump. The kernel versions don't matter. What matters are the versions of the programs installed on both systems, e.g. MySQL, PHP, etc. Using different versions might cause incompatibilities.
Migrating Data from Debian to ubuntu server
I'm compiling my own version of the Linux kernel and I was wondering if there is any way to parse a local XML or JSON file.
jsawk will probably do what you need: https://github.com/micha/jsawk Edit: However I found jshon to work much better. Here is an example: curl 'http://twitter.com/users/username.json' | jshon -e "location" Outputs: "new hampshire"
Parse JSON or XML on bootup
I want to practice with makedumpfile. However, it needs /proc/vmcore which is the memory image of the currently running kernel. Also, reading the man page of makedumpfile, we also need 2 kernels: panicked kernel (crashed kernel) and capture kernel. Does this capture kernel run on the same machine or remotely?
The capture kernel runs on the same host. It runs in memory that the panicked kernel reserved for the capture kernel to use. The capture kernel is started via the kexec mechanism by the panicking kernel. /proc/vmcore should be provided by the kernel if it's set up to export a memory image. If your kernel does not have a /proc/vmcore, then you're missing the right kernel infrastructure. The Linux kernel source implies that /proc/vmcore is only populated inside a capture kernel (a kernel command line providing the address of the panicked kernel's vmcore ELF header is required), so /proc/vmcore will exist in a regular kernel, but won't contain anything at all. Inside the capture kernel, /proc/vmcore presents the crashed kernel as an ELF core image. Here's some Red Hat documentation with more details: https://access.redhat.com/knowledge/solutions/6038
How can I generate /proc/vmcore?
I have recently compiled Linux kernel 3.2, but when configuring it with make menuconfig I disabled sound support. Now I want to enable it without recompiling the whole kernel. I don't want to use a stock kernel or a prebuilt kernel image; I always want to use my own compiled kernel.
Run make menuconfig and enable sound as a module. Then make modules followed by make modules_install, which compile and install the modules, should do the trick. Though you will not need to compile the whole kernel, you will have to compile the modules. At least on Gentoo; you haven't mentioned what distro you are using, so maybe someone else can provide a better answer. Tip: the configuration of the running kernel can be found in /proc/config.gz (usually this feature is enabled).
how to enable sound support in linux kernel without recompile?
I can suddenly only boot into Ubuntu if I choose either 6.5.0-14-generic in recovery mode or 6.2.0-39-generic normally. But this setting does not persist, so I have to enter GRUB every time I start the PC. I believe that the automatic update GUI ("Actualizations") in the task bar in Kubuntu installed the new kernel a day before, together with a bunch of other packages. Which is why everything still worked fine in the evening, and the next morning (yesterday) I could not boot and wondered why and how to get this solved. I don't know what kind of problem this hints at - whether it only points to problems with the kernel 6.5.0-14-generic or (also) to other problems. How do I solve this? Thanks!
The easy way is to remove the buggy kernel:

sudo apt remove linux-image-6.5.0-14-generic

Or you can set the working kernel in your GRUB configuration file. To list the kernels in order:

grep -P "menuentry" /boot/grub/grub.cfg | cut -d "'" -f2

Then set GRUB_DEFAULT=n in /etc/default/grub (0 is the current and first kernel in the list above) and run sudo update-grub.

The command-line way: in your /etc/default/grub you should have:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=false

Then set the desired kernel version:

sudo grub-set-default <menuentry>

e.g.:

sudo grub-set-default "Ubuntu with Linux 6.5.0-14-generic"

Verify:

grub-editenv list

To unset the selected kernel, use grub-set-default unset.

The GUI way: starting from Ubuntu 20.04, grub-customizer is available from the universe repository. Install it, then set the desired kernel at the top of the list:

sudo add-apt-repository universe
sudo apt update
sudo apt install grub-customizer

To make it work when a new kernel is installed, make "previously booted entry" the default entry in the "General settings".
Can only boot into Ubuntu if I choose 6.2.0-39-generic or 6.5.0-14-generic (recovery mode)
Since the TSS does not store the values of the registers in x86-64, how are those saved when the context switch occurs?
The general-purpose registers are mostly saved on the stack; see PUSH_REGS and struct pt_regs. To find actual uses, look for PUSH_AND_CLEAR_REGS and POP_REGS. The rest of the CPU state is stored in thread_struct. Linux avoids the TSS as much as possible (early 32-bit x86 versions used it, but that changed a long time ago).
How is the execution state saved at context switch in x86-64 linux kernel?
Assuming the Linux kernel doesn't swap out my process's memory pages, can I assume their physical location in RAM will not be changed, or is it possible the kernel will move them around? The reason I'm asking is that I'm considering writing my own memtester from scratch, and I was wondering if it's possible a newly allocated page would be in the same physical location in the RAM that the process has already tested before.
I’m aware of two cases where an allocated page’s physical address can change, and thus where a subsequent allocation can re-use a previously-used physical page: swap-out (as you mention) compaction The former can be prevented in all cases by locking memory with mlock() or mlockall(). For the latter, the vm.compact_unevictable_allowed sysctl also needs to be set to 0 (it’s enabled by default if compaction is enabled). Transparent huge page defragmentation uses both swap-out and compaction, but it adds a number of sysctl entries of its own; I don’t know whether they would be needed on top of disabling compaction globally and locking the memory being tested.
Does Linux keep the memory on the same physical location?
Is it generally safe to manually compile and install a new kernel from kernel.org using: make -j 8 make install make modules_install or might the distribution, e.g. Debian, break, because it assumes that it manages kernel upgrades through apt? Intuitively, everything should continue to work, since the kernel preserves a stable syscall API, and the drivers are compatible with earlier versions.
Yes, it’s safe; Debian doesn’t require a packaged kernel, and as you say the kernel is backwards-compatible. You only need to make sure your kernel configuration is functional. However, you can build a kernel package from the upstream kernel source, and install the resulting packages instead. Run this in the kernel source tree, instead of make install modules_install: make deb-pkg See also the Debian kernel handbook which explains how to build kernel packages in a variety of scenarios.
Is it safe to manually install a new kernel for Debian and other distributions?
I want to use the return values from statvfs to get the total and free filesystem size. unsigned long f_bsize; /* Filesystem block size */ unsigned long f_frsize; /* Fragment size */ fsblkcnt_t f_blocks; /* Size of fs in f_frsize units */ fsblkcnt_t f_bfree; /* Number of free blocks */ ... Source: https://man7.org/linux/man-pages/man3/statvfs.3.html So to get the total filesystem size, it seems like I want f_blocks * f_frsize, since the comment for f_blocks says that it's the size "in f_frsize units". However, f_bfree is the number of free blocks. So for free filesystem size, I have to use f_bsize? Or f_frsize again?
Fragments of blocks seem to be a filesystem feature in some legacy filesystems (googling suggests UFS and JFS have a use for it). The fragment size seems to indicate the minimum size a fragment is allowed to be, and should be between 1 and f_bsize. On filesystems that do not support fragments, this value should equal f_bsize (or be zero, see below), since further subdivision of the block wouldn't be supported.

If you check the coreutils source code (on Red Hat-based systems at least) you can see how GNU handles this in df. Given GNU is supposed to handle all kinds of POSIX semantics with various flavours of UNIX, not just Linux, it should offer a fairly robust suggestion on how to resolve this. In lib/fsusage.c:

120   if (statvfs (file, &vfsd) < 0)
121     return -1;
122
123   /* f_frsize isn't guaranteed to be supported.  */
124   fsp->fsu_blocksize = (vfsd.f_frsize
125                         ? PROPAGATE_ALL_ONES (vfsd.f_frsize)
126                         : PROPAGATE_ALL_ONES (vfsd.f_bsize));
127
128   fsp->fsu_blocks = PROPAGATE_ALL_ONES (vfsd.f_blocks);
129   fsp->fsu_bfree = PROPAGATE_ALL_ONES (vfsd.f_bfree);
130   fsp->fsu_bavail = PROPAGATE_TOP_BIT (vfsd.f_bavail);
131   fsp->fsu_bavail_top_bit_set = EXTRACT_TOP_BIT (vfsd.f_bavail) != 0;
132   fsp->fsu_files = PROPAGATE_ALL_ONES (vfsd.f_files);
133   fsp->fsu_ffree = PROPAGATE_ALL_ONES (vfsd.f_ffree);
134   return 0;

In their code, they are copying the POSIX statvfs struct into a struct of their own making; the important part, however, is in lines 124-126, which show what they are doing: using f_frsize if it is not zero, otherwise using f_bsize. All the block counts (f_blocks, f_bfree, f_bavail) are then interpreted in units of that one size.

My suggestion is to just copy their method, as df has seen extremely wide distribution in the wild and through time. One would hope that someone would have indicated it reports the wrong values by now if it were incorrect.

You should also be aware that more modern filesystems have a rather nebulous concept of filesystem usage. btrfs springs to mind, which due to reflink copies, quotas and snapshots doesn't give an exact absolute value anymore. You probably want to treat this more as an exception than the rule at this stage, but it is something you might want to be aware of.
Using statvfs to get total and free filesystem size
I'm configuring the Linux kernel and at this point I am not sure what it means:

Symmetric multi-processing support

"This enables support for systems with more than one CPU. If you have a system with only one CPU, say N. If you have a system with more than one CPU, say Y. If you say N here, the kernel will run on uni- and multiprocessor machines, but will use only one CPU of a multiprocessor machine. If you say Y here, the kernel will run on many, but not all, uniprocessor machines. On a uniprocessor machine, the kernel will run faster if you say N here."

What does "more than one CPU" mean here? I have a multicore CPU with eight processor cores. Does this apply to my CPU, or only to two processors on the motherboard, regardless of processor cores? Are processor cores meant, or processor chips? I would have said that this option only counts processor units (chips), regardless of processor cores.
More than one CPU means that there is more than one processing unit - either multiple physical chips, or a single chip with more than one CPU core. I don't know what type of CPU you have, but if it's a regular, rather recent computer, you can probably say "Yes". You can take a look at /proc/cpuinfo to see more data.
Linux-Kernel Config - Symmetric multi-processing support - Does this apply to my CPU?
Where does qemu pull modules from when using a custom-built kernel (using -kernel)? Will the kernel try to find them in the guest FS or is the whole linux/qemu setup smart enough to realize that modules should be pulled from the custom-built kernel set up on the host?
-kernel only says where to load a kernel from, nothing else. It's like telling the bootloader in real hardware "load this kernel file". Once the guest kernel has booted, it is what makes the decision about where to look for modules (or even whether to look for modules at all). So the modules have to be in the guest filesystem. Personally I usually try to use a non-modules kernel if I'm doing development and booting a kernel with -kernel.
qemu - where are modules pulled from if using -kernel?
In a call trace we see: WARNING: CPU: 3 PID: 123456 at xxxxxxxx Modules linked in: cmac md4 cifs ccm ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_conntrack ipt_REJECT nf_reject_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip6table_filter iptable_filter bridge stp llc cdc_acm i2c_mux_ltc4306 i2c_mux cdc_ether usbnet mii amd64_edac_mod edac_mce_amd edac_core xhci_pci gq(O) kvm_amd pcspkr sha3_generic xhci_hcd i2c_piix4 evdev acpi_cpufreq sch_fq_codel i2c_via_ipmi(O) autofs4 Call trace: xxxxxxxx What do the Modules linked in mean? Does it mean the modules related to ( or called from?) this call trace?
“Modules linked in” lists all the modules currently loaded, along with their taint flags if any. If modules have been loaded and then unloaded, the last unloaded module is listed too. If any modules are being loaded or unloaded, they are marked with + or - respectively. The list isn’t limited to modules involved in the trace. See the kernel bug-hunting guide for details.
What does 'modules linked in' mean in call trace?
1,612,418,083,000
I started Firefox from my bash shell by entering the command "firefox", and it opened the Firefox browser in the user interface. When I checked the Firefox process with the "ps" command, I found that it has a controlling terminal attached (pts/12, as evident from the screenshot above) and that bash is its parent process. Now, how am I able to provide keyboard input directly to the Firefox browser window? (I typed "Hello world" in the browser.) Since a tty is attached to the process, shouldn't the input to Firefox go via the terminal window? I know there is something called X11 involved here, but I can't get the whole picture. This question builds on the following, which doesn't provide enough information on the above queries: How do keyboard input and text output work?
Input to X11 applications doesn’t go through a tty device, it’s provided as X11 events. The X11 server receives the input event, determines which application currently has the focus, and translates the input event into the corresponding X11 event. The X11 server provides an abstraction for the hardware in the system. X11 applications run as clients of the server, and receive events from it. Events can even be received remotely, i.e. you can run an X11 server on your local system and use it to interact with X11 applications running on another system. You can see this happening by running xev, as mentioned in How do keyboard input and text output work?
How GUI applications receive keyboard input?
1,612,418,083,000
I have a server running CentOS 7.3.1611, kernel 3.10.0-514. Now when installing kernel-devel, the version in the repo is 3.10.0-1160, which differs from the kernel version. I know that I can get the exact kernel-devel rpm, but the dependencies are too complicated to resolve by hand. Is there any feasible way to install a specific kernel-devel version along with all its dependencies? (I do not want to upgrade the current kernel.)
All the packages that were ever released by CentOS can be found in an archive on vault.centos.org. You can just point yum to the package(s) you want. For example: yum install https://vault.centos.org/7.3.1611/updates/x86_64/Packages/kernel-devel-3.10.0-514.26.2.el7.x86_64.rpm If you need yum to automatically pull in archived dependencies of some package, you can just enable the Vault repo for that particular transaction: yum --enablerepo='C7.3.1611-updates' install kernel-devel-3.10.0-514.26.2 You can find the names of all vault repositories in /etc/yum.repos.d/CentOS-Vault.repo.
Is there any feasible way to install a kernel-devel version that is no longer in the base repo?
1,612,418,083,000
From here: wait_table A hash table of wait queues of processes waiting on a page to be freed. This is of importance to wait_on_page() and unlock_page(). While processes could all wait on one queue, this would cause all waiting processes to race for pages still locked when woken up. A large group of processes contending for a shared resource like this is sometimes called a thundering herd. Wait tables are discussed further in Section 2.2.3; What is the "thundering herd" problem? Also, the page mentions that a spinlock is used, so how can a race condition happen even when using spinlocks? Does the number of contending processes really matter? I thought that no matter how many processes are contending, only one process will grab the spinlock and no race condition will occur.
There is no race condition, the spinlock ensures this. The thundering herd problem is that when something happens, typically a lock being released or an I/O event completing, lots of processes which have been waiting will resume. One will be chosen and all the rest will typically resume waiting on the lock or I/O event. Think about a press conference. When the person giving the briefing finishes one answer, all the reporters start trying to attract the attention of the briefer. The briefer chooses one, all the reporters settle down, a question is asked and answered, and then all the reporters try to attract attention again. This works well with ten reporters, but with ten thousand there is a lot of wasted effort in all the reporters trying to attract attention; it could be made more efficient by having a queue of reporters, each asking their question in turn. So the thundering herd is about efficiency rather than correctness.
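The effect can be reproduced in miniature with Python threads (a sketch, not the kernel's actual wait-queue code): N waiters block on one condition variable, notify_all() wakes the whole herd, the lock guarantees no race, but only one waiter wins the resource and the rest wake up for nothing.

```python
import threading
import time

N = 5
cond = threading.Condition()
waiting = 0           # threads that have reached cond.wait()
resource_free = False
wakeups = 0           # total returns from cond.wait()
winners = 0           # threads that actually obtained the resource

def waiter():
    global waiting, resource_free, wakeups, winners
    with cond:
        waiting += 1
        while not resource_free:
            cond.wait()
            wakeups += 1          # woke up; may still lose the race
        resource_free = False     # exactly one thread flips this
        winners += 1

threads = [threading.Thread(target=waiter) for _ in range(N)]
for t in threads:
    t.start()

# Wait until all N threads are parked inside cond.wait().
while True:
    with cond:
        if waiting == N:
            break
    time.sleep(0.01)

# Free the resource once and wake the whole herd.
with cond:
    resource_free = True
    cond.notify_all()
time.sleep(0.5)                   # let every woken thread run

with cond:
    first_round_wakeups, first_round_winners = wakeups, winners

# Keep releasing the resource so the remaining threads can finish.
while True:
    with cond:
        if winners == N:
            break
        resource_free = True
        cond.notify_all()
    time.sleep(0.01)
for t in threads:
    t.join()

print(f"first notify_all: {first_round_wakeups} wakeups, "
      f"{first_round_winners} winner(s)")
```

After the first notify_all, all five threads wake and recheck the predicate, but only one wins: five wakeups, one winner, no race. That wasted work, multiplied by thousands of waiters, is the herd.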
What is the "thundering herd" problem and how can a race condition happen even after using spinlocks?
1,612,418,083,000
I need to update the kernel of an old headless server (small machine logging some instruments). Alas I cannot upgrade beyond Debian 8 Jessie. Some Virtualbox modules I need are only available for 3.16.0-11-amd64 and not for 3.16.0-4-amd64: $ cat /lib/modules/3.16.0-4-amd64/modules.dep | grep vbox <NOTHING> $ cat /lib/modules/3.16.0-11-amd64/modules.dep | grep vbox updates/dkms/vboxnetflt.ko: updates/dkms/vboxdrv.ko updates/dkms/vboxnetadp.ko: updates/dkms/vboxdrv.ko updates/dkms/vboxpci.ko: updates/dkms/vboxdrv.ko updates/dkms/vboxdrv.ko: The system has been upgraded and rebooted. There are now 3 available kernel images: $ dpkg -l | grep linux-image ii linux-image-3.16.0-10-amd64 3.16.81-1 amd64 Linux 3.16 for 64-bit PCs ii linux-image-3.16.0-11-amd64 3.16.84-1 amd64 Linux 3.16 for 64-bit PCs ii linux-image-3.16.0-4-amd64 3.16.43-2+deb8u5 amd64 Linux 3.16 for 64-bit PCs ii linux-image-amd64 3.16+63+deb8u7 amd64 Linux for 64-bit PCs (meta-package) According to my understanding, at boot the newest one should be picked, but something strange happens: $ uname -a Linux bluelikon-mini-abgebaut 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux Is there a way to force using 3.16.0-11-amd64? Is there perhaps a configuration file in Debian that is forcing 3.16.0-4-amd64 instead? Online I found that it is quite easy to change grub settings to add the new kernel (all examples refer to grub, i.e. here), but in my system grub is not used. Any idea?
Look at the contents of /boot to work out which bootloader is in use, then configure that bootloader to boot the specific kernel you need.
Debian 8 is not using the latest kernel
1,612,418,083,000
Fresh install of Ubuntu Server 20.04. cat /proc/filesystems shows exfat in the output. I have not installed any other packages for exFAT, as it should work from the kernel. I mounted 2 internal HDDs in fstab as below: #INT-1TB-4K Internal HDD mount to /mnt/INT-1TB-4K UUID=0E7E-6579 /mnt/INT-1TB-4K exfat defaults, permissions 0 0 #INT-1TB-BAK Internal HDD mount to /mnt/INT-1TB-BAK UUID=3037-96B0 /mnt/INT-1TB-BAK exfat defaults, permissions 0 0 Running ls -all in /mnt gives exharris@plexserv:/mnt$ ls -all total 520 drwxr-xr-x 4 root root 4096 Jul 2 09:32 . drwxr-xr-x 20 root root 4096 Jul 2 05:15 .. drwxr-xr-x 9 root root 262144 Jul 3 03:49 INT-1TB-4K drwxr-xr-x 7 root root 262144 Jul 3 03:49 INT-1TB-BAK I get permission denied errors in the terminal when trying to create files in these folders (unless I use 'sudo', of course). This is because the 'others' write bit is set to -. When running sudo chmod -R 777 INT-1TB-4K from /mnt, I get no errors, but when doing ls -all again, nothing has changed. This is also causing me problems because I have set these up as Samba shares and cannot write to them from other machines either. I also tried sudo chmod -R o+w INT-1TB-4K, and the same thing happened. What is going on? I do not want to use exfat-utils and FUSE.
exfat behaves just like vfat and since it has no concept of permissions, chown and chmod both won't work. You have to specify mount options such as uid, fmask and dmask, e.g. defaults,noatime,nofail,uid=1000,fmask=0133,dmask=0022 (run id to find out what your ID is).
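Applied to the fstab lines from the question, that would look something like the following (uid/gid 1000 assumed; run id to confirm yours). Note that the stray space after "defaults," in the original lines already breaks fstab parsing, since fields are whitespace-separated:

```
# /etc/fstab -- exFAT has no Unix permission metadata; ownership and
# modes are fixed at mount time via options (uid/gid 1000 assumed).
UUID=0E7E-6579 /mnt/INT-1TB-4K  exfat defaults,noatime,nofail,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0
UUID=3037-96B0 /mnt/INT-1TB-BAK exfat defaults,noatime,nofail,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0
```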
Native exFAT support in 5.4 kernel - issues?
1,612,418,083,000
Is it possible to modify the source binary in Linux if any process spawned from that binary is still running? Will the OS allow a write operation on that binary?
Try it: cd $(mktemp -d) cp /bin/sleep . ./sleep 120 & echo test > sleep The shell’s redirection operator changes the file in-place, and this fails with a “text file busy” error. It is however possible to replace the file: cp /bin/ls sleep This is how, for example, packages can be updated while the program they contain is running. The old file remains accessible to the running process as long as it keeps running.
Is it possible to modify source binary in linux if any process spawned from that binary is still running?
1,612,418,083,000
I am trying to restrict procfs so that only certain groups of users can perform read and write actions. The kernel documentation says we can do that by setting hidepid and gid in /etc/fstab. This will keep a malicious user from reading and writing procfs through filesystem operations, but I have a doubt whether it is possible for a malicious user (restricted in /etc/fstab) to access the same content in procfs using syscalls instead of filesystem operations like read and write.
/proc is the interface between the kernel and userspace for all its contents, and most of those contents aren’t available in any other way (for content under pid directories, outside of that process). So hidepid=2 is effective in hiding information such as a process’ command line and environment from other users. Some information can be determined through side effects. For example the existence of a process with a given pid can be determined by attempting to kill it, with signal 0: kill() fails with ESRCH if a process doesn’t exist, EPERM if the calling process doesn’t have the right permissions. Similarly, open ports can be determined by attempting to connect to them.
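The pid-probing side effect mentioned above can be sketched in Python: kill with signal 0 sends nothing, it only performs the existence and permission checks, so the error (or lack of one) leaks whether the pid exists even when /proc hides it.

```python
import os

def pid_exists(pid):
    """Probe for a process without touching /proc."""
    try:
        os.kill(pid, 0)             # signal 0: checks only, sends nothing
    except ProcessLookupError:      # ESRCH: no such process
        return False
    except PermissionError:         # EPERM: exists, but not ours
        return True
    return True

print(pid_exists(os.getpid()))      # our own pid certainly exists
```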
What is the best way to restrict /proc fs from malicious users (linux)? [closed]
1,612,418,083,000
If we create a listening socket, it returns a descriptor (say, the root descriptor), and we bind this descriptor to an address. Whenever a new client connection is available, the root descriptor informs us; we accept that new connection and receive a unique descriptor (say, a client descriptor) for each client. From then onwards we can communicate with that client using that descriptor. Client information is stored in a separate inode, which is pointed to by the client descriptor; thanks to this, Linux is able to deliver each client's data to the respective descriptor. If the above is correct (kindly correct me if my understanding is wrong), then I have a doubt: what client information is stored in the inode? How is the client uniquely identified by Linux?
TCP and UDP connections are identified by the combination of local and remote IP address and port [1]. A TCP/IP packet, for example, contains source and destination IP address and port [2]. A server, or a client (say, Firefox) which has more than one connection open, distinguishes them at the OSI session layer [3] by address and port. Please open a shell and run as root, while using a web browser, netstat -tulpan to see current and active connections [4]. Example output: # netstat -tulpan Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1966/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1902/cupsd tcp 0 0 192.168.1.16:57374 172.217.23.165:443 ESTABLISHED 4730/firefox-bin tcp 0 0 192.168.1.16:55478 104.26.11.30:443 ESTABLISHED 4730/firefox-bin udp 0 0 127.0.0.1:53 0.0.0.0:* 1996/named The "ESTABLISHED" lines show connections made by firefox with differing local ports, so that firefox can recognise which packet is the answer to which request. The other lines, in the LISTEN state, are local programs running as server processes, including sshd (secure shell daemon), cupsd (printer daemon) and named (BIND name server). These will accept incoming connections. References to learn more: [1] https://en.wikipedia.org/wiki/Port_(computer_networking) [2] https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure as well as https://en.wikipedia.org/wiki/IPv4_header#Header [3] https://en.wikipedia.org/wiki/OSI_model [4] https://en.wikipedia.org/wiki/Netstat
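The uniqueness rule can be seen in a few lines of Python: two connections from the same client to the same server get distinct local (ephemeral) ports, so their (source IP, source port, destination IP, destination port) tuples differ and the kernel can demultiplex incoming data to the right descriptor.

```python
import socket

# A loopback server; port 0 lets the kernel pick any free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(2)
addr = server.getsockname()

# Two connections to the *same* server address from the same client:
c1 = socket.create_connection(addr)
c2 = socket.create_connection(addr)

# The kernel assigns each client socket its own ephemeral source port,
# which is what tells the two connections apart.
port1 = c1.getsockname()[1]
port2 = c2.getsockname()[1]
print(port1, port2)

for s in (c1, c2, server):
    s.close()
```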
How does the TCP/IP protocol differentiate between clients?
1,612,418,083,000
I am running Fedora 29 on a DELL XPS 15 9570, latest BIOS installed. Since upgrading to kernel 5.0.3 / 5.0.5, my laptop screen stays blank (with occasional white flicker) after the BIOS logo, and it makes a weird high-frequency, "coil-whiny" noise. I can force the system to boot by entering GRUB and picking an older 4.x kernel. I have nouveau disabled because it never worked with the 4.x kernel. Thanks ### BEGIN /etc/grub.d/10_linux ### menuentry 'Fedora (5.0.5-200.fc29.x86_64) 29 (Workstation Edition)' --class fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.20.8-200.fc29.x86_64-advanced-f4720609-44ff-4b36-a4c4-31e8af02f468' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 else search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 fi linuxefi /vmlinuz-5.0.5-200.fc29.x86_64 root=/dev/mapper/fedora-root ro resume=/dev/mapper/fedora-swap rd.lvm.lv=fedora/root rd.luks.uuid=luks-0fcd4a94-b2b9-4613-8e04-780631a9f752 rd.lvm.lv=fedora/swap rhgb quiet nouveau.modeset=0 acpi_osi=Linux acpi_osi='!Windows 2015' LANG=de_DE.UTF-8 initrdefi /initramfs-5.0.5-200.fc29.x86_64.img } menuentry 'Fedora (5.0.3-200.fc29.x86_64) 29 (Workstation Edition)' --class fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.20.8-200.fc29.x86_64-advanced-f4720609-44ff-4b36-a4c4-31e8af02f468' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 else search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 fi linuxefi /vmlinuz-5.0.3-200.fc29.x86_64 root=/dev/mapper/fedora-root ro resume=/dev/mapper/fedora-swap rd.lvm.lv=fedora/root 
rd.luks.uuid=luks-0fcd4a94-b2b9-4613-8e04-780631a9f752 rd.lvm.lv=fedora/swap rhgb quiet nouveau.modeset=0 acpi_osi=Linux acpi_osi='!Windows 2015' LANG=de_DE.UTF-8 initrdefi /initramfs-5.0.3-200.fc29.x86_64.img } menuentry 'Fedora (4.20.16-200.fc29.x86_64) 29 (Workstation Edition)' --class fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.20.8-200.fc29.x86_64-advanced-f4720609-44ff-4b36-a4c4-31e8af02f468' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 else search --no-floppy --fs-uuid --set=root c60cd3be-dfeb-4a5f-983e-b2510b0b8991 fi linuxefi /vmlinuz-4.20.16-200.fc29.x86_64 root=/dev/mapper/fedora-root ro resume=/dev/mapper/fedora-swap rd.lvm.lv=fedora/root rd.luks.uuid=luks-0fcd4a94-b2b9-4613-8e04-780631a9f752 rd.lvm.lv=fedora/swap rhgb quiet nouveau.modeset=0 acpi_osi=Linux acpi_osi='!Windows 2015' LANG=de_DE.UTF-8 initrdefi /initramfs-4.20.16-200.fc29.x86_64.img }
The problem has been discussed here: https://bugs.archlinux.org/task/61964 It was introduced by an optimization for eDP 1.4+ ("link config fast and narrow") in the 5.x kernel; the patch doesn't work for some panels, including that of the XPS 15, and had to be rolled back. The rollback was merged and should be released with the next kernel version (5.0.8): https://github.com/torvalds/linux/commit/21635d7311734d2d1b177f8a95e2f9386174b76d
Fedora 29: screen blank/flickering after upgrade to Kernel 5.x on DELL XPS 15 9570
1,612,418,083,000
I am trying to debug my kernel module. When I run it I get the following kernel warnings, but it seems that there is no informative message like other warnings I've seen. Is it possible to get any useful info out of this? Some more info: The module is called firewall; it diverts tcp packets to a proxy server in user space, and the proxy then sends the tcp data it receives to the intended destination. This happens when processing an http response by simply receiving all the data on one socket and calling sendall on another. The warning doesn't happen when all the response comes in one packet, but does when the http payload data is segmented into several tcp packets. The proxy is written in python. It seems strange to me that the warning says "python tainted". Can userspace applications cause kernel warnings? I tried only receiving a large file in the proxy but not sending it and did not get any errors, and the system didn't freeze at any point. The problem happens only on calling socket.sendall/socket.send; reducing the read buffer size and then sending smaller chunks causes the system to lock up faster.
Turning off both gso, tso with ethtool prevents the error messages, but the system still locks up after the same amount of time, making me wonder if the warnings are even tied to the lockup [16795.153478] ------------[ cut here ]------------ [16795.153489] WARNING: at /build/buildd/linux-3.2.0/net/core/dev.c:1970 skb_gso_segment+0x2e9/0x360() [16795.153492] Hardware name: VirtualBox [16795.153495] e1000: caps=(0x40014b89, 0x401b4b89) len=2948 data_len=0 ip_summed=0 [16795.153497] Modules linked in: firewall(O) vesafb vboxsf(O) snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device joydev psmouse snd soundcore serio_raw i2c_piix4 snd_page_alloc vboxguest(O) video bnep mac_hid rfcomm bluetooth parport_pc ppdev lp parport usbhid hid e1000 [last unloaded: firewall] [16795.153529] Pid: 7644, comm: python Tainted: G W O 3.2.0-37-generic-pae #58-Ubuntu [16795.153532] Call Trace: [16795.153540] [<c105a822>] warn_slowpath_common+0x72/0xa0 [16795.153544] [<c14ad2b9>] ? skb_gso_segment+0x2e9/0x360 [16795.153548] [<c14ad2b9>] ? skb_gso_segment+0x2e9/0x360 [16795.153551] [<c105a8f3>] warn_slowpath_fmt+0x33/0x40 [16795.153555] [<c14ad2b9>] skb_gso_segment+0x2e9/0x360 [16795.153561] [<c14b01ce>] dev_hard_start_xmit+0xae/0x4c0 [16795.153568] [<f9a6f2fd>] ? divertPacket+0x7d/0xe0 [firewall] [16795.153574] [<c14c8151>] sch_direct_xmit+0xb1/0x180 [16795.153578] [<f9a6f941>] ? hook_localout+0x71/0xe0 [firewall] [16795.153582] [<c14b06d6>] dev_queue_xmit+0xf6/0x370 [16795.153586] [<c14c6459>] ? eth_header+0x29/0xc0 [16795.153590] [<c14b73f0>] neigh_resolve_output+0x100/0x1c0 [16795.153594] [<c14c6430>] ? eth_rebuild_header+0x80/0x80 [16795.153598] [<c14dec62>] ip_finish_output+0x152/0x2e0 [16795.153602] [<c14df75f>] ip_output+0xaf/0xc0 [16795.153605] [<c14dd340>] ? 
ip_forward_options+0x1d0/0x1d0 [16795.153609] [<c14deec0>] ip_local_out+0x20/0x30 [16795.153612] [<c14defee>] ip_queue_xmit+0x11e/0x3c0 [16795.153617] [<c10841c5>] ? getnstimeofday+0x55/0x120 [16795.153622] [<c14f4de7>] tcp_transmit_skb+0x2d7/0x4a0 [16795.153626] [<c14f5786>] tcp_write_xmit+0x146/0x3a0 [16795.153630] [<c14f5a4c>] __tcp_push_pending_frames+0x2c/0x90 [16795.153634] [<c14e7d2b>] tcp_sendmsg+0x71b/0xae0 [16795.153638] [<c104a33d>] ? update_curr+0x1ed/0x360 [16795.153642] [<c1509c23>] ? inet_recvmsg+0x73/0x90 [16795.153646] [<c1509ca0>] inet_sendmsg+0x60/0xa0 [16795.153650] [<c149ae27>] sock_sendmsg+0xf7/0x120 [16795.153655] [<c1044648>] ? ttwu_do_wakeup+0x28/0x130 [16795.153660] [<c1036a98>] ? default_spin_lock_flags+0x8/0x10 [16795.153667] [<c149ce7e>] sys_sendto+0x10e/0x150 [16795.153672] [<c1117e7f>] ? handle_pte_fault+0x28f/0x2c0 [16795.153675] [<c111809e>] ? handle_mm_fault+0x15e/0x2c0 [16795.153679] [<c15ab9c7>] ? do_page_fault+0x227/0x490 [16795.153681] [<c149cefb>] sys_send+0x3b/0x40 [16795.153684] [<c149d842>] sys_socketcall+0x162/0x2c0 [16795.153687] [<c15af55f>] sysenter_do_call+0x12/0x28 [16795.153689] ---[ end trace 3170256120cbbc8f ]---
Have you tried working backwards from the end of the stack trace with addr2line? For example, looking at the last line sysenter_do_call+0x12/0x28, it tells us that the offset into the function is 0x12 and its length is 0x28: $ addr2line -e [path-to-kernel-module-with-issue] 0xc15af55f and so on. gdb is another alternative for breaking the stack trace down into source lines. However, I am not completely sure how you are arriving at a kernel panic, as all I am seeing in the log excerpt you provided is a warning. Does it result in a crash/kernel-panic message after the stack trace you posted? As far as the stack trace itself goes: it has to do with generic segmentation offload and the skb not passing the ip_summed checksum check. Disabling large/generic receive offload with $ ethtool -K [NIC] lro off $ ethtool -K [NIC] gro off might be a possible workaround. Also, skipping the checksum check with skb->ip_summed = CHECKSUM_UNNECESSARY might solve this issue, depending on the purpose of the setup.
Understanding a dmesg kernel warning message
1,612,418,083,000
I am really struggling to get to the bottom of this one. For some reason cron cannot see a file system I mounted manually. This is a USB drive formatted to ext4 mounted to /backup. For completeness I mounted it while logged into SSH, not directly at a terminal. If I compare typing mount | sort at the commandline (over ssh) with the same command over cron or atd the cron version will miss the lines: tmpfs on /run/sshd type tmpfs (rw,nosuid,nodev,mode=755) /dev/sdb1 on /backup type ext4 (rw,relatime,data=ordered) I've confirmed that neither cron nor sshd are chrooted using the accepted answer to another question. If neither are chrooted, then what else can cause two different processes to see different mounts? This is really causing a problem because my backups keep writing to the 30GB SD card in my r-pi instead of the 2TB USB hard drive. Edit for future reference. This bug in Systemd v236 looks like the cause. https://github.com/systemd/systemd/issues/7761 If I type mount | sort I get /dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro) /dev/mmcblk0p2 on / type ext4 (rw,noatime,data=ordered) /dev/sda1 on /home type ext4 (rw,relatime,data=ordered) /dev/sda1 on /mnt/mercury_data type ext4 (rw,relatime,data=ordered) /dev/sda1 on /root type ext4 (rw,relatime,data=ordered) /dev/sda1 on /var type ext4 (rw,relatime,data=ordered) /dev/sdb1 on /backup type ext4 (rw,relatime,data=ordered) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls type cgroup 
(rw,nosuid,nodev,noexec,relatime,net_cls) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime) configfs on /sys/kernel/config type configfs (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) devtmpfs on /dev type devtmpfs (rw,relatime,size=470180k,nr_inodes=117545,mode=755) mqueue on /dev/mqueue type mqueue (rw,relatime) proc on /proc type proc (rw,relatime) sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime) sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k) tmpfs on /run/sshd type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) But when I run this through cron or atd I get: /dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro) /dev/mmcblk0p2 on / type ext4 (rw,noatime,data=ordered) /dev/sda1 on /home type ext4 (rw,relatime,data=ordered) /dev/sda1 on /mnt/mercury_data type ext4 (rw,relatime,data=ordered) /dev/sda1 on /root type ext4 (rw,relatime,data=ordered) /dev/sda1 on /var type ext4 (rw,relatime,data=ordered) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup 
(rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime) configfs on /sys/kernel/config type configfs (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) devtmpfs on /dev type devtmpfs (rw,relatime,size=470180k,nr_inodes=117545,mode=755) mqueue on /dev/mqueue type mqueue (rw,relatime) proc on /proc type proc (rw,relatime) sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime) sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
This sounds like the daemons are running in different mount namespaces, so changes you make in your SSH session aren’t visible in cron or at jobs. Look at mountinfo and ns/mnt inside the various daemons’ /proc/${pid} directories to check which namespaces they’re using and what they inherited.
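A sketch of that check in Python (reading another process's ns link, such as pid 1's, may require privileges): the symlink target names the namespace, and two processes share a mount namespace exactly when the targets match.

```python
import os

def mount_ns(pid):
    # Target looks like "mnt:[4026531840]"; the number identifies the
    # mount namespace the process lives in.
    return os.readlink(f"/proc/{pid}/ns/mnt")

mine = mount_ns(os.getpid())
print("my mount namespace:", mine)
try:
    print("same as pid 1:", mine == mount_ns(1))
except PermissionError:
    print("not allowed to inspect pid 1's namespaces")
```

Run the same snippet from your SSH session and from a cron job; if the namespace IDs differ, that explains the differing mount tables.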
What can cause different processes to see different mount points?
1,612,418,083,000
I have updated my server with ubuntu 14.04 to the newest kernel: 3.13.0-141-generic Which is the second release after the disclosure of the spectreMeltdown vulnerability as far as i can tell. But when running the spectre-and-meltdown vulnerability checker: https://github.com/speed47/spectre-meltdown-checker My system still seems vulnerable: Spectre and Meltdown mitigation detection tool v0.32 Checking for vulnerabilities on current system Kernel is Linux 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 12:52:38 UTC 2018 x86_64 CPU is Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz Hardware check * Hardware support (CPU microcode) for mitigation techniques * Indirect Branch Restricted Speculation (IBRS) * SPEC_CTRL MSR is available: NO * CPU indicates IBRS capability: NO * Indirect Branch Prediction Barrier (IBPB) * PRED_CMD MSR is available: NO * CPU indicates IBPB capability: NO * Single Thread Indirect Branch Predictors (STIBP) * SPEC_CTRL MSR is available: NO * CPU indicates STIBP capability: NO * Enhanced IBRS (IBRS_ALL) * CPU indicates ARCH_CAPABILITIES MSR availability: NO * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability: NO * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO): NO * CPU vulnerability to the three speculative execution attacks variants * Vulnerable to Variant 1: YES * Vulnerable to Variant 2: YES * Vulnerable to Variant 3: YES CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1' * Checking count of LFENCE opcodes in kernel: YES > STATUS: NOT VULNERABLE (99 opcodes found, which is >= 70, heuristic to be improved when official patches become available) CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2' * Mitigation 1 * Kernel is compiled with IBRS/IBPB support: YES * Currently enabled features * IBRS enabled for Kernel space: NO (echo 1 > /proc/sys/kernel/ibrs_enabled) * IBRS enabled for User space: NO (echo 2 > /proc/sys/kernel/ibrs_enabled) * IBPB enabled: NO (echo 1 > /proc/sys/kernel/ibpb_enabled) * 
Mitigation 2 * Kernel compiled with retpoline option: NO * Kernel compiled with a retpoline-aware compiler: NO * Retpoline enabled: NO > STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability) CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3' * Kernel supports Page Table Isolation (PTI): YES * PTI enabled and active: YES * Running as a Xen PV DomU: NO > STATUS: NOT VULNERABLE (PTI mitigates the vulnerability) A false sense of security is worse than no security at all, see --disclaimer Can i do anything further to mitigate these threats, or is this just the state of things at the moment? Thanks in advance!
The best source of information regarding Ubuntu-specific mitigations of Meltdown and Spectre is the dedicated page in their security knowledge base. The information there matches what you’re seeing: your CPU hasn’t been updated to support IBRS/IBPB, so the mitigations for Spectre variant 2 aren’t functional; Meltdown and Spectre variant 1 are mitigated. Regarding your CPU, there was an updated microcode package, but it caused regressions and was reverted. Your motherboard manufacturer might have a firmware update available to address the issue; installing that would enable the Spectre variant 2 mitigations. Retpoline support isn’t included yet but might come later to address Spectre variant 2 as well. As Rui points out, these updates shouldn’t lull you into a false sense of security; there are probably other similar vulnerabilities waiting to be discovered (if they haven’t been already!). In any case, you should always consider that computers can’t be trusted...
Still vulnerable after SpectreMeltdown ubuntu kernel 3.13.0-141-generic update?
1,612,418,083,000
Today I tried to remove the old kernel in Debian Stretch with sudo aptitude purge linux-image-4.9.0-3-amd64 While the procedure went smoothly in most of the servers, and has been going smoothly over several years, this time I got the following error in 2 of my servers: Failed to substitute package name in title: 10 at /usr/bin/linux-check-removal line 102, < STDIN> line 1. dpkg: error processing package linux-image-4.9.0-3-amd64 (--remove): subprocess installed pre-removal script returned error exit status 255 Errors were encountered while processing: linux-image-4.9.0-3-amd64 E: Sub-process /usr/bin/dpkg returned an error code (1) Using sudo dpkg --purge linux-image-4.9.0-3-amd64 Also returns a similar message. Trying to repeat uninstall/purge commands of this package always give this error. How would I be able to delete it?
Investigating a workaround, I noticed that /usr/bin/linux-check-removal is a Perl script; according to its man page: linux-check-removal - check whether removal of a kernel is safe. SYNOPSIS: linux-check-removal VERSION. DESCRIPTION: linux-check-removal is intended to be called from the prerm maintainer scripts of Linux kernel packages. At the end of the day, I just modified the script to return 0 (success), uninstalled the package with sudo aptitude purge linux-image-4.9.0-3-amd64, and restored the script once the operation finished. My temporary modification, applied at the end of the script, was: # replace the check() call with an exit with success code # check(@ARGV); exit 0;
Error deleting older kernel package
1,612,418,083,000
I came across the term "Kernel Memory Patching" in a presentation. I googled it but couldn't find the exact answer somewhere. My understanding tells me that Kernel Memory Patching is somehow adding or increasing the memory size of kernel or the address space it accesses? Can someone please correct me. Or give more information about this process. Also, what is the possible difference between: Kernel Memory patching Kernel Patching Loadable kernel module
In reverse order because the explanation is a bit easier that way: Loadable kernel module: This refers to a piece of code that can be loaded at runtime by the kernel. Usually these are drivers, but in some cases they may provide extra functionality that has nothing to do with hardware or protocols (for example, adding some extra accounting or debugging info). In essence, a kernel module serves the same function as a dynamic library for a userspace program (although the low-level linking is way more complicated). On Linux systems, kernel modules are stored in /lib/modules, on NetBSD (and I think most other BSD systems except OS X) they are found in /stand, and on Windows they are found in various locations in C:\Windows. Kernel patching: Comes in two varieties, live and offline. Offline kernel patching is essentially just a kernel upgrade (and should be done as such instead of applying a patch to the kernel binary). Live kernel patching is a feature that allows updates to be applied to a running OS kernel with no downtime. On Linux at least, live kernel patches are contained in special kernel modules. Note that live kernel patching is not the same as loading a new version of a driver after unloading the old version (like Windows does when updating certain types of driver). Kernel memory patching: This is an ambiguous term out of context, but in every context I've seen, it refers to updating data structures in memory on a running kernel as part of a live kernel patch. It can also refer to techniques used by some malware to modify kernel memory to trigger an exploit. It rarely has anything to do with the amount of memory in the system, and adding and removing memory is usually called 'hotplugging'.
Kernel memory patching
1,612,418,083,000
I read in Tanenbaum's book about operating systems that there are protection rings and that ring 0 belongs to the kernel. Can one say in general that "kernel modules handle the I/O and memory management of ring 0", or is "kernel module" specific to Linux and not applicable to, for example, OpenBSD or MULTICS?
The ideas presented by Andrew Tanenbaum are usually not directly applicable to Linux (or any traditional monolithic Unix kernel). The answer to your question is much simpler than you are suggesting: a Linux kernel module is kernel code that has been compiled and linked into a separate file, instead of being linked into the kernel image. This separate kernel object file (.ko) can be loaded into the kernel address space, on demand, at run time. Practically all the drivers that can be compiled as kernel modules can also be statically linked into the kernel image, without any difference in functionality once code has been loaded. The module code is kernel code and it runs with the same privilege as all other kernel code. A kernel module can in principle replace any kernel code, but to do so cleanly the kernel proper must provide a mechanism for the module to hook into. A side note on terminology: Protection Rings is a concept introduced with the Multics operating system. "Ring 0" to "Ring 3" are terms that are specific to Intel processors. Other processor architectures use other terms, like User/Supervisor mode. Although Intel processors provide four different levels of privilege, most operating systems have only used two: Ring 3 for user level code and Ring 0 for kernel code, mirroring the User/Supervisor modes of other processors. (The exception is OS/2, which used three levels of privilege.) The privilege level concept has been expanded lately with the advent of hardware level virtualization technology. For example, the ARM architecture defines three privilege levels: User, Supervisor and Hypervisor. Jokingly, it has been said that finally four rings are used on Intel-based machines: Ring 3 for user level code, Ring 0 for (virtual machine) kernel code, Ring -1 for hypervisor code and Ring -2 for SMM mode.
Are kernel modules specific for linux or a general mechanism?
1,612,418,083,000
Does the GNU Linux kernel downloadable from www.kernel.org come with all the hardware architectures like arm, amd, ppc etc.? In the arch folder, I couldn't find any architecture like amd64 (the 64-bit Intel architecture), or is it referred to as something else? Where can I see the list of architectures supported by the kernel and their corresponding abbreviations?
There’s a single kernel tree containing all the code for all the architectures it supports. The list of architectures supported by the Linux kernel (which isn’t a GNU project) is given by the list of directories in arch. Currently:

alpha: Alpha
arc: ARC
arm: 32-bit ARM
arm64: 64-bit ARM (Aarch64)
avr32: 32-bit AVR
blackfin: Blackfin
c6x: C6x
cris: ETRAX CRIS
frv: Fujitsu FR-V
h8300: Hitachi H8
hexagon: Qualcomm Hexagon
ia64: 64-bit Itanium
m32r: Renesas M32R
m68k: Motorola 68000
metag: Meta FPGAs
microblaze: Xilinx MicroBlaze
mips: various MIPS
mn10300: Panasonic MN10300
nios2: Altera Nios II
openrisc: OpenRISC (also known as or1k)
parisc: PA/RISC
powerpc: 32- and 64-bit PowerPC
s390: IBM S/390 (64-bit only nowadays)
score: SunplusCT S+CORE
sh: Hitachi SuperH
sparc: 32- and 64-bit SPARC
tile: Tilera
um: user-mode Linux
unicore32: UniCore-32
x86: 32- and 64-bit x86 (the latter also known as amd64)
xtensa: Tensilica Xtensa

You’ll note that most 32-/64-bit variants have been merged into single arch directories.
GNU Linux kernel architecture
1,612,418,083,000
The Linux kernel is written in C, so is it possible to write a kernel, whether microkernel or monolithic, in the Java programming language? This concerns developing a Java OS.
Unlike C programs, which get compiled to machine language, Java programs rely on a Java runtime engine, which in turn relies on an existing system (including the kernel). Even if it were conceptually possible to organize everything to get Java code to be run by the kernel, it would be inefficient and would probably require modifications of the Java engine.
Is it possible to write the Linux kernel in the Java programming language? [closed]
1,612,418,083,000
I have a driver I maintain. I need to compile it with Linux headers 4.1.21, but I get compile errors for aio_read and aio_write: they are missing from struct file_operations, so I assume they were replaced. How do I find out what the replacements are? fs.h in kernel 4.0 has:

struct file_operations {
    struct module *owner;
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
    ssize_t (*aio_read) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
    ssize_t (*aio_write) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
    ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
    ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
    ...

fs.h in kernel 4.1 has:

struct file_operations {
    struct module *owner;
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
    ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
    ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
    int (*iterate) (struct file *, struct dir_context *);
If you are doing kernel work you should have a subscription to Linux Weekly News. A very quick search turned up this article, which mentions that aio_read and aio_write have been replaced by read_iter and write_iter. You can also find a brief statement to that effect in Documentation/filesystems/porting.
file operations aio_read changed in kernel 4.1
1,612,418,083,000
I have a Gentoo installation. For compiling the kernel I used the configuration from Ubuntu 14.04 kernel. I find it hard to understand why the sizes of the modules libraries are so different: In Ubuntu oz123@ubuntu $ du -sh /lib/modules/4.2.0-36-generic/ 202M /lib/modules/4.2.0-36-generic/ In Gentoo oz123@gentoo ~ # du -sh /lib/modules/4.2.8-gentoo-1/ 1.8G /lib/modules/4.2.8-gentoo-1/ Is this because of some compile time option? Am I missing something here? update I did DIR1=/lib/modules/4.2.0-36-generic/ DIR2=/mnt/gentoo/lib/modules/4.2.8-gentoo-1/ diff -r $DIR1 $DIR2 | grep $DIR2 This reveals that modules which are in both libraries are binary different, for example: Binary files /lib/modules/4.2.0-36-generic/kernel/sound/pci/ac97/snd-ac97-codec.ko and /mnt/gentoo/lib/modules/4.2.8-gentoo-1/kernel/sound/pci/ac97/snd-ac97-codec.ko differ The mystery starts to become more clear: $ du -sh /lib/modules/4.2.0-36-generic/kernel/sound/usb/misc/snd-ua101.ko 36K /lib/modules/4.2.0-36-generic/kernel/sound/usb/misc/snd-ua101.ko $ du -sh /mnt/gentoo/lib/modules/4.2.8-gentoo-1/kernel/sound/usb/misc/snd-ua101.ko 368K /mnt/gentoo/lib/modules/4.2.8-gentoo-1/kernel/sound/usb/misc/snd-ua101.ko This is consistent with a few modules I checked. So the modules compiled in Gentoo are almost 10 times bigger, why??? Ah ... stripping is the answer... laptop-oz ~ # du -sh /lib/modules/4.2.8-gentoo-1/kernel/sound/usb/snd-usbmidi-lib.ko 368K /lib/modules/4.2.8-gentoo-1/kernel/sound/usb/snd-usbmidi-lib.ko laptop-oz ~ # strip --strip-unneeded /lib/modules/4.2.8-gentoo-1/kernel/sound/usb/snd-usbmidi-lib.ko laptop-oz ~ # du -sh /lib/modules/4.2.8-gentoo-1/kernel/sound/usb/snd-usbmidi-lib.ko 44K /lib/modules/4.2.8-gentoo-1/kernel/sound/usb/snd-usbmidi-lib.ko Update 2 Stripping is not the only thing. I am suspecting also compile flags. 
To check this I installed figlet (version 2.2.5) on Ubuntu and compiled the same version with emerge in Gentoo. In Ubuntu: $ ls -l /usr/bin/figlet-figlet -rwxr-xr-x 1 root root 43504 Nov 26 2012 /usr/bin/figlet-figlet In Gentoo: # ls -l /usr/bin/figlet -rwxr-xr-x 1 root root 47384 Jun 8 16:40 /usr/bin/figlet These are my compile flags in Gentoo: -O2 -pipe -march=haswell It seems that when building figlet with -O1 I get a much more similar result to Ubuntu: # ls -l /usr/bin/figlet -rwxr-xr-x 1 root root 43288 Jun 8 17:10 /usr/bin/figlet The small remaining difference is probably due to the gcc version (4.8.4 in Ubuntu, 4.9.3 in Gentoo).
Since you built the Gentoo modules yourself, you most probably forgot to remove debug info from them. Try strip --strip-unneeded snd-ua101.ko and see if it makes a difference. Next time you rebuild modules for your system, strip all modules using make INSTALL_MOD_STRIP=1 modules_install
Why is /lib/modules so huge under my gentoo, compared to Ubuntu
1,612,418,083,000
I bought a Dell XPS 13 2016 with a i7-6500u (SkyLake). This notebook is not officially linux-supported but I found a few articles on how to get it up and running. But I cannot find an answer to this question. Current ArchLinux has the Linux Kernel 4.2.5 but Skylake-Support was added with Kernel 4.3 (article). How is it possible that I can run Arch on a LiveUSB or even install it on that Notebook?
I bought an MSI GS40 that has a Skylake i7-6700HQ with Intel HD 530 graphics inside. I was able to run even a one-year-old kernel from a LiveUSB, and you should be able to as well. Everything except X should run without any issues since, as was mentioned, Skylake is x86 and the kernel supports it. Another thing is that the Intel integrated graphics are not properly supported yet. The 4.3 kernel has some improvements but it's still buggy. You can install it with the linux-pf AUR package. You also need to add i915 to the initramfs and i915.preliminary_hw_support=1 to the kernel params. But it won't help you a lot. I tried different desktop environments (GNOME, KDE...) and some Ubuntu-line OSes. Xorg hangs by the end of the day, sometimes right after start, sometimes after a few hours of use.
How can Linux 4.2.5 boot on a Skylake CPU when they are supported from 4.3+
1,612,418,083,000
While scanning a filesystem for changes with a shell script, I forgot to exclude /dev, so core got tested too. Strangely enough, whichever way I test it, the result is not consistent for a single type:

$ if [ -f ./core ] ; then echo file ; else echo something else ; fi
file
$ ls -al core
lrwxrwxrwx 1 root root 11 Sep 29 15:40 core -> /proc/kcore
$ file core
core: symbolic link to `/proc/kcore'
$ if test -f core ; then echo file ; else echo something else ; fi
file
$ if /usr/bin/test -f core ; then echo file ; else echo something else ; fi
file
$ if test -h core ; then echo link ; else echo something else ; fi
link
$ if /usr/bin/test -h core ; then echo link ; else echo something else ; fi
link

What is going on here?
First, /dev/core is a symbolic link to the regular file /proc/kcore. So both test -h /dev/core and test -f /dev/core are true: -h because it's a symbolic link, -f because the target of the symbolic link is a regular file.

Now for what /proc/kcore is. It's a regular file, but it's a bit different from the regular files that you're used to. Whenever a program does something with a file (open, read, write, etc.), this is done through a system call, i.e. by executing some code in the kernel. The code that gets invoked depends on the filesystem type. For an on-disk filesystem such as ext4 or FAT, this code works out how the pieces of data that make up the file are arranged, and makes calls to the underlying storage layer (e.g. the disk driver) to read and write those pieces. For a network filesystem such as NFS or Samba, this code sends network packets to the file server. For /proc, which is the mount point for the procfs filesystem, this code displays or modifies kernel data structures. Most of the files under /proc report information about the system, e.g. /proc/mounts reports the list of mount points, /proc/modules reports the list of loaded modules, /proc/123/stat reports status information about the process with PID 123, etc. The file /proc/kcore reports the content of the physical memory of the system, in a format suitable for a debugger, so reading bytes from /proc/kcore essentially reads the content of the physical memory.

Files on the procfs filesystem can be considered “magic”, somewhat in the same way that device files are “magic”. Device files and filesystems like procfs and sysfs get their magic in different ways: device files can be created on any (well, most) filesystems, they're magic because their directory entry says “block device” or “character device” instead of “regular file”; files under /proc and /sys are magic because the whole filesystem that they're on is magic. Except that, as we saw above, there's no actual magic involved.
It's just kernel code, whether that code calculates block layouts and reads them from a disk or formats kernel data structures. You can see the documentation for the procfs filesystem in the proc man page and the kernel documentation.
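One way to see that procfs files are synthesized on the fly rather than stored anywhere (a small sketch, assuming a Linux system with procfs mounted on /proc):

```shell
# stat reports size 0 for most procfs files because no bytes are stored
# anywhere; the content is generated by kernel code on each read.
stat -c '%s bytes according to stat' /proc/self/mounts
wc -c < /proc/self/mounts   # yet an actual read returns the live mount table
```

/proc/kcore is an exception in the other direction: it reports an enormous size, corresponding to the address space it exposes, again without any bytes stored on disk.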
What type of file is /dev/core or /proc/kcore?
1,612,418,083,000
This is a question that I had in my mind long ago, and it just now resurfaced after reading this article. May I know why the CPU and memory do not require drivers? What other hardware components do not require drivers?
A driver is translation software that sits between the hardware and the operating system and performs tasks such as control of I/O operations and initialization and configuration of the hardware device. Your operating system doesn't need a driver for the CPU because it has been compiled to work with a given CPU. For example, Debian has the following "flavors":

amd64: x86-64 architecture with 64-bit userland and supporting 32-bit software
arm64: ARMv8-A architecture
armel: Little-endian ARM architecture (ARMv4T instruction set) on various embedded systems (EABI)
armhf: ARM hard-float architecture (ARMv7 instruction set) requiring hardware with a floating-point unit
i386: IA-32 architecture with 32-bit userland, compatible with x86-64 machines
mips: Big-endian MIPS architecture
mipsel: Little-endian MIPS architecture
powerpc: PowerPC architecture
ppc64el: Little-endian PowerPC64 architecture supporting POWER7+ and POWER8 CPUs
s390x: z/Architecture with 64-bit userland, intended to replace s390

If you try to install a Debian compiled for PowerPC on Intel hardware, it won't work. Usually the hardware that needs drivers is the kind that interacts with the outside world (video cards, sound cards, modems, LAN and wireless cards), because manufacturers are constantly releasing new products that make the old ones obsolete in terms of features, speed, etc. That means a wide span of hardware on the market, each with its own chipset, features and configuration parameters, and with its own driver.
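A quick way to check which architecture the running kernel was built for (assuming a Linux system):

```shell
# uname -m prints the machine hardware name the running kernel targets,
# e.g. x86_64, aarch64 or mips. No driver mediates this: the kernel
# binary itself was compiled for exactly this architecture.
uname -m
```

This is why a kernel image for one architecture simply cannot boot on another, driver or no driver.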
What hardware components do not require driver and why
1,612,418,083,000
I've faced a problem installing Gentoo: after the installation seems to be completed and the LiveCD is ejected, the network becomes unreachable:

gentoo~ # ping 8.8.8.8
connect: Network is unreachable

ifconfig shows only the loopback device lo and no actual network devices (there should also be an ethernet enp3s0 and a wireless wlp4s0 interface). Here's what I see using lspci:

gentoo~ # lspci | grep Eth
03:00.0 Ethernet controller: Qualcomm Atheros AR8131 Gigabit Ethernet (rev c0)

I thought that probably the kernel was compiled without the driver for this device. To check that out I had to find the driver's name (I used http://kmuto.jp/):

gentoo~ # lspci -n
...
03:00.0 0200: 1969:1063 (rev c0)
...

So the name is atl1c, which relates to Atheros L1C Gigabit Ethernet, support for which was included in the kernel before compilation.
Gentoo/Linux LiveCD Ethernet won't connect: troubleshooting steps:

1. Re-confirm that you have the latest .iso or bootable USB image from the official boot media website. Motherboard/Ethernet manufacturers like Intel introduce breaking updates and Linux developers have to fix them with software updates.
2. Re-confirm that your internet connection and wired Cat5 cable are working correctly by plugging the Cat5 into another computer and connecting. If that just works, then the internet connection and the Cat5 cable are good. Otherwise replace those.
3. Re-confirm that there aren't any superfluous or buggy devices or set-top boxes between your computer and the ISP, such as 5-port hubs, switches, routers (bridge mode or otherwise) or other relay devices. By eliminating these network hops, you confirm that the LiveCD can "see through" them. If that fixes it, flash-update the hub/switch/router's firmware.
4. Re-confirm that your computer has the BIOS/CMOS or motherboard settings enabled for the Ethernet card; your computer might have support for multiple Ethernet cards, or you're simply plugging into the non-default one. Press F2/F12/Del after reboot and find any settings in regard to Ethernet or the network stack; enable/disable legacy IPv4 and IPv6 settings.
5. Re-confirm that your computer isn't getting hung up negotiating between two or more Ethernet cards. If you can remove a removable Ethernet card, that may isolate errors. Your Ethernet card might be a third-party knockoff; use modprobe to load the Ethernet driver your manufacturer endorses.
6. Now the integrity of your Ethernet hardware or boot media reader is suspect. Try a different LiveCD or boot USB, like Ubuntu, Arch, SystemRescueCD or another. Burn that to CD or boot USB. If that one just works, it proves at least your Ethernet card hardware hasn't given up the ghost.
7. If Ethernet still doesn't autoconnect from any of these boot media, the hardware is suspect. Remove any unnecessary PCI cards that can be removed; remove unnecessary memory.
If, god forbid, none of that works, perhaps Linux is behind the 8-ball; try installing a different kind of operating system. If none of that works, your hardware has given up the ghost. https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Networking What ultimately remedied it for me was a recompile of the kernel: I had enabled the driver but never actually installed the modules. The solution is:

gentoo~ # cd /usr/src/linux
gentoo linux # make modules_install
gentoo linux # cp arch/x86/boot/bzImage /boot/kernel-genkernel-x86_64-4.0.5-gentoo
gentoo linux # reboot
Why Gentoo detects no network devices? [closed]
1,612,418,083,000
Wondering why ARM-based devices like the Raspberry Pi, Android phones, routers etc. are not shipped with the latest Linux kernel? Is it simply due to a lack of proprietary device driver support, like the lack of open-source drivers for the GPU, DSP etc.? Or do they have some limitation that prevents them from running the latest kernel version?
The Raspberry Pi does not really ship with any kernel at all; it does not include software, although you can buy it from some third-party retailers with a pre-formatted SD card (and you can buy such a card separately). There are a number of binary GNU/Linux distributions specifically for the A/B/+ pi (since the Pi 2 is ARMv7, it does not require this and generic ARM distros can be used); they are mostly based on existing mainstream distributions and use the same versions of software except for the kernel, which is not vanilla and does include some proprietary bits. The latest version of that is 4.1, which is the same as the latest vanilla kernel at the time of this writing. However, the pi kernel, like the official kernel, is independent of any distro, and pi-centric distros do not necessarily use the latest available kernel, just like normal distros do not necessarily use the latest available kernel. With regard to Android, those kernels presumably contain even more proprietary stuff, and the base kernel itself is still not, I think, the same as the vanilla kernel -- I do not know what version the latest one is, but it would not be surprising if it is a little bit behind since there is more to double check in this case than with the pi. Actual Android manufacturers I'm familiar with do not update the kernel all that often, and they stop after a certain point because they have not promised to keep your device updateable into infinity. The reason they do not update it very often in the first place is presumably "If it ain't broke, don't fix it" -- it is riskier to do this than to just leave it as it is. This is a sane attitude; what would be totally crazy would be for consumer device manufacturers to try and keep up with kernel.org. That is not the point. Linux is open source and its development is public; you can access the same channels of communication and git repos as the kernel developers.
It is not this way because they think everyone should upgrade as soon as they release something. It is that way because the development is public and open source. I can promise you that proprietary OS's are not kept updated to the absolute latest kernel from the outfit that produces them -- they are probably months and years behind -- but because you are not privy to the state of things there, you will not notice this. In relation to that, it is also worth noting that Linux kernel development is independent of any distro. They are not working together, strictly speaking, so the purpose of a new kernel is not specifically to be deployed on Android or ARM or Debian. Those are independent entities that make their own decisions about what and what not to use. There is no reason for them to wake up in the morning and go, "Well, Mr. Torvalds released 4.2 -- better get on that". One concrete illustration of the advantage of this relationship is that if kernel 4.2 turns out to have some bugs in it, distro X will not immediately suffer from those unless it blindly updates its kernel as soon as it is released. Instead, distro X can wait until 4.2 is reasonably field tested; if there are problems, it can be skipped and they can wait for the next one. It is also probably not desirable for most end users to have their operating system kernel updated weekly. Distros don't release a 3.17.1, then a 3.17.2, then a 3.17.3. They may release a 3.17.2 then 3.18.5. The difference between these versions may not in fact be all that meaningful to most users -- so in addition to being irritating, it would be pointless. The same logic applies to routers as to Android devices.
Why ARM devices are not shipped with latest Linux kernel?
1,612,418,083,000
I had a Slack 13.1 machine with a 2.6.36 kernel. Then I updated the kernel to 3.12.1. This machine has connected: a bootable disk with three partitions (/dev/sda1 --> Linux OS files..., /dev/sda2 --> data, /dev/sda3 --> more data), a "dummy" SSD just to store things (/dev/sdb1), and USB ports. The fact is that whenever I try to start Linux with a USB drive containing data (not a LiveUSB) connected to the machine, during the startup process something assigns the sda device to the USB drive, so it is not possible to mount the Linux partitions on the "bootable disk", due to a kernel panic:

VFS: Mounted root (vfat filesystem) readonly on device 8:1.
devtmpfs: error mounting -2
[...]
Kernel panic - not syncing: no init found. Try passing init=..

The bootloader I am using is LILO. I don't know if there is any way to force the boot process not to change device names, or to pre-assign any of them to a certain device. This is its configuration:

# Linux bootable partition config begins
image = /boot/vmlinuz
root=/dev/sda1
append="panic=120"
label=3.12.20-smp
read-only

/etc/fstab:

/dev/sda1 / ext4 rw 1 1

As the USB device partition is considered to be sda1, it obviously doesn't contain any kind of init process or application, so I get the kernel panic. I had tried root="LABEL=myLabel" or root="LABEL=current" with no luck... I think because it searches for the label in the root node, not in all partitions :S Any suggestion as to what is going on? Is it possible to fix it? Thanks in advance!
The problem is that the disk names are created sequentially; the first disk to be detected by the kernel becomes /dev/sda, the second is /dev/sdb etc. The solution to your problem would be to disable use (i.e. detection) of USB disks (including USB drives) until after your system has completed booting. This could be done by configuring the kernel to not include the USB storage driver in the kernel itself but to build it as a module. That way, during booting only the "normal" disk is found, and only after the root filesystem has been mounted does it become possible to load the usb_storage.ko module. This is assuming you have built the kernel yourself, and you're not using an initrd (initial ramdisk).
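For a self-built kernel, the change amounts to flipping one option from built-in to module; a sketch of the relevant .config fragment (CONFIG_USB_STORAGE is the option controlling the usb-storage driver):

```
# Device Drivers -> USB support -> USB Mass Storage support
CONFIG_USB_STORAGE=m    # 'm' = build as usb-storage.ko, loaded on demand
```

With this set, USB disks are only detected once you run modprobe usb_storage after boot, by which time the internal disk has already claimed /dev/sda.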
Boot process - Dev sdX name changes
1,612,418,083,000
I've been looking on Google for 4 hours and I can't find a solution to my problem. I have a computer running CentOS 5.10 using kernel 2.6.18-371.9.1.el5 and I would like to upgrade my kernel to 2.6.32 in order to run lxc (it needs at least kernel 2.6.29). I've been following this HowTo and tried to install the 2.6.32 source package by typing:

[user@stag-devCentOS]$ rpm -i http://vault.centos.org/6.5/updates/Source/SPackages/kernel-2.6.32-431.20.3.el6.src.rpm 2>&1 | grep -v mock

I know it's the kernel for the 6.5 version, but I wasn't able to find a 2.6.32 kernel for CentOS 5.10. When I run this command, I get an md5 sum mismatch like this:

[tanguy@stag-devCentos ~]$ rpm -i http://vault.centos.org/6.5/updates/Source/SPackages/kernel-2.6.32-431.20.3.el6.src.rpm 2>&1 | grep -v mock
warning: /var/tmp/rpm-xfer.ecr3WX: Header V3 RSA/SHA1 signature: NOKEY, key ID c105b9de
error: unpacking of archive failed on file /home/tanguy/rpmbuild/SOURCES/Makefile.common;53a94866: cpio: MD5 sum mismatch

I tried to add --nomd5 and rebuild but it doesn't help. I also tried a manual upgrade of the kernel; everything went well until boot, where I'm getting this error:

switchroot: mout failed: No such file or directory
Kernel panic - not syncing Attempted to kill init!
Pid: 1, comm: init Not tainted 2.6.32.27 #1
Call Trace:
[<ffffffff81041d3a>] ? panic+0x86/0x13d
[<ffffffff810c644e>] ? pcpu_chunk_relocate+0x10/0x6b
[<ffffffff810cb3db>] ? deactivate_super+0x20/0x77
[<ffffffff8104a66c>] ? exit_ptrace+0x20/0xee
[<ffffffff810448ae>] ? do_exit+0x72/0x633
[<ffffffff81044edc>] ? do_group-exit+0x6d/0x97
[<ffffffff81044f18>] ? sys_exit_group+0x12/0x16
[<ffffffff8100b96b>] ? system_call_fastpath+0x16/0x1b

Do you have any idea?
Have you looked at ELRepo? They have kernels from the 3.2 branch for EL5 (and therefore CentOS5) which should run lxc. It might save you having to compile! Have a look here.
Upgrading kernel 2.6.18 to 2.6.32 on CentOS 5.10
1,612,418,083,000
When I was reading a book (Linux kernel programming), I found an interesting/confusing paragraph about mm_segment_t addr_limit (one of the members of struct_task), as shown below: mm_segment_t addr_limit; Unlike the older kernels, since 2.4 tasks (threads) also can be within the kernel. These can access a larger address space than tasks in user space. addr_limit describes the address space which it is possible to access using the kernel of the task. Questions: In the first point, it says "(tasks) threads also can be within the kernel". What does that mean? Are threads not necessarily within the kernel? In the second sentence, "These can access a larger address space than tasks in user space", I don't understand what "these" means. If it is talking about threads, how can threads have a larger address space than tasks?
The phrase "within the kernel" is likely referring to kernel threads, which are used by the kernel itself for work that can be performed asynchronously. You can see examples of these threads in your process tree:

# ps aux | grep '\[.*\]$' | head
root 2 0.0 0.0 0 0 ? S May05 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S May05 0:03 [ksoftirqd/0]
root 4 0.0 0.0 0 0 ? S May05 0:00 [kworker/0:0]
root 5 0.0 0.0 0 0 ? S May05 0:00 [kworker/u:0]
root 6 0.0 0.0 0 0 ? S May05 0:00 [migration/0]
root 7 0.0 0.0 0 0 ? S May05 0:00 [watchdog/0]
root 8 0.0 0.0 0 0 ? S May05 0:00 [migration/1]
root 9 0.0 0.0 0 0 ? S May05 0:00 [kworker/1:0]
root 10 0.0 0.0 0 0 ? S May05 0:02 [ksoftirqd/1]
root 11 0.0 0.0 0 0 ? S May05 0:00 [watchdog/1]

Such threads are created by kernel code that calls kthread_create(). These threads run in kernel mode, performing various tasks that you expect from the kernel. "Tasks in user space", on the other hand, represent threads or processes as you would normally think of them, created via fork+exec or pthread_create. These run in user mode and make system calls when they require services from the kernel. The wording there is a bit odd because, of course, the kernel knows about these tasks and maintains information (such as struct_task's) about them so it can schedule time for them on the CPU. As for (2), the "these" refers specifically to kernel threads. I believe that kernel threads share the same address space as the kernel.
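A way to tell the two kinds of tasks apart from userspace, sketched in shell (assumes Linux; note that zombies also read back empty, so this is a heuristic, not a proof):

```shell
# Kernel threads have no userspace image, so reading /proc/<pid>/cmdline
# yields nothing for them. procfs files always stat as size 0, so we must
# actually read the file; args are NUL-separated, hence the tr.
has_cmdline() { [ -n "$(tr -d '\0' < "/proc/$1/cmdline" 2>/dev/null)" ]; }
for pid in 2 $$; do
  if has_cmdline "$pid"; then
    echo "$pid: userspace task"
  else
    echo "$pid: empty cmdline (kernel thread, zombie, or no such PID)"
  fi
done
```

On a typical host, PID 2 is kthreadd, the parent of all kernel threads, while the shell's own PID reports a command line.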
Linux kernel: task vs thread
1,612,418,083,000
I know that in general this means that I have a "bad disk". But I'm after a more specific reason for why I am getting these messages from the kernel:

sd 15:0:0:0: [sda] Attached SCSI disk
sd 15:0:0:0: [sda] Unhandled sense code
sd 15:0:0:0: [sda] Result: hostbyte=0x10 driverbyte=0x08
sd 15:0:0:0: [sda] Sense Key : 0x3 [current]
sd 15:0:0:0: [sda] ASC=0x11 ASCQ=0xc
sd 15:0:0:0: [sda] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 20 00
end_request: critical target error, dev sda, sector 0

I've had a bit of a search and I see that the 0x3 sense key means "Medium error" and ASC=0x11 means "Read error". But it is still a mystery as to what ASCQ=0xc means. The device is a bus-powered USB drive which reports its model as "TOSHIBA MQ01ABB200"
From the SCSI2-Draft standard (the only one I have that isn't a PDF):

Table D.1 (continued)
+=============================================================================+
| D - DIRECT ACCESS DEVICE                                                    |
| .   W - WRITE ONCE READ MULTIPLE DEVICE                                     |
| .   .   O - OPTICAL MEMORY DEVICE                                           |
| .   .   .                                                                   |
| ASC ASCQ  DTLPWRSOMC  DESCRIPTION                                           |
| --- ----  ----------------------------------------------------             |
| 11  0C    D   W  O    UNRECOVERED READ ERROR - RECOMMEND REWRITE THE DATA   |

(obviously, that's not the entire table D.1)
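For illustration, that table row can be wrapped in a tiny shell lookup (a toy sketch covering only the codes from this log; real decoding tools such as sg_decode_sense from the sg3_utils package handle the full tables):

```shell
# Minimal ASC/ASCQ lookup for the codes seen in the kernel log above.
# Only the 11h/0Ch row from Table D.1 is included; anything else falls
# through to "unknown".
decode_sense() {
  case "$1/$2" in
    0x11/0x0c) echo 'UNRECOVERED READ ERROR - RECOMMEND REWRITE THE DATA' ;;
    *)         echo "unknown ASC/ASCQ $1/$2" ;;
  esac
}
decode_sense 0x11 0x0c
```

So the combined ASC/ASCQ pair, not the ASC alone, names the specific failure: the drive is recommending that the unreadable sectors be rewritten.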
What does this key code qualifier mean?
1,612,418,083,000
I got this errors in syslog. Any idea what does it mean? The system is running on Ubuntu 12.04, kernel: 3.8.0-35-generic #52~precise1 It looks to me like failure when trying to write to disk... [151850.317166] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [151850.318428] ffff8803640f5be8 0000000000000082 0000000034c733da ffff88040e233ec0 [151850.318444] ffff880408149740 ffff880403c6c5c0 0000000000000001 ffff8803edae80a8 [151850.318468] [<ffffffff816f41c9>] schedule+0x29/0x70 [151850.318481] [<ffffffff816f30b7>] __mutex_lock_slowpath+0xd7/0x150 [151850.318497] [<ffffffff816f2cca>] mutex_lock+0x2a/0x50 [151850.318510] [<ffffffff811aa006>] path_lookupat+0x236/0x7a0 [151850.318523] [<ffffffff811aade0>] ? getname_flags.part.31+0x30/0x150 [151850.318537] [<ffffffff811aaf6e>] ? getname_flags+0x6e/0x80 [151850.318552] [<ffffffff813141f4>] ? apparmor_inode_getattr+0x54/0x60 [151850.318565] [<ffffffff811bb929>] ? mntput_no_expire+0x49/0x160 [151850.318578] [<ffffffff811ab711>] user_path_at+0x11/0x20 [151850.318588] [<ffffffff811a0c3e>] vfs_lstat+0x1e/0x20 [151850.318602] [<ffffffff816fdf5d>] system_call_fastpath+0x1a/0x1f [151850.319634] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [151850.376552] [<ffffffff816f41c9>] schedule+0x29/0x70 [151850.376565] [<ffffffff816f30b7>] __mutex_lock_slowpath+0xd7/0x150 [151850.376579] [<ffffffff816f2cca>] mutex_lock+0x2a/0x50 [151850.376590] [<ffffffff811aa006>] path_lookupat+0x236/0x7a0 [151850.376602] [<ffffffff811aade0>] ? getname_flags.part.31+0x30/0x150 [151850.376616] [<ffffffff811aaf6e>] ? getname_flags+0x6e/0x80 [151850.376680] [<ffffffff813141f4>] ? apparmor_inode_getattr+0x54/0x60 [151850.376691] [<ffffffff811bb929>] ? mntput_no_expire+0x49/0x160 [151850.376704] [<ffffffff811ab711>] user_path_at+0x11/0x20 [151850.406554] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[151850.465132] sftp-server D ffff880403bfdf60 0 15199 1 0x00000000 [151850.465158] ffff8803d2c87fd8 ffff8803d2c87fd8 ffff8803d2c87fd8 0000000000013ec0 [151850.465173] ffffffff81c15440 ffff88040583ae80 0000000000000001 ffff8803edae80a8 [151850.465263] [<ffffffff811aa006>] path_lookupat+0x236/0x7a0 [151850.465269] [<ffffffff81186441>] ? kmem_cache_alloc+0x31/0x140 [151850.465275] [<ffffffff811aade0>] ? getname_flags.part.31+0x30/0x150 [151850.465283] [<ffffffff811aa5a4>] filename_lookup+0x34/0xc0 [151850.465289] [<ffffffff811aaf6e>] ? getname_flags+0x6e/0x80 [151850.465296] [<ffffffff811ab6b9>] user_path_at_empty+0x59/0xa0 [151850.465303] [<ffffffff813141f4>] ? apparmor_inode_getattr+0x54/0x60 [151850.465309] [<ffffffff81087e9a>] ? lg_local_unlock+0x1a/0x20 [151850.465315] [<ffffffff811bb929>] ? mntput_no_expire+0x49/0x160 [151850.465320] [<ffffffff811a0946>] ? cp_new_stat+0x116/0x130 [151850.465327] [<ffffffff811ab711>] user_path_at+0x11/0x20 [151850.465332] [<ffffffff811a0bc1>] vfs_fstatat+0x51/0xb0 [151850.465337] [<ffffffff811a0c3e>] vfs_lstat+0x1e/0x20 [151850.465343] [<ffffffff811a0dea>] sys_newlstat+0x1a/0x40 [151850.465350] [<ffffffff816fdf5d>] system_call_fastpath+0x1a/0x1f
The message appears when a process (in this case sftp-server) stays blocked in uninterruptible sleep for more than 120 s (the default limit). This can be caused by high load on the system. Generally it can be caused by waiting on any resource; the most likely candidates are CPU, disk and network. When debugging such problems you can test the write speed of the disk: $ dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output 1024+0 records in 1024+0 records out 402653184 bytes (403 MB) copied, 2753.13 s, 146 kB/s You should expect values in MB/s, e.g. 50-230 MB/s depending on the hard drive. In this case a throughput of 146 kB/s is extremely low and can cause such kernel messages. You could also use iostat -x 5 to monitor disk utilization.
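As a sanity check on those numbers, the throughput dd reports is just bytes divided by elapsed seconds (dd uses decimal units, so 1 kB = 1000 bytes):

```shell
# Reproduce dd's own arithmetic: 402653184 bytes over 2753.13 s, in kB/s.
awk 'BEGIN { printf "%.0f kB/s\n", 402653184 / 2753.13 / 1000 }'
# → 146 kB/s
```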
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
1,612,418,083,000
I'm trying to boot a Linux system stored on a USB drive that I got by following the Linux From Scratch manual. I finished all but the booting part; the problem is that when the kernel tries to mount the file system, the USB drive is not among the options, only the hard drive. I suppose USB and other storage systems are enumerated in /dev only after the file system is mounted. Is there a workaround so I can mount the filesystem from a USB drive? Even if it is necessary to patch the kernel source. EDIT: Sorry about the lack of information. I'm using Grub2 as the bootloader. I'm using sysvinit (the kernel never launches it). It's an MBR partition; I'm using PARTUUID=000337f3-01 as root. It works in a QEMU machine. To set up the ramdisk I just do initrd /bzImage. Yes, I've built udev.
The way I would solve this problem is by using what is called an "initrd". I don't know how familiar you became with initrds when doing Linux From Scratch, but they have a page about it here: http://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html The initrd acts as a small root filesystem that the kernel boots into right after it has been loaded. This simple filesystem contains all of the files the kernel needs to mount the real root filesystem and to load any other drivers the system needs to boot. Once the real root filesystem is mounted from within the initrd, the initrd switches to it as the new root, terminates, and the kernel starts the init process on the actual root filesystem. It is, in fact, possible to make a micro-Linux system which never leaves the initrd; I did this a few years ago when making a one-floppy Linux distro from scratch.
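The mechanics described above can be sketched in a few commands. This is a hedged illustration, not a working initrd: the device paths inside the init script are placeholders, and the kernel expects the final image in newc cpio format, gzip-compressed.

```shell
#!/bin/sh
# Sketch: the smallest useful initramfs is a cpio archive whose /init script
# mounts the real root filesystem and hands control over to it.
root=/tmp/initrd-demo
mkdir -p "$root/bin" "$root/proc" "$root/newroot"
cat > "$root/init" <<'EOF'
#!/bin/sh
mount -t proc none /proc
mount /dev/sda1 /newroot          # placeholder: the "real" root filesystem
exec switch_root /newroot /sbin/init
EOF
chmod +x "$root/init"
# Pack it the way the kernel expects (newc cpio, gzipped), if cpio is present:
if command -v cpio >/dev/null 2>&1; then
    ( cd "$root" && find . | cpio -o -H newc 2>/dev/null | gzip ) > /tmp/initrd-demo.img.gz
fi
echo "init script ready: $root/init"
```

The resulting image is what the `initrd` line in the bootloader configuration points at.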
Linux Booting with USB File System [closed]
1,612,418,083,000
In the Linux kernel configuration, a fake LR-WPAN driver can be enabled and added to the kernel that will be compiled. Why would someone want a fake LR-WPAN driver? I assume it would be for debugging, but I do not want to only assume. The documentation and Google do not appear to have an answer.
From the description in the kernel configuration file (Kconfig in the driver directory): tristate "Fake LR-WPAN driver with several interconnected devices" depends on IEEE802154_DRIVERS ---help--- Say Y here to enable the fake driver that serves as an example of HardMAC device driver. This driver allows testing, debugging and experimenting with Linux's IEEE 802.15.4 subsystem, even if you have no corresponding hardware. Its source can also be a template to write a driver for some IEEE 802.15.4 driver. It is only of interest to programmers of IEEE 802.15.4-related tools and drivers.
Why have a fake LR-WPAN driver?
1,612,418,083,000
I've downloaded a kernel binary which I am using now. In order to use the watchdog on my system I must recompile the kernel with watchdog support. Is it possible to obtain the current kernel configuration of the binary? The binary is obtained from this page. I've used version R5.
If the kernel config is not distributed in /boot/config-* or available at /proc/config.gz, it is nearly impossible to get it. As Alex wrote, they could also have patched the kernel and included proprietary drivers. But because the kernel is under the GPLv2, the owner of the site where you download the binaries has to give you the corresponding configuration, including the source code they used to compile it. If you run into problems, contact gpl-violations.org.
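The lookup order described above can be written down as a small helper:

```shell
#!/bin/sh
# Sketch: check the usual places a running kernel's configuration may be
# exposed, in the order the answer describes.
find_kernel_config() {
    ver=$(uname -r)
    if [ -r "/boot/config-$ver" ]; then
        echo "/boot/config-$ver"
    elif [ -r /proc/config.gz ]; then
        echo "/proc/config.gz"     # present only if built with CONFIG_IKCONFIG_PROC
    else
        echo "config not found for $ver" >&2
        return 1
    fi
}

find_kernel_config || echo "config not distributed with this kernel"
```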
Find or create kernel configuration of kernel binary
1,612,418,083,000
First of all I am running a CentOS 6.4 installation. My computer is a laptop and I am trying to install the drivers for my ethernet card, because this week end I lost my ability to connect to the Internet. So I downloaded the driver : jmebp-1.0.8.5 I installed gcc, kernel-devel and since it wasn't working I finally installed the group Development Tools (in case I had forgot something). When I make install I get this error : *** No rule to make target `internet/jmebp-1.0.8.5'. Stop. Leaving directory `/lib/modules/2.6.32-358.6.1.el6.x86_64/build' My makefile is : MODNAME := jme obj-m := $(MODNAME).o ifneq ($(KERNELRELEASE),) ######################### # kbuild part of makefile ######################### EXTRA_CFLAGS += -Wall -O3 #EXTRA_CFLAGS += -DTX_DEBUG #EXTRA_CFLAGS += -DREG_DEBUG else ######################### # Normal Makefile ######################### TEMPFILES := $(MODNAME).o $(MODNAME).mod.c $(MODNAME).mod.o Module.symvers .$(MODNAME).*.cmd .tmp_versions modules.order Module.markers Modules.symvers ifeq (,$(KVER)) KVER=$(shell uname -r) endif KSRC ?= /lib/modules/$(KVER)/build MINSTDIR ?= /lib/modules/$(KVER)/kernel/drivers/net all: modules @rm -rf $(TEMPFILES) modules: @$(MAKE) -C $(KSRC) M=$(shell pwd) modules checkstack: modules objdump -d $(obj-m) | perl $(KSRC)/scripts/checkstack.pl $(shell uname -m) @rm -rf $(TEMPFILES) namespacecheck: modules perl $(KSRC)/scripts/namespace.pl @rm -rf $(TEMPFILES) install: modules install -m 644 $(MODNAME).ko $(MINSTDIR) depmod -a $(KVER) patch: @/usr/bin/diff -uar -X dontdiff ../../trunc ./ > bc.patch || echo > /dev/null buildtest: SRCDIRS=`find ~/linux-src -mindepth 1 -maxdepth 1 -type d -name 'linux-*' | sort -r -n`; \ SRCDIRS="$${SRCDIRS} `find ~/linux-src/centos -mindepth 2 -maxdepth 2 -type d -name 'linux-*' | sort -r -n`"; \ SRCDIRS="$${SRCDIRS} `find ~/linux-src/fedora -mindepth 2 -maxdepth 2 -type d -name 'linux-*' | sort -r -n`"; \ for d in $${SRCDIRS}; do \ $(MAKE) clean && $(MAKE) -C . 
KSRC=$${d} modules; \ if [ $$? != 0 ]; then \ exit $$?; \ fi; \ done $(MAKE) clean clean: @rm -rf $(MODNAME).ko $(TEMPFILES) %:: $(MAKE) -C $(KSRC) M=`pwd` $@ endif Any suggestions ?
First Issue You're missing the kernel-headers package. You need these to compile kernel modules. yum install kernel-headers Assuming this is where you downloaded the drivers from. When I unpacked them on a CentOS 6.4 system I got the following error: $ cd jmebp-1.0.8.5 $ ls CHANGELOG jme.c jme.h Makefile scripts $ make make: *** /lib/modules/2.6.32-279.14.1.el6.x86_64/build: No such file or directory. Stop. make: *** [modules] Error 2 After installing the kernel-headers package, I ran make a second time: $ make make: *** /lib/modules/2.6.32-279.14.1.el6.x86_64/build: No such file or directory. Stop. make: *** [modules] Error 2 Still a problem? Debugging it further I figured out my other problem, which you might encounter as well, so I'm documenting it below, just in case. Second Issue The link in the kernel directory appeared to be broken. $ pwd /lib/modules/2.6.32-279.14.1.el6.x86_64 $ ls -l | grep build lrwxrwxrwx 1 root root 51 Dec 15 14:49 build -> ../../../usr/src/kernels/2.6.32-279.14.1.el6.x86_64 lrwxrwxrwx 1 root root 5 Dec 15 14:50 source -> build $ ls -l build/ ls: cannot access build/: No such file or directory Whoops, wrong version of the kernel-headers and kernel-devel for our kernel version. We're currently running this version of the kernel: $ uname -r 2.6.32-279.14.1.el6.x86_64 But we just installed the kernel-headers and kernel-devel packages for this version: 2.6.32-358.6.1.el6.x86_64 So let's install that version of the kernel as well and reboot so we can use the newer kernel: $ yum update kernel After our reboot everything looks much better: $ ls CHANGELOG jme.c jme.h Makefile scripts $ make make[1]: Entering directory `/usr/src/kernels/2.6.32-358.6.1.el6.x86_64' CC [M] /home/sam/jmebp/jmebp-1.0.8.5/jme.o Building modules, stage 2.
MODPOST 1 modules CC /home/sam/jmebp/jmebp-1.0.8.5/jme.mod.o LD [M] /home/sam/jmebp/jmebp-1.0.8.5/jme.ko.unsigned NO SIGN [M] /home/sam/jmebp/jmebp-1.0.8.5/jme.ko make[1]: Leaving directory `/usr/src/kernels/2.6.32-358.6.1.el6.x86_64' $ ls CHANGELOG jme.c jme.h jme.ko jme.ko.unsigned Makefile scripts Now we see the jme.ko kernel module. To install it: make install
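Both issues above boil down to the same precondition, which can be checked up front: does the build symlink for the *running* kernel resolve to an existing directory?

```shell
#!/bin/sh
# Preflight check before building out-of-tree modules: the running kernel's
# build symlink must resolve, i.e. matching kernel-devel/headers are installed.
build_dir="/lib/modules/$(uname -r)/build"
if [ -d "$build_dir" ]; then
    echo "ok: $build_dir"
else
    echo "missing: install kernel-devel matching $(uname -r), or reboot into the kernel whose headers you installed"
fi
```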
Problem compiling a driver : "No rule to make target"
1,612,418,083,000
I'm looking for a way to learn about and understand this technique. Here's what I'm talking about: Slax boots, {does stuff, like copy itself to RAM}, then transitions control to the kernel/file system it just made SYSLINUX boots off a FAT32/NTFS system, {does stuff}, then boots into a kernel ISOLINUX boots off a CD/DVD, {does stuff} then boots into a kernel Is there a name for this? Is it similar in GRUB when using chainloading? GRUB boots, loads selection menu, does selection. If it's a chainloading selection, it passes control to something else. I'm looking for how I can use one kernel to extract an .iso (from a FAT32/NTFS partition) into RAM, then boot off the RAM drive as if it had been there at startup. For more details as to why I want to do this, see this question. Here, however, I'm just asking for details about how a kernel "transitions" to another. Is there a name for this? I've heard of INT13h which I believe is used in GRUB/chainloading. Is this a technique to 'reboot into a different kernel'? If not, how is this done?
I'm guessing this is how: http://linux.die.net/man/8/kexec kexec(8) - Linux man page Name kexec - directly boot into a new kernel Synopsis /sbin/kexec [-v (--version)] [-f (--force)] [-x (--no-ifdown)] [-l (--load)] [-p (--load-panic)] [-u (--unload)] [-e (--exec)] [-t (--type)] [--mem-min=addr] [--mem-max=addr] Description kexec is a system call that enables you to load and boot into another kernel from the currently running kernel. kexec performs the function of the boot loader from within the kernel. The primary difference between a standard system boot and a kexec boot is that the hardware initialization normally performed by the BIOS or firmware (depending on architecture) is not performed during a kexec boot. This has the effect of reducing the time required for a reboot. Make sure you have selected CONFIG_KEXEC=y when configuring the kernel. The CONFIG_KEXEC option enables the kexec system call.
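For the asker's goal — load a kernel plus RAM-disk image extracted from an .iso and jump into it — the invocation would look roughly like the following. The paths are illustrative, and since the second command replaces the running kernel, this sketch only prints the commands rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch of the two-step kexec sequence (kexec-tools).
KERNEL=/mnt/fat32/vmlinuz          # kernel extracted from the .iso (illustrative)
INITRD=/mnt/fat32/initrd.img       # matching RAM-disk image (illustrative)
echo "kexec -l $KERNEL --initrd=$INITRD --append='root=/dev/ram0'"
echo "kexec -e   # boot into the staged kernel, skipping BIOS/firmware init"
```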
How does kernel swapping / INT 13h interrupts work?
1,612,418,083,000
I have an old kernel ( 2.4.37.9 ) and I want to override or substitute the root=XXXXX parameter to send to the kernel inside the initrd script. I already made some attempt to do that but it seems that at the end of initrd grub alway pass to kernel the root parameter define inside the menu.lst file, while I'm tryng to define a dynamic value ( ex. hda1 or hdc1 ) depending of the layout of th mother board. title Linux-2.4.37.9_CCL_20130122 with INITRD root (hd0,0) kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=XXXXXX console=ttyS0,9600 console=tty0 apm=off initrd /boot/initrd-CCL.img.gz Any suggestions ?
This is not the most elegant solution to this problem, but it works and may be helpful to others, so I briefly describe here how I solved my problem of having a DOM capable of changing its boot device automatically. Inside linuxrc, the script of the initrd, I detect which device is available and, based on that result, I set the default startup option used by grub. My linuxrc is something like this #!/bin/ash restart=0 mntdev="" target="hda" echo "--> check fdisk ${target} " mount -t ext2 /dev/${target}1 /mnt/tmp if [ -f /mnt/tmp/etc/slackware-release ]; then echo "Found $target " mntdev="/dev/${target}1" olddef=$( cat /mnt/tmp/boot/grub/default ) if [ $olddef -ne 0 ]; then echo "0" > /mnt/tmp/boot/grub/default restart=1 fi fi umount /mnt/tmp # ================================ if [ -z "$mntdev" ]; then target="hdc" echo "--> check fdisk ${target} " mount -t ext2 /dev/${target}1 /mnt/tmp if [ -f /mnt/tmp/etc/slackware-release ]; then echo "Found $target " mntdev="/dev/${target}1" olddef=$( cat /mnt/tmp/boot/grub/default ) if [ $olddef -ne 1 ]; then echo "1" > /mnt/tmp/boot/grub/default restart=1 fi fi umount /mnt/tmp fi # ================================ if [ $restart -eq 1 ]; then echo "Changed grub default : Rebooting PC " echo "====================================" sleep 2 mount -t ext2 $mntdev /mnt/tmp chroot /mnt/tmp <<EOF /sbin/reboot -f EOF fi And inside the grub menu I reserve the first two entries, 0 for device hda and 1 for device hdc default saved title Linux-2.4.37.9_CCL_20130122 with INITRD hda pos 0 root (hd0,0) kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=/dev/hda1 console=ttyS0,9600 console=tty0 apm=off initrd /boot/initrd-CCL.img.gz title Linux-2.4.37.9_CCL_20130122 with INITRD hdc pos 1 root (hd0,0) kernel /boot/vmlinuz-2.4.37.9_CCL_20130122 ro root=/dev/hdc1 console=ttyS0,9600 console=tty0 apm=off initrd /boot/initrd-CCL.img.gz
Kernel/grub : how override root parameter inside initrd script
1,358,808,462,000
I've attempted to install drivers for an FPGA device, but require that I remove the usbserial module. This happens to be impossible because usbserial is a built-in module. It was suggested that I compile a new kernel to make usbserial dynamically loadable and unloadable. I'm now trying to compile a custom kernel w/ Fedora. The guide being located here: http://fedoraproject.org/wiki/Building_a_custom_kernel At this moment I am using a GUI to set my kernel options, but I have no idea what options to select and deselect. Any advice would be greatly appreciated.
Try using make nconfig or make menuconfig which presents you with interactive text UI. Both have search facility for both the kernel CONFIG_* options (those which are placed in .config which governs the build) and strings within the currently selected option menu. IMHO both of these TUIs are more usable than the GUI. As for your case, you are probably looking for CONFIG_USB_SERIAL which is located in Device Drivers -> USB support -> USB Serial Converter support - you need to change this from <*> to <M> (using the M key).
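After rebuilding with CONFIG_USB_SERIAL flipped from &lt;*&gt; to &lt;M&gt; and booting the new kernel, it is worth confirming the change before attempting to unload anything. A sketch (the config path is the usual distro location; the modprobe command is shown but not executed since unloading requires root):

```shell
#!/bin/sh
# Post-rebuild check: usbserial should now be modular (=m), not built in (=y).
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep '^CONFIG_USB_SERIAL=' "$cfg"    # want CONFIG_USB_SERIAL=m
else
    echo "no $cfg on this system"
fi
echo "once built as a module, unload with: modprobe -r usbserial"
```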
Compile Linux Kernel w/ Dynamically (Un)loadable usbserial Module
1,358,808,462,000
Kernel Version 3.3.4-5.fc17.x86_64 CPU info: sashan@dhcp-au-122 ~ $ cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 42 model name : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz stepping : 7 microcode : 0x28 cpu MHz : 2793.577 cache size : 4096 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid bogomips : 5587.15 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: Notice that it says 1 core and that it is an i7 which has 2 (http://ark.intel.com/products/53464/Intel-Core-i7-2640M-Processor-4M-Cache-up-to-3_50-GHz)
My grub boot config had acpi=off in the kernel boot parameter list. Changed this to acpi=on. I originally had to turn acpi off when using the live cd to install this version of Linux because it wouldn't boot on this laptop from the default live cd kernel boot parameter list.
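Two quick checks for this class of problem — what the kernel was actually booted with, and how many CPUs it brought up:

```shell
#!/bin/sh
# Inspect the live boot parameters and the resulting CPU count.
grep -o 'acpi=[a-z]*' /proc/cmdline || echo "no acpi= override on the kernel command line"
echo "CPUs seen by the kernel: $(grep -c '^processor' /proc/cpuinfo)"
```

With acpi=off in effect, the second number would stay at 1 on this laptop.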
Kernel not detecting multicore cpu
1,358,808,462,000
From here: http://www.xenomai.org/documentation/xenomai-2.6/TROUBLESHOOTING Q: Which CONFIG_* items are latency killers, and should be avoided ? ... APM: The APM model assigns power management control to the BIOS, and BIOS code is never written with RT-latency in mind. If configured, APM routines are invoked with SMI priority, which breaks the rule that adeos-ipipe must be in charge of such things. DISABLE_SMI doesnt help here (more later). The problem is that I am not able to find this APM thing anywhere. "ACPI (Advanced Configuration and Power Interface) Support" results in the following menu: --- ACPI (Advanced Configuration and Power Interface) Support [*] Deprecated /proc/acpi files [*] Deprecated power /proc/acpi directories <M> ACPI 4.0 power meter < > EC read/write access through /sys/kernel/debug/ec (NEW) [*] Deprecated /proc/acpi/event support <M> AC Adapter <M> Battery {M} Button {M} Video <M> Fan [*] Dock <M> Processor < > IPMI (NEW) <M> Processor Aggregator <M> Thermal Zone -*- NUMA support () Custom DSDT Table file to include [*] Debug Statements [ ] Additionally enable ACPI function tracing <M> PCI slot detection driver {M} Container and Module Devices (EXPERIMENTAL) <M> Memory Hotplug <M> Smart Battery System < > Hardware Error Device (NEW) [ ] ACPI Platform Error Interface (APEI) (NEW) Please help.
You can find this option yourself: press / in the menuconfig interface and enter CONFIG_APM there; if the search finds anything, the option is supported. I can only give you output from the 3.3.7 version. In any case, you could also edit the .config file yourself, append CONFIG_APM=y, and then redo make menuconfig.
Where is CONFIG_APM in kernel - 2.6.38.8
1,358,808,462,000
I need to install another kernel (2.6.34) into my fedora machine (x86) and i need to show the old and new boot up options in the boot menu (both new and old kernel) I have downloaded the new kernel and i need to compile it and need to build it. can you explain me the steps for doing that? I got the correct steps from this discussion and am having doubts in the steps 6 and 7 in the below link which explains the installation of new kernel. http://www.cyberciti.biz/tips/compiling-linux-kernel-26.html Also can you explain the effective configuration of 'menuconfig' and its what it actually aims?
If you just need any 2.6.34-kernel, you might head over to koji and try to find a precompiled one for your version of fedora. You can install it as root after downloading all required rpms with yum localinstall kernel-*.rpm and it will automatically appear in Grub. If you need to modify the kernel, it is best to also start with the distribution kernel and modify it to suit your needs. There is an extensive howto in the fedora wiki. Lastly, if you really need to start from scratch with the sources from kernel.org, you have to download the source and extract the archive. Then you have to configure the kernel. For this, say make menuconfig for a text UI or make xconfig for a graphical configuration. You might want to start with the old configuration of the running kernel, see Recompile Kernel to Change Stack Size. When you are finished configuring, say make to build the kernel, then make modules to build kernel modules. The following steps have to be done as root: Say make modules_install to install the modules (this will not overwrite anything of the old kernel) and finally make install which will automatically install the kernel into /boot and modify the Grub configuration, so that you can start the new kernel alongside the old one.
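The from-source path above, collected into one sequence (a dry-run sketch — nothing is executed here; the last two steps need root):

```shell
#!/bin/sh
# Dry run of the build-and-install sequence for a kernel.org source tree.
steps="make menuconfig       # configure the kernel (text UI)
make                  # build the kernel image
make modules          # build the kernel modules
make modules_install  # as root: install modules; does not touch the old kernel
make install          # as root: install into /boot and update the Grub config"
printf '%s\n' "$steps"
```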
Installing new kernel (by commandline) as side of old kernel and effective configuration of ' menuconfig'
1,358,808,462,000
ip addr show output from within a Kubernetes pod root@customer-fd99fb7dc-82hrr:/app# ip -c addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 8e:e3:d2:b5:d2:94 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 192.168.171.139/32 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::8ce3:d2ff:feb5:d294/64 scope link valid_lft forever preferred_lft forever etho0@1f6 is one end of the veth pair. This explains qdisc noqueue My understanding of noqueue qdisc is that, it sends the network packet immediately (if it can) or would drop it otherwise. So, I assumed that noqueue is not backed by any queues. But, qlen 1000 is contradicting with my understanding. Does it mean noqueue has an internal queue? Can I think of noqueue qdisc as pfifo_fast minus the 3 internal classes/bands?
qlen is the interface parameter set with ip link set eth0 txqueuelen 1000 or ifconfig eth0 txqueuelen 1000 (see ifconfig(8)). In the kernel it is called dev->tx_queue_len and defaults to DEFAULT_TX_QUEUE_LEN = 1000. This is what ip link ls shows. When a queueing discipline is attached to a device, it takes the device's qlen setting and uses that. When you replace the qdisc with another, it still uses the same qlen. Or ignores it, as the noqueue qdisc does. (disclaimer: not a kernel expert)
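The attribute in question is visible outside of ip as well: qlen is the per-device tx_queue_len, exported in sysfs regardless of which qdisc is attached (noqueue simply never consults it):

```shell
#!/bin/sh
# qlen in `ip link` output = the device attribute tx_queue_len, which exists
# independently of the attached qdisc.
for dev in /sys/class/net/*; do
    [ -r "$dev/tx_queue_len" ] && \
        printf '%s qlen %s\n' "$(basename "$dev")" "$(cat "$dev/tx_queue_len")"
done
```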
What does qlen 1000 mean for a noqueue qdisc
1,358,808,462,000
On the RHEL 9.3 I have renamed the logical volume (LV) /dev/lvm01/root to /dev/lvm01/root.vol. I did everything to make the new name correctly recognized: changed /etc/fstab entry reloaded systemd configuration remounted / And I also modified the /etc/default/grub entry: GRUB_CMDLINE_LINUX="root=/dev/mapper/lvm01-root.vol ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/lvm01-swap.vol rd.lvm.lv=lvm01/root.vol rd.lvm.lv=lvm01/swap.vol" Then I expected the grub2-mkconfig -o /boot/grub2/grub.cfg to do the rest of the job and rebooted. But the system ended up with dracut message that the root partition is not found (or something like that). After short investigation, I realized the kernel parameters had not been modified as expected. The manual change helped to get the OS booted. Interesting that the /boot/grub2/grub.cfg was updated. But what was not updated was the /boot/loader/entries/* files. And it was the issue.
Well, the temporary solution seemed to be # grub2-mkconfig --update-bls-cmdline -o /boot/grub2/grub.cfg But it only fixed the current state; a later kernel update still used the previous LV name. The only solution that also covers kernel updates seems to be grubby: # grubby --update-kernel=ALL --args="root=/dev/mapper/lvm01-root.vol ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/lvm01-swap.vol rd.lvm.lv=lvm01/root.vol rd.lvm.lv=lvm01/swap.vol" Even after a kernel update, the content of the new conf file in /boot/loader/entries/<machine-id>-<kernel version>.conf looks good and the system boots fine. In fact, the problem may be related to this RH ticket, which has been open for quite some time (but I'm not sure): https://issues.redhat.com/browse/RHEL-4313
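After running grubby it is worth verifying that every BLS entry really picked up the new root= argument. A sketch, using this question's LV name in the grep pattern:

```shell
#!/bin/sh
# Verification sketch: each installed kernel has its own file under
# /boot/loader/entries, and all of them must reference the renamed LV.
entries=/boot/loader/entries
if [ -d "$entries" ]; then
    # grep -L lists the files that do NOT contain the new root= argument.
    grep -L 'root=/dev/mapper/lvm01-root.vol' "$entries"/*.conf 2>/dev/null \
        || echo "all entries updated"
else
    echo "no BLS entries directory on this system"
fi
```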
The grub2-mkconfig does not propagate renamed root logical volume on RHEL 9
1,358,808,462,000
Somewhere along the line, the nested kvm kernel module became enabled by default. As hard as it may be to believe in this day and age, not everyone in the world had a need to run kvms on every host that gets built. Is there any performance cost to leaving this default enabled even if you have zero intention to deploy kvms? All my googling on this subject has come up with no useful info on pros/cons to having this module present - performance, security, or any other potential impact. Or, is this just case of "if it ain't broke, don't fix it"?
Or, is this just a case of "if it ain't broke, don't fix it"? I'd go with that :) There's no cost to the module being there. There might be a cost to the CPU having the feature enabled (which, on some older x86_64 machines, you could disable in the UEFI setup; I'm not sure this is still the case on modern machines), namely that nested page tables add a layer of page table redirection. But since you'd only ever use a single value for that redirection, it is effectively free. Things get more complicated once you consider features that can only work when virtualization is enabled – mostly the IOMMU (if you actually use IOMMU groups, there is a minor performance overhead; not enough for people with billions of dollars in servers to abandon them, so draw your own conclusion). So, in a sense, it really doesn't cost you anything (OK, a couple of kilobytes of module memory, maybe) to keep it loaded. Then again, the kernel can work just as well without it. If you're building a slimmed-down kernel anyway, it'd be OK to exclude it.
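If you do decide you want the modules gone without rebuilding a slimmed-down kernel, the usual mechanism is a modprobe blacklist. A sketch (the filename is illustrative; writing to /etc/modprobe.d requires root, so this only prints the file content):

```shell
#!/bin/sh
# Sketch: prevent the kvm modules from auto-loading via modprobe configuration.
cat <<'EOF'
# /etc/modprobe.d/no-kvm.conf
blacklist kvm
blacklist kvm_intel
blacklist kvm_amd
EOF
```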
Is there any benefit in disabling kvm module on bare metal that won't ever run kvm guests?
1,358,808,462,000
Why the check mechanism is changed? Because of the MTRR code upgrade? Or was that a bug in checking before? https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/arch/x86/mm/pgtable.c?h=v6.5.7&id=12f0dd8df14285a5604f35ed3af8b8c33e8fd97f x86/mm: Only check uniform after calling mtrr_type_lookup() Today pud_set_huge() and pmd_set_huge() test for the MTRR type to be WB or INVALID after calling mtrr_type_lookup(). Those tests can be dropped as the only reason not to use a large mapping would be uniform being 0. Any MTRR type can be accepted as long as it applies to the whole memory range covered by the mapping, as the alternative would only be to map the same region with smaller pages instead, using the same PAT type as for the large mapping.
The code was changed because the semantics of uniform weren’t correct. This was discovered in an earlier iteration of the patchset; Juergen Gross asked The problem arises in case a large mapping is spanning multiple MTRRs, even if they define the same caching type (uniform is set to 0 in this case). So the basic question for me is: shouldn't the semantics of uniform be adpated? Today it means "the range is covered by only one MTRR or by none". Looking at the use cases I'm wondering whether it shouldn't be "the whole range has the same caching type". to which Linus Torvalds replied Oh, I think then you should fix uniform to be 1. IOW, we should not think "multiple MTRRs" means "non-uniform". Only "different actual memory types" should mean non-uniformity. If I remember correctly, there were good reasons to have overlapping MTRR's. In fact, you can generate a single MTRR that described a memory ttype that wasn't even contiguous if you had odd memory setups. Intel definitely defines how overlapping MTRR's work, and "same types overlaps" is documented as a real thing. This led to the patch you’re asking about, which is marked as suggested by Linus. (In such circumstances, to find the email which counts as the suggestion, look for emails from the suggester to the patch author.)
Only check uniform after calling mtrr_type_lookup()
1,358,808,462,000
I just got the following error when compiling linux-5.14.2.tar.gz and patch-5.14.2-rt21.patch with CONFIG_DEBUG_INFO_BTF=y:

  AS      arch/x86/lib/iomap_copy_64.o
arch/x86/lib/iomap_copy_64.S: Assembler messages:
arch/x86/lib/iomap_copy_64.S:13: Warning: found `movsd'; assuming `movsl' was meant
  AR      arch/x86/lib/built-in.a
  GEN     .version
  CHK     include/generated/compile.h
  LD      vmlinux.o
ld: warning: arch/x86/power/hibernate_asm_64.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
  MODPOST vmlinux.symvers
  MODINFO modules.builtin.modinfo
  GEN     modules.builtin
  LD      .tmp_vmlinux.btf
ld: warning: arch/x86/power/hibernate_asm_64.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.btf has a LOAD segment with RWX permissions
  BTF     .btf.vmlinux.bin.o
  LD      .tmp_vmlinux.kallsyms1
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.kallsyms1 has a LOAD segment with RWX permissions
  KSYMS   .tmp_vmlinux.kallsyms1.S
  AS      .tmp_vmlinux.kallsyms1.S
  LD      .tmp_vmlinux.kallsyms2
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.kallsyms2 has a LOAD segment with RWX permissions
  KSYMS   .tmp_vmlinux.kallsyms2.S
  AS      .tmp_vmlinux.kallsyms2.S
  LD      vmlinux
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: vmlinux has a LOAD segment with RWX permissions
  BTFIDS  vmlinux
failed: load btf from vmlinux: invalid argument
make: *** [Makefile:1176: vmlinux] Error 255
make: *** Deleting file 'vmlinux'

I know that if CONFIG_DEBUG_INFO_BTF is set to n, compilation will not report errors, but I don't want to set CONFIG_DEBUG_INFO_BTF to n. From this issue it seems that the virtual memory is too small, but the person reporting that issue was using a virtual machine, while I am on a physical machine running Debian 12. What should I do?

~/kernel/5.14.2/linux-5.14.2$ free -h
               total        used        free      shared  buff/cache   available
Mem:           7.7Gi       508Mi       3.4Gi       1.2Mi       4.1Gi       7.2Gi
Swap:          976Mi          0B       976Mi
I resolved this issue by changing the kernel version: I used linux-6.4.tar.gz with patch-6.4.6-rt8.patch.
failed: load btf from vmlinux: invalid argument make on CONFIG_DEBUG_INFO_BTF=y
1,358,808,462,000
I did a bit of research on the matter and also asked ChatGPT, but there doesn't seem to be a way to measure the full boot time of an Alpine distribution with OpenRC. Using dmesg, I can see the last message:

[    0.689037] Run /sbin/init as init process

But I want to know how much time it took to "complete" the init process. My use case is boot-time sensitive, that's why I'm asking. Note that I was able to boot an Ubuntu 22.04 with systemd in 1 s, including kernel boot. In systemd, systemd-analyze is very useful, so I hope there is an equivalent (or a way) in OpenRC.
First, install coreutils in order to get millisecond precision in Alpine:

apk add coreutils

Then we will create two services. In /etc/init.d/boot-start:

#!/sbin/openrc-run
description="Boot Start Service"

start() {
    ebegin "Starting boot-start"
    date +%s%3N > /var/boot-time.log
    eend $?
}

And in /etc/init.d/boot-end:

#!/sbin/openrc-run
description="Boot End Service"

depend() {
    after *
}

start() {
    ebegin "Starting boot-end"
    boot_start_time=$(cat /var/boot-time.log)
    boot_end_time=$(date +%s%3N)
    duration=$((boot_end_time - boot_start_time))
    echo "Boot started at: $boot_start_time" > /var/boot-time.log
    echo "Boot ended at: $boot_end_time" >> /var/boot-time.log
    echo "Total duration: $duration milliseconds" >> /var/boot-time.log
    eend $?
}

And then we'll start the first at boot and the second last:

chmod +x /etc/init.d/boot-start
rc-update add boot-start boot
chmod +x /etc/init.d/boot-end
rc-update add boot-end default

The result will look like this:

Boot started at: 1686811355808
Boot ended at: 1686811357194
Total duration: 1386 milliseconds
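As a cross-check that needs no state file, the first field of /proc/uptime is seconds (with a fractional part) since the kernel started, so reading it from inside boot-end, or from any late-running service, gives kernel plus init time as one number. A minimal sketch:

```shell
# /proc/uptime's first field is seconds since boot, e.g. "123.45 456.78".
read -r uptime_s _ < /proc/uptime
# Convert to whole milliseconds with awk (BusyBox sh has no float math).
uptime_ms=$(awk -v s="$uptime_s" 'BEGIN { printf "%d", s * 1000 }')
echo "kernel + init so far: ${uptime_ms} ms"
```

Unlike the two-service approach above, this also includes the kernel's own boot time, since the clock starts when the kernel does.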
How to know the boot time (kernel + init) in Alpine using openRC?
We have a CentOS 5.8 system that was experiencing CMOS battery errors, which we resolved by replacing the battery. However, after replacing the battery, the system displays a "kernel panic" error message every time it boots up. What steps can we take to resolve this issue, and are there any additional details that would be helpful to know?
The messages

No volume groups found
mount: could not find filesystem '/dev/root'

both hint that the kernel can't find the disk containing the root filesystem. While the disk can be accessed by the BIOS, and by the bootloader using the BIOS, it seems to be inaccessible to the Linux drivers built into the kernel or provided by the initramfs under the current BIOS settings.

On most mainboards the BIOS settings are kept in volatile memory that is kept alive by the CMOS battery when the system is not connected to a power supply. By removing the battery, all BIOS settings were likely reset.

You should check whether the BIOS has a setting for the SATA controller mode. Many BIOSes support running the SATA controller in "Legacy", "AHCI" or "RAID" mode. If this setting exists, try out "Legacy" or "AHCI" and check if the system boots. "RAID" mode may cause data loss if the system wasn't set up that way initially, so make a backup before trying it out.
Kernel Panic after replacing CMOS battery
After upgrading the Linux kernel on a Debian testing system:

Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-7-amd64
Architecture: x86-64
Hardware Vendor: TOSHIBA
Hardware Model: PORTEGE R30-A
Firmware Version: Version 4.20
GNOME Shell: 43.4

I get the following lines printed while the boot image is being generated:

update-initramfs: Generating /boot/initrd.img-6.1.0-7-amd64
W: Possible missing firmware /lib/firmware/i915/skl_huc_2.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/bxt_huc_2.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_huc_4.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/glk_huc_4.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_huc_4.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_huc_4.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/cml_huc_4.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/icl_huc_9.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/ehl_huc_9.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/ehl_huc_9.0.0.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc_7.9.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc_7.9.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/dg1_huc.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc_7.9.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc_7.9.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_huc.bin for module i915
W: Possible missing firmware /lib/firmware/i915/skl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/bxt_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/glk_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/cml_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/icl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/ehl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/ehl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/dg1_guc_70.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_guc_70.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.bin for module i915
W: Possible missing firmware /lib/firmware/i915/dg2_guc_70.bin for module i915
W: Possible missing firmware /lib/firmware/i915/bxt_dmc_ver1_07.bin for module i915
W: Possible missing firmware /lib/firmware/i915/skl_dmc_ver1_27.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_dmc_ver1_04.bin for module i915
W: Possible missing firmware /lib/firmware/i915/glk_dmc_ver1_04.bin for module i915
W: Possible missing firmware /lib/firmware/i915/icl_dmc_ver1_09.bin for module i915
W: Possible missing firmware /lib/firmware/i915/tgl_dmc_ver2_12.bin for module i915
W: Possible missing firmware /lib/firmware/i915/rkl_dmc_ver2_03.bin for module i915
W: Possible missing firmware /lib/firmware/i915/dg1_dmc_ver2_02.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adls_dmc_ver2_01.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_dmc_ver2_16.bin for module i915
W: Possible missing firmware /lib/firmware/i915/dg2_dmc_ver2_07.bin for module i915

Does anyone know which package(s) need to be installed to supply my system with the appropriate firmware files?
Note that these are only "possibly missing", and in fact your laptop doesn't need them; there's no benefit in installing them for that specific model (which has an Intel HD Graphics 4600 integrated GPU, which doesn't use host-provided firmware).

These firmware files are provided by firmware-misc-nonfree. To install that, you need to enable the non-free-firmware repositories:

sudo sed -i.bak 's/bookworm[^ ]* main$/& non-free-firmware/g' /etc/apt/sources.list

This adds the non-free-firmware component to any line describing a Bookworm repository with only the main component. Then update and install:

sudo apt update
sudo apt install firmware-misc-nonfree

See also How to solve "W: Possible missing firmware /lib/firmware/i915/skl_huc_2.0.0.bin for module i915" with FOSS only? (without any non-free packages)
Debian GNU/Linux 12 (bookworm) - Possible missing firmware (*.huc*; *.guc*; *.dmc* for module i915)
I've recently bought a Lenovo IdeaPad 5 14ABA7 (Type 82SE) with an AMD Ryzen 5 5625U processor and integrated AMD Radeon graphics. The laptop comes without any OS and I installed Debian 11 on it (current kernel: 5.10.0-19-amd64). Unfortunately there is no specific documentation for this hardware. The installation seemed to be successful, with the exception of the WiFi adapter, which was not working out of the box. Luckily I managed to solve that problem following this thread. Unfortunately, I soon realised that a number of things do not work:

- Suspension: if I push the suspend button, the system remains turned on (cursor on a black screen).
- Screen brightness: the screen seems to always be at maximum brightness and cannot be adjusted (pushing the increase/decrease buttons does not change it).
- Animation: I cannot see any animations. I suspect that hardware graphics acceleration is not working.
- Battery life: the battery life seems extremely short for a new laptop whose battery is supposed to last up to 11 hours. I suspect something is odd with the CPU governor or frequency.

I suspect that most of these problems are due to the Lenovo IdeaPad 5's hardware not yet being supported by the current Debian kernel. It is frustrating having a new laptop and not being able to run Linux on it. I'm not new to Linux, but this time I have too many problems at once and my knowledge is very limited. Please, if anybody can help with any of these issues, I would appreciate it very much. Thanks in advance for your time and kindness.
I have finally found a solution, thanks also to the suggestion of Aubs. I post it here in case it's of help for other people with a similar problem.

I tried updating to kernel 6.0 via backports as proposed by Aubs, and it solved almost all the problems: suspension, screen brightness, animation and battery life. The only remaining issue was that the WiFi adapter (whose problem was initially solved as explained in my original post) stopped working, because that fix was for kernel 5.10, not for kernel 6.0. Luckily, I finally found a solution for the WiFi adapter in combination with kernel 6.0, and now everything seems to work as expected. These are the steps I followed:

# 1. Edit the sources.list file (for example with vi) to include the bullseye-backports line:
vi /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye-backports main

# 2. Update the packages:
apt update

# 3. Locate the backported kernel files:
apt list -a linux-image-amd64 linux-headers-amd64

# 4. Install the backported kernel files and reboot:
apt -t bullseye-backports install linux-image-amd64 linux-headers-amd64
reboot

# 5. Turn off Secure Boot in the BIOS (if it's enabled; in my case it wasn't)

# 6. Clone the rtl8852be driver by HRex39 using git (or simply download it from the URL in the following command):
git clone https://github.com/HRex39/rtl8852be.git -b dev

# 7. Enter the directory, compile the driver and reboot:
cd rtl8852be
make -j8
make install
reboot

I could execute the steps above requiring an Internet connection through an old, pluggable USB WiFi adapter. In any case, if you don't have one, the required files can be downloaded on a machine with Internet access and then moved to the Lenovo IdeaPad 5. In my case the above procedure solved all the problems I was having with my Lenovo IdeaPad 5. Thank you very much to @Aubs for the suggestions, without which I could not have solved the problems. I think this solution will also work for Ubuntu and other Debian-based Linux distros.
Several problems installing Debian on Lenovo IdeaPad 5 (AMD Ryzen 5 and AMD Radeon)
When I plug in a USB disk, I get a message like this:

kernel: usb 2-10.2: new SuperSpeed USB device number 5 using xhci_hcd
kernel: usb-storage 2-10.2:1.0: USB Mass Storage device detected
kernel: scsi 10:0:0:0: Direct-Access     SanDisk  Ultra  1.00 PQ: 0 ANSI: 6
kernel: sd 10:0:0:0: [sdf] 30031872 512-byte logical blocks: (15.4 GB/14.3 GiB)
kernel: sdf: sdf1
kernel: sd 10:0:0:0: [sdf] Attached SCSI removable disk

When I disconnect the USB disk, I get a message which is much less helpful:

kernel: usb 2-10.2: USB disconnect, device number 5

Why does the kernel not tell me which disk was disconnected? Suppose I have several USB disks plugged in; I would like to know that I have disconnected disk sde. Can anything be done about this? I am using rsyslog as the logging daemon, on Debian 10.
This does not satisfy as an answer, yet I came across your question and here is my observation. The outputs below are taken from journalctl -fa.

Following this askubuntu topic I found out that the lines outputting [sdb] are actually due to the eject command or some equivalent of it.

Output of eject /dev/sdb:

Sep 05 14:30:54 knight kernel: sdb: detected capacity change from 60493824 to 0

Output of eject -t /dev/sdb:

Sep 05 14:30:59 knight kernel: sd 2:0:0:0: [sdb] 60493824 512-byte logical blocks: (31.0 GB/28.8 GiB)
Sep 05 14:30:59 knight kernel: sdb: detected capacity change from 0 to 60493824
Sep 05 14:30:59 knight kernel: sdb: sdb1 sdb2
Sep 05 14:30:59 knight kernel: sdb: sdb1 sdb2

What was done above is safely removing the disk (the same output appears if performed from the Thunar file manager as well) and then replugging it in software (rather than physically). When you physically yank the device, you are pulling the rug from under sd's feet: the removal is not acknowledged, so sd does not log it, whereas it does log a properly acknowledged removal. However, I don't know why sd does not log anything afterwards either. It seems plausible that it could state a missing disk, since somehow these lines were output:

Sep 05 14:31:24 knight kernel: usb 4-1: USB disconnect, device number 10
Sep 05 14:31:35 knight kernel: xhci_hcd 0000:00:10.0: Abort failed to stop command ring: -110
Sep 05 14:31:35 knight kernel: xhci_hcd 0000:00:10.0: Host halt failed, -110
Sep 05 14:31:35 knight kernel: xhci_hcd 0000:00:10.0: xHCI host controller not responding, assume dead
Sep 05 14:31:35 knight kernel: xhci_hcd 0000:00:10.0: HC died; cleaning up
Sep 05 14:31:35 knight kernel: xhci_hcd 0000:00:10.0: Timeout while waiting for setup device command
Sep 05 14:31:35 knight kernel: usb 4-1: device not accepting address 11, error -108
Sep 05 14:31:35 knight kernel: usb usb4-port1: couldn't allocate usb_device

But again, I am not knowledgeable enough on SCSI nor the kernel to answer the question at hand.
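If the goal is just to get the disk name into the log at unplug time, a possible workaround (a sketch, untested on the asker's exact setup) is a udev rule: even though the sd driver stays silent on a surprise removal, udev still emits a remove event for the block device, and that event carries the kernel name. Something like this, in a hypothetical file /etc/udev/rules.d/99-log-disk-remove.rules:

```
# Log the kernel name (e.g. sdf) of any whole disk that goes away.
ACTION=="remove", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
    RUN+="/usr/bin/logger disk removed: %k"
```

Reload the rules with udevadm control --reload; you can also watch the raw events first with udevadm monitor --udev --subsystem-match=block --property to confirm what the remove event contains on your system.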
kernel: USB disconnect message
https://github.com/torvalds/linux/blob/bf272460d744112bacd4c4d562592decbf0edf64/arch/x86/kernel/cpu/mce/core.c#L1543

    if ((m.cs & 3) == 3) {
        /* If this triggers there is no way to recover. Die hard. */
        BUG_ON(!on_thread_stack() || !user_mode(regs));

As above, what is "die hard"? Is there a "die soft"? Will the rest of the code continue running after BUG_ON() executes?
The "die hard" means killing the thread, as opposed to letting it carry on running, possibly by returning a failure result that calling code can then process to exit or continue gracefully.

From the BUG FAQ that @don-aman referred to,

BUG_ON( condition );

is the same as

if ( condition ) BUG();

So if the condition is false then the BUG_ON does not get tripped, and the code can continue, which is why you can see more code following in that if branch in core.c. Also, you can test BUG() yourself:

>>cat h.c
#include <stdio.h>
#define BUG() __asm__ __volatile__("ud2\n")
int main()
{
    printf("hi\n");
    BUG();
    printf("ho\n");
}
>>cc -o h h.c
>>./h
hi
Illegal instruction (core dumped)
>>
What is die hard in linux? is there a die soft?
After changing a disk's size in VMware (for example increasing it by 10 GB), the next step is to rescan it in Linux so that the kernel picks up the size change. For this we use this command:

echo 1 > /sys/class/block/sda/device/rescan

In our scripts, we rescan every couple of minutes from a cron job, in order to check whether we need to resize the relevant disks. I want to know if there is some way to detect that a disk's size has changed without rescanning, and only rescan if the size really did change. So far we have not found a way to verify whether the disk size changed without rescanning, but we hope we can get answers here. The reason for my question is that we do not feel comfortable rescanning every couple of minutes, even though this activity isn't risky.

Reference: https://kerneltalks.com/disk-management/how-to-rescan-disk-in-linux-after-extending-vmware-disk/
The way to identify that a block device has been resized is to rescan it. That’s it. There’s no need to find another way of rescanning the block device in order to decide whether to rescan it. In a virtualised environment it should be safe to run this every two minutes; there will be a very slight performance hit whenever a rescan is run, because the rescan acquires interrupt locks, but rescanning a virtual block device is very fast. If you’re uncomfortable with rescans every two minutes, you can reduce the frequency — do you really need to react to disk resizes within two minutes? (Note that some paravirtualised storage drivers automatically update the size seen by the guest, so rescanning isn’t necessary; this is apparently not the case for VMware.) You may want to look at open-vm-tools which supposedly allows workloads to be triggered inside guests from the host: that way, you could resize the disk externally, and trigger a job inside the guest to rescan and resize the volumes. I’ve never done this so I don’t know if it’s actually possible.
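If the concern is the work triggered after the rescan rather than the rescan itself, the cron job can pair the rescan with a size comparison so the resize logic only runs when something actually changed. A minimal sketch as a shell function; the device name, the state-file location and the SYSFS_BLOCK/STATE_FILE override variables are my own assumptions, introduced so the logic can be exercised outside a VM:

```shell
#!/bin/sh
# check_resize DEV: rescan DEV, then report only if its size changed
# since the last invocation. Run as root from cron.
check_resize() {
    dev=$1
    sysfs=${SYSFS_BLOCK:-/sys/class/block}     # overridable for testing
    state=${STATE_FILE:-/var/run/${dev}.size}  # where the last seen size is kept

    echo 1 > "${sysfs}/${dev}/device/rescan"
    new_size=$(cat "${sysfs}/${dev}/size")     # always in 512-byte sectors
    old_size=$(cat "$state" 2>/dev/null || echo 0)

    if [ "$new_size" != "$old_size" ]; then
        echo "$new_size" > "$state"
        echo "size changed: ${old_size} -> ${new_size} sectors"
        # ... grow the partition / LVM PV / filesystem here ...
    fi
}
```

Note that /sys/class/block/&lt;dev&gt;/size is reported in 512-byte sectors regardless of the device's logical block size.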
On Linux, is it possible to know if disk size was changed without rescanning the disk?
I'm trying to recompile the kernel (following the official Arch Linux guide: https://wiki.archlinux.org/title/Kernel/Traditional_compilation) but every time I get compilation errors:

In file included from help.c:12:
In function ‘xrealloc’,
    inlined from ‘add_cmdname’ at help.c:24:2:
subcmd-util.h:56:23: error: pointer may be used after ‘realloc’ [-Werror=use-after-free]
   56 |                 ret = realloc(ptr, size);
      |                       ^~~~~~~~~~~~~~~~~~
subcmd-util.h:52:21: note: call to ‘realloc’ here
   52 |         void *ret = realloc(ptr, size);
      |                     ^~~~~~~~~~~~~~~~~~
subcmd-util.h:58:31: error: pointer may be used after ‘realloc’ [-Werror=use-after-free]
   58 |                         ret = realloc(ptr, 1);
      |                               ^~~~~~~~~~~~~~~
subcmd-util.h:52:21: note: call to ‘realloc’ here
   52 |         void *ret = realloc(ptr, size);
      |                     ^~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[4]: *** [/home/jenusi/Downloads/linux-5.15/tools/build/Makefile.build:97: /home/jenusi/Downloads/linux-5.15/tools/objtool/help.o] Error 1
make[3]: *** [Makefile:59: /home/jenusi/Downloads/linux-5.15/tools/objtool/libsubcmd-in.o] Error 2
make[2]: *** [Makefile:63: /home/jenusi/Downloads/linux-5.15/tools/objtool/libsubcmd.a] Error 2
make[1]: *** [Makefile:69: objtool] Error 2
make: *** [Makefile:1371: tools/objtool] Error 2

Kernel: 5.15.54, GCC: 12.1.0
First of all, your make is not "crashing", it's exiting due to GCC errors. GCC 12.1 is ill-suited to compiling certain kernels because it enables new, stricter code-quality checks, which means that various -Werror options ("treat warnings as errors") can produce errors that were not present with earlier versions of the compiler. You have several options:

- Use an older GCC version, e.g. GCC 11.4
- With GCC 12.1: edit the Makefile(s) and remove -Werror=use-after-free
- Wait for kernel patches (which may or may not come) to fix these errors
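If an older compiler is installed alongside (the binary name gcc-11 is an assumption; package and binary names vary by distro), it can be selected just for this build without touching the system compiler. Note that the failing objtool build is a host tool, so HOSTCC matters as much as CC:

```
# Use GCC 11 for both target code (CC) and host tools such as objtool (HOSTCC).
make CC=gcc-11 HOSTCC=gcc-11 olddefconfig
make CC=gcc-11 HOSTCC=gcc-11 -j"$(nproc)"
```

CC and HOSTCC are standard kbuild variables, so no Makefile edits are needed for this route.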
Linux kernel 5.15.54 compilation errors with GCC 12.1
I'm currently learning about terminals, pseudo-terminals, etc., and I'm curious. Today we all use terminal emulators such as xterm or gnome-terminal, built on pseudo-terminals, as part of the Linux GUI, and less often the virtual consoles which are part of the kernel. In this area I pretty much know how things work, what happens when, and which players are involved. What if I want to log in to my Linux machine using an external device which simulates the old TTYs (like the famous VT100)? I can use another Linux machine for this, or a Raspberry Pi, Arduino, whatever. For the sake of argument, say I want to use a USB-to-serial converter. How is this done?
On the host side you need to run something which will listen on the serial port for a connection then hand off to /bin/login when a connection is negotiated. That something is typically a program named getty. On the device where the screen and the keyboard are you need some sort of terminal emulator. Minicom has been the most popular choice on Linux for a number of years.
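Concretely, with a USB-to-serial converter showing up as /dev/ttyUSB0 on both machines (the device name and the 115200 baud rate are assumptions; match them on both ends of the cable), the setup can look like this:

```
# Host side: agetty listens on the serial line and hands off to /bin/login.
# -L marks a local line (no modem carrier-detect); the argument order is
# port, baud rate, terminal type.
/sbin/agetty -L ttyUSB0 115200 vt100

# On a systemd host, the same thing supervised as a service:
systemctl enable --now serial-getty@ttyUSB0.service

# Client side: attach a terminal emulator to the other end of the cable.
minicom -D /dev/ttyUSB0 -b 115200
```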
Open a terminal from an external device?
I'm attempting to build the Linux Kernel (version 5.16). I know that there's a compile-time option to randomize various structure fields (indicated by macros like randomized_struct_fields_start). However, I'm looking through make menuconfig and I can't find the right option.
The options you need to enable are in “General architecture-dependent options”, but they depend on GCC plugins. For the latter to work, $(gcc -print-file-name=plugin)/include/plugin-version.h must exist; on Debian for example, that means you need to install gcc-10-plugin-dev. Once that’s done, enable “GCC plugins”, then “Randomize layout of sensitive kernel structures”.
Build Linux Kernel with randomized struct fields
My goal is to make my kernel-headers have the same version as my kernel and kernel-devel. Currently I have:

$ sudo yum install kernel-headers
Last metadata expiration check: 1:55:19 ago ...
Package kernel-headers-5.15.4-200.fc35.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

$ yum --showduplicate list kernel-headers
Last metadata expiration check: 0:39:37 ago ...
Installed Packages
kernel-headers.x86_64    5.15.4-200.fc35    @updates
Available Packages
kernel-headers.i686      5.14.9-300.fc35    fedora
kernel-headers.x86_64    5.14.9-300.fc35    fedora
kernel-headers.x86_64    5.15.4-200.fc35    updates

$ uname -a
Linux fedora 5.15.13-200.fc35.x86_64

I want to have kernel-headers set to 5.15.13-200.fc35.x86_64. I have tried everything from:
[1] https://askubuntu.com/questions/1045451/linux-kernel-header-files-to-match-the-current-kernel
[2] yum installs kernel-devel different from my kernel version
[3] kernel headers and kernel devel

My problem is that I am only able to install kernel-headers versions that appear in the available packages list. What I tried (and what failed) was:

$ sudo yum install kernel-headers-generic
Last metadata expiration check: 1:59:50 ago ...
No match for argument: kernel-headers-generic
Error: Unable to find a match: kernel-headers-generic

$ sudo yum install kernel-headers-5.15.13-200.fc35.x86_64
Last metadata expiration check: 2:00:09 ago ...
No match for argument: kernel-headers-5.15.13-200.fc35.x86_64
Error: Unable to find a match: kernel-headers-5.15.13-200.fc35.x86_64

$ sudo yum install kernel-headers-generic-5.15.13-200.fc35.x86_64
Last metadata expiration check: 2:00:15 ago ...
No match for argument: kernel-headers-generic-5.15.13-200.fc35.x86_64
Error: Unable to find a match: kernel-headers-generic-5.15.13-200.fc35.x86_64

So the question is how to install kernel-headers 5.15.13-200.fc35.x86_64. And since the "usual" methods did not help, I am not sure if it is relevant/equivalent to ask how to expand the list of Available Packages shown with --showduplicates.
According to an answer to a similar question, this package isn't released with every kernel version; yum isn't omitting any packages here. I'd recommend just installing the matching kernel-devel instead, as recommended in the first linked post.
Cannot install any specifc version of `kernel-headers` on Fedora
I installed the newer 5.15.0 kernel onto my Linux Mint 19.3 (Ubuntu 18.04) using https://github.com/pimlie/ubuntu-mainline-kernel.sh. Everything worked fine except the kernel headers, which gave me an installation error because they expect a libc version >= 2.34, whereas Ubuntu 18.04 ships with 2.27. Is it possible to solve this issue without compromising system stability? Thank you so much!
I solved the issue by compiling the new kernel myself.
Install kernel 5.15.0 on Ubuntu 18.04
After downloading the Ubuntu kernel source from kernel.ubuntu.com, I tried to set the configuration for arm64. I ran the command below:

LANG=C fakeroot debian/rules ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- editconfigs

and selected to change the config for arm64. The config menu appears; below is the screen when I searched for STACKPROTECT (by typing /STACKPROTECT). I wanted to set STACKPROTECTOR_PER_TASK to 'y', but for that I have to set CC_HAVE_STACKPROTECTOR_SYSREG. This seems to be related to the target processor, so I thought I'd have to select the arm64 processor (generation). Where can I set it? I couldn't find it in General Setup.
Options that contain HAVE_ are generally things that depend on your build environment, not options per se. You can see this one being defined in arch/arm64/Kconfig:

config CC_HAVE_STACKPROTECTOR_SYSREG
	def_bool $(cc-option,-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0)

So you can test whether your compiler supports it. For example, my x86_64 gcc obviously doesn't support it, but the aarch64 one (which would be used for an arm64 kernel build!) does:

$ echo "int main() { return 0; }" | gcc -x c - -c -o /dev/null -mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0
gcc: error: unrecognized argument in option ‘-mstack-protector-guard=sysreg’
gcc: note: valid arguments to ‘-mstack-protector-guard=’ are: global tls
$ echo "int main() { return 0; }" | aarch64-linux-gnu-gcc -x c - -c -o /dev/null -mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0
$

Try that command and see why it fails; you might just need a newer GCC. People online complain that sp_el0 is not accepted by GCC 8, so I believe support was introduced around GCC 9 or 10.
where can I find CC_HAVE_STACKPROTECTOR_SYSREG
TL;DR: The kernel module sht3x (https://www.kernel.org/doc/html/latest/hwmon/sht3x.html) seems to be missing in a standard Debian installation. I need it in order to read an external sensor. How can I install this kernel module?

The whole story

I am trying to connect an SHT31 temperature/humidity sensor to my Debian notebook. In order to do so, I flashed an ATtiny85 microcontroller to act as an i2c-tiny-usb interface. I got this part working: lsusb lists the device as

Bus 003 Device 003: ID 0403:c631 Future Technology Devices International, Ltd i2c-tiny-usb interface

and I also get a promising response from i2cdetect:

$ sudo i2cdetect -l
i2c-3	i2c	i915 gmbus dpc	I2C adapter
i2c-1	i2c	i915 gmbus vga	I2C adapter
i2c-8	i2c	i2c-tiny-usb at bus 001 device 017	I2C adapter
i2c-6	i2c	AUX B/port B	I2C adapter
i2c-4	i2c	i915 gmbus dpb	I2C adapter
i2c-2	i2c	i915 gmbus panel	I2C adapter
i2c-0	i2c	i915 gmbus ssc	I2C adapter
i2c-7	i2c	AUX D/port D	I2C adapter
i2c-5	i2c	i915 gmbus dpd	I2C adapter

$ sudo i2cdetect 8
WARNING! This program can confuse your I2C bus, cause data loss and worse!
I will probe file /dev/i2c-8.
I will probe address range 0x08-0x77.
Continue? [Y/n] Y
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- 45 -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

However, I cannot read sensor data, because the kernel module sht3x is not installed on my (standard Debian) system and is not listed in lsmod.

Question

How can I install and make use of the sht3x kernel module on my Debian notebook?
I’m assuming you’re running Debian 10, but the instructions for later versions are similar. The module you’re after is supported by the kernel version used in Debian 10, but it is not enabled; let’s fix that.

Install the kernel source for the default version in your release:

sudo apt install linux-source

Extract it:

cd /usr/src
tar xf linux-source-*.tar.xz

(assuming there’s only a single linux-source tarball available, which will be the case unless you’ve installed multiple linux-source packages).

Copy the current kernel configuration:

cd linux-source-*/
cp /boot/config-$(uname -r) .config

Enable the configuration for the sht3x module:

make menuconfig

(this might complain about missing tools, such as a compiler; sudo apt install build-essential should fix things). To find which option needs to be enabled, and where it is, press / and enter “SHT3X”. This gives a number of pieces of information: the option is called SENSORS_SHT3X; it is listed under “Device Drivers”, “Hardware Monitoring Support”; the options it depends on are already enabled; but it is disabled.

Press Enter to exit the search results, go down to “Device Drivers”, press Enter, then go down to “Hardware Monitoring Support”, press Enter again, find the “SHT3x” option, and press M to enable it as a module. Press Tab until “Save” is highlighted, then Enter, confirm the name of the file to write (.config), and select “Exit” several times until you’re back at the prompt.

Finally, build the module:

make drivers/hwmon/sht3x.ko

This might require additional dependencies, at least libelf-dev and libssl-dev (sudo apt install libelf-dev libssl-dev). If all goes well, you’ll end up with a drivers/hwmon/sht3x.ko file which you can load as a module.
hwmon: add missing kernel module
I have been testing binfmt_misc feature of Linux on Debian 10, and have found that setting the flags to "OC", to use the credentials of the binary instead of interpreter, causes execution to fail silently. In the POC below, /tmp/test.sh is the interpreter, while qux.go is the binary. Why is /tmp/test.sh executed successfully without flags, when it fails silently with flags "OC"? POC: $ touch qux.go $ chmod +x qux.go $ cat <<EOF >/tmp/test.sh > #!/bin/sh > echo Golang > EOF $ chmod +x /tmp/test.sh $ echo ':golang:E::go::/tmp/test.sh:' | sudo tee /proc/sys/fs/binfmt_misc/register :golang:E::go::/tmp/test.sh: $ ./qux.go Golang $ echo -1 | sudo tee /proc/sys/fs/binfmt_misc/golang -1 $ echo ':golang:E::go::/tmp/test.sh:OC' | sudo tee /proc/sys/fs/binfmt_misc/register :golang:E::go::/tmp/test.sh:OC $ ./qux.go # no output Also: mount | grep binfmt_misc systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=658) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime) Bonus: Some resources claim that binfmt_misc could be used for container-to-host escapes. However, as I see it, the interpreter path is evaluated within the chroot'd filesystem of the container, and execution of the interpreter is happening within the container, i.e. ls -la / shows the container root (not the host root). Resource: https://www.kernel.org/doc/html/latest/admin-guide/binfmt-misc.html
You’re being tripped up by two features. The first is that, when exec fails, the shell will look at the contents of the file you’re attempting to run, and if it looks like a shell script, it will interpret it itself. An empty file looks like a shell script. You can see this by running strace -f ./qux.go, which shows the failing exec, and by changing qux.go: $ echo echo Failed Golang > qux.go $ ./qux.go Failed Golang The other feature is that the O flag doesn’t work with cascading interpreters: in your case, qux.go needs an interpreter, but that interpreter itself needs an interpreter, /bin/sh, and there are thus two files to interpret, test.sh and qux.go — but only one final executable file can be handled in O mode. The following works: $ cat <<EOF > /tmp/test.c #include <stdio.h> int main(int argc, char **argv) { puts("Golang"); return 0; } EOF $ make /tmp/test cc /tmp/test.c -o /tmp/test $ echo ':golang:E::go::/tmp/test:OC' | sudo tee /proc/sys/fs/binfmt_misc/register :golang:E::go::/tmp/test:OC $ ./qux.go Golang
Why doesn't binfmt_misc work with flags "OC", when it works without any flags?
1,358,808,462,000
I've already made a bootable Linux-based OS using the kernel and busybox only, and testing it in the QEMU emulator was successful. The next step I need is to install dpkg and APT (the Debian package manager) on my custom Linux. How can I do that? Should I just take the Debian package manager source code, compile it, and then port it to the root filesystem (rfs) of my system? I don't know what to do exactly. (Installing the Debian package manager is the only way to achieve my goal because my professor instructed us to do it that way, so other approaches are not useful to me.)
Before I continue, let me disclaim: if your professor is using apt, then she is using a Debian-based system. I recommend using the same distro instead of your own "from-scratch" system. If you are using a from-scratch system because you want a custom kernel, then you can install that kernel on a Debian-based system. If you do use apt, you'll end up downloading packages from somewhere. Where do you want that to be? Ubuntu's archive? Debian's archive? Elementary's archive? You'll also need to know which version of things to download. Debian stretch has many incompatibilities with Debian buster and Ubuntu bionic. When you use apt to install packages, you'll run into dependency problems if you're not installing all software from the same version of the same distro. Therefore, if you're installing everything from a distro anyway, just start with that distro and replace the kernel if that's something you're trying to do. On a Debian-based system, everything is tracked by dpkg, including dpkg itself and very low-level dependencies. Most things depend on libc, and you probably already have it installed. That will cause a conflict: since it was installed without dpkg, dpkg can't know that it exists on your system and can't verify that the version of libc you have is compatible with the package you are trying to install. I think the simplest way is to download the dpkg binary package (dpkg_<version>_<arch>.deb) for your architecture and extract it manually. Certainly get that working before you make any attempt with apt. If you're using a Debian-based host, you can apt download dpkg; otherwise, go get the binaries from the archive directly. You'll need to manually perform on dpkg*.deb what dpkg normally does (the only maintainer script for dpkg is postrm, so that's not going to be relevant).
1. Download dpkg_<version>_<arch>.deb
2. Extract the contents with ar -x *.deb
3. Extract control.tar.xz with tar -xf
4. Inspect the control file and check that all dependencies are installed
5. Extract data.tar.xz to /
After that, you should be able to run dpkg -i *.deb on any deb package, assuming you have first installed its dependencies with dpkg. The general advice is: if you want to use dpkg or apt, use a distribution that comes with it. There are too many things that can go wrong if you don't.
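The extraction steps above can be exercised end-to-end with standard tools. The sketch below first builds a tiny synthetic .deb, purely so the commands have something to operate on; with a real dpkg_<version>_<arch>.deb from the Debian archive you would skip that part and extract data.tar.xz to / instead of a staging directory:

```shell
set -e
work=$(mktemp -d); cd "$work"

# Build a stand-in .deb (a real one comes from the Debian archive).
echo '2.0' > debian-binary
mkdir -p ctl dat/usr/bin
printf 'Package: demo\nDepends: libc6\n' > ctl/control
echo 'hello' > dat/usr/bin/demo
tar -cJf control.tar.xz -C ctl control
tar -cJf data.tar.xz -C dat usr
ar rc demo_1.0_amd64.deb debian-binary control.tar.xz data.tar.xz
rm -r ctl dat control.tar.xz data.tar.xz

# The actual procedure: extract members, inspect control, unpack data.
ar -x demo_1.0_amd64.deb        # steps 1-2: yields debian-binary, control.tar.xz, data.tar.xz
tar -xf control.tar.xz          # step 3
cat control                     # step 4: check the Depends: line
mkdir -p staging
tar -xf data.tar.xz -C staging  # step 5 (on the real target, -C /)
```

The only difference on a real system is the final tar target and the need for root privileges when unpacking into /.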
How do I install the Debian package manager (dpkg and apt) in my custom Linux distribution? [closed]
1,358,808,462,000
A few months ago I installed Debian 10 on my laptop. I have already managed to use it regularly for my daily activities, so I am starting to customize my settings, and I started by validating the drivers that are installed for each component of my laptop. I have a Dell Inspiron 15-3567 laptop. According to the details of the specifications manual, the laptop has a 7th generation Intel Core i3 processor. I validated this through the command grep 'vendor_id' /proc/cpuinfo ; grep 'model name' /proc/cpuinfo ; grep 'cpu MHz' /proc/cpuinfo, obtaining the following information: vendor_id : GenuineIntel vendor_id : GenuineIntel vendor_id : GenuineIntel vendor_id : GenuineIntel model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz cpu MHz : 600.002 cpu MHz : 600.045 cpu MHz : 600.082 cpu MHz : 600.004 Then I used the lspci command to see the PCI controller that the kernel had associated with the processor, finding the following: diego@computer:~$ lspci -v 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers (rev 03) Subsystem: Dell Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers Flags: bus master, fast devsel, latency 0 Capabilities: <access denied> Kernel driver in use: skl_uncore 00:02.0 VGA compatible controller: Intel Corporation Device 5921 (rev 06) (prog-if 00 [VGA controller]) Subsystem: Dell Device 078b Flags: bus master, fast devsel, latency 0, IRQ 127 Memory at d0000000 (64-bit, non-prefetchable) [size=16M] Memory at c0000000 (64-bit, prefetchable) [size=256M] I/O ports at f000 [size=64] [virtual] Expansion ROM at 000c0000 [disabled] [size=128K] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:04.0 Signal processing controller: Intel Corporation Skylake Processor Thermal Subsystem (rev 03) Subsystem: Dell Xeon
E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem Flags: fast devsel, IRQ 16 Memory at d1320000 (64-bit, non-prefetchable) [size=32K] Capabilities: <access denied> Kernel driver in use: proc_thermal Kernel modules: processor_thermal_device 00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21) (prog-if 30 [XHCI]) Subsystem: Dell Sunrise Point-LP USB 3.0 xHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 124 Memory at d1310000 (64-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: xhci_hcd Kernel modules: xhci_pci The first detail I observe is that the processor is recognized as an "Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge", which does not agree with what was obtained from the command grep 'model name' /proc/cpuinfo. My questions are about the procedures for the following: 1. How to find a controller associated with the type of processor my laptop really has (7th generation Core i3). 2. How to compare it with the driver that is currently installed. 3. If the driver I find is better, how should I change the driver? So far I have found tutorials where they tell me how to know the drivers that are installed, but not one where they tell me how I could change or optimize them to make the laptop more efficient. Thanks for the answers.
I believe the 'Host Bridge' that lspci is referring to is the PCI host bridge that connects the CPU to the PCI bus. I have a 3rd generation Core i5 and my host bridge description says: 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09) I think what it means is that the host bridge is designed for use with the Xeon E3-1200, but it also happens to be compatible with the i3/i5, which is presumably why it is being used on the motherboard. So, I don't think you have the 'wrong' PCI controller. It is a compatible PCI controller that just happens to be labelled with a description that refers to a different CPU. Also, I would think that description information from lspci is most likely coming directly from the controller on the motherboard itself (i.e. a built-in chip), rather than from a driver. You are not going to be able to change that, as it is part of the motherboard. Also, it is unlikely you would see any noticeable performance benefit from trying to optimize the driver for the PCI bus. Are you having any problems that suggest the PCI bus is not working correctly?
How to update the driver used by the Debian Linux kernel for my processor?
1,358,808,462,000
Earlier today I was upgrading my computer from Ubuntu 19.04 to 19.10. The update temporarily broke my computer, booting into GRUB safe mode. With the accepted answer from grub error: you need to load kernel first I was able to load the kernel, specifically the following snippet: insmod linux linux /vmlinuz root=/dev/sda2 initrd /initrd.img boot That allowed me to load into my OS, and I used the solution proposed in Login loop on Ubuntu 19.10 to help me finish updating. Now, my computer works fine, but I still need to input the code to which I referred earlier. How can I have it automatically boot into my Ubuntu kernel? EDIT: Thankfully solved by the comments below.
I solved the problem using Freddy's comments, specifically sudo grub-install /dev/sda and then sudo update-grub.
Ubuntu 19.10 boots into GRUB and I have to manually load the kernel. How do I have it load automatically?
1,358,808,462,000
I know that the Linux kernel keeps track of the status of each physical page frame with the help of a C structure struct page. The structures for all page frames form an array mem_map[] of type struct page, so that you can easily retrieve the structure for a specific page frame by using the page frame number as the index. On an x86 32-bit system with 4 GiB of physical memory, the array is made up of 2^20 entries (because of a page size of 4096 bytes), each of size 32 bytes, so the complete array consumes 32 MiB. Now, when I transfer this to an x86 64-bit system with, for example, 4 TiB of physical memory, the array is made up of 2^30 entries, each of size 32 bytes, so the array consumes 32 GiB. Of course, with more physical memory the array could increase to a maximum of 32 TiB if the platform-specific limit of 4 PiB (2^52 bytes on the amd64 architecture) is reached. Does the kernel really have to initialize that big amount of data at startup, or is there something I miss?
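The sizing arithmetic in the question can be checked directly with shell arithmetic, assuming 4 KiB pages and a 32-byte struct page as stated above:

```shell
page=4096
entry=32
# 4 GiB of RAM: 2^20 page frames, 32 MiB of struct page entries
frames=$(( (4 << 30) / page ))
echo "$frames frames -> $(( frames * entry >> 20 )) MiB"
# 4 TiB of RAM: 2^30 page frames, 32 GiB of struct page entries
frames=$(( (4 << 40) / page ))
echo "$frames frames -> $(( frames * entry >> 30 )) GiB"
```

This prints 1048576 frames -> 32 MiB and 1073741824 frames -> 32 GiB, matching the figures in the question.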
There are three different physical memory models used by the Linux kernel: the flat model which you describe, the discontiguous memory model, and the sparse memory model. Most (if not all) 64-bit architectures supported by Linux use the latter by default. It relies on memory sections, and a virtually allocated memory map on most architectures. The entries required to represent the physical memory actually present are allocated and initialised as necessary. On large systems, this initialisation can even be deferred so that it doesn’t delay the boot.
How does the Linux kernel initialize the mem_map array on 64 bit systems?
1,358,808,462,000
I need to secure erase SSD disk with hdparm on my server: hdparm --user-master u --security-erase NULL /dev/sda but the disk is currently "frozen", as reported by hdparm: hdparm -I /dev/sda | grep frozen All instructions suggest I should put my server to sleep to unfreeze. But my kernel does not have suspend compiled in. How can I unfreeze the SSD ?
The problem could well be that your BIOS freezes the disk when booting; that's why suspend/resume helps, because it power cycles the drive without the BIOS getting its hands on it again (see this convo/comment over at the Ubuntu Forums). An alternative to suspending/resuming would be (if your hardware and BIOS allow it) to configure the port the disk is attached to as AHCI, i.e. make it hot-pluggable, and then unplug and after a while replug the drive (when nothing on that disk is in use, of course).
unfreeze SSD disk when kernel does not support "suspend"
1,358,808,462,000
I am working on a custom-made embedded board. It's currently running a 3.10 kernel, and I am trying to upgrade from 3.10 to 4.19. Based on the kernel config options in 3.10, I am enabling/disabling default kernel options in 4.19. While doing that, I must have messed something up, as I am getting this: [ 0.000000] Memory: 433580K/458752K available (4837K kernel code, 307K rwdata, 1136K rodata, 348K init, 165K bss, 25172K reserved, 0K cma-reserved) [ 0.000000] Virtual kernel memory layout: [ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB) [ 0.000000] fixmap : 0xffc00000 - 0xfff00000 (3072 kB) [ 0.000000] vmalloc : 0x9c800000 - 0xff800000 (1584 MB) [ 0.000000] lowmem : 0x80000000 - 0x9c000000 ( 448 MB) [ 0.000000] modules : 0x7f000000 - 0x80000000 ( 16 MB) [ 0.000000] .text : 0x(ptrval) - 0x(ptrval) (4839 kB) [ 0.000000] .init : 0x(ptrval) - 0x(ptrval) ( 348 kB) [ 0.000000] .data : 0x(ptrval) - 0x(ptrval) ( 308 kB) [ 0.000000] .bss : 0x(ptrval) - 0x(ptrval) ( 166 kB) I want to understand which kernel config option is responsible for setting those addresses. How should I debug this? Any pointers / starting points would be highly appreciated.
The values are there, they're just not printed. Linux has updated the print functions to not expose kernel addresses. See the "Plain Pointers" section in the kernel printk documentation: The kernel will print (ptrval) until it gathers enough entropy. This can be disabled with the debug_boot_weak_hash kernel boot parameter, but you’ll still get a hash, not the real pointer value.
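On an embedded board, the boot parameter mentioned above is typically appended to the kernel command line by the bootloader. As an illustration only, if your board uses U-Boot (an assumption; adjust to your actual boot flow), it would look like this at the U-Boot prompt:

```
=> setenv bootargs "${bootargs} debug_boot_weak_hash"
=> boot
```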
Linux kernel 4.19.82 - Virtual Kernel Memory Layout - .text, .init, .data, .bss - unable to set the addresses
1,358,808,462,000
I am reading "Linux Kernel Development" by Robert Love and he wrote that the system call executes in process context and is capable of sleeping. The current pointer will refer to the current task, which is the process that issued the system call. What I don't understand is if a system call can sleep, how does execution return to the system call? If it runs in process context, it could be awakened and re-scheduled, but user processes cannot execute in kernel space. Does the kernel create a task/process to execute the system call when it is called? I know the system call from user space causes a trap to switch to kernel mode and execute the corresponding system call, but I was under the assumption before reading this that system calls couldn't sleep and be rescheduled, but I understand why they should be able to.
The key part is this: user processes cannot execute in kernel space This is incorrect. When Robert Love writes that the system call executes in process context, basically it means that the process runs in kernel mode to run the system call. When the kernel is handling a system call, it’s still running in a process, the calling process. If it decides to re-schedule, the process is suspended, and execution continues in whatever other process is scheduled instead. When the suspended process resumes, it continues execution in the system call, in kernel mode. The big change in 2.6 with regards to scheduling was that previously, processes could only be interrupted in user mode; with a pre-emptible kernel, processes can also be interrupted in kernel mode (except when they disable pre-emption, which is done around critical sections of kernel code).
If the kernel can sleep when handling a system call, how does execution return to the system call?
1,358,808,462,000
I have a robot on a local server that has the task of creating rules a number of times per day on a remote server. The purpose of the robot is to calculate the traffic consumed by each port. Access to the remote server is via SSH and the robot is written in Python. The operating system of both servers is Ubuntu. The rules that the robot creates: sudo iptables -I OUTPUT -p tcp --sport port_number -j DROP sudo iptables -I OUTPUT -p tcp --sport port_number -m quota --quota 500000000 -j ACCEPT iptables -nvL OUTPUT --line-numbers|grep "\<tcp spt:port\>" At that moment I check the remote server: iptables -nvL OUTPUT --line-numbers OK, the rules are created. But after a while (2 or 3 hours) the rules are deleted. All the rules that the robot is allowed to make are listed above; none of them clears or flushes the OUTPUT chain. Clearing the rules causes the script to run poorly, and it cannot control the bandwidth of each port. Now I realize the robot executes the command below every few hours: firewall-cmd --reload This command removes the rules. How can I reload the firewall and prevent the rules from being deleted?
The fact that running firewall-cmd --reload removes the manually added rules indicates your system is running firewalld. It is a management system for the iptables firewall (and its future replacement, nftables). Since your robot is manipulating iptables directly instead of telling firewalld what it wants done, firewalld will override the directly-added settings and reset all the iptables settings according to what firewalld's configuration says they should be. You should replace any iptables commands that make changes to firewall settings with the equivalent firewall-cmd commands. Viewing the existing firewall settings with iptables -nvL can probably be kept as-is. The line iptables -nvL OUTPUT --line-numbers|grep "\<tcp spt:port\>" will not create any rules; it just displays the contents of the OUTPUT chain with line numbers included, keeping only the lines that contain the string "tcp spt:port". sudo iptables -I OUTPUT -p tcp --sport port_number -j DROP sudo iptables -I OUTPUT -p tcp --sport port_number -m quota --quota 500000000 -j ACCEPT Since both of these commands prepend the new rule to any existing rules, the result of these two commands will be two rules in this order: 1.) on output, any IPv4 TCP traffic originating from port_number will have a quota of 500000000 applied to it, and accepted only if the quota allows it; 2.) if the quota did not cause the previously-mentioned traffic to be accepted, it will be dropped. To replicate this exactly with firewall-cmd, it seems you'll have to use direct rules: firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --sport port_number -m quota --quota 500000000 -j ACCEPT firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -p tcp --sport port_number -j DROP firewall-cmd --reload The first two commands will record the new rules to the permanent firewalld configuration, and the third command makes the new configuration effective.
Note that you'll have to use the priority numbers to specify the intended ordering of direct rules, otherwise the proper ordering will not be guaranteed. Also, your bot cannot just keep inserting new pairs of rules to the beginning of the chain, making the chain longer and longer. I'll have to ask: what is the purpose of the periodic firewall-cmd --reload executed by your bot? It may be resetting the quota counter, as it causes all the rules to be reloaded. (I guess this may be precisely why your bot is doing this, but if you need the quota counter values for e.g. statistics, make sure the old values are read and stored somewhere before this command is executed.)
Why the rules I create in the iptables OUTPUT chain are deleted after a while?
1,358,808,462,000
I'm following this answer, trying to generate some major page faults with mmap: #include <fcntl.h> #include <stdio.h> #include <sys/mman.h> #include <sys/stat.h> int main(int argc, char ** argv) { int fd = open(argv[1], O_RDONLY); struct stat stats; fstat(fd, &stats); posix_fadvise(fd, 0, stats.st_size, POSIX_FADV_DONTNEED); char * map = (char *) mmap(NULL, stats.st_size, PROT_READ, MAP_SHARED, fd, 0); if (map == MAP_FAILED) { perror("Failed to mmap"); return 1; } int result = 0; int i; for (i = 0; i < stats.st_size; i++) { result += map[i]; } munmap(map, stats.st_size); return result; } I tried to map a 1.6G file then read but only 1 major page fault occurred. Major (requiring I/O) page faults: 1 Minor (reclaiming a frame) page faults: 38139 When I read data randomly by // hopefully this won't trigger extra page faults unsigned int idx = 0; for (i = 0; i < stats.st_size; i++) { result += map[idx % stats.st_size]; idx += i; } the page faults surged to 16415 Major (requiring I/O) page faults: 16415 Minor (reclaiming a frame) page faults: 37665 Is there something like prefetching in kernel to preload mmap data? How can I tell this by /usr/bin/time or perf? I'm using gcc 6.5.0 and Ubuntu 18.04 with 4.15.0-54-generic.
Yes, the kernel does readahead by default (what you called prefetching), see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/filemap.c?h=v5.5#n2476 You can disable readahead on this memory region by calling posix_madvise() after mmap() with the POSIX_MADV_RANDOM advice.
Weird major page fault number when reading sequentially / randomly in mmap region
1,358,808,462,000
I have built the Linux distro Boot2Qt from source with the Yocto tools for the Colibri iMX6ULL board, which has the integrated wifi chip Marvell W8997-M1216. I installed the whole Linux firmware stack and I think also the correct kernel modules for the wifi chip. There is no mlan interface showing up. What exactly creates the mlan interface? Is there something else I need to install? Edit: I would also be thankful for general answers on what prerequisites a Linux OS needs for functional wifi, and what software exactly creates a wireless interface.
I managed to find the correct kernel modules and it's working now. Here is the complete process: Add the following lines to your local.conf: BB_DANGLINGAPPENDS_WARNONLY ?= "true" MACHINE ?= "colibri-imx6ull" DISTRO_FEATURES_append = " wifi packagegroup-base-wifi dhcp-client" // add wifi tools like iw and a dhcp client MACHINE_FEATURES_append = " wifi" // add wifi at machine level IMAGE_INSTALL_append = " linux-firmware dhcp-client" // install all firmware (needed for wifi) and the dhcp client Next, start menuconfig to add kernel modules by entering the build directory: BUILD_DIR/meta-boot2qt/build-colibri-imx6ull/ and running the command bitbake virtual/kernel -c menuconfig If you get errors about undefined symbols, you are probably missing the ncurses library. Install it with sudo apt-get install libncurses-dev From the menu that opens in a console tab, go to Networking support > Wireless and check the required wireless modules. Next, go back to the main menu, enter Device Drivers > Network device support > Wireless LAN and select the Marvell drivers. Save the changes and run: bitbake b2qt-embedded-qt5-image Now the necessary firmware, drivers and tools should be installed.
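As a sketch of the menuconfig selections in kernel config terms (the exact symbols depend on your kernel version, and this assumes the chip is SDIO-attached, which is how mwifiex usually drives Marvell parts like this one):

```
# Networking support > Wireless
CONFIG_CFG80211=m
CONFIG_MAC80211=m
# Device Drivers > Network device support > Wireless LAN > Marvell
CONFIG_MWIFIEX=m
CONFIG_MWIFIEX_SDIO=m
```

The mwifiex driver is also what creates the mlan interface once the firmware loads successfully.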
Yocto Boot2Qt Build for Colibri iMX6ULL no wifi interface
1,358,808,462,000
Is it possible to cause a kernel panic by removing a device driver and then making a system call that uses that driver?
It is possible when a driver is not properly removed and an application then tries to use it. If the driver is removed properly, the kernel will return an error such as "driver not found" but never crash (in 99.99% of cases, unless you hit a bug).
How to cause kernel panic by deleting a device driver (module)?
1,358,808,462,000
A few words about shared memory: shared memory allows processes to access common structures and data by placing them in shared memory segments. It is the fastest form of inter-process communication available, since no kernel involvement occurs when data is passed between the processes. In fact, data does not need to be copied between the processes. We notice that the value on our Red Hat machines is huge, as the following shows: cat /proc/sys/kernel/shmmax 17446744003692774391 sysctl -a | grep kernel.shmmax kernel.shmmax = 17446744003692774391 When I convert it to giga it's 16248546544.17632. Is that logical? Do we miss something here? The machines have 64G of RAM and 16 CPUs and are used in a Hadoop cluster.
The default value for shmmax is #define SHMMAX (ULONG_MAX - (1UL << 24)) This is an upper bound, chosen to be as large as possible while limiting the risk of overflow: SHMMNI, SHMMAX and SHMALL are default upper limits which can be modified by sysctl. The SHMMAX and SHMALL values have been chosen to be as large possible without facilitating scenarios where userspace causes overflows when adjusting the limits via operations of the form "retrieve current limit; add X; update limit". It is therefore not advised to make SHMMAX and SHMALL any larger. These limits are suitable for both 32 and 64-bit systems. The value is fine as-is; it is set correctly, there is no mistake.
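A quick way to see what that #define evaluates to on a 64-bit system is shell arithmetic (shells compute in signed 64-bit two's complement, where ULONG_MAX is -1, so %u is used to print the unsigned value):

```shell
# ULONG_MAX on 64-bit is 2^64 - 1; the default SHMMAX subtracts 2^24.
printf 'default shmmax: %u\n' $(( -1 - (1 << 24) ))
# prints: default shmmax: 18446744073692774399
```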
SHMMAX + how effected kernel parameter that not set correctly by mistake
1,358,808,462,000
I need a more up-to-date kernel than what even buster ships with (5.1.x to be precise) due to the hardware I'm using. Building that kernel is no problem whatsoever (building it with make deb-pkg), and even when I install the kernel package on the live system it works just fine. Also, when I modify the ISO, add that kernel package to the package repo of the ISO and add that CD as a local package source, I can install it from there just as well, so I know I generated the package indexes correctly. However, when I use the d-i base-installer/kernel/image setting in the preseed file and set it to linux-image-5.1.2, the installation fails with the lovely message: Cannot install kernel The installer cannot find a suitable kernel package to install. Upon further inspection of the syslog, I found this message: May 16 13:43:22 base-installer: info: kernel linux-image-5.1.2 not usable on amd64 May 16 13:43:22 base-installer: info: Found kernels '' May 16 13:43:22 base-installer: error: exiting on error base-installer/kernel/no-kernels-found (Full syslog here: https://gist.github.com/BrainStone/0a0b3ea476ee875b2cabdd67685264b4) dpkg --info on the package gives me this info: new Debian package, version 2.0. size 3937412 bytes: control archive=1536 bytes. 348 bytes, 12 lines control 2073 bytes, 28 lines md5sums 281 bytes, 12 lines * postinst #!/bin/sh 277 bytes, 12 lines * postrm #!/bin/sh 279 bytes, 12 lines * preinst #!/bin/sh 275 bytes, 12 lines * prerm #!/bin/sh Package: linux-image-5.1.2 Source: linux-5.1.2 Version: 5.1.2-1 Architecture: amd64 Maintainer: root <root@e2c42c34410b> Installed-Size: 5943 Section: kernel Priority: optional Homepage: http://www.kernel.org/ Description: Linux kernel, version 5.1.2 This package contains the Linux kernel, modules and corresponding other files, version: 5.1.2. So it definitely is built for amd64. I'm guessing that I'm pretty close to the solution and must not be missing more than like 1-2 lines of config or scripts.
But I cannot figure out what I'm doing wrong.
The failing check is here: you need to have -amd64 in your package name (in a similar fashion to linux-image-5.0.0-trunk-amd64). More accurately, your package name has to end with -amd64, or contain -amd64-. One way to do this is to set LOCALVERSION, in the “General Setup” section of the kernel configuration.
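In practice this means setting CONFIG_LOCALVERSION; after changing it in the "General Setup" section, the .config should contain something like:

```
CONFIG_LOCALVERSION="-amd64"
```

make deb-pkg will then produce a linux-image-5.1.2-amd64 package, whose name passes the installer's architecture check.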
Custom built kernel supposedly not usable on amd64 during preseeded installation
1,358,808,462,000
I have an issue on my Debian computer with an Intel Stick. Link to screenshot => I get extra characters in the console tty, like ^[. This issue only occurs with kernels newer than 4.11.12; the Debian kernel and the 4.11.12 kernel are OK. I don't know why.
This is not your TTY settings, and the reset command (which you could not even execute anyway, as you have not logged in) will gain you nothing. Indeed, in most modern Linux operating systems the terminal has just been reset before running the login program (by dint of systemd's TTYReset=yes setting or otherwise), and that has clearly had no effect. This is your ⎇ Alt key being stuck down, or the kernel's built-in terminal emulator thinking that it is. When the kernel's built-in terminal emulator sees ⎇ Alt plus an alphanumeric key, it transmits the character sequence ␛ (U+001B) followed by the character for the key to the input of the line discipline. And ^[ is how the line discipline echoes back an ␛ character when it receives one as input (and it is in canonical mode, as it is here). So ^[r is how ⎇ Alt+r is echoed back to you. It's possible for the terminal emulator's idea of what modifier keys are currently pressed to become out of synch with the actual keyboard state, for reasons beyond the scope of this answer. Press and release all of your ⎇ Alt keys, without pressing any of the key chords that switch amongst the kernel VTs, and the state should become synchronized again. If that does not work, then the next thing to investigate is whether the modifier key is indeed physically stuck down. Further reading https://unix.stackexchange.com/a/391968/5132 https://unix.stackexchange.com/a/428865/5132
Keyboard issue console terminal
1,358,808,462,000
I have a kernel module with netfilter hooks at different points in the packet route, and the hooks use shared resources. In addition, the module has a char device that may be written to, that also affects these resources. I am not sure if I need to use locks when different handlers access these resources. I read that interrupts can't sleep, does that mean I am guaranteed that my handlers (hooks and read handlers) will be executed one after the other, or do I need to use locks to prevent simultaneous access to the same resources from different functions? thanks.
Depending on what you've written and what data structures it uses, it's hard to say, but: I read that interrupts can't sleep, does that mean I am guaranteed that my handlers (hooks and read handlers) will be executed one after the other, or do I need to use locks to prevent simultaneous access to the same resources from different functions? While it's true that interrupts aren't allowed to sleep, you also have to consider that an interrupt interfacing with this data structure can simultaneously run on another CPU, or another interrupt might stack on top of your current interrupt being acted on, taking it temporarily off the CPU. In either case, you need to handle the deadlocking case, and the case where two threads compete for writes/reads. So yes, there's no reason to believe, just based on what you've written, that you don't need a synchronisation mechanism of some kind. Depending on your particular case, you might find synchronisation simpler if you disable further interrupts on that CPU (e.g. in the case of percpu variables). What the appropriate mechanism is will depend on what you're guarding access to and how lengthy and costly that is likely to be, although since you are executing in an interrupt, you're somewhat limited in that you can only really choose non-blocking primitives.
Resource locking in interrupts
1,358,808,462,000
I use various Linux distros, mostly Debian-based, usually all default (I change nothing in the kernel, shells or internal utilities, i.e. the utilities that come with the distro). I usually install Apache, MySQL and PHP on these systems and don't change anything there either. I never did a kernel upgrade on any system, as I don't recall ever having such a need or getting some local mail requiring one. I know that configuration-management (CM) tools, like Ansible, are used to orchestrate, deploy and maybe also automate basically everything above the OS layer (which includes the kernel, of course), but out of curiosity: can one "dive down" with Ansible to the kernel and automate kernel upgrades with it as well? Please also share if you think it's a best practice on a basically all-default system (a system where the distro itself, meaning its kernel, shell(s) and internal utilities, isn't changed at all).
With most modern Linux distros, the kernel is distributed as a package, just like any other piece of software/library. Therefore, with Ansible as the example, you can have a task such as:
- name: Ensure that latest kernel is installed
  apt:
    name: linux-image-amd64
    state: latest
    update_cache: yes
  notify: reboot_server  # You would need a corresponding handler that reboots the system
and this will ensure that each time the play is run, the latest kernel package will be installed. The kernel is however different to most other software packages in that: Multiple versions can be installed simultaneously, so you need to manage the removal of older versions. You don't necessarily want to do this automatically because: To enable a newly installed kernel, you need to reboot the system, so that needs to be managed, both from a business process POV and also technically. This is not an entirely risk-free operation, so dependent on the nature of the system architecture, it is often not seen as being an appropriate task to simply automate. There are methods to activate a newly installed kernel without a reboot, but they are still not really a mainstream approach. As to whether you should do kernel updates, in general yes. Given the litany of high-profile security failures as a result of out-of-date software (and the high-profile failures likely being just the tip of the iceberg), all software should be kept up to date. The recent Meltdown and Spectre vulnerabilities underline that the kernel is not special, and needs to be kept up to date like any other package. Maintaining an effective patching policy needs serious thought, given the trade-off between the failures that can occur during the process versus the failures that can occur if it is not done. Automation can certainly help, but each environment is different, so you have to examine your own requirements to assess to what extent it is appropriate in your own case.
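For the notify: reboot_server in that task to do anything, a corresponding handler is needed. A minimal sketch using Ansible's reboot module (the timeout value is arbitrary):

```yaml
handlers:
  - name: reboot_server
    reboot:
      reboot_timeout: 600
```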
Either way, when you say: (a system where the distro itself - its kernel, shell(s) and internal utilities aren't changed at all) - if you mean that once installed, you never revisit and patch/update/upgrade those elements, you are significantly increasing the risk of your system being compromised.
Can kernel upgrades be done with configuration management (CM) tools meant for above-OS software?
1,545,111,506,000
Reading one of Stephen's excellent replies:

> When the operating system shuts down, processes are shut down using SIGTERM and SIGKILL, but those signals don’t come from the kernel (or not directly — calling kill() with a pid of 0 or a negative pid will result in the kernel sending the signal to a number of processes). They come from a service manager terminating its services and from various last-ditch-kill-everything application-mode programs that are part of the system management mechanism: e.g. the killprocs van Smoorenburg rc script, the killprocs OpenRC script, and the systemd-shutdown program.

When the OS shuts down, how does a service manager know that it should terminate its services? Is the service manager notified by receiving SIGKILL or SIGTERM or some other signal from the kernel or some process? Similarly, how do the various last-ditch-kill-everything application-mode programs that are part of the system management mechanism know that they should send out SIGTERM and SIGKILL? Thanks.
A service manager knows it should terminate its services because the system administrator asked it to halt or reboot the system. When the administrator runs reboot, or the user chooses the corresponding option in his/her desktop environment, the init process is told to reboot (not the kernel, at this point). The init process takes care of everything it’s been configured to do before asking the kernel to actually reboot. The last-ditch, kill-everything phase is part of the shutdown procedure: once the shutdown procedure has asked all running services to stop, it typically waits a short while, then kills any remaining processes. The various init systems have different implementations of all this. With sysvinit, halting or rebooting is a runlevel transition, started by asking the running init to switch to the appropriate runlevel (see the telinit manpage for details). With systemd, it’s a target, which ends up running the systemd-halt service.
When the operating system shuts down, how does a service manager know that it should send SIGTERM and SIGKILL to its services?
1,545,111,506,000
Looking for a list of libraries which can be called from a custom kernel module. I understand that there are restrictions compared to user space and that libraries like <stdio.h> and <string.h> can't be used. Which are the most popular ones which can be used, or even better, is there a rule of thumb to help me distinguish when a library can be called inside a kernel module? I am currently looking to map memory using mmap(), which is part of the sys/mman.h library, but I am pretty sure this won't be the only thing I will need. So:

- linux/<MANY_DIFFERENT_NAME> is available
- I have seen asm/uaccess.h included in kernel code
- is sys/<ANY> also available for kernel modules?
- any other?
None of the standard user-space libraries are available from kernel code. There are some functions in the kernel that behave similarly to the corresponding user-space functions, but you should always verify that there are no differences. Concerning the mmap function, this function is just a system call into the kernel. It doesn't make sense to have a call to the kernel inside the kernel. Instead, there is the function that implements mmap. Unless you want to map anonymous memory, which would be easier to achieve with memory allocation, you need a file descriptor. Processes have file descriptors, kernel modules don't. In general, programming a kernel module is different from user-space programming. A good approach would be to look for kernel modules doing something similar to what you intend and use that as a starting point.
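To make the point concrete, here is a minimal module sketch that uses only kernel headers (`linux/*`) and kernel-side analogues of familiar user-space functions. This is illustrative only: it needs a kernel build environment (kernel headers plus a kbuild Makefile) and cannot be compiled as an ordinary user-space program.

```c
#include <linux/module.h>   /* core module macros */
#include <linux/kernel.h>   /* pr_info()/printk(), not printf() */
#include <linux/init.h>     /* __init / __exit */
#include <linux/string.h>   /* kernel's own strlen(), strscpy(), ... */
#include <linux/slab.h>     /* kmalloc()/kfree(), rough analogue of malloc() */

static int __init example_init(void)
{
    char *buf = kmalloc(16, GFP_KERNEL);   /* no libc malloc() in the kernel */
    if (!buf)
        return -ENOMEM;
    strscpy(buf, "hello", 16);
    pr_info("example: %s (len %zu)\n", buf, strlen(buf));
    kfree(buf);
    return 0;
}

static void __exit example_exit(void)
{
    pr_info("example: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
```

Note the pattern: the rule of thumb is that only headers under `linux/`, `asm/`, and similar kernel include trees are usable; anything under `sys/` or the C standard headers belongs to user space.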
Which Libraries can be called inside a Kernel Module
1,545,111,506,000
We have a kernel module that was building fine for the RedHat family of Linux distributions, until the recent RHEL 7.5. When trying to build on RHEL 7.5, we got an error of: ...error: ‘GENL_ID_GENERATE’ undeclared... Did some reading, and it seems like this is a change introduced in kernel 4.11+, but RHEL 7.5 is based on kernel 3.10+. What happened? Anyway, I know that the value of GENL_ID_GENERATE is simply 0. But can I just use 0 to replace the macro? Will there be a problem with the user-mode code communicating with this kernel module? Or, what would be the proper way to fix the problem? Any advice? Thanks and regards, Weishan
Looking at the git commits for netlink, it looks like several changes were made to the structure in version 4.11:

- You can omit the .id field completely from your initializer in genl_family, as Linux has removed static family IDs.
- The genl_register_family_with_ops function is not used any more. Instead, as noted in the Linux HOWTO documentation for netlink:

> Up to linux 4.10, use genl_register_family_with_ops(). On 4.10 and later, include a reference to your genl_ops struct as an element in the genl_family struct (element .ops), as well as the number of commands (element .n_ops).
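As a sketch of what that change looks like in code (names such as `my_family`, `my_ops` and `MY_ATTR_MAX` are placeholders, not from your module):

```c
/* Old style, pre-4.10: dynamic ID requested explicitly, ops passed
 * at registration time. */
static struct genl_family my_family_old = {
    .id      = GENL_ID_GENERATE,   /* macro removed in newer kernels */
    .name    = "MY_FAMILY",
    .version = 1,
    .maxattr = MY_ATTR_MAX,
};
/* registered with: genl_register_family_with_ops(&my_family_old, my_ops); */

/* New style, 4.10+ (and RHEL kernels carrying the backport): omit .id
 * entirely and embed the ops in the family struct itself. */
static const struct genl_ops my_ops[] = {
    /* ... your command handlers ... */
};

static struct genl_family my_family = {
    .name    = "MY_FAMILY",
    .version = 1,
    .maxattr = MY_ATTR_MAX,
    .ops     = my_ops,
    .n_ops   = ARRAY_SIZE(my_ops),
};
/* registered with: genl_register_family(&my_family); */
```

Since the family ID is allocated dynamically in both styles and user space looks the family up by name, user-mode code should not need changes.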
netlink: GENL_ID_GENERATE definition removed from RHEL7.5 kernel headers