1,425,951,077,000 |
I encountered this problem a while ago when upgrading my kernel but put off upgrading until now.
On my system I can happily mount network shares using CIFS running kernel 3.7.10, however when I've tried with newer kernels (currently trying with 3.13.1, but have been trying since 3.12.6) I get the following errors when I attempt to mount them with /etc/init.d/netmount start (I'm running Gentoo):
# /etc/init.d/netmount restart
* Unmounting network filesystems ... [ ok ]
* Mounting network filesystems ...
mount error(12): Cannot allocate memory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
mount error(12): Cannot allocate memory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
mount error(12): Cannot allocate memory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Attempting to mount manually results in the same error...
# mount -t cifs //Server/to_mount1 /mnt/network1 -o credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline
mount error(12): Cannot allocate memory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
The error occurs three times as I've three network shares I'm attempting to mount, here are my /etc/fstab entries (which are completely unchanged between kernel versions):
# Network drives
//Server/to_mount1 /mnt/network1 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline 0 0
//Server/to_mount2/another/dir /mnt/network2 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline 0 0
//Server/to_mount3 /mnt/network3 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline 0 0
Searching around, I found quite an old solution to this problem which requires access to the Windows server to make some modifications; it is detailed here.
Unfortunately this is at work, and not only do I not have access to the Windows server to test whether these changes would make any difference, but it's also only happening with the newer kernels; I can reboot into the 3.7.10 kernel and the network shares are mounted without any problem.
This leads me to think that there is a problem with the newer kernel, so I've looked at the CIFS options under both the 3.7.10 kernel configuration:
# grep -i cifs /usr/src/linux-3.7.10-gentoo-r1/.config
CONFIG_CIFS=y
CONFIG_CIFS_STATS=y
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_WEAK_PW_HASH=y
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_ACL=y
# CONFIG_CIFS_DEBUG2 is not set
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_CIFS_SMB2=y
# CONFIG_CIFS_FSCACHE is not set
...and under the 3.13.1 configuration:
# grep -i cifs /usr/src/linux-3.13.1-gentoo/.config
CONFIG_CIFS=y
CONFIG_CIFS_STATS=y
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_WEAK_PW_HASH=y
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_ACL=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_CIFS_SMB2=y
# CONFIG_CIFS_FSCACHE is not set
...and they are essentially the same, apart from the new CONFIG_CIFS_DEBUG option (no real surprise there since I didn't change anything!).
I re-emerged net-fs/cifs-utils just in case there was something awry there but it made no difference.
Is there a way I can work around this without having access to the Windows share to test the suggested solution (if that is indeed the underlying cause) or is there something else that is causing the problem?
|
I've finally solved this: the solution was to add the sec=ntlm option for mount.cifs, because the default behaviour changed. From
man mount.cifs
...
The default in mainline kernel versions prior to v3.8 was sec=ntlm. In v3.8, the default was changed to sec=ntlmssp.
...
So my /etc/fstab entries now look like...
# Network drives
//Server/to_mount1 /mnt/network1 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline,sec=ntlm 0 0
//Server/to_mount2/another/dir /mnt/network2 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline,sec=ntlm 0 0
//Server/to_mount3 /mnt/network3 cifs credentials=/etc/nfs_share.credentials,users,rw,uid=slackline,gid=slackline,sec=ntlm 0 0
| mount.cifs : mount error(12) : Cannot allocate memory |
As I understand it, process descriptors are stored in a doubly linked list data structure. But fork can be used to create multiple children for the same process, so that makes me think that there is a tree structure, because multiple processes will point to one parent process. Which is correct? Are process descriptors different from processes?
|
Your confusion stems from mixing two things: (1) keeping the process descriptors organized, and (2) the parent/child relationship.
You don't need the parent/child relationship to decide which process to run next, or (in general) which process to deliver a signal to. So, the Linux task_struct (which I found in linux/sched.h for the 3.11.5 kernel source) has:
struct task_struct __rcu *real_parent; /* real parent process */
struct task_struct __rcu *parent; /* recipient of SIGCHLD, wait4() reports */
/*
* children/sibling forms the list of my natural children
*/
struct list_head children; /* list of my children */
struct list_head sibling; /* linkage in my parent's children list */
You're correct: a tree structure exists for the child/parent relationship, but it is encoded with a list (children/sibling) plus a pointer to the parent, rather than stored as an explicit tree type.
The famed doubly-linked list isn't obvious in the 3.11.5 struct task_struct structure definition. If I read the code correctly, the uncommented struct element struct list_head tasks; is the "organizing" doubly-linked list, but I could be wrong.
| Internal organization (with regard to family relations) of processes in Linux |
I've built a kernel in tmpfs, then I rebooted.
Now I see a message when I compile the 3rd party module,
NO SIGN [M] XXXX.ko
How can I get it signed? The key pair generated during rpmbuild is lost already I guess
|
This was surprisingly lacking in documentation. I found this file, module-signing.txt, which is part of the RHEL6 Kernel Documentation. In this document it shows how to generate signing keys, assuming you want to sign all your modules as part of a kernel build:
cat >genkey <<EOF
%pubring kernel.pub
%secring kernel.sec
Key-Type: DSA
Key-Length: 512
Name-Real: A. N. Other
Name-Comment: Kernel Module GPG key
%commit
EOF
make scripts/bin2c
gpg --homedir . --batch --gen-key genkey
gpg --homedir . --export --keyring kernel.pub keyname |
scripts/bin2c ksign_def_public_key __initdata >crypto/signature/key.h
Also the article from Linux Journal titled Signed Kernel Modules has some good details and steps on how to do pieces of this, but I couldn't find the user-space tools, extract_pkey and mod, that it references.
You might want to poke around Greg Kroah's site, you may find something useful in one of his presentations.
References
Signed Kernel Modules - linux journal article
Howto Use Signed Kernel Modules
| Sign a module after kernel compilation |
The CFQ IO scheduler supports priorities though I am not sure that Deadline does (I believe not). The premise is that when I renice a task it gets a larger share of CPU under the Completely Fair Scheduler. Since this task is likely to run more often it would call for IO more often as well when needed, correct?
I am wondering if even though the IO scheduler does not support priorities that the task would get more/less IO when reniced? Or is the disk/memory management completely separate?
|
Disk and memory scheduling are entirely different. In the absence of an IO priority scheduler, IO is handled on a first-come, first-served basis. If the system is IO bound, then all processes run on a more or less round-robin basis until all are waiting for I/O. The nice priority of a process will have little impact on its scheduling frequency.
Recent versions of Linux have added an ionice facility. The idle priority is intended to prevent IO degradation which may occur when the heads are moved to a different area of the disk delaying writes for other processes.
Renicing an I/O bound process is unlikely to significantly slow its I/O rate unless the load average exceeds the number of CPUs. If unused CPU cycles are available, the process will likely be scheduled frequently enough to keep its I/O rate close to what it would be at a regular priority.
Recent Linux kernels will adjust the IO priority of reniced processes which have not had an explicit IO priority set. The 40 CPU nice levels are mapped onto 8 IO priority levels, so a significant nice change may be required to change the IO priority.
Having a significant number of CPU-bound processes running at or above the I/O-bound process's priority may slow its I/O rate. The process will still get time slices, resulting in I/O occurring.
| Does IO prioritise by the very nature of renicing a task? |
Based on my research on mmap(), I understand that mmap uses demand paging to copy in data to the kernel page cache only when the virtual memory address is touched, through page fault.
If we are reading files that are bigger than the page cache, then some stale pages in the page cache will have to be reclaimed. So my question is: will the page table be updated to map the corresponding virtual memory address to the address of the old stale page in the cache (now containing new data)? How does this happen? Is this part of the mmap() system call?
|
will the page table be updated to map the corresponding virtual memory address to the address of the old stale page in the cache (now containing new data)? How does this happen?
When mmap() is called, it creates a mapping in the process's virtual address space to the file specified. This mapping merely sets up the ability for these pages to be loaded when they are actually accessed, it doesn't load anything into memory yet. When you then access the pages, a page fault is generated, the page table entries are updated to map the virtual addresses to the physical addresses of the newly loaded pages, and you can then access the file. This happens in filemap_fault.
This is also how it works if you access a mapped page which has been evicted: the kernel handles the page fault, puts the file content back into the pages, and from the application's perspective, nothing happened.
There's nothing special about mmap() here per se -- this is how demand paging works inside the Linux kernel in general, as used for almost everything -- even regular program memory and file cache entries.
[...] map the corresponding virtual memory address [...]
Note that, when reading in with mmap(), the kernel typically will use readahead in order to load more content than just the single page you've generated a page fault on, unless there is an indication that this would be unhelpful, like MADV_RANDOM (indicated by user), or MMAP_LOTSAMISS (kernel heuristic).
| Does mmap() update the page table after every page fault? |
I'm trying to use a Khepera III robot. The documentation says I need a Linux OS with kernel 2.6.x. An Ubuntu 8 image was provided on the website. When I created a virtual machine with that image, I tried to install the packages I need for using the Khepera III but I couldn't. I also tried to install updates, but I couldn't, since this version of Ubuntu is no longer supported.
What Linux OS still supports kernel 2.6.x and allows me to install modern packages?
|
Ubuntu 10.04 Lucid Lynx uses the 2.6.x kernel, and the server edition is supported until 2015-04. You can download it here - http://releases.ubuntu.com/10.04/
For more on the differences between server editions and desktop editions of Ubuntu, see this question on Ask Ubuntu.
The main issue seems to be that there is no desktop environment included in the default installation, so there is no GUI installer, although what they give you should be intuitive enough to use. You will also get other packages installed which you usually get on a server. Lucid is also old enough to have a server-optimised kernel; I'm not sure what the exact differences are, but they should be minor enough not to noticeably affect anything.
It should also be OK to install the desktop edition; it can be downloaded here - http://old-releases.ubuntu.com/releases/10.04.3/ (get a 10.04.4 download for more included updates). The repositories are the same for both anyway; 'server support' probably just means that only the server-relevant packages are updated. For example, the server-optimised kernel will probably get security updates while the desktop kernel won't.
| What distribution still supports the 2.6.x kernel? |
I've read in different textbooks that Linux is lightweight (e.g. it could fit on a 1.4MB floppy). So why is the download from Ubuntu or Fedora CD-sized or larger?
Do the device drivers extend the kernel? For example: if I have new hardware and I have installed the device driver, will my kernel code get extended or is the driver installed as a service for the kernel to use?
When using a LiveCD such as Ubuntu, when system boots does all 700MB of the OS get loaded to RAM or just parts of it?
I ask these questions because I feel they're common beginner questions and I think it would be good to have them all in one place.
|
It's just barely possible to fit an extremely minimal Linux system on a floppy. (Here are a few examples; beware that many of these span several floppies.) With just 1.44MB, there is hardly any room for applications; I think you can get a minimalistic command line with no interesting commands to run.
As an example of a more realistic tiny system, my home router runs Linux. The whole disk image (kernel plus programs) fits in 4MB (in fact, I think it's close to 2.8MB). That's a dedicated system, with an old kernel version, only the drivers needed for that particular device, and not many programs — mostly networking administration tools, including a small web server, an SSH client and server, a shell.
A distribution like Ubuntu or Fedora comes with thousands of programs. Some of these programs take tens of MB on their own. Some of these programs' documentation takes tens of MB. Just the device drivers for all the peripherals, network protocols and other parts of the kernel take about 100MB these days — there are so many different devices one can connect to a PC.
For a basic system with a GUI and a web browser, you'll need a couple of hundred MB. For a more complete system with a full desktop environment, a word processor and so on, count on a couple of GB. If you start having multiple alternatives for each program (Gnome and KDE, Firefox and Chrome, …), the sky's the limit.
If you feel like comparing with the size of Windows, keep in mind that a Linux distribution contains much more than the equivalent of Windows: distributions like Ubuntu and Fedora ship a lot of applications that you would need to install separately on Windows.
| Why are Linux installs bigger than I've read? Some beginner questions |
I am using CentOS 7 with kernel version 3.10.0-123.4.2.el7.x86_64, but I don't know why the kernel threads named flush aren't present in this kernel version.
I tried to look in the kernel change log, but I didn't find anything.
[root@localhost ~]# ps aux | grep flush
[root@localhost ~]# echo $?
1
Without these kernel threads, how are the dirty data flushed?
|
There are no dedicated flush threads anymore.
The Linux kernel has moved on to a worker thread model, where several generic worker threads can handle a variety of different tasks. You will see these in the process list as [kworker/#.##]
Unfortunately this new design makes it a bit difficult to tell exactly what any given kworker thread is doing at any given time. But you can rest assured that dirty pages are still being written to disk by one or more of the kworker threads.
| Linux kernel 3.10.0-123.4.2 processes [flush] aren't present |
Several of my coworkers have been building custom kernels for their workstations and servers.
What advantages/disadvantages are there to building a custom kernel?
Why should I build a custom kernel?
|
There are multiple reasons why one may choose to manually compile a kernel. I'll try and list the ones that I can think of, off the top of my head here:
An oft-cited, but often wrong, reason is that it will run faster. By customizing the compiler flags for the specific system, you could theoretically generate better machine code, but with modern compilers this is not often the case.
Change certain compile time flags. E.g.: To compile the kernel with debugging information included. Although this may not apply in your case.
To remove unnecessary parts from the kernel. But as stated by @jordanm, these parts are almost always modules and may not matter. But in some cases, the difference may be significant.
For academic purposes. Compiling your own kernel helps you learn a lot about the build process. Also, one gets to learn about the various configuration options.
To apply certain patches or replace parts of the kernel. For example, to use the Con Kolivas patchset, the real-time patchset, etc.
In certain situations, especially in embedded scenarios, one may add custom system calls to the kernel. These would require a custom compilation too.
Some people, like me, just love to tinker with the system, and will randomly change some configuration options to see how the final system responds. However, such people are few and far between.
| Why should I build a custom kernel? [closed] |
I don't want to load the kernel module nouveau on my debian box at startup, so I put the following in /etc/modprobe.d/blacklist.conf:
blacklist ttm
blacklist drm
blacklist nouveau
I even did a update-initramfs -u but nonetheless those three modules get loaded each time I boot.
Does anyone know why this happens, and how to fix it?
|
You can find the answer in the wiki: the idea is that one does not use /etc/modprobe.d/blacklist.conf.
Instead, say you want to blacklist pcspkr. You create a pcspkr.conf file in /etc/modprobe.d and put blacklist pcspkr inside. Then run
depmod -ae && update-initramfs -u
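Assuming the module you want to block is nouveau, as in the question, such a file might look like this. The install line is an optional, commonly used hard-disable that goes beyond the wiki recipe: it makes any explicit load attempt fail.

```
# /etc/modprobe.d/nouveau.conf
blacklist nouveau
# optional, stronger: make "modprobe nouveau" fail outright
install nouveau /bin/false
```

Remember to run depmod -ae && update-initramfs -u afterwards, so the change also applies during early boot.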
| Excluding kernel modules through /etc/modprobe.d/blacklist.conf does not work |
I ask this only in regard to memory addressing.
I know that a PAE kernel allows the OS to access more memory than a standard 32-bit kernel, however, what are the other implications? What specific differences are there between a 64-bit kernel and a 32-bit PAE kernel?
According to Wikipedia, a process's address space remains 32-bit, meaning it can only access a maximum of 4GB of memory. The OS, however, can address a 64GB space, allocating 4GB chunks to processes.
To me it seems like this is a big distinction that seems to be ignored by many people.
|
The kernel sees the physical memory and provides a view to the processes. If you ever wondered how a process can have a 4 GB memory space if your whole machine got only 512 MB of RAM, that's why. Each process has its own virtual memory space. The addresses in that address space are mapped either to physical pages or to swap space. If to swap space, they'll have to be swapped back into physical memory before your process can access a page to modify it.
The example from Torvalds in XQYZ's answer (DOS highmem) is not too far-fetched, although I disagree with his conclusion that PAE is generally a bad thing. It solved specific problems and has its merits, but all of that is debatable. For example, the implementer of a library may not perceive the implementation as easy, while the user of that library may find it very useful and easy to use. Torvalds is an implementer, so he's bound to say what the statement says. For an end user, PAE solves a problem, and that's what the end user cares about.
For one thing, PAE helps solve another legacy problem on 32-bit machines: it allows the kernel to map the full 4 GB of memory and work around the BIOS memory hole that exists on many machines, which causes a pure 32-bit kernel without PAE to "see" only 3.1 or 3.2 GB of memory despite 4 GB being physically present.
Anyway, for the 64-bit kernel it's a symmetrical relation between the physical pages and the virtual pages (leaving swap space and other details aside). The PAE kernel, however, maps between a 32-bit pointer within the process's address space and a 36-bit address in physical memory. More book-keeping is needed here. Keyword: "Extended Page-Table". But this is somewhat more of a programming question. This is the main difference: more book-keeping compared to a full linear address space. For PAE it's chunks of 4 GB, as you mentioned.
Aside from that both PAE and 64bit allow for large pages (instead of the standard 4 KB pages in 32bit).
Chapter 3 of Volume 1 of the Intel Processor Manual has some overview and Chapter 3 of Volume 3A ("Protected Mode Memory Management") has more details, if you want to read up on it.
To me it seems like this is a big distinction that seems to be ignored by many people.
You're right. However, the majority of people are users, not implementers. That's why they won't care. And as long as you don't require huge amounts of memory for your application, many people don't care (especially since there are compatibility layers).
| What is the difference between 32-bit PAE and 64-bit kernels? |
So I am on a Debian 10 (buster) system, and I installed VirtualBox and encountered an error which tells me to load some kernel modules manually.
sudo ./vboxconfig
[sudo] password for user:
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Starting VirtualBox services.
vboxdrv.sh: You must sign these kernel modules before using VirtualBox:
vboxdrv vboxnetflt vboxnetadp
See the documenatation for your Linux distribution..
vboxdrv.sh: Building VirtualBox kernel modules
So I just need some help to load the vboxdrv, vboxnetflt and vboxnetadp kernel modules to complete my VirtualBox installation, and I am not too sure how this is done. I am using a UEFI system which has Secure Boot enabled.
|
There are three steps involved in signing modules:
create a Machine Owner Key
enroll it
sign kernel modules with it
The first two steps only need to be done once, the last will need to be redone every time the modules are built.
To create a MOK:
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -days 36500 -subj "/CN=My Name/" -nodes
replacing My Name with something appropriate. (The following instructions assume you run this as root, in root’s home directory, /root.)
To enroll it:
mokutil --import MOK.der
This will prompt for a password, which is a temporary password used on the next boot only. Reboot your system, and you will enter the UEFI MOK management tool; see this handy guide with screenshots and follow the instructions to enroll your key.
This will reboot again, and you will then be able to check that your key is loaded:
dmesg | grep cert
To sign modules with your key, go to the directory containing the modules, and run
/usr/lib/linux-kbuild-4.19/scripts/sign-file sha256 /root/MOK.priv /root/MOK.der vboxdrv.ko
replacing “4.19” and vboxdrv.ko as appropriate.
| Sign Kernel Modules |
It appears that Grub-EFI will only boot a "signed" Linux kernel. Is there some command that will allow me to query a given kernel image to find out what signatures (if any) are present on it?
|
It depends on what kind of signature you're talking about. If you have an EFI system, you can have signed EFI executables (*.efi) and force your EFI firmware to only execute those with a known signature. This is known as Secure Boot. To check an EFI binary for a signature you can use the tool sbverify:
$ sbverify --no-verify signed-binary.efi
Signature verification OK
$ sbverify --no-verify unsigned-binary.efi
No signature table present
Unable to read signature data from unsigned-binary.efi
Signature verification failed
Unfortunately I didn't see an easy way of extracting and displaying the EFI signature. :(
More likely, what you are looking for is GRUB's own ability to check its modules and the kernels it boots for valid signatures (Secure Boot only affects the GRUB binary itself; everything GRUB loads does not necessarily need to be EFI-signed). Those are (as far as I understand) plain old detached GPG signatures (so, for example, for a kernel called vmlinuz-1.2.3 you'd have a file vmlinuz-1.2.3.sig with the signature). Those can simply be displayed and verified with
$ gpg --verify vmlinuz-1.2.3.sig vmlinuz-1.2.3
gpg: Signature made Tue Apr 1 12:34:56 2014 CEST using RSA key ID d3adb33f
gpg: Good signature from "John Doe <[email protected]>"
If you don't have a *.sig file to your kernel, it is obviously not signed.
You can disable signature checking in GRUB by entering set check_signature=no at the GRUB command prompt. You can get more information on that topic here (this functionality is rather new and the official GRUB website only has the manual for version 2.00 online, which lacks this feature). This also explains how to sign your modules and kernel with your own key and to tell GRUB about it.
| How to query EFI signature |
When configuring the kernel I see an option to add read-write support for NTFS. Then when mounting my NTFS partition I still have to install ntfs-3g and pass ntfs-3g as the type. I thought if I add NTFS support in the kernel then I wouldn't have to install a library for it. Why is it so?
|
The in-kernel NTFS driver is still effectively read-only: it has no full write support yet, only very limited writing with many restrictions.
| Why do I need ntfs-3g when I have already enabled NTFS support in the kernel? |
We know that sysctl command can change kernel parameters with :
# sysctl -w kernel.domainname="example.com"
or by directly editing the file in /proc/sys directory. And for persistent changes, the parameters must be written to /etc/sysctl.d/<moduleName>.conf files as:
# echo kernel.domainname="example.com" > /etc/sysctl.d/domainname.conf
However, we can also change the kernel parameters using the modprobe command:
# modprobe kernel domainname="example.com"
And then there's the modprobe.conf file in the /etc/modprobe.d directories, which is present in multiple locations: /etc/modprobe.d and /usr/lib/modprobe.d. It contains multiple .conf files, and the options can be provided in the appropriate conf file for the module as:
options kernel domainname="example.com"
So, what's the difference between each of these methods? Which method should be used under what specific circumstances?
|
As far as I know, you can use modprobe to adjust parameters only when the feature in question has been compiled as a module - and you're loading the module in the first place. For setting module parameters persistently, you'll have the /etc/modprobe.d directory. (Generally you should leave /usr/lib/modprobe.d for distribution's default settings - any files in there may get overwritten by package updates.)
If the module in question has been built into the main kernel, then you must use the <module_name>.<parameter_name>=<value> syntax, typically as a boot option. If the parameter in question is available as a sysctl setting, then you can use the sysctl -w command to adjust it too.
All the available sysctl parameters are presented under /proc/sys: for example, kernel.domainname is at /proc/sys/kernel/domainname. Not all module parameters are available as sysctls, but some might be.
If a loadable module has already been loaded, and you wish to change its parameters immediately without unloading it, then you can write the new value to /sys/module/<module_name>/parameters/<parameter_name>. If the module cannot accept dynamic reconfiguration for that parameter, the file will be read-only.
At least on my system, kernel.domainname is a sysctl parameter for the main kernel, and trying to change it with modprobe won't work:
# sysctl kernel.domainname
kernel.domainname = (none)
# modprobe kernel domainname="example.com"
modprobe: FATAL: Module kernel not found in directory /lib/modules/<kernel_version>
# sysctl kernel.domainname
kernel.domainname = (none)
In a nutshell: If you are unsure, first look into /proc/sys or the output of sysctl -a: if the parameter you're looking for is not there, it is not a sysctl parameter and is probably a module parameter (or the module that would provide the sysctl is not currently loaded, in which case it's better to set the value as a module parameter anyway - trying to set a sysctl belonging to a module that is not currently loaded will just produce an error).
Then, find out which module the parameter belongs to. If the module is built into the kernel, you'll probably have to use a boot option; if it is loadable with modprobe (i.e. the respective <module>.ko file exists somewhere in the /lib/modules/<kernel version>/ directory tree), then you can use modprobe and/or /etc/modprobe.d/.
| Difference between modprobe and sysctl -w in terms of setting system parameters? |
What's the maximum allowed number of TTYs?
I found pty.max, but is there anything similar for TTYs? Or is it defined as a fixed value in kernel headers? I couldn't find it.
|
In case you're referring to Linux virtual consoles as TTYs: their maximum number is 63, and this is defined in include/uapi/linux/vt.h inside the Linux kernel source tree. The macro you're looking for is MAX_NR_CONSOLES (the comment next to it notes that serial lines start at 64).
| What is the maximum allowed number of TTY, is it defined anywhere in kernel headers? |
I experience relatively often that the partition table of a USB stick or SD card is suddenly no longer recognized by the kernel, while (g)parted and fdisk still see it, as do other systems. I can even instruct gparted to run a fsck on one of the partitions, but it fails, of course, because the device files (say /dev/sdbX) don't exist.
I'll attach the dmesg output:
[ 8771.136129] usb 1-5: new high-speed USB device number 4 using ehci_hcd
[ 8771.330322] Initializing USB Mass Storage driver...
[ 8771.330766] scsi4 : usb-storage 1-5:1.0
[ 8771.331108] usbcore: registered new interface driver usb-storage
[ 8771.331118] USB Mass Storage support registered.
[ 8772.329734] scsi 4:0:0:0: Direct-Access Generic STORAGE DEVICE 0207 PQ: 0 ANSI: 0
[ 8772.334359] sd 4:0:0:0: Attached scsi generic sg1 type 0
[ 8772.619619] sd 4:0:0:0: [sdb] 31586304 512-byte logical blocks: (16.1 GB/15.0 GiB)
[ 8772.620955] sd 4:0:0:0: [sdb] Write Protect is off
[ 8772.620971] sd 4:0:0:0: [sdb] Mode Sense: 0b 00 00 08
[ 8772.622303] sd 4:0:0:0: [sdb] No Caching mode page present
[ 8772.622317] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 8772.629970] sd 4:0:0:0: [sdb] No Caching mode page present
[ 8772.629992] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 8775.030231] sd 4:0:0:0: [sdb] Unhandled sense code
[ 8775.030240] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 8775.030249] sd 4:0:0:0: [sdb] Sense Key : Medium Error [current]
[ 8775.030259] sd 4:0:0:0: [sdb] Add. Sense: Data phase CRC error detected
[ 8775.030271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
[ 8775.030291] end_request: I/O error, dev sdb, sector 0
[ 8775.030300] quiet_error: 30 callbacks suppressed
[ 8775.030306] Buffer I/O error on device sdb, logical block 0
[ 8775.033781] ldm_validate_partition_table(): Disk read failed.
[ 8775.033813] Dev sdb: unable to read RDB block 0
[ 8775.037147] sdb: unable to read partition table
[ 8775.047170] sd 4:0:0:0: [sdb] No Caching mode page present
[ 8775.047185] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 8775.047196] sd 4:0:0:0: [sdb] Attached SCSI removable disk
Here, on the other hand, is what parted has to say about the same disk, at the same time:
(parted) print
Model: Generic STORAGE DEVICE (scsi)
Disk /dev/sdb: 16.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 4194kB 62.9MB 58.7MB primary fat16 lba
2 62.9MB 16.2GB 16.1GB primary ext4
It's not only parted, even the older fdisk has no trouble with that partition table:
Command (m for help): p
Disk /dev/sdb: 16.2 GB, 16172187648 bytes
64 heads, 32 sectors/track, 15423 cylinders, total 31586304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000dbfc6
Device Boot Start End Blocks Id System
/dev/sdb1 8192 122879 57344 c W95 FAT32 (LBA)
/dev/sdb2 122880 31586303 15731712 83 Linux
I'm really clueless. It would be easy to say the partition table is corrupted, but then why can gparted still read it without complaints (and there are none)? And how can I reconstruct the partition table from what (g)parted miraculously found out?
|
For some reason your kernel fails to read the partition table:
[ 8775.030291] end_request: I/O error, dev sdb, sector 0
[ 8775.030300] quiet_error: 30 callbacks suppressed
[ 8775.030306] Buffer I/O error on device sdb, logical block 0
[ 8775.033781] ldm_validate_partition_table(): Disk read failed.
Thus, it can't create device nodes for the partitions, as it never read the partition table. Later, when you view the partition table with parted or fdisk, the I/O is performed successfully.
Try running partprobe /dev/sdX when your kernel did not recognize the partitions at boot time.
man partprobe:
PARTPROBE(8) GNU Parted Manual PARTPROBE(8)
NAME
partprobe - inform the OS of partition table changes
SYNOPSIS
partprobe [-d] [-s] [devices...]
DESCRIPTION
This manual page documents briefly the partprobe command.
partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the
partition table.
| Partition table not recognized by Linux kernel |
1,425,951,077,000 |
After reading this question, I was a little confused; it sounds like some daemon reacts to a problem by rebooting the system. Is that right? Is it a common occurrence in embedded *nixes?
|
Having a watchdog on an embedded system dramatically improves the availability of the device. Instead of waiting for a user to notice that the device is frozen or broken, the device resets itself if the software fails to update the watchdog within some interval. Some examples:
Linux System http://linux.die.net/man/8/watchdog
VxWorks (RTOS) http://fixunix.com/vxworks/48664-about-vxworks-watchdog.html
QNX Watchdog http://www.qnx.com/solutions/industries/netcom/ha.html
The device is designed in such a way that its state is saved somewhere periodically (like Juniper routers that run FreeBSD, Android phones, and DVRs that run Linux). So even if it is rebooted, it should re-enter a working configuration.
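The mechanism can be sketched in user space; here the heartbeat file and timeout are made up for illustration, and a real hardware watchdog is instead fed through /dev/watchdog and performs the reset itself:

```shell
#!/bin/sh
# The monitored service touches a heartbeat file at regular intervals;
# the watchdog side reboots if the heartbeat ever goes stale.
HEARTBEAT=/tmp/heartbeat
TIMEOUT=60

touch "$HEARTBEAT"          # <- what the healthy service keeps doing

# what the watchdog checker does on each pass:
age=$(( $(date +%s) - $(stat -c %Y "$HEARTBEAT") ))
if [ "$age" -gt "$TIMEOUT" ]; then
    echo "heartbeat ${age}s old: would reset here"   # e.g. reboot -f
else
    echo "heartbeat fresh (${age}s old)"
fi
```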
| What is a "watchdog reset"? |
1,425,951,077,000 |
As I understand it, the Linux kernel has had good support for Intel and AMD CPUs (pretty obvious since your OS installs and runs fine!).
But now that AMD is releasing their new Fusion APUs, is this just a marketing gimmick that the Linux kernel can treat as an ordinary CPU, or is the APU something new that requires new kernel support? Since the Fusion APUs are slated to include the functions of a GPU, will Linux be able to take advantage of all of its functions?
This might have implications on whether my next Linux machine can and/or should be based on AMD Fusion hardware or not.
|
Kernel 2.6.38 and above will support AMD Fusion Ontario and Zacate APUs.
| Implications of Linux support for AMD Fusion APUs? |
1,425,951,077,000 |
I need to recompile my kernel on RHEL WS5 with only two changes.
Change stack size from 4k to 8k
Limit usable memory to 4096.
How do I recompile the kernel without changing anything else but these two items?
|
To change only those values you will need the config the old kernel was built from.
In RHEL you can find this in: /boot/config-$(uname -r)
Copy this file into the kernel source tree as .config and change the values you want. Use make menuconfig for an ncurses GUI.
For other distributions: If the config option CONFIG_IKCONFIG_PROC was set, your kernel configuration is available under /proc/config.gz
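For the two changes in the question, the result could look like this. CONFIG_4KSTACKS is the i386 option name from that kernel era, so this assumes a 32-bit build, and it assumes the 4096 means megabytes; note the memory cap does not actually require a recompile at all:

```
# In .config: 8k stacks on i386 means leaving 4K stacks disabled
# CONFIG_4KSTACKS is not set

# Memory limit: simplest as a kernel boot parameter (no rebuild needed),
# appended to the kernel line in /boot/grub/grub.conf:
#   mem=4096M
```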
| Recompile Kernel to Change Stack Size |
1,425,951,077,000 |
I have a Linux machine dedicated to serving static content and PHP pages with Apache. Apache also works as a reverse proxy on a subdomain. I moved the PostgreSQL database to another Linux machine.
Is it safe to disable the OOM killer in the kernel?
|
Probably not.
If the OOM killer is running, then it is likely that it needs to run to stop the machine simply grinding to a halt, as nothing, not even the kernel, can allocate new memory when needed. The OOM killer exists because it is generally better to have some services fall over due to the killer than for the whole machine to fall off the face of the 'net.
If you see the OOM killer in action with any regularity, then you should either reconfigure the services on the machine to use less RAM, or add more RAM to the machine.
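If the concern is predictability rather than memory pressure itself, the usual alternative to disabling the killer outright is strict overcommit accounting, so allocations fail up front instead. A sketch (file path and values are illustrative, not a recommendation for this workload):

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
# Commit limit becomes swap + 80% of RAM; past that, malloc() fails
# immediately instead of the OOM killer firing later.
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```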
| Is it safe to disable OOM killer in a web server/reverse proxy? |
1,425,951,077,000 |
I find that CentOS 7.8 uses "net.ifnames=0" without "biosdevname=0" in its kernel parameters, and the result seems to be the same: I get traditional NIC names such as eth0.
Just curious: what is the difference between "net.ifnames=0" and "biosdevname=0"?
|
From dell docs:
Biosdevname is a udev helper utility developed by Dell and released under the GNU General Public License (GPL). It provides a consistent naming mechanism for network devices based on their physical location as suggested by the system BIOS.
From manpages
biosdevname takes a kernel device name as an argument,
and returns the BIOS-given name it "should" be.
biosdevname is enabled by default on Red Hat-based systems running on Dell hardware.
net.ifnames=0 is a kernel parameter that disables the predictable network interface naming behavior.
kernel-command-line manpages
net.ifnames=
Network interfaces are renamed to give them predictable names
when possible. It is enabled by default; specifying 0
disables it
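Both are boot-time kernel parameters, so they are typically set through the boot loader; on a GRUB 2 system such as CentOS 7 that would look like this (standard file locations assumed):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

# then regenerate the config:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```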
| Linux kernel parameters: what is the difference of net.ifnames=0 and biosdevname=0 |
1,425,951,077,000 |
I was reading through all the things that are run during bootup and saw that after mounting the rootfs, /sbin/fsck.ext4 is run, and after that systemd is run. I was wondering where or how fsck is run, because I searched for it in the kernel source code and couldn't find it, and it's not part of the init scripts. So what runs fsck? The distro I am using is Mint.
EDIT: In this image it is shown that fsck is run after mounting the root file system
|
Edit 2: checked sources
I've found the Ubuntu initramfs-tools sources. Here you can see clearly that the Begin: "Mounting root file system" message is printed first, but in the mount_root function fsck is run before the actual mounting. I have omitted some non-relevant code, just to indicate the order. (If you inspect the linked sources you will also find the other scripts reported in the screenshot.)
/init line 256
log_begin_msg "Mounting root file system"
# Always load local and nfs (since these might be needed for /etc or
# /usr, irrespective of the boot script used to mount the rootfs).
. /scripts/local
. /scripts/nfs
. /scripts/${BOOT}
parse_numeric ${ROOT}
maybe_break mountroot
mount_top
mount_premount
mountroot
log_end_msg
/scripts/local @line 244
mountroot()
{
local_mount_root
}
/scripts/local @line 131
local_mount_root()
{
# Some code omitted
# FIXME This has no error checking
[ -n "${FSTYPE}" ] && modprobe ${FSTYPE}
checkfs ${ROOT} root "${FSTYPE}"
# FIXME This has no error checking
# Mount root
mount ${roflag} ${FSTYPE:+-t ${FSTYPE} }${ROOTFLAGS} ${ROOT} ${rootmnt}
mountroot_status="$?"
if [ "$LOOP" ]; then
if [ "$mountroot_status" != 0 ]; then
if [ ${FSTYPE} = ntfs ] || [ ${FSTYPE} = vfat ]; then
panic "<Error message omitted>"
fi
fi
mkdir -p /host
mount -o move ${rootmnt} /host
# Some code omitted
}
Original answer, retained for historical reasons
Two options:
Root is mounted read-only during boot and the init implementation is running fsck. Systemd is the init implementation on mint, and since you already checked if it exists there, this option does not apply.
/sbin/fsck.ext4 is run in the "early user space", set up by an initramfs. Which is most probably the case in your system.
Systemd
Even if you noticed that /sbin/fsck.ext4 was run before systemd, I want to elaborate a bit. Systemd is perfectly capable of running fsck itself, on a read-only mounted filesystem. See the systemd-fsck@.service documentation. Most probably this service is not enabled by default in Mint, since it would be redundant with the early-userspace one.
Initramfs
I don't know which implementation of an initramfs Mint is running, but I will use dracut as an example (used in Debian, openSUSE and more). It states the following in its mount preparation documentation:
When the root file system finally becomes visible:
Any maintenance tasks which cannot run on a mounted root file system are done.
The root file system is mounted read-only.
Any processes which must continue running (such as the rd.splash screen helper and its command FIFO) are hoisted into the newly-mounted
root file system.
And maintenance tasks includes fsck. Further evidence, there is a possibility in dracut cmdline options to switch off fsck:
rd.skipfsck
skip fsck for rootfs and /usr. If you’re mounting /usr read-only and the init system performs fsck before remount, you might
want to use this option to avoid duplication
Implementations of initramfs
A dynamic (udev-based) and flexible initramfs can be implemented using the systemd infrastructure. Dracut is such an implementation, and there are probably distros out there that write their own.
Another option is a script-based initramfs. In such a case busybox ash is used as the scripting shell, perhaps with udev replaced by mdev, or even a completely static setup. I found some people being dropped to a busybox shell due to an fsck error in Mint, so this implementation could apply to Mint.
If you really want to know for sure, try to decompress the initramfs file in /boot and see what's in there. It might also be possible to see it mounted under /initramfs.
| Where is fsck run? |
1,404,386,586,000 |
I need the source for a specific kernel version so that I can compile some additional modules against it.
When I type:
uname -r
I get
3.8.0-29-generic
I need this one.
uname -a
Linux "..." 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Where can I find this version? It doesn't seem to be available at https://www.kernel.org/.
|
You could just use apt to get the source,
check that the file /etc/apt/sources.list contains a line starting with
deb-src
and then use the command:
apt-get update
#if you want to download the source into the current directory use:
apt-get source linux-image-$(uname -r)
#if you want the source to be installed in the system directory use:
apt-get install linux-source-$(uname -r)
It will download the source in the folder:
/usr/src or /usr/sources
| get kernel source code |
1,404,386,586,000 |
When my Ubuntu server starts up, I see a message that says: kernel: [11.895392] init: failsafe main process (631) killed by TERM signal. I would like to know what process this is but I'm not sure where to look. When I search through my syslog and kernel.log, I don't see any evidence of processes starting and being given an identifier (PID).
I'd like to investigate my boot message (failsafe main process killed...) but first, I need to answer the question: When a process starts, where is that logged and does the PID that is assigned to it get logged as well?
I understand that a process will write a PID file to reference later as necessary, but once the process is killed, can I find out what PID it used to have?
|
First of all, your message already contains the program name:
kernel: [11.895392] init: failsafe main process (631) killed by TERM signal
This means that the process failsafe with PID 631 received a TERM signal.
To answer your original question: no, most Linux distributions don't log the PIDs of created processes by default, but you can use the audit framework and create the necessary rules to log all created processes. https://www.wzdftpd.net/docs/selinux/audit.html provides an introduction to these rules and should help you get started.
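As a starting point, a rule of the kind that framework uses to log every process creation (the key name is arbitrary; syntax per auditctl(8), 64-bit arch assumed):

```
# /etc/audit/audit.rules (or a file under /etc/audit/rules.d/)
# log every execve() system call, tagged with the key "exec-log"
-a always,exit -F arch=b64 -S execve -k exec-log

# inspect later, PIDs included, with:
#   ausearch -k exec-log
```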
| Does Linux log when a process is started, and the PID that gets assigned to it? Where can I find that log? [duplicate] |
1,404,386,586,000 |
When installing kernel modules, I have the option to strip out debugging symbols using INSTALL_MOD_STRIP=1. This saves significant disk space.
Does it also save memory? Why should one keep the debugging symbols in the kernel modules?
|
Debugging symbols just add extra information to an executable that helps when running a debugger such as GDB. They let the debugger relate the executable back to the source code, to show you where things such as segfaults occur at runtime.
If you are testing / hacking / making something inter-operate with the module then you need them. During normal operation they just take up space and can pose a security risk on a production machine. For example, if someone breaks into your system with user privileges, they can use a debugger to look for weaknesses in the currently running modules to gain root access.
It will save a small amount of space to remove them as well.
| When should I keep debugging symbols in kernel modules? |
1,404,386,586,000 |
The linux kernel documentation claims:
Rootfs is a special instance of ramfs (or tmpfs, if that's enabled),
which is always present in 2.6 systems. You can't unmount rootfs …
On all linux systems I tested (kernel > 2.6 and afaik normal boot procedure, e.g ubuntu 12.04), mount does not show a rootfs entry.
However, with a buildroot image when booting with an external .cpio archive, it's present.
In what cases is there a rootfs entry in mount?
|
On old systems, mount may disagree with /proc/mounts
Most of the time you won't see rootfs in /proc/mounts, but it is still mounted.
Can we prove that rootfs is still mounted?
1. On old systems, mount may disagree with /proc/mounts
man mount says: "The programs mount and umount traditionally maintained a list of currently mounted filesystems in the file /etc/mtab."
The old approach does not really work for the root filesystem. The root filesystem may have been mounted by the kernel, not by mount. Therefore entries for / in the /etc/mtab may be quite contrived, and not necessarily in sync with the kernel's current list of mounts.
I haven't checked for sure, but in practice I don't think any system that uses the old scheme will initialize mtab to show a line with rootfs. (In theory, whether mount shows rootfs would depend on the software that first installed the mtab file).
man mount continues: "the real mtab file is still supported, but on current Linux systems it is better to make it a symlink to /proc/mounts instead, because a regular mtab file maintained in userspace cannot reliably work with namespaces, containers and other advanced Linux features."
mtab is converted into a symlink in Debian 7, and in Ubuntu 15.04.
1.1 Sources
Debian report #494001 - "debian-installer: /etc/mtab must be a symlink to /proc/mounts with linux >= 2.6.26"
#494001 is resolved in sysvinit-2.88dsf-14. See the closing message, dated 14 Dec 2011. The change is included in Debian 7 "Wheezy", released on 4 May 2013. (It uses sysvinit-2.88dsf-41).
Ubuntu delayed this change until sysvinit_2.88dsf-53.2ubuntu1. That changelog page shows the change enters "vivid", which is the codename for Ubuntu 15.04.
2. Most of the time you won't see rootfs in /proc/mounts, but it is still mounted
As of Linux v4.17, this kernel documentation is still up to date. rootfs is always present, and it can never be unmounted. But most of the time you cannot see it in /proc/mounts.
You can see rootfs if you boot into an initramfs shell. If your initramfs is dracut, as in Fedora Linux, you can do this by adding the option rd.break to the kernel command line. (E.g. inside the GRUB boot loader).
switch_root:/# grep rootfs /proc/mounts
rootfs / rootfs rw 0 0
When dracut switches the system to the real root filesystem, you can no longer see rootfs in /proc/mounts. dracut can use either switch_root or systemd to do this. Both of these follow the same sequence of operations, which are advised in the linked kernel doc.
In some other posts, people can see rootfs in /proc/mounts after switching out of the initramfs. For example on Debian 7: 'How can I find out about "rootfs"'. I think this must be because the kernel changed how it shows /proc/mounts, at some point between the kernel version in Debian 7 and my current kernel v4.17. From further searches, I think rootfs is shown on Ubuntu 14.04, but is not shown on Ubuntu 16.04 with Ubuntu kernel 4.4.0-28-generic.
Even if I don't use an initramfs, and have the kernel mount the root filesystem instead, I cannot see rootfs in /proc/mounts. This makes sense as the kernel code also seems to follow the same sequence of operations.
The operation which hides rootfs is chroot.
switch_root:/# cd /sysroot
switch_root:/sysroot# mount --bind /proc proc
switch_root:/sysroot# grep rootfs proc/mounts
rootfs / rootfs rw 0 0
switch_root:/sysroot# chroot .
sh-4.4# cat proc/mounts
/dev/sda3 / ext4 ro,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
3. Can we prove that rootfs is still mounted?
Notoriously, a simple chroot can be escaped from when you are running as a privileged user. If switch_root did nothing more than chroot, we could reverse it and see the rootfs again.
sh-4.4# python3
...
>>> import os
>>> os.system('mount --bind / /mnt')
>>> os.system('cat proc/mounts')
/dev/sda3 / ext4 ro,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
/dev/sda3 /mnt ext4 ro,relatime 0 0
>>> os.chroot('/mnt')
>>>
>>> # now the root, "/", is the old "/mnt"...
>>> # but the current directory, ".", is outside the root :-)
>>>
>>> os.system('cat proc/mounts')
/dev/sda3 / ext4 ro,relatime 0 0
>>> os.chdir('..')
>>> os.system('bash')
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
bash-4.4# chroot .
sh-4.4# grep rootfs proc/mounts
rootfs / rootfs rw 0 0
However, the full switch_root sequence can not be reversed by this technique. The full sequence does
Change the current working directory (as in /proc/self/cwd), to the mount point of the new filesystem:
cd /newmount
Move the new filesystem, i.e. change its mount point, so that it sits directly on top of the root directory.
mount --move . /
Change the current root directory (as in /proc/self/root) to match the current working directory.
chroot .
In the chroot escape above, we were able to traverse from the root directory of the ext4 filesystem back to rootfs using .., because the ext4 filesystem was mounted on a subdirectory of the rootfs. The escape method does not work when the ext4 filesystem is mounted on the root directory of the rootfs.
I was able to find the rootfs using a different method. (At least one important kernel developer thinks of this as a bug in Linux).
http://archive.today/2018.07.22-161140/https://lore.kernel.org/lkml/[email protected]/
/* CURSED.c - DO NOT RUN THIS PROGRAM INSIDE YOUR MAIN MOUNT NAMESPACE */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h> /* open() */
#include <sys/mount.h>
#include <sched.h> /* setns() */
#include <sys/statfs.h>
int main() {
int fd = open("/proc/self/ns/mnt", O_RDONLY);
/* "umount -l /" - lazy unmount everything we can see */
umount2("/", MNT_DETACH);
/* reset root, by re-entering our mount namespace */
setns(fd, CLONE_NEWNS);
/* "stat -f /" - inspect the root */
struct statfs fs;
statfs("/", &fs);
}
Tested on Linux 4.17.3-200.fc28.x86_64:
$ make CURSED
cc CURSED.c -o CURSED
$ sudo unshare -m strace ./CURSED
...
openat(AT_FDCWD, "/proc/self/ns/mnt", O_RDONLY) = 3
umount2("/", MNT_DETACH) = 0
setns(3, CLONE_NEWNS) = 0
statfs("/", {f_type=RAMFS_MAGIC, f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={val=[0, 0]}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID}) = 0
^
^ result: rootfs uses ramfs code on this system
(I also confirmed that this filesystem is empty as expected, and writeable).
| Why is there no rootfs file system present on my system? |
1,404,386,586,000 |
I would like to know how I can search for the BSP (Board Support Package) sections in the Linux source code.
All comments are welcome.
|
A board support package may have pieces spread out in the kernel, but the typical parts are in arch/, and if your board requires drivers that aren't already part of the kernel, there may be some pieces in drivers/.
Each arch/ is set up a bit differently. Arm is an interesting one: look in arch/arm/, you'll see several cpu types and platforms there. If you look inside a cpu type, like arch/arm/mach-at91/, you'll see lots of files for the various specific cpus as well as board-*.c files, where board-specific peripherals are set up.
| How to find the BSP sections in the Linux source code? |
1,404,386,586,000 |
If one program, for example grep, is currently running, and a user executes another instance, do the two instances share the read-only .text sections between them to save memory? Would the sharing of the main executable's text be done similarly to shared libraries?
Is this behavior exhibited in Linux? If so, do other Unices do so as well?
If this is not done in Linux, would any benefit come from implementing executables that often run multiple instances in parallel as shared libraries, with the invoked executable simply calling a main function in the library?
|
Unix shares executables, and shared libraries are called shared (duh...) because their in-memory images are shared between all users.
I.e., if I run two instances of bash(1), and in one of them run, say, vim(1), I'll have one copy each of the bash and the vim executables in memory, and (as both programs use the C library) one copy of libc.
But even better: Linux demand-pages the code in from the on-disk copies of the above executables/libraries. So what stays in memory is just those pages that have been used recently. Code for rarely used vim commands or bash error handling, unused functions in libc, and so on just uses up disk space, not memory.
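This sharing is visible in /proc/<pid>/maps: start the same binary twice and the executable (r-xp) file-backed mappings show the same device, inode, and path in both processes, so the page cache can back them with one set of physical pages. A small sketch:

```shell
#!/bin/sh
sleep 5 & a=$!
sleep 5 & b=$!
# keep only the device, inode and path of each executable mapping:
grep ' r-xp ' "/proc/$a/maps" | awk '{print $4, $5, $6}' | sort > /tmp/a.txt
grep ' r-xp ' "/proc/$b/maps" | awk '{print $4, $5, $6}' | sort > /tmp/b.txt
diff /tmp/a.txt /tmp/b.txt && echo "both instances map the same file pages"
kill $a $b 2>/dev/null
```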
| Are .text sections shared between loaded ELF executables? |
1,404,386,586,000 |
I'm currently having to recompile my wireless driver from source every time I get a new kernel release. Thinking it would be awesomely hackerish to automate this process, I symlinked my Bash build script to /etc/kernel/postinst.d. I've verified that it does, in fact, run when the latest kernel update is installed, but one thing is left as a problem: the driver compiles for the existing running version of the kernel.
For example, if I'm running 3.0.0-14-generic and apt-get dist-upgrade to kernel 3.0.0-15-generic, then it compiles for kernel 3.0.0-14-generic, which doesn't really help me at all.
Is there a way to tell from my kernel postinst script which version of the kernel has been installed so I can pass it to my make call so it can be compiled for the newly installed kernel?
|
This is no actual answer to your question, just a pointer to a tool that might be related and helpful:
Do you have dkms installed? (Some information here, the alioth page seems down at the moment.) It's supposed to do just that, if I'm not misled. It requires the appropriate linux-headers package and the module/firmware/something-like-that package to be installed; and it works for all installed linux-image packages. (I can't say anything about a generic module, but it worked fine when I used it with the non-free nvidia module.)
(There're more links here, like the manpage and this linuxjournal.com article which provides a non-Debian-ecosystem-centric explanation of the program.)
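If the hook-script route is kept instead of dkms: Debian-style hooks in /etc/kernel/postinst.d are invoked with the newly installed kernel version as their first argument, so the script can build against that tree rather than $(uname -r). A sketch (the module source path is hypothetical):

```shell
#!/bin/sh
# /etc/kernel/postinst.d/zz-build-wifi  (sketch; /usr/src/mywifi is assumed)
version="${1:-$(uname -r)}"   # fall back to the running kernel when run by hand
echo "building against /lib/modules/$version/build"
# make -C "/lib/modules/$version/build" M=/usr/src/mywifi modules modules_install
# depmod "$version"
```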
| Running a script every time a new kernel is installed |
1,404,386,586,000 |
Using either NetworkManager or Wicd to manage wireless networking on my Wi-Fi-powered laptop, too often I get randomly disconnected, and I see these messages on /var/log/syslog:
Dec 10 05:21:26 debian dhclient: DHCPREQUEST on wlan0 to 10.0.0.2 port 67
Dec 10 05:21:26 debian dhclient: DHCPACK from 10.0.0.2
Dec 10 05:21:26 debian dhclient: bound to 10.0.0.4 -- renewal in 1662 seconds.
Dec 10 05:40:38 debian kernel: [105982.935232] iwlagn 0000:06:00.0: Microcode SW error detected. Restarting 0x82000000.
Dec 10 05:40:38 debian kernel: [105983.182330] Registered led device: iwl-phy0::radio
Dec 10 05:40:38 debian kernel: [105983.182586] Registered led device: iwl-phy0::assoc
Dec 10 05:40:38 debian kernel: [105983.182658] Registered led device: iwl-phy0::RX
Dec 10 05:40:38 debian kernel: [105983.182872] Registered led device: iwl-phy0::TX
I think 05:40:38 is the first line that indicates trouble.
With NetworkManager, I recover from this situation with the following command each time this happens:
nmcli nm wifi off && nmcli nm wifi on
Output of uname -a:
Linux debian 2.6.32-5-686-bigmem #1 SMP Thu Nov 25 19:30:54 UTC 2010 i686 GNU/Linux
notes:
Wi-Fi chipset is Intel's 4965.
I reproduced this problem on 2.6.32, 2.6.36 and 2.6.37-rc4 kernel versions.
|
This is a known bug with the iwlagn driver. Updating the driver and/or firmware may fix your issue:
https://bugs.launchpad.net/linux/+bug/200509
| Wi-Fi constantly disconnects |
1,404,386,586,000 |
I know that it's possible to extract the embedded initramfs cpio archive from a Linux kernel. I want to know if the reverse operation is possible; i.e., given a compiled kernel and an initramfs archive, how does one combine them?
I'm trying to achieve the same effect as this kernel config...
CONFIG_EFI=y
CONFIG_EFI_STUB=y
...
CONFIG_FB_EFI=y
...
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=..."
...
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="my_initrd.cpio"
...without actually configuring and/or compiling the kernel.
References:
http://www.kroah.com/log/blog/2013/09/02/booting-a-self-signed-linux-kernel/
|
Here is my solution:
you can create a combined .efi image
with kernel, command line and initramfs inside,
called a "unified kernel image".
Reference: Preparing a unified kernel image – ArchWiki.
You might want to focus on the “Manually” subsection.
| How to combine linux kernel and initrd without compiling? |
1,404,386,586,000 |
I'm trying to understand the Debian "booting from Hard Disk" installation manual.
The process is as follows: I copy a kernel image, an initrd ramdisk, and an ISO containing the installer to the hard drive, and then configure GRUB to start the kernel and ramdisk. But I also have to tell GRUB where the root filesystem is (it should be located in the ISO), so that the kernel can pivot root to it. However, the Debian-supplied GRUB configurations seem to specify the whole partition as the root filesystem, not the ISO file within it:
GRUB1:
title New Install
root (hd0,0)
kernel /boot/newinstall/vmlinuz
initrd /boot/newinstall/initrd.gz
GRUB2:
menuentry 'New Install' {
insmod part_msdos
insmod ext2
set root='(hd0,msdos1)'
linux /boot/newinstall/vmlinuz
initrd /boot/newinstall/initrd.gz
}
Why would that work? Is GRUB smart enough to mount the ISO file on the hard disk as the root filesystem, rather than the whole partition? Or do I have to dd the contents of the ISO straight onto the hard disk? The Debian manual is vague on this.
|
Copied from frostshutz's comment:
The initrd.gz (initramfs) contains the busybox userland and Debian scripts written for that purpose. GRUB 2 also supports loop-mounting an ISO directly, but usually just to grab the kernel/initrd from the ISO itself; once those are loaded, the ISO again has to be found and loop-mounted by the kernel/initrd.
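As an illustration of the comment, the question's GRUB 2 entry could pass the installer a hint about where the ISO lives; the iso-scan/filename parameter is understood by Debian's hd-media initrd, and the ISO path here is an assumption:

```
menuentry 'New Install' {
    insmod part_msdos
    insmod ext2
    set root='(hd0,msdos1)'
    linux /boot/newinstall/vmlinuz iso-scan/filename=/boot/newinstall/debian.iso
    initrd /boot/newinstall/initrd.gz
}
```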
| How to specify an ISO as the location of root filesystem for GRUB? |
1,404,386,586,000 |
When booting NetBSD, the old Tecra 720CDT that I have works quite nicely in 1024x768x15 mode with the VESA framebuffer.
I always have to activate VESA when booting the system:
> vesa on
> vesa 0x116
> boot netbsd
Now, I was able to somewhat automate this process by editing /boot.cfg:
menu=Boot normally:rndseed /var/db/entropy-file;vesa on;vesa 0x116;boot netbsd
No idea if this is preferable. I'd actually like to set this kind of behavior in the kernel itself. On OpenBSD, I'd simply use config to change the kernel settings. That, however, does not work on NetBSD; I'd have to recompile the kernel (that's my understanding).
Now, when looking through the config file, I couldn't find anything related to vesa or switching to framebuffer mode while booting. Is this even possible? If so, how do I do it?
|
AFAIK, editing /boot.cfg is the preferred method. You can even specify more human-readable modes; I am using (on -current, 7.99 in a VirtualBox VM)
menu=Boot normally:rndseed /var/db/entropy-file:;vesa 1024x768x32; boot netbsd
I think having this in the kernel somehow without it being compiled in would be bad - if you update your kernel you'd lose the setting. The /boot.cfg method is persistent and human-readable.
| How to enable VESA framebuffer as default in NetBSD 6.1 |
1,404,386,586,000 |
I have a Marvell Kirkwood ARM-based NAS server, a Zyxel NSA 310. I compiled my own 3.8 kernel and enabled ZCACHE, but I still see 256 MB of RAM. I am not sure whether the GNU free utility should show an extra amount of RAM. How do I find out if it is really working? Do I need to do some extra steps to make use of it? I have added "zcache" to the command line.
root@nas:~# free -m
total used free shared buffers cached
Mem: 247 218 29 0 7 166
-/+ buffers/cache: 43 203
Swap: 1427 0 1427
root@nas:~# zgrep CACHE /proc/config.gz
CONFIG_CLEANCACHE=y
CONFIG_ZCACHE=y
root@nas:~# dmesg | grep zcache
Kernel command line: console=ttyS0,115200 root=/dev/sda3 zcache
zcache: using lzo compressor
zcache: cleancache enabled using kernel transcendent memory and compression buddies
zcache: cleancache: ignorenonactive = 1
zcache: frontswap enabled using kernel transcendent memory and compression buddies
zcache: frontswap: excl gets = 1 active only = 1
zcache: created ephemeral local tmem pool, id=0
zcache: created persistent local tmem pool, id=1
zcache: created ephemeral local tmem pool, id=2
zcache: created ephemeral local tmem pool, id=3
zcache: created ephemeral local tmem pool, id=4
root@nas:~# cat /proc/cmdline
console=ttyS0,115200 root=/dev/sda3 zcache
I know it is compressing ("merging") pages, but how can I see the compression ratio or similar statistics?
|
zcache is buried inside the RAM and not easily visible with current tools. To see details you need to mount debugfs and look in /sys/kernel/debug/zcache, where there are a whole bunch of statistics, more than you could ever want. Frontswap is reported as pers_* (for persistent) and cleancache as eph_* (for ephemeral), although these could include other categories in the future. There are separate directories for cleancache and frontswap stores and loads.
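Reading those counters is just a matter of mounting debugfs; a sketch (the directory only exists when a zcache kernel is active, hence the fallback):

```shell
#!/bin/sh
# mount debugfs if it isn't already (needs root):
grep -q ' debugfs ' /proc/mounts || mount -t debugfs none /sys/kernel/debug 2>/dev/null
d=/sys/kernel/debug/zcache
if [ -d "$d" ]; then
    grep -H . "$d"/*   # compressed-byte vs page counters give a rough ratio
else
    echo "no zcache statistics found (zcache not active on this kernel)"
fi
```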
| How do I determine ZCACHE works on my box? |
1,404,386,586,000 |
I have the following output from running lspci -vv -s 00:00 on my single board computer running Linux.
07:05.0 RAID bus controller: Adaptec AAC-RAID (Rocket) (rev 03)
Subsystem: Adaptec ASR-2230S + ASR-2230SLP PCI-X (Lancer)
Control: I/O- Mem+ BusMaster+ SpecCycle+ MemWINV+ VGASnoop-
ParErr- Stepping- SERR+ FastB2B-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- SERR-
Latency: 64 (250ns min, 250ns max), Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 74
Region 0: Memory at f7a00000 (64-bit, non-prefetchable) [size=2M]
Region 2: Memory at f79ff000 (64-bit, non-prefetchable)
[Remaining output truncated]
The above is only example output and not exactly what I am getting but it contains the items of interest.
I understand most of the output from the lspci command, but I would like someone to explain the lines that begin with Region. What type of memory is the Region line specifying, and how might I access it? With that asked, I am trying to accomplish communication between two single board computers connected over the PCI bus. I should be able to talk to the other board directly; all there is on the bus is a PCI arbiter. This is what I've accomplished so far...
I created a Linux kernel module for outbound PCI traffic. Basically it maps all the way down from userspace (with a user space application) using the driver's mmap implementation. I write to the location returned by mmap and I actually see the traffic with a bus analyzer! Now on the other single board computer I try to read its sysfs resource for the PCI device, but I only see all FFs and no changes.
Any advice or explanation on how all of this memory mapping occurs, involving PCI, would be greatly appreciated.
|
lspci shows information about your PCI devices (depending on the options); you can check the man page for further info.
Regarding the Region lines in the output: these lines detail where the registers used by this component are allocated. They relate to memory mapping and how address space is assigned to each component.
Region 0: Memory at f7a00000 (64-bit, non-prefetchable) [size=2M]
Region 2: Memory at f79ff000 (64-bit, non-prefetchable)
These lines specify the register addresses used, the region size, and the address width (64 bits to address a register).
Look for further information about computer architecture if you want to go deeper into the way these addresses are used.
| Would someone please explain lspci -vv output? |
1,404,386,586,000 |
I have been struggling with this issue for a while and have done an exhaustive search for answers here and elsewhere before posting this question.
On my Asus X101H, the touchpad is not recognized as a touchpad. I have noticed that this problem does not only occur with my netbook, or Asus netbooks, but a whole host of netbooks.
The devices are identified as "Glidepads". From what I have gathered, it is a kernel issue, and it is up to those working on the kernel to resolve it. However, people making bug reports about this issue have had them closed without being resolved, etc.
All I want is for this miserable "glidepad" to be disabled while I am typing so it doesn't ruin what I am trying to write. On my notebooks, it works fine. But on netbooks, the only options present are for a mouse.
I tried everyone's suggestion of installing "gpointing...", but that doesn't work even when I choose to "disable touchpad while typing". It has no effect.
Does anyone have a solution for this issue? It affects Ubuntu, Kubuntu, Debian, and Mint (and probably many, many more.)
|
Try running sudo modprobe -r psmouse and report how it went; it solved the issue for me. If you want to enable the touchpad again, run sudo modprobe psmouse proto=imps. You can also make a simple script with these commands to toggle the touchpad this way.
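A sketch of such a toggle script (the modprobe invocations are the ones from the answer above; the helper that inspects lsmod output is my own addition). It prints the command to run, so the logic can be tested without root:

```shell
#!/bin/sh
# Decide whether psmouse should be loaded or unloaded, based on the
# `lsmod` output supplied on stdin, and print the matching command.
psmouse_toggle_cmd() {
    if grep -q '^psmouse' ; then
        echo 'modprobe -r psmouse'          # loaded -> disable the touchpad
    else
        echo 'modprobe psmouse proto=imps'  # not loaded -> re-enable it
    fi
}

# usage (as root): sh -c "$(lsmod | psmouse_toggle_cmd)"
```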
| Asus X101H - Touchpad not recognized (want to disable while typing) |
1,404,386,586,000 |
Wikipedia says that current Debian 8.2 Jessie is based on kernel
3.16.0, so I was wondering when a native version based on kernel 4.x
would be released and if the live kernel patching will be there as
feature with the 4.x.
I searched for a Debian roadmap on Google, but I found nothing about
the kernel.
|
While it is still not a given, officially most probably in the last quarter of 2016, with the release of Debian 9. In the meantime, you can start using testing, compile a 4.x kernel yourself, or use a version compiled by someone else. I am using Armbian on a Raspberry Pi-like device (Lamobo R1) running Jessie, with a v4.x kernel put together by the Armbian people. On my Intel servers at work I plan to move to v4 soon with Debian 8 as well.
| When will Debian switch to Linux 4.x? ( and support the live kernel patching ? ) |
1,404,386,586,000 |
I'd like to upgrade my kernel to try to fix a persistent issue I have with intermittent freezing.
I've tried manually installing the kernel, but it throws errors during configuration and then upon sudo apt upgrade it shows:
linux-headers-5.16.0-051600-generic : Depends: libssl3 (>= 3.0.0~~alpha1) but it is not installable
Is this something that can be worked around?
As it stands my Linux installation is unusable and I've been holding out for this kernel as my last thing to try before being forced back to Windows.
|
WARNING: the below method may break your system. You have been warned.
Ubuntu mainline kernel 5.15.7+ and 5.16 bumps the requirement from libssl1.1 (>= 1.1.0) to libssl3 (>= 3.0.0~~alpha1).
You can find the change from the header packages:
dpkg -I linux-headers-5.15.6-051506-generic_5.15.6-051506.202112010437_amd64.deb | grep Depends
# Depends: linux-headers-5.15.6-051506, libc6 (>= 2.34), libelf1 (>= 0.142), libssl1.1 (>= 1.1.0), zlib1g (>= 1:1.2.3.3)
dpkg -I linux-headers-5.15.7-051507-generic_5.15.7-051507.202112080459_amd64.deb | grep Depends
# Depends: linux-headers-5.15.7-051507, libc6 (>= 2.34), libelf1 (>= 0.142), libssl3 (>= 3.0.0~~alpha1), zlib1g (>= 1:1.2.3.3)
However, the package libssl3 is only available to Ubuntu 22.04: libssl3
Same as its parent package libssl-dev, 3.0+ is only available to Ubuntu 22.04 too: libssl-dev
Therefore, if you're running Ubuntu 21.10 (or below), apt cannot find the required libssl3 (>= 3.0).
You could try manually downloading and installing the package from Ubuntu 22.04:
https://packages.ubuntu.com/jammy/amd64/libssl3/download
# wget http://mirrors.kernel.org/ubuntu/pool/main/o/openssl/libssl3_3.0.1-0ubuntu1_amd64.deb
# sudo dpkg -i libssl3_3.0.1-0ubuntu1_amd64.deb
This is NOT recommended, as libssl3 is not included in Ubuntu 21.10 or below and Ubuntu 22.04 will not be formally released until April. However, libssl3 has almost the same dependencies as libssl1.1, so there should be no issue in using it on Ubuntu 21.10.
update
If you really need these new kernels on Ubuntu 20.04, download the following debs from Ubuntu 22.04:
libc6_2.34-0ubuntu3_amd64.deb
libc6-dev_2.34-0ubuntu3_amd64.deb
libc-bin_2.34-0ubuntu3_amd64.deb
libc-dev-bin_2.34-0ubuntu3_amd64.deb
libnsl2_1.3.0-2build1_amd64.deb
libnsl-dev_1.3.0-2build1_amd64.deb
libssl3_3.0.1-0ubuntu1_amd64.deb
locales_2.34-0ubuntu3_all.deb
rpcsvc-proto_1.4.2-0ubuntu5_amd64.deb
If you trust me, I made a copy to Google Drive: Google drive
Once you have downloaded all of the above into one folder, run:
# assume root and in this folder
dpkg --force-depends --install *.deb
apt --fix-broken install
Your Ubuntu 20.04 is now good for kernel 5.16. It was tested on my server for a week and nothing went wrong.
However, it is known that this still does NOT work on some systems and breaks them! Use at your own risk! Please wait for Ubuntu 22.04 in the coming April.
| How to use 5.16 kernel with Ubuntu 21.10? |
1,404,386,586,000 |
I am using Debian (Wheezy) with the 3.2.0-4 amd64 kernel and want to upgrade my kernel to a newer version, like 3.13.3 stable, but my Debian OS has no internet connection. I have installed both the "usb-modeswitch" and "usb-modeswitch-data" packages, but Debian doesn't detect my 3G USB modem dongle.
I've downloaded the 3.13.3 tar.xz kernel source from kernel.org. How can I compile and upgrade from 3.2 to 3.13.3 without an internet connection?
|
The easy way: backports
I assume you need the new kernel to get your modem to work. If you can live with 3.12 instead of 3.13, at least for now, then instead of recompiling the kernel from source, you can just use Debian Backports. [update: Now backports is up to 3.14]
You can manually grab the package from packages.debian.org (update: now 3.14) on a computer with an Internet connection. Also grab initramfs-tools (there will be a link on the page). Put both on a USB stick, and install with dpkg -i.
You can also look at the linux-image-amd64 package, or the similar one for your architecture, to find out the most-recent version.
Once you have an Internet connection on the computer, the backports webpage has full instructions on how to set it up so you get updates, but in short:
Edit /etc/apt/sources.list and add deb http://YOURMIRROR.debian.org/debian wheezy-backports main
To install a package from backports, use -t wheezy-backports, e.g., aptitude -t wheezy-backports install linux-image-amd64
The hard way: upstream kernel sources
Note that you'll lose Debian patches this way, unless you hand-apply them.
Configure the kernel as in Configuring, compiling and installing a custom Linux kernel but do not install it. (You could also swipe the Debian configuration file, but beware it builds almost everything, so will take a very long time to compile). Instead, run make deb-pkg. This will generate several Debian packages. You'll want to install the linux-image- and possibly linux-headers- and linux-firmware-image- ones. You don't need to install the (absolutely huge) linux-image-*-dbg package.
| How do I upgrade the Debian Wheezy kernel offline? |
1,404,386,586,000 |
Is it true that a single application can not allocate more than 2 GiBs even if the system has GiBs more free memory when using a 32-bit x86 PAE Linux kernel? Is this limit loosened by 64-bit x86 Linux kernels?
|
A 32-bit process has a 32-bit address space, by definition: “32-bit” means that memory addresses in the process are 32 bits wide, and if you have 2^32 distinct addresses you can address at most 2^32 bytes (4GB). A 32-bit Linux kernel can only execute 32-bit processes. Depending on the kernel compilation options, each process can only allocate 1GB, 2GB or 3GB of memory (the rest of the address space is reserved for the kernel when it's processing system calls). This is an amount of virtual memory, unrelated to any breakdown between RAM, swap, and mmapped files.
A 64-bit kernel can run 64-bit processes as well as 32-bit processes. A 64-bit process can address up to 2^64 bytes (16EB) in principle. On the x86_64 architecture, partly due to the design of x86_64 MMUs, there is currently a limitation of 128TB of address space per process.
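The sizes quoted above are easy to sanity-check with 64-bit shell arithmetic:

```shell
# 32-bit address space: 2^32 bytes, expressed in GiB
echo $(( (1 << 32) / (1 << 30) ))    # prints 4

# x86_64 user address space: 47 usable address bits, expressed in TiB
echo $(( (1 << 47) / (1 << 40) ))    # prints 128
```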
| How much RAM can an application allocate on 64-bit x86 Linux systems? |
1,404,386,586,000 |
Writing to a custom character device using
cat 123 > /dev/chardev
gives
cat: write error: Invalid argument
I have changed the permissions to 666 and even tried it with sudo. Still the same results.
I also tried echo in a similar way.
I use Arch Linux with kernel 4.8.
Edit: The code for the driver
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <asm/uaccess.h>
//Prototypes
static int __init init(void);
static void __exit cleanup(void);
static int device_open(struct inode *,struct file *);
static int device_release(struct inode *, struct file *);
static ssize_t device_read(struct file *, char *, size_t, loff_t *);
static ssize_t device_write(struct file *, const char *, size_t, loff_t *);
#define SUCCESS 0
#define DEVICE_NAME "chardev" /* Dev name as it appears in /proc/devices*/
#define BUF_LEN 80 /* Max length of the message from the device */
static int Major; //Major number of the devices
static int Device_Open = 0;
static char msg[BUF_LEN]; //Message given when asked
static char *msg_Ptr;
static struct file_operations fops = {
.read = device_read,
.write = device_write,
.open = device_open,
.release = device_release
};
static int __init init(){
Major = register_chrdev(0,DEVICE_NAME,&fops);
if(Major < 0){
printk(KERN_ALERT "Failure in registering the device. %d\n", Major);
return Major;
}
printk(KERN_INFO "%s registered with major %d \n",DEVICE_NAME,Major);
printk(KERN_INFO "create a device with 'mknod /dev/%s c %d 0'\n",DEVICE_NAME,Major);
printk(KERN_INFO "Try to cat and echo the file and shit man.\n");
return SUCCESS;
}
static void __exit cleanup(){
unregister_chrdev(Major, DEVICE_NAME);
printk(KERN_ALERT "Unregistered the device %s i guess? \n", DEVICE_NAME);
}
static int device_open(struct inode *inode,struct file *file){
static int counter = 0;
if(Device_Open)
return -EBUSY;
Device_Open++;
sprintf(msg, "I already told you %d times Hello world!\n", counter++);
msg_Ptr = msg;
try_module_get(THIS_MODULE);
return SUCCESS;
}
static int device_release(struct inode *inode,struct file *file){
Device_Open--;
module_put(THIS_MODULE);
return 0;
}
static ssize_t device_read(struct file *filp, char *buffer, size_t length, loff_t * offset){
int bytes_read = 0;
if(*msg_Ptr == 0)
return 0;
while(length && *msg_Ptr){
put_user(*(msg_Ptr++),buffer++);
length--;
bytes_read++;
}
return bytes_read;
}
static ssize_t device_write(struct file *filp,const char *buff, size_t len, loff_t *off){
printk(KERN_ALERT "You cannot write to this device.\n");
return -EINVAL;
}
module_init(init);
module_exit(cleanup);
So here we can see that I have even defined a device_write function and assigned it to .write in the fops struct. Isn't it supposed to accept the write command and print that statement in the log?
|
In the kernel, each driver provides a series of methods for the various operations that can be performed on a file: open, close, read, write, seek, ioctl, etc. These methods are stored in a struct file_operations. For devices, the methods are provided by the driver that registered that particular device (i.e. that particular combination of block/char, major number and minor number).
A driver may implement only some of the methods; defaults are provided. The defaults generally do nothing and return either success (if it's sensible to do nothing for that method) or EINVAL (if there is no sensible default and the lack of a method means that the feature is not supported).
“Write error: Invalid argument” means that the write method of the driver returns EINVAL. The most likely explanation is that this driver doesn't have a write method at all. It is fairly routine for drivers not to support certain actions, e.g. some drivers only support ioctl and not read/write, some drivers are intrinsically unidirectional (e.g. an input device) and only support read and not write or vice versa.
“Invalid argument” has nothing to do with permissions, it's what the device is able to do. You'd get a permission error if you didn't have write permission, but you do have permission to talk to the driver. It's just that what you're asking the driver to do is something that it has no concept of.
| cat: write error: Invalid argument |
1,404,386,586,000 |
I'm completely confused about explanation of virtual memory in TLDP:
http://www.tldp.org/LDP/tlk/kernel/processes.html#tthFtNtAAB
They say:
Each individual process runs in its own virtual address space and is not capable of interacting with another process except through secure, kernel managed mechanisms.
"Own virtual address space" reads to me as each process having its own 4GB of RAM in 32-bit mode: 0000:0000 - FFFF:FFFF. But they didn't mean that, right? If two processes point to the virtual address 1111:1111, do they mean the same physical address, i.e. is the same 4GB virtual address space shared by all processes?
Besides, I've read about Windows here that it really has individual virtual address spaces for each process, a separate 2GB for user mode and a shared 2GB for kernel mode, so two different processes can both point to 1111:1111, which maps to different physical memory. Do they? :)
UPDATE: illustrations to my question. Which of the pictures is right for Linux:
Case 1:
Case 2
|
Linux and Windows work pretty much the same here. Every process gets its own "virtual" address space. This doesn't mean that the memory is actually physically available (obviously most 32-bit computers never had enough memory); that's why it's virtual.
Also, the addresses used there don't correspond to physical addresses. Thus the physical memory at AAAA:0000 could correspond to the virtual address 9128:2af2; the point is that you don't have to care. All an application is concerned with is where the thing of interest resides in its own address space. And yes, that also means that two applications can point to the same address in their own view of memory and get different things.
There are also a lot of interesting things that can be mapped in there other than actual physical memory pages of the process, for example addresses belonging to devices (think of a video card), dynamically linked libraries, or memory that's shared between processes (that's part of what's meant by "secure, kernel managed mechanisms").
Let me recommend a textbook like Tanenbaum, Operating Systems, if you want to delve a little deeper into virtual memory and process address space layout; if you can't easily get hold of one, http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory also makes a good read.
| Misleading explanation of Virtual Memory in TLDP |
1,404,386,586,000 |
I know that Linux systems are typically multiprogrammed, which means that multiple processes can be active at the same time. Can there be multiple kernels executing at the same time?
|
Sort of. Check out User-mode Linux.
| Can there be multiple kernels executing at the same time? |
1,404,386,586,000 |
In short:
How do I extract the VERSION, SUBVERSION and PATCHLEVEL numbers from a system backup .img, ideally without root permissions?
Extended:
From the following page:
https://www.raspberrypi.org/downloads/raspbian/
It is provided a Debian zip extracted as .img, which represents a full system backup of a Debian/Raspian system for arm architecture.
For the generation of a custom kernel, it is required to know the VERSION, SUBVERSION and PATCHLEVEL of the system, equivalent to what is provided by the typical
$ uname -r
4.9.0-3-amd64
The easiest way is to load the system directly and run the command, but that is not applicable in this case.
Goal:
The kernel of the image need to be patched and cross-compiled. My intention is to create a script for this process, so it may be "easily" applied further when kernel updates come.
|
This seems to work on the 2017-09-07-raspbian-stretch-lite.img image at that site:
$ sudo kpartx -rva 2017-09-07-raspbian-stretch-lite.img
add map loop0p1 (252:19): 0 85622 linear 7:0 8192
add map loop0p2 (252:20): 0 3528040 linear 7:0 94208
$ sudo mount -r /dev/mapper/loop0p1 mnt
$ LC_ALL=C gawk -v RS='\37\213\10\0' 'NR==2{printf "%s", RS $0; exit}
' < mnt/kernel.img | gunzip | grep -aPom1 'Linux version \S+'
Linux version 4.9.41+
(where \37\213\10\0 identifies the start of gzipped data).
As non-root, and assuming the first partition always starts 4MiB into the image, using the GNU mtools to extract the kernel.img from that vfat partition:
$ MTOOLS_SKIP_CHECK=1 mtype -i 2017-09-07-raspbian-stretch-lite.img@@4M ::kernel.img|
LC_ALL=C gawk -v RS='\37\213\10\0' 'NR==2{printf "%s", RS $0; exit}' |
gunzip | grep -aPom1 'Linux version \K\S+'
4.9.41+
If not, on systems with /dev/fd support (and GNU grep):
MTOOLS_SKIP_CHECK=1 MTOOLSRC=/dev/fd/3 mtype z:kernel.img \
3<< EOF 4< 2017-09-07-raspbian-stretch-lite.img |
drive z:
file="/dev/fd/4"
partition=1
EOF
LC_ALL=C gawk -v RS='\37\213\10\0' 'NR==2{printf "%s", RS $0; exit}' |
gunzip | grep -aPom1 'Linux version \K\S+'
(on other systems, use file="2017-09-07-raspbian-stretch-lite.img", the /dev/fd/4 is just for making it easier to adapt to arbitrary file names)
From the zip file, you should be able to get away without extracting the whole image, just the first partition, with:
#! /bin/zsh -
zip=${1?zip file missing}
MTOOLS_SKIP_CHECK=1 mtype -i =(
unzip -p -- "$zip" | perl -ne '
BEGIN{$/=\512}
if ($. == 1) {
($offset, $size) = unpack("x454L<2",$_)
} elsif ($. > $offset) {
print;
if ($. == $offset + $size - 1) {exit}
}') ::kernel.img |
LC_ALL=C gawk -v RS='\37\213\10\0' 'NR==2{printf "%s", RS $0; exit}' |
gunzip | grep -aPom1 'Linux version \K\S+'
| How to extract the linux version from a `.img` backup? |
1,404,386,586,000 |
The book 'Understanding the Linux Kernel' says that 'for something abstract such as math functions, there may be no reason to make system calls'. Can anyone please explain how a system call is not required for math functions? Isn't the CPU involved in mathematical operations? Can anyone give an example of such a program?
|
I don't have that book to check, but assuming it is using the normal meaning of the term, a system call is a call into the kernel to perform some operation that the hardware considers privileged, or that the hardware is unaware of. This is used to enforce permissions, etc., on the system. So you need to make a system call to (among many other things):
read from a file (the kernel must check that the permissions allow you to read from said file, and then the kernel carries out the actual instructions to the disk to read the file)
signal a process (processes do not exist as far as the hardware is concerned, they are an abstraction provided by the kernel, etc.)
obtain additional memory (the kernel must make sure you don't exceed the ulimit, make sure two processes don't claim the same RAM, etc.)
Math is not one of those things. Typically, it requires no intervention from the kernel.
| API with no system calls |
1,404,386,586,000 |
Over the past few years, various live kernel patching techniques have become popular among sysadmins who strive to ensure the highest possible uptimes for their systems.
For that process to be possible, a human being prepares custom patches, which are then typically distributed to paying customers, and - sometimes - free of charge to home users.
Why is it not possible to automatically create these patches using the difference between the source code of the running kernel version and the latest available? As I understand it, server kernels, which could profit the most from this, typically only undergo major changes once every couple of years, and otherwise only receive major bugfix and security updates, seemingly making this even easier. Likewise, if stability were the concern, it would seem quite simple to set up a system where volunteers running machines of relatively low importance would build their patches first, and automatically report back on how well they work.
Yet, none of this happens. What am I missing there that makes this the case?
|
We like to think of running programs like the static source code that creates them. But they are really continually changing. Likewise the kernel in memory is NOT the same as the kernel on disk.
To quote Dijkstra in his letter "goto considered harmful"...
My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making" of the corresponding process is delegated to the machine.
My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.
From this I would infer that it is a bad idea to have a program or kernel in memory that isn't the result of loading the kernel from disk. If nothing else you want to know that you can reboot and end up with the same kernel as you are running now.
As a sysadmin you want to know you ended up with a bona fide regular kernel, not some Frankenstein's monster, because your live kernel had subtle differences from the one they patched.
And live patching is very hard indeed. It is technically impossible to automatically generate live patches.
It's important to understand that program code effectively rewrites itself.
int X = 10;
void run(){
X=5;
}
In this code example X=10 is never executed as code. The number 10 is placed in the location "X" by the compiler. When the 3rd line is executed at run time it replaces the value at location "X". It literally overwrites the value, meaning the number 10 disappears from the running program code entirely.
Now we try to live patch this with:
int X = 20;
void run(){
X=15;
}
What should X be patched to: 20 or 15? Should we patch it at all or just leave it? We are not just changing code here; we are changing dynamically generated values. You might think that because they are dynamically generated you might not need to change them, but if we don't change them, do we know 5 or 10 is still a valid value in the new code? This cannot be done automatically!
In short there are techniques with associated tools that can create live patches, but using them and testing the result requires experts. Releasing these tools and expecting home users to understand how to use them is a good way for a lot of home users to screw up their system.
| Why can't entire kernels be patched live? |
1,404,386,586,000 |
I am trying to upgrade my fedora system (21 → 22) using fedup. I removed all old kernels using package-cleanup but fedup still needs 2MB more on /boot.
These are the files in /boot:
-rw-r--r--. 1 root root 153K Sep 22 17:52 config-4.1.8-100.fc21.x86_64
drwxr-xr-x. 4 root root 1.0K May 25 09:38 efi
-rw-r--r--. 1 root root 181K Oct 21 2014 elf-memtest86+-5.01
drwxr-xr-x. 2 root root 3.0K May 25 09:47 extlinux
drwxr-xr-x. 6 root root 1.0K Oct 23 13:32 grub2
-rw-------. 1 root root 38M Aug 18 2014 initramfs-0-rescue-91b91d0aa1ed43eab9d2bcf5b8669540.img
-rw-r--r--. 1 root root 19M Oct 11 11:58 initramfs-4.1.8-100.fc21.x86_64.img
-rw-r--r--. 1 root root 41M May 22 05:12 initramfs-fedup.img
-rw-r--r--. 1 root root 552K May 25 09:51 initrd-plymouth.img
drwx------. 2 root root 12K Aug 18 2014 lost+found
-rw-r--r--. 1 root root 179K Oct 21 2014 memtest86+-5.01
-rw-------. 1 root root 3.0M Sep 22 17:52 System.map-4.1.8-100.fc21.x86_64
-rwxr-xr-x. 1 root root 5.0M Aug 18 2014 vmlinuz-0-rescue-91b91d0aa1ed43eab9d2bcf5b8669540
-rwxr-xr-x. 1 root root 5.7M Sep 22 17:52 vmlinuz-4.1.8-100.fc21.x86_64
-rw-r--r--. 1 root root 5.7M May 21 18:46 vmlinuz-fedup
initramfs-0-rescue-... is taking up the most space. It was created when I upgraded my OS from the previous version (Fedora 20). I guess this file can be removed. Is there a way to remove it without manually deleting it using rm? If not this file, which other files can safely be deleted? (There is a folder called /efi/EFI/fedora/fonts, but I think the rescue files are the most dispensable.)
|
The vmlinuz-0-rescue-* and initramfs-0-rescue-* files can be safely removed with rm. They're not owned by any package, and to my knowledge there isn't any tool for deleting them (although you can create new ones with dracut).
After removing, run
grub2-mkconfig -o /boot/grub2/grub.cfg
to regenerate your grub config so they don't show up in the boot menu.
These images are the largest, by the way, because they are machine-independent — they'll boot on any system. The other kernel/ramfs combinations leave out some modules not needed for the hardware on the machine they were installed on, and may not be portable to other systems. The rescue image lets you fix that if need be.
(As for other files, you can also remove the fedup ones. Those were used in the upgrade, and should have been removed automatically.)
| Removing the rescue image from /boot on fedora |
1,320,588,194,000 |
My immediate objective is to compile a small kernel for my laptop without sacrificing usability. I am familiar with the kernel compilation steps (don't necessarily understand the process). What are the options I can get rid of in menuconfig for a faster, slimmer kernel? I have been using the trial and error method, i.e uncheck unused filesystems and drivers, but this is a painfully slow process. Can somebody point me towards things I should not touch or a better way of going about this process? This little "project" is for recreation only.
System Specs and OS:
i7 580M, Radeon HD5850, 8Gb DDR3, MSI Motherboard
x86_64 Ubuntu 11.10.
|
Unchecking filesystems and drivers isn't going to reduce the size of the kernel at all, because they are compiled as modules and only the modules that correspond to hardware that you have are loaded.
There are a few features of the kernel that can't be compiled as modules and that you might not be using. Start with Ubuntu's .config, then look through the ones that are compiled in the kernel (y, not m). If you don't understand what a feature is for, leave it alone.
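A quick way to gauge how much of a distro .config actually contributes to the kernel image is to count the built-in (=y) options against the modular (=m) ones; only the former are compiled into the image itself. A small helper sketch:

```shell
# Count built-in (=y) versus modular (=m) options in a kernel .config
config_summary() {
    printf 'built-in: %s\n' "$(grep -c '=y$' "$1")"
    printf 'modules:  %s\n' "$(grep -c '=m$' "$1")"
}

# e.g. on Ubuntu: config_summary /boot/config-"$(uname -r)"
```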
Most of the kernel's optional features are optional because you might not want them on an embedded system. Embedded systems have two characteristics: they're small, so not wasting memory on unused code is important, and they have a dedicated purpose, so there are many features that you know you aren't going to need. A PC is a general-purpose device, where you tend to connect lots of third-party hardware and run lots of third-party software. You can't really tell in advance that you're never going to need this or that feature. Mostly, what you'll be able to do without is support for CPU types other than yours and workarounds for bugs in chipsets that you don't have (what few aren't compiled as modules). If you compile a 64-bit kernel, there won't be a lot of those, not nearly as many as a 32-bit x86 kernel where there's quite a bit of historical baggage.
In any case, you are not going to gain anything significant. With 8GB of memory, the memory used by the kernel is negligible.
If you really want to play around with kernels and other stuff, I suggest getting a hobbyist or utility embedded board (BeagleBoard, Gumstix, SheevaPlug, …).
| Stripped down Kernel for a Laptop |
1,320,588,194,000 |
In the Linux Kernel CONFIG_NO_HZ is not set. But an initial reading suggests that setting that option would be nice from a performance point of view. But reading some posts like this made me think again.
Why is CONFIG_NO_HZ not set by default, and why is there no performance improvement when it is enabled?
|
The performance improvement is not visible to everyone, just to certain users for whom RT kernels really matter: DSP, audio/video processing, and so on. So that config option is not universally beneficial, hence it is disabled by default.
| Why CONFIG_NO_HZ is not set by default |
1,320,588,194,000 |
How to enable Ethernet over USB support in Linux kernel?
Which driver (like CONFIG_USB_USBNET) is related to this support?
Is EEM (Ethernet Emulation) regarding this support or not?
|
You need CONFIG_USB_USBNET together with whatever CONFIG_USB_NET_* module you need for your USB device.
The only thing I can find in my config that could match your EEM is CONFIG_USB_NET_CDC_EEM, but I don't have that enabled, as it is for a USB device that I don't own.
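To verify what a given kernel was built with, grep its config for those symbols. A small helper sketch that takes the config file as an argument (on a running system that is typically /boot/config-$(uname -r), or /proc/config.gz decompressed with zcat first):

```shell
# Print the usbnet-related settings from an uncompressed kernel config
usbnet_check() {
    grep -E '^CONFIG_USB_(USBNET|NET_CDC_EEM)=' "$1"
}

# e.g.: usbnet_check /boot/config-"$(uname -r)"
```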
| Enabling ethernet over USB support in Linux kernel |
1,320,588,194,000 |
I am compiling an image for wifi devices using OpenWrt. Following the instructions, I copy a simple .config file to the top-level directory.
CONFIG_TARGET_ar71xx=y
CONFIG_TARGET_ar71xx_generic=y
CONFIG_TARGET_ar71xx_generic_XXX_OpenWrt_Router=y
Then I run the command make menuconfig. The outcome of this command is a .config file that now contains the full default configuration.
However, I don't know where all the new configuration comes from. If I just change CONFIG_TARGET_ar71xx_generic_XXX_Router=y to CONFIG_TARGET_ar71xx_generic_YYY_Router=y then the outcome of make defconfig must be a lot different.
|
OpenWrt stores its profiles in the directory
target/linux/<target system>/<subtarget>/profiles
You can set target system and subtarget with command make menuconfig:
In my case:
target system = ar71xx
subtarget = generic
So the directory would be:
target/linux/ar71xx/generic/profiles
In this directory, you will find some predefined profiles, stored in <profile>.mk files. Those files define which packages will be used by default when a target profile is selected.
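For scripting, the profiles path can even be derived from the target symbols in the .config itself. A small helper sketch (my own, not part of OpenWrt), relying on the convention that the bare target symbol contains no extra underscores:

```shell
# Map the CONFIG_TARGET_* symbols of an OpenWrt .config to the
# corresponding profiles directory.
profiles_dir() {
    # target system: CONFIG_TARGET_<target>=y (no further underscores)
    target=$(sed -n 's/^CONFIG_TARGET_\([a-z0-9]*\)=y$/\1/p' "$1" | head -n1)
    # subtarget: CONFIG_TARGET_<target>_<subtarget>=y
    subtarget=$(sed -n "s/^CONFIG_TARGET_${target}_\([a-z0-9]*\)=y$/\1/p" "$1" | head -n1)
    echo "target/linux/$target/$subtarget/profiles"
}

# With the sample .config from the question this prints:
# target/linux/ar71xx/generic/profiles
```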
| Where does make defconfig get its configuration in OpenWrt? |
1,320,588,194,000 |
What does it mean to install a new version of the kernel? My Linux box gave me this message when I was updating:
NOTE, 3.8.13 was the last maintained maintenance release by Greg Kroah-Hartman.
It is recommend to move on to linux310-series.
What I want to know is,
Is it really that simple to only change the Linux kernel?
Is the Linux kernel like a simple executable file that can be swapped out for another Linux kernel?
What happens if I install a new kernel while the box is already running another version of the kernel?
Right now I'm using Kernel 3.8.13.8-1. Is it really okay to move to the linux310-series as the above update message says?
|
Yes.
Yes, sort of. It's not simple and it's not "just" an executable file, but you can swap between Linux kernels very easily. This doesn't carry over to every kernel, though: you can probably replace a FreeBSD kernel with a newer FreeBSD kernel, and as stated above you can upgrade a Linux kernel pretty easily, but you cannot easily replace a Linux kernel with a FreeBSD kernel. See the Wikipedia page on kernels. I'd link you to the "what is a kernel" question, but I can't find it.
Your box will continue to use your current kernel. It depends on your bootloader and distribution, but when you reboot, you'll probably be able to choose between the two kernels.
Probably. Upgrading your kernel isn't actually a big deal, especially since (if you're smart, and keep your old kernel around just in case) you can go back to the old one if the new kernel doesn't work for some reason. Sometimes there are problems with new kernel versions, especially if you have exotic hardware, but I've never seen any myself.
| What does it mean to install a kernel? |
1,320,588,194,000 |
init is the first task executed after the kernel is loaded, right?
Then who is its owner?
Also, I can see [swapper/0] [swapper/1] ... [swapper/7] all having PID 0:
PID PPID CPU TASK ST %MEM VSZ RSS COMM
0 0 0 c180b020 RU 0.0 0 0 [swapper/0]
0 2 1 f7550ca0 RU 0.0 0 0 [swapper/1]
0 2 2 f7554bc0 RU 0.0 0 0 [swapper/2]
0 2 3 f7570ca0 RU 0.0 0 0 [swapper/3]
0 2 4 f7574bc0 RU 0.0 0 0 [swapper/4]
0 2 5 f75c8ca0 RU 0.0 0 0 [swapper/5]
0 2 6 f75ccbc0 RU 0.0 0 0 [swapper/6]
0 2 7 f75f0ca0 RU 0.0 0 0 [swapper/7]
1 0 2 f7480000 IN 0.1 4676 2568 init
2 0 5 f7480ca0 IN 0.0 0 0 [kthreadd]
|
init is a user-space process that always has PID=1 and PPID=0. It's the first user-space program spawned by the kernel once everything is ready (i.e. essential device drivers are initialised and the root filesystem is mounted). As the first process launched, it doesn't have a meaningful parent.
The other 'processes' in your extract are indeed kernel tasks.
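You can confirm the PID=1/PPID=0 claim on any running Linux system by reading /proc/1/stat, where the first field is the PID and the fourth is the PPID:

```shell
# PID 1's parent PID is 0: it was spawned directly by the kernel,
# so there is no real parent process.
# (Assumes PID 1's command name contains no spaces, which is the usual case.)
awk '{print "pid=" $1, "ppid=" $4}' /proc/1/stat
# prints: pid=1 ppid=0
```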
| init: is it a user thread or a kernel thread? |
1,320,588,194,000 |
I'm in a situation where I have a 10/100/1000-capable PHY cabled only to support 10/100.
The default behaviour is to use autonegotiation to find the best mode.
At the other end, using a gigabit-capable router results in a non-working interface; I guess autonegotiation never converges.
I've heard some people tried with a 100Mbps switch and it works fine.
I'm able to get it working using ethtool, but this is quite frustrating:
ethtool -s eth1 duplex full speed 100 autoneg off
What I'd like to do is keep autonegotiation but withdraw 1000baseT/Full from the choices, so that the link ends up running seamlessly at 100Mbps. Is there any way to achieve that using ethtool or kernel options? (I didn't find a thing on my 2.6.32 kernel...)
(Let's just say some strange dude comes to me with a 10Mbps switch; I need this interface to work with that switch from another century.)
|
The thing with autonegotiation is that if you turn it off from one end, the other side can detect the speed but not the duplex mode, which defaults to half. Then you get a duplex mismatch, which is almost the same as the link not working. So if you disable autonegotiation on one end, you practically have to disable it on the other end too.
(Then there's the thing that autonegotiation doesn't actually test the cable, just what the endpoints can do. This can result in a gigabit link over a cable that only has two pairs, and cannot support 1000Base-T.)
But ethtool seems capable of telling the driver what speed/duplex modes to advertise. ethtool -s eth1 advertise 0x0f would allow all 10/100 modes but not 1G.
advertise N
Sets the speed and duplex advertised by autonegotiation. The
argument is a hexadecimal value using one or a combination of
the following values:
0x001 10baseT Half
0x002 10baseT Full
0x004 100baseT Half
0x008 100baseT Full
0x010 1000baseT Half (not supported by IEEE standards)
0x020 1000baseT Full
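The 0x0f mask mentioned above is just the sum of the four 10/100 values from that table. A quick sanity check (the interface name eth1 is taken from the question; the final ethtool call is left as a comment since it needs real hardware):

```shell
# 10baseT Half/Full + 100baseT Half/Full = 0x001+0x002+0x004+0x008 = 0x0f
mask=$(( 0x001 + 0x002 + 0x004 + 0x008 ))
printf 'ethtool -s eth1 advertise 0x%02x\n' "$mask"
# prints: ethtool -s eth1 advertise 0x0f
# After applying it, 'ethtool eth1' should list only 10/100 modes
# under "Advertised link modes".
```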
| Remove SOME advertised link modes with ethtool |
1,320,588,194,000 |
As I understand it, part of the Unix identity is that it has a microkernel delegating work to highly modular file processes. So why is Linux still considered "Unix-Like" if it strays from this approach with a monolithic kernel?
|
I believe the answer lies in how you define "Unix-like". As per the Wikipedia entry for "Unix-like", there doesn't seem to be a standard definition.1
A Unix-like (sometimes referred to as UN*X or *nix) operating system
is one that behaves in a manner similar to a Unix system, while not
necessarily conforming to or being certified to any version of the
Single UNIX Specification.
There is no standard for defining the term, and some difference of
opinion is possible as to the degree to which a given operating system
is "Unix-like".
The term can include free and open-source operating systems inspired
by Bell Labs' Unix or designed to emulate its features, commercial and
proprietary work-alikes, and even versions based on the licensed UNIX
source code (which may be sufficiently "Unix-like" to pass
certification and bear the "UNIX" trademark).
Probably the most obvious reason is that UNIX and MINIX are antecedents of Linux, having inspired its creation.2
Torvalds began the development of the Linux kernel on MINIX and
applications written for MINIX were also used on Linux. Later, Linux
matured and further Linux kernel development took place on Linux
systems.
Linus Torvalds had wanted to call his invention Freax, a portmanteau
of "free", "freak", and "x" (as an allusion to Unix).
Whether a system is monolithic or microkernel does not seem to be considered when calling an operating system "Unix-like". At least, not nearly as often as whether the system is POSIX-compliant or mostly POSIX-compliant.
| Why is Linux "Unix-like" if its kernel is monolithic? |
1,320,588,194,000 |
I read in a book written by Robert Love that:
Linux supports the dynamic loading of kernel modules.
He said this is the difference between Linux and Unix, but I seem to recall there is also KLD in FreeBSD? So can KLD also be seen as dynamic loading of kernel modules?
|
KLD is indeed dynamic loading of kernel modules. In fact, many old-school Unixes also have loadable kernel modules nowadays. Your book must be quite old :)
| "Linux supports the dynamic loading of kernel modules. " |
1,320,588,194,000 |
One of my (several) USB drives is reporting this error in dmesg:
usb 5-1: device descriptor read/8, error -110
How do I find out the /dev/sdX name of all my USB devices?
In reverse: how do I find out the USB number of a device, given its /dev/sdX name?
UPDATE:
Many excellent answers were given here. I would expect this information to be in a standard tool like lsusb, though sadly it's not (yet). I'm choosing to award @sudodus, as his response seems to best answer how to get the name of all drives.
ANSWER:
To generate more clear output, I suggest this somewhat longer command
find /sys/ -name dev -path '*usb*block*' | grep -v "sd../" | grep -Po "\/usb.*\/sd."
|
I did some testing with USB mass storage devices (USB pendrives and USB connected SSDs) in my computer and think the output of
find /sys/ -name dev -path '*usb*block*'
can help you match the USB number of a device with its /dev/sdX name.
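To see what that find pipeline (plus the grep post-processing from the question's update) actually extracts, here it is run against a simplified mock of the sysfs layout. The mock path is a stand-in: real /sys paths contain more intermediate components (host/target/LUN directories), but the matching logic is the same:

```shell
# Mock a sysfs-like tree: USB bus 5, port 1, holding block device sdb
mkdir -p /tmp/mockusb/usb5/5-1/block/sdb
touch /tmp/mockusb/usb5/5-1/block/sdb/dev

# Same pattern as the real command, pointed at the mock tree
find /tmp/mockusb/ -name dev -path '*usb*block*' | grep -Po '/usb.*/sd.'
# prints: /usb5/5-1/block/sdb
```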
Another method is to match lsblk and lsusb command lines and to identify them via 'model/product' and 'serial' tags,
lsblk -do name,model,serial,hotplug|grep -e HOTPLUG$ -e 1$
sudo lsusb -v |grep -i -e '^BUS .* Device' -e product -e serial
| How to get a USB device Name from its Number |
1,320,588,194,000 |
I know there are *.conf files in the /usr/lib/sysctl.d and/or /etc/sysctl.d folders, ready to establish kernel parameters on boot. But they are general;
what I want is to customize some of these parameters (say, net.ipv4.icmp_echo_ignore_all) depending on which user is logged in; that is, I want per-user kernel parameters.
Is this possible, or is what I'm saying complete silliness?
|
What @Tomasz says is true: those are kernel parameters, so they are "unique" (system-wide)!
Anyway, you can achieve some result within that limit
...in the OP, @Osqui doesn't say users are simultaneously logged in...
by sketching out a script, executed when users log in/out, that uses the sysctl command.
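As a rough illustration of that idea, here is a sketch. Everything in it is an assumption: the username, the function name, and how it gets run at login (e.g. from /etc/profile.d/ or a PAM session hook). It only prints the sysctl command it would run; in a real login script you would execute it with root privileges:

```shell
# Hypothetical per-user sysctl switcher. Prints the command it would run;
# in practice you would execute it via sudo or a root-owned PAM hook.
# Remember: the setting is system-wide, so the last login "wins".
apply_user_sysctl() {
    case "$1" in
        alice) echo "sysctl -w net.ipv4.icmp_echo_ignore_all=1" ;;  # alice: ignore pings
        *)     echo "sysctl -w net.ipv4.icmp_echo_ignore_all=0" ;;  # everyone else
    esac
}
apply_user_sysctl "alice"
# prints: sysctl -w net.ipv4.icmp_echo_ignore_all=1
```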
| Customize sysctl parameters by user |
1,320,588,194,000 |
I have an embedded Linux on some network device. Because this device is pretty important, I have to run many network tests (I have a separate device for that). These tests include flooding my device with ARP packets (normal packets, malformed packets, packets of different sizes, etc.).
I read about different xx-tables on the internet: ebtables, arptables, iptables, nftables etc. For sure I'm using iptables on my device.
Which xx-tables is best to filter (limit, not drop) ARP packets?
I heard something about the /proc/config.gz file, which is supposed to show what is included in the kernel. I checked CONFIG_IP_NF_ARPFILTER, which is not included. So, in order to use arptables, I should have a kernel compiled with the CONFIG_IP_NF_ARPFILTER option enabled, correct? And the same goes for, e.g., ebtables?
I read that ebtables & arptables work at OSI layer 2 while iptables works at OSI layer 3. So I would assume that filtering anything at layer 2 is better (performance?) than at layer 3, correct?
I found an answer somewhere on this website suggesting ebtables to filter ARP packets. Does ebtables have any advantage over arptables?
EXTRA ONE. What is the best source on the internet to learn about limiting/filtering network traffic for different kind of packets and protocols?
|
Which xx-tables is best to filter (limit, not drop) ARP packets?
iptables
iptables starts from IP layer: it's already too late to handle ARP.
arptables
While specialized in ARP, arptables lacks the necessary matches and/or targets to limit rather than just drop ARP packets. It can't be used for your purpose.
ebtables
ebtables can be a candidate (it can both handle ARP and use limit to not drop everything).
pro:
− quite easy to use
con:
− it works on ethernet bridges. That means that if you're not already using a bridge, you have to create one and enslave your (probably unique) interface to it, just for the sake of making ebtables usable at all. This comes at a price, both in configuration and probably in some networking overhead (eg: the network interface is set promiscuous).
− as it doesn't have the equivalent of iptables's companion ipset, limiting traffic is crude. It can't do per-source on-the-fly metering (so source MACs or IPs must be manually added in the rules).
nft (nftables)
pro:
− this tool was made with the goal to replace other tools and avoid duplication of code, like duplicating match modules (one could imagine that arptables could also have received a limit match, but that would just be the third implementation of such a match module, after ip(6)tables' xt_limit and ebtables' ebt_limit ones). So it's intended to be generic enough to use the same features at any layer: it can limit/meter traffic at ARP level while also doing it per source rather than globally.
con:
− some features might require a recent kernel and tools (eg: meter requires kernel >= 4.3 and nftables >= 0.8.3).
− since its syntax is more generic, rules can be more difficult to create correctly. Sometimes documentation can be misleading (eg: non-working examples).
tc (traffic control)?
It might perhaps be possible to use tc to limit ARP traffic. As tc works very early in the network stack, using it could limit resource usage. But this tool is also known for its complexity. Even using it on ingress rather than egress traffic requires extra steps. I didn't even try to work out how to do this.
CONFIG_IP_NF_ARPFILTER
As seen in the previous point, this is moot: arptables can't be used. You need instead NF_TABLES_ARP or else BRIDGE_NF_EBTABLES (or, if tc is actually a candidate, NET_SCHED). That doesn't mean it's the only prerequisite; you'll have to verify what else is needed (at least what makes those options become available, plus the various match kernel modules needed to limit ARP).
What layer is best?
I'd say using the most specific layer that does the job is the easiest to handle. At the same time, the earlier traffic is handled, the less overhead is needed, but handling is usually cruder and thus more complex. I'm sure there are a lot of different possible opinions here. ARP can almost be considered to sit between layers 2 and 3: it's implemented at layer 2, but for example IPv6's equivalent NDP is implemented at layer 3 (using multicast ICMPv6). That's not the only factor to consider.
Does ebtables have any advantage over arptables?
See points 1 & 2.
What is the best source on the internet to learn about limiting/filtering network traffic for different kind of packets and protocols?
Sorry, there's nothing here that can't be found using a search engine with the right words. You should start with easy topics before continuing to more difficult ones. Of course, SE is already a source of information.
Below are examples both for ebtables and nftables
with ebtables
So let's suppose you have an interface eth0 with IP 192.0.2.2/24 and want to use ebtables with it. The IP that would be on eth0 is ignored once the interface becomes a bridge port, so it has to be moved from eth0 to the bridge.
ip link set eth0 up
ip link add bridge0 type bridge
ip link set bridge0 up
ip link set eth0 master bridge0
ip address add 192.0.2.2/24 dev bridge0
Look at the ARP options for ebtables to do further filtering. As noted above, ebtables is too crude to limit per source unless you manually state each source MAC or IP address in the rules.
To limit to accepting one ARP request per second (any source considered).
ebtables -A INPUT -p ARP --arp-opcode 1 --limit 1/second --limit-burst 2 -j ACCEPT
ebtables -A INPUT -p ARP --arp-opcode 1 -j DROP
There are other variants, like creating a veth pair, putting the IP on one end and setting the other end as a bridge port, leaving the bridge itself without an IP (and filtering with the FORWARD chain, stating which interface traffic comes from, rather than INPUT).
with nftables
To limit to accepting one ARP request per second and on-the-fly per MAC address:
nft add table arp filter
nft add chain arp filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule arp filter input arp operation 1 meter per-mac '{ ether saddr limit rate 1/second burst 2 packets }' counter accept
nft add rule arp filter input arp operation 1 counter drop
| Best way to filter/limit ARP packets on embedded Linux |
1,320,588,194,000 |
Today pretty much all kernels use virtual memory provided by the MMU. They do that with a global page table, whose address is held in a CPU register, and a page supervisor that maps pages to processes. The "vm" in vmlinuz, for example, means that the Linux kernel supports virtual memory.
All that is possible because the MMU maps contiguous addresses of memory to the memory segments understood by the x86 architecture.
The original UNIX kernel did have a vmunix version, which, I believe, must have used a similar technique. Yet, the original UNIX kernel was written before MMUs were available. If I'm not mistaken, the original UNIX kernel (called simply unix) was written before the existence of the x86 architecture. Historically it ran on the PDP-7 and PDP-11.
How did that kernel perform memory addressing and management? Was it segment-based addressing (two numbers) or flat memory addressing (a single number)? How did it separate memory between processes?
|
Virtual memory is almost a decade older than Unix: there was one in the Burroughs B5000 in 1961. It didn't have an MMU in the modern sense (i.e. based on pages) but provided the same basic functions. IBM System/360 Model 67 in 1965 (still older than Unix) had an MMU. Intel x86 processors didn't get an MMU until the 80386 in 1986.
Implementing a Unix system doesn't actually require an MMU. It does require some form of virtual memory, otherwise implementing the fork system call is prohibitively difficult. The fork system call, to create processes by copying an existing process, was a fundamental part of Unix ever since the very first version, so it did require virtual memory. See D. M. Ritchie and K. Thompson, The UNIX Time-Sharing System, CACM, 1974, §V “Processes and images”.
I don't know the details of the hardware that the first Unix versions ran on, but they did have virtual memory in the form of a segmented architecture. The CPU translated between pointers dereferenced by a program (virtual addresses) and actual locations in memory (physical addresses). The mapping was performed by adding an offset to the virtual address. On each context switch between processes, the register containing the offset was adjusted.
Although virtually all Unix implementations provide process isolation, this was not the case for some historical implementations on hardware that didn't have memory protection (both in the 1970s, and also in the 1980s with MINIX on the 8088 and 80286). Memory protection is somewhat orthogonal to address virtualization: an MMU provides both, a simple segmented architecture doesn't, and an MPU¹ provides protection without virtualization. There is a Linux implementation for systems without an MMU, uClinux, but due to the lack of fork many programs can't run (the only supported form of fork is vfork, which requires an execve call in the child immediately afterwards).
¹ An MPU (memory protection unit) records access rights for each page of memory.
| How the original unix kernel adressed memory? |
1,320,588,194,000 |
Unfortunately, when a hard drive (usually a virtual drive) is slow, Linux aborts requests to that drive after a timeout, possibly causing data corruption.
Last time this happened to me, I had 2 vms running (Linux and FreeBSD) on a storage, which had connectivity issues and was frozen for over an hour. The storage itself is fine, no errors there, and after fixing the connection, the vms (which obviously were frozen as well) seemed to be working again.
However, the Linux vm had decided to abort requests, rendering that system unusable (ls on most directories got stuck, so did mount without options, and many other things did not work anymore); a reboot was necessary. These are the errors (dmesg):
...
[86707.916728] Write(10): 2a 00 02 4c 9e 38 00 03 c0 00
[86707.916732] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff880036865500)
[86707.916734] mptscsih: ioc0: attempting task abort! (sc=ffff880036866100)
[86707.916735] sd 2:0:0:0: [sda] CDB:
[86707.916736] Write(10): 2a 00 02 4c a1 f8 00 03 c0 00
[86707.916739] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff880036866100)
[86707.916741] mptscsih: ioc0: attempting task abort! (sc=ffff880036865c80)
[86707.916742] sd 2:0:0:0: [sda] CDB:
[86707.916743] Write(10): 2a 00 02 4c a5 b8 00 03 c0 00
[86707.916746] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff880036865c80)
[86707.916748] mptscsih: ioc0: attempting task abort! (sc=ffff880036864300)
[86707.916749] sd 2:0:0:0: [sda] CDB:
[86707.916750] Write(10): 2a 00 02 4c a9 78 00 02 b0 00
[86707.916753] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff880036864300)
It's interesting that the FreeBSD vm has no errors logged and is working fine. So apparently, only FreeBSD worked as expected, not aborting anything (although I think I've seen similar kernel messages on FreeBSD systems).
I don't know why the kernel kills pending write requests after a timeout. It probably makes sense in some cases, but it certainly does not in mine; it's actually an unnecessary risk (without that timeout, the Linux vm would have continued normally after the connection had been restored, and everything would have worked again).
How can the Linux kernel timeout (vm) for frozen hard drives be DISABLED?
Edit:
The Linux vm has 1 hard drive (/dev/sda) only, which should look like a regular (SCSI type of) physical drive to it.
lspci lists this controller: "SCSI storage controller [0100]: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI [1000:0030] (rev 01)".
Here's another example (different vm, same host, also Linux) (in this case, the storage wasn't gone, but the host was under heavy load):
[1179039.664031] ata2: lost interrupt (Status 0x18)
[1179039.727188] ata2: drained 8 bytes to clear DRQ
[1179039.727272] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[1179039.740720] sr 1:0:0:0: CDB:
[1179039.740759] Get event status notification: 4a 01 00 00 10 00 00 00 08 00
[1179039.740768] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
res 40/00:02:00:08:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
[1179039.740770] ata2.00: status: { DRDY }
[1179039.748067] ata2: soft resetting link
[1179039.937757] ata2.00: configured for UDMA/33
[1179039.943435] ata2: EH complete
Edit:
And this is what the timeout errors look like on a Debian/kBSD (FreeBSD kernel) system (same host, same situation, different vm):
mpt0: request 0xffffff80007305d0:62955 timed out for ccb 0xfffffe000a3bb800 (req->ccb 0xfffffe000a3bb800)
mpt0: request 0xffffff800072fa90:62956 timed out for ccb 0xfffffe000a3d1000 (req->ccb 0xfffffe000a3d1000)
mpt0: request 0xffffff8000726070:62962 timed out for ccb 0xfffffe000a428000 (req->ccb 0xfffffe000a428000)
mpt0: attempting to abort req 0xffffff80007305d0:62955 function 0
mpt0: completing timedout/aborted req 0xffffff8000726070:62962
mpt0: completing timedout/aborted req 0xffffff80007305d0:62955
mpt0: completing timedout/aborted req 0xffffff800072fa90:62956
mpt0: abort of req 0xffffff80007305d0:0 completed
mpt0: request 0xffffff8000726190:64136 timed out for ccb 0xfffffe000a3d1800 (req->ccb 0xfffffe000a3d1800)
mpt0: attempting to abort req 0xffffff8000726190:64136 function 0
mpt0: completing timedout/aborted req 0xffffff8000726190:64136
mpt0: abort of req 0xffffff8000726190:0 completed
mpt0: request 0xffffff8000721990:50970 timed out for ccb 0xfffffe00024bf800 (req->ccb 0xfffffe00024bf800)
mpt0: attempting to abort req 0xffffff8000721990:50970 function 0
mpt0: completing timedout/aborted req 0xffffff8000721990:50970
mpt0: abort of req 0xffffff8000721990:0 completed
mpt0: request 0xffffff80007279c0:61393 timed out for ccb 0xfffffe000a3cf000 (req->ccb 0xfffffe000a3cf000)
mpt0: request 0xffffff8000732550:61395 timed out for ccb 0xfffffe000a428000 (req->ccb 0xfffffe000a428000)
mpt0: attempting to abort req 0xffffff80007279c0:61393 function 0
mpt0: completing timedout/aborted req 0xffffff80007279c0:61393
mpt0: completing timedout/aborted req 0xffffff8000732550:61395
mpt0: abort of req 0xffffff80007279c0:0 completed
|
I have found a timeout, which appears to have a default of 30 seconds on most systems. I'm not completely sure that this is the relevant timeout, but I've increased it on some vms, put the system under a significant load and I've not had any hdd timeouts in the vms so far.
Also, some of the comments are expressing confusion as to what hdd I've configured in the vm, so I have added that information to the question. And I have several Linux vms running at the same time, so the errors are not appearing in just one single vm.
Timeout setting (e.g., in /etc/rc.local):
Linux:
TIMEOUT=86400
for f in /sys/block/sd?/device/timeout; do
echo $TIMEOUT >"$f"
done
If this pattern (sd?) does not match your hardware, search for timeouts and check them manually:
find /sys/ -name timeout
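To see the glob-loop mechanics without touching real hardware, here is the same pattern exercised against a throwaway mock of the sysfs layout (the /tmp paths are stand-ins, not real sysfs; the 30-second starting value mirrors the usual default):

```shell
# Mock sysfs tree: one fake disk with the usual 30-second default timeout
mkdir -p /tmp/mocksys/block/sda/device
echo 30 > /tmp/mocksys/block/sda/device/timeout

# Same loop as above, pointed at the mock tree
TIMEOUT=86400
for f in /tmp/mocksys/block/sd?/device/timeout; do
    echo "$TIMEOUT" > "$f"
done

cat /tmp/mocksys/block/sda/device/timeout
# prints: 86400
```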
Debian/kBSD (GNU/kFreeBSD 9.0-2-amd64):
sysctl kern.cam.da.default_timeout=86400
(I've significantly increased the timeout rather than disabling it; if this turns out to be the culprit, a more appropriate value might be set.)
Again, I've not confirmed that this is exactly the timeout my vms are running into (or that this is the only timeout), but given that I've put the system under high load (the kind of load that used to trigger hdd timeouts) and no hdd timeout has occurred yet (although network timeouts have, like before), it certainly seems like this might at least be part of the solution.
| Can the hard drive timeout be disabled in Linux (attempting task abort) |
1,320,588,194,000 |
I have an Android TV stick.
I compiled Linux for it from https://github.com/Galland/rk3x_kernel_3.0.36
But when I booted that image, I found /proc/config.gz is 0 bytes.
Can someone please explain how the configuration from the .config file in the kernel source gets exposed in /proc?
I mean, what goes on in the background?
|
Files in /proc generally do not have a file size, and are shown as size 0 by ls -l, but you can read data from them anyway (see man 5 proc).
Try, for example:
zcat /proc/config.gz | wc
or:
$ ls -l /proc/cmdline
-r--r--r-- 1 root root 0 Aug 4 10:16 /proc/cmdline
Looks empty. But:
$ cat /proc/cmdline | wc
1 5 114
it contains data. Let's see:
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=fc48808f-8f06-47fc-a1fe-5d08ee9e0a50 ro noirqdebug nomodeset
It behaves like a normal file, except if you want to do anything special, like reading by blocks, seek(), or looking at the size.
In case you cannot read /proc/config.gz, there is a file that normally contains the same information:
less /lib/modules/$(uname -r)/build/.config
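Note also that /proc/config.gz only exists when the kernel was built with CONFIG_IKCONFIG and CONFIG_IKCONFIG_PROC. A quick way to check a build tree's .config is a grep; it is demonstrated here against a tiny sample file, since the real path depends on your setup:

```shell
# Sample .config fragment standing in for your kernel build tree's .config
cat > /tmp/sample.config <<'EOF'
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
EOF

# Both options must be set for /proc/config.gz to appear.
# (CONFIG_IKCONFIG may also be =m; then load the 'configs' module first.)
grep -E '^CONFIG_IKCONFIG(_PROC)?=' /tmp/sample.config
```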
See man proc for details.
| /proc/config.gz is of 0 bytes |
1,320,588,194,000 |
I was looking for a list of keyboard scancodes in the linux kernel sources, but I did not find anything. Does someone know where to find these? Especially the USB scancodes would be interesting.
|
The keycodes are in [src]/drivers/tty/vt/defkeymap.map:
# Default kernel keymap. This uses 7 modifier combinations.
[...]
See also my answer here for ways to view (dumpkeys) and modify (loadkeys) the current keymap as it exists in the running kernel.
However, those are a bit higher level than the scancodes sent by the device. Those might be what's in the table at the top of [src]/drivers/hid/hid-input.c; however, since they come from the device, you don't need the Linux kernel source to find out what they are: they are the same regardless of OS.
"HID" == human interface device. The usbhid subdirectory of drivers/hid doesn't appear to contain any special codes, since USB keyboards are really regular keyboards.
One difference between keycodes and scancodes is that scancodes are more granular -- notice there's a different signal for the press and release. A keycode corresponds to a key that's down, I believe; so the kernel maps scancode events to a keycode status.
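For PC keyboards using scancode set 1, the press/release pairing mentioned above follows a simple rule for ordinary single-byte codes: the release ("break") code is the press ("make") code with bit 7 set. A small sketch (the ESC values come from the standard set-1 table; extended 0xE0-prefixed keys are an exception):

```shell
# Set-1 scancodes: break = make | 0x80 (e.g. ESC: make 0x01, break 0x81)
press=0x01
release=$(( press | 0x80 ))
printf 'make 0x%02x break 0x%02x\n' "$press" "$release"
# prints: make 0x01 break 0x81
# To watch live scancodes on a Linux virtual console: showkey --scancodes
```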
| Where in the linux kernel sources can I find a list of the different keyboard scancodes? |
1,320,588,194,000 |
I have used buildroot to successfully create a kernel, root file system and cross-compilers to enable me to write application code to run on an embedded device. Currently I have no need to write device drivers (and no idea how to go about it anyway), but it is quite likely that in future I may need to do this. From my research I have come to understand that the kernel API can change between versions, and that writing a device driver is specific to a kernel version, unlike writing a user-level application. Basically I would like to know:
Is the above correct?
What factors need consideration when deciding on a what kernel version to use?
The reason I ask is that all the literature I have read on the subject (and an embedded Linux course I attended) deals with 2.6.x kernels. I am building my embedded system using a 3.6.11 kernel, but I am wondering why the course and literature deal with these older kernels. Are there beneficial aspects to using an older kernel, or are there drawbacks to using newer versions?
|
3.x is just a continuation of 2.x: at one point Linus decided that the "x" part of the version had become too big. Generally you probably want a reasonably recent kernel, probably one marked as "longterm". A lot also depends on your application: while remote security holes in the kernel are rather scarce, local problems are much more prevalent.
| What considerations need to be made when choosing the version of kernel for an embedded device? |
1,320,588,194,000 |
I need to install my linux headers for an Nvidia driver install. But I get an error when doing so:
peter@peter-deb:~$ sudo apt-get install linux-headers-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-2.6.32-5-amd64
E: Couldn't find any package by regex 'linux-headers-2.6.32-5-amd64'
How can I get this to work?
Edit: I am using Deb 6.
@Warren Young :
peter@peter-deb:~$ sudo apt-get install -qy linux-headers-$(uname -r)
[sudo] password for peter:
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package linux-headers-2.6.32-5-amd64
E: Couldn't find any package by regex 'linux-headers-2.6.32-5-amd64'
And also
peter@peter-deb:~$ apt-cache search linux-headers
linux-headers-3.0.0-1-all - All header files for Linux 3.0.0 (meta-package)
linux-headers-3.0.0-1-all-amd64 - All header files for Linux 3.0.0 (meta-package)
linux-headers-3.0.0-1-amd64 - Header files for Linux 3.0.0-1-amd64
linux-headers-3.0.0-1-common - Common header files for Linux 3.0.0-1
linux-headers-3.0.0-1-common-rt - Common header files for Linux 3.0.0-1-rt
linux-headers-3.0.0-1-rt-amd64 - Header files for Linux 3.0.0-1-rt-amd64
linux-headers-2.6-amd64 - Header files for Linux amd64 configuration (dummy package)
linux-headers-2.6-rt-amd64 - Header files for Linux rt-amd64 configuration (dummy package)
linux-headers-amd64 - Header files for Linux amd64 configuration (meta-package)
linux-headers-rt-amd64 - Header files for Linux rt-amd64 configuration (meta-package)
And sources.list:
# Debian packages for testing
deb http://mirror.transact.net.au/debian/ testing main contrib non-free
# Uncomment the deb-src line if you want 'apt-get source'
# to work with most packages.
# deb-src http://mirror.transact.net.au/debian/ testing main contrib non-free
# Security updates for stable
# deb http://security.debian.org/ stable/updates main contrib non-free
Also note, I apt-get updated and this made no difference.
|
Ubuntu doesn't ship an AMD64-specific kernel header package.
What you probably want is linux-headers-2.6.32-5-generic. This combines Linux headers for both 32- and 64-bit Intel x86 CPU variants.
Say apt-cache search linux-headers to see your other choices.
| Command to install linux headers fails |
1,320,588,194,000 |
I recently deleted my active Linux kernel and continued using the system as if nothing drastic happened. Are there any side-effects to deleting the Linux kernel that's currently in use? What about other non-Windows kernels?
|
The Linux kernel is completely loaded into RAM on boot. After the system is booted, it never goes back and tries to read anything from that file. The same goes for drivers, once loaded into the kernel.
If you deleted the only kernel image on disk, the only consequence is that the system cannot be successfully rebooted unless you install a replacement kernel image before reboot.
As for other OSes, I imagine it is the same, simply due to the nature of OS kernels. They're intentionally small bits of code that stay running all the time, so there is no incentive to keep going back to disk to "look" at the code again. It's always in memory. (RAM or VM.)
| What potential ills can be brought by 'deleting' a live kernel? |
1,320,588,194,000 |
I have Ubuntu 10.10 and after few updates the boot menu lists many kernel versions. How do I remove older versions?
|
Check for currently-installed kernels:
$ dpkg --get-selections | grep linux-image
linux-image-2.6.38-2-686-bigmem install
linux-image-2.6.32-5-686 install
Check what current kernel you are running:
$ uname --all
Linux debian 2.6.38-2-686-bigmem #1 SMP Thu Apr 7 06:05:53 UTC 2011 i686 GNU/Linux
Remove the kernel(s) you are displeased with, generally keeping the latest (and greatest).
$ sudo apt-get remove linux-image-2.6.32-5-686
I think it's a good idea to keep at least two different versions, though. However, this advice used to be more useful some time ago; the kernel seems to get more and more stable (I experience far less trouble than I used to), but maybe I'm lucky.
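To see at a glance which installed kernel packages are removal candidates, you can filter the running kernel out of the installed list. A minimal sketch, with hypothetical package names standing in for real dpkg output (on a live machine you would feed it dpkg --get-selections and uname -r instead):

```shell
#!/bin/sh
# Hypothetical data standing in for: dpkg --get-selections | grep linux-image
installed="linux-image-2.6.38-2-686-bigmem
linux-image-2.6.32-5-686"
# Normally you would use: current=$(uname -r)
current="2.6.38-2-686-bigmem"
# Every installed kernel package NOT matching the running kernel is a candidate
candidates=$(echo "$installed" | grep -v "$current")
echo "$candidates"
```

On a real system you would then pass each candidate to sudo apt-get remove, after double-checking the list by eye.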
| grub shows multiple kernel versions on startup |
1,320,588,194,000 |
I was reading "Linux device drivers, 3rd edition" and don't completely
understand a part describing interrupt handlers. I would like to clarify:
are the interrupt handlers in Linux nonpreemptible?
are the interrupt handlers in Linux non-reentrant?
I believe I understand the model of Top/Bottom halves quite well, and
according to it the interrupts are disabled for as long as the TopHalf
is being executed, thus the handler can't be re-entered, am I right?
But what about high priority interrupts? Are they supported by vanilla Linux or
specific real-time extensions only? What happens if a low priority interrupt
is interrupted by high priority one?
|
The Linux kernel is reentrant (like all UNIX ones), which means that several processes can be executing in kernel mode at the same time. The kernel doesn't have to wait until a disk read is handled by the deadly slow HDD controller; the CPU can process other work until the disk access finishes (which will itself trigger an interrupt when done).
Generally, an interrupt can be interrupted by another interrupt (preemption); that's called 'Nested Execution'. Depending on the architecture, there are still some critical functions which have to run without interruption (non-preemptively) by completely disabling interrupts. On x86, these are some timing-related functions (time.c, hpet.c) and some Xen stuff.
There are only two priority levels concerning interrupts: 'enable all interrupts' or 'disable all interrupts', so I guess your "high priority interrupt" is the second one. This is the only behavior the Linux kernel knows concerning interrupt priorities and has nothing to do with real-time extensions.
If an interruptible interrupt (your "low priority interrupt") gets interrupted by another interrupt ("high" or "low"), the kernel saves the execution context of the interrupted interrupt and starts to process the new interrupt. This "nesting" can happen multiple times and thus can create multiple levels of interrupted interrupts. Afterwards, the kernel restores the saved context of the old interrupt and tries to finish it.
| re-entrancy of interrupts in Linux |
1,320,588,194,000 |
ps(1), with the -f option, will output processes for which there is no associated command line in square brackets, like so:
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Aug28 ? 00:07:42 /sbin/init
root 2 0 0 Aug28 ? 00:00:01 [kthreadd]
root 3 2 0 Aug28 ? 00:00:00 [rcu_gp]
root 4 2 0 Aug28 ? 00:00:00 [rcu_par_gp]
root 6 2 0 Aug28 ? 00:00:00 [kworker/0:0H-kblockd]
root 8 2 0 Aug28 ? 00:00:00 [mm_percpu_wq]
root 9 2 0 Aug28 ? 00:02:14 [ksoftirqd/0]
root 10 2 0 Aug28 ? 00:05:33 [rcu_preempt]
root 11 2 0 Aug28 ? 00:01:36 [rcuc/0]
root 12 2 0 Aug28 ? 00:00:00 [rcub/0]
root 13 2 0 Aug28 ? 00:00:07 [migration/0]
root 14 2 0 Aug28 ? 00:00:00 [idle_inject/0]
root 16 2 0 Aug28 ? 00:00:00 [cpuhp/0]
root 17 2 0 Aug28 ? 00:00:00 [cpuhp/1]
root 18 2 0 Aug28 ? 00:00:00 [idle_inject/1]
root 19 2 0 Aug28 ? 00:00:05 [migration/1]
root 20 2 0 Aug28 ? 00:00:55 [rcuc/1]
Are these processes scheduled like other processes?
|
Under Linux, ps and top handle information made available by the kernel in /proc, for each process, in a directory named after the pid. This includes two files, comm and cmdline; comm is the process’s command name, and cmdline is the process’s command line, i.e. the arguments it was provided with (including its own “name”). ps and top use square brackets to distinguish between the two: if a process has a command line, then the args field (also known as CMD) outputs that; otherwise it outputs the command name, surrounded with square brackets.
This is described in the ps manpage, for args:
Sometimes the process args will be unavailable; when this happens, ps will instead print the executable name in brackets.
Processes without process arguments include processes constructed without any command line (not even argv[0]), such as kernel threads, and processes which have lost their command line, i.e. defunct processes, also known as zombies (identifiable by the <defunct> suffix).
None of this changes the scheduling properties: all processes are scheduled in the same way, according to their state, priority, etc.
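The comm/cmdline distinction can be reproduced directly from /proc. A minimal Linux-only sketch that applies ps's bracket logic to the current shell (whose command line is non-empty, so no brackets are printed; pointing pid at a kernel thread such as PID 2 would take the bracketed branch):

```shell
#!/bin/sh
pid=$$
# /proc/PID/cmdline is NUL-separated; note that stat() reports size 0 for
# proc files, so read the file rather than testing it with [ -s ... ]
args=$(tr '\0' ' ' < "/proc/$pid/cmdline")
if [ -n "$args" ]; then
    display="$args"                       # a real command line: print as-is
else
    display="[$(cat "/proc/$pid/comm")]"  # no command line: bracket the name
fi
echo "$display"
```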
| Are processes in square brackets scheduled in the same way as other processes? |
1,320,588,194,000 |
I'm running Arch Linux with systemd boot. In /boot/loader/entries/arch.conf I currently specify the luks crypto device with a line like this:
options rw cryptdevice=/dev/sda1:ABC root=/dev/mapper/ABC
I know I can also use UUID instead of /dev/sda1. In that case the kernel options line would look like this:
options rw cryptdevice=UUID=1f5cce52-8299-9221-b2fc-19cebc959f51:ABC root=/dev/mapper/ABC
However, can I instead use either a partition label or a volume label or any other kind of label? If so, what is the syntax?
|
If you're already using the new LUKS2 format, you can set a label:
For new LUKS2 containers:
# cryptsetup luksFormat --type=luks2 --label=foobar foobar.img
# blkid /dev/loop0
/dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="foobar" TYPE="crypto_LUKS"
For existing LUKS2 containers:
# cryptsetup config --label=barfoo /dev/loop0
# blkid /dev/loop0
/dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="barfoo" TYPE="crypto_LUKS"
However, it's not possible to set a label for the more common LUKS1 header.
With LUKS1, you can only set a label on a higher layer. For example, if you are using GPT partitions, you can set a PARTLABEL.
# parted /dev/loop0
(parted) name 1 foobar
(parted) print
Model: Loopback device (loopback)
Disk /dev/loop0: 105MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 104MB 103MB foobar
This sets the partition label of partition 1 to "foobar".
You can identify it with PARTLABEL=foobar or find it in /dev/disk/by-partlabel/
# ls -l /dev/disk/by-partlabel/foobar
lrwxrwxrwx 1 root root 13 Oct 10 20:10 /dev/disk/by-partlabel/foobar -> ../../loop0p1
Similarly, if you use LUKS on top of LVM, you could go with VG/LV names.
As always with labels, take extra care to make sure each label doesn't exist more than once. There's a reason why UUIDs are meant to be "universally unique". You get a lot of problems when trying to use the wrong device; it can even cause data loss (e.g. if cryptswap formats the wrong device on boot).
| How to specify cryptdevice by label using systemd boot? |
1,320,588,194,000 |
I have a USB Linux kernel module that compiles and builds. Running insmod loads my module, and dmesg and tail -f /var/log/debug show me it works as expected.
Running depmod -a then modprobe from the terminal loads the module and modprobe -r unloads it, and I see the tail -f /var/log/debug output as expected.
When I plug in my USB keyboard though it does not trigger and load on-demand as expected.
I have investigated /etc/udev/rules.d with no success. Any workarounds or guidance are most welcome. I am running Ubuntu 12.04.4 LTS with a custom Linux kernel 3.14.0.
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/usb.h>
#include <linux/usb/input.h>
#include <linux/hid.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Aruna Hewapathirane");
MODULE_DESCRIPTION("A USB Keyboard Driver Kernel Module");
static struct usb_device_id usb_kbd_id_table[] = {
{ USB_INTERFACE_INFO(
USB_INTERFACE_CLASS_HID,
USB_INTERFACE_SUBCLASS_BOOT,
USB_INTERFACE_PROTOCOL_KEYBOARD) },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, usb_kbd_id_table);
static int __init kbd_init(void)
{
printk(KERN_DEBUG "USB Keyboard Plugged In.. \n");
return 0;
}
static void __exit kbd_exit(void)
{
printk(KERN_DEBUG "USB Keyboard Removed.. \n");
return ;
}
module_init(kbd_init);
module_exit(kbd_exit);
|
You are missing the usb_register call and the probe function.
Here is the updated device driver with usb_register and probe/disconnect functions:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/usb.h>
#include <linux/usb/input.h>
#include <linux/hid.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Aruna Hewapathirane");
MODULE_DESCRIPTION("A USB Keyboard Driver Kernel Module");
static struct usb_device_id usb_kbd_id_table[] = {
{ USB_INTERFACE_INFO(
USB_INTERFACE_CLASS_HID,
USB_INTERFACE_SUBCLASS_BOOT,
USB_INTERFACE_PROTOCOL_KEYBOARD) },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, usb_kbd_id_table);
static int kbd_probe(struct usb_interface *interface,
const struct usb_device_id *id)
{
pr_info("USB keyboard probe function called\n");
return 0;
}
static void kbd_disconnect(struct usb_interface *interface)
{
pr_info("USB keyboard disconnect function called\n");
}
static struct usb_driver kbd_driver = {
.name = "usbkbd",
.probe = kbd_probe,
.disconnect = kbd_disconnect,
.id_table = usb_kbd_id_table,
};
static int __init kbd_init(void)
{
int res = 0;
res = usb_register(&kbd_driver);
if (res)
pr_err("usb_register failed with error %d", res);
return res;
}
static void __exit kbd_exit(void)
{
pr_debug("USB Keyboard Removed..\n");
usb_deregister(&kbd_driver);
return;
}
module_init(kbd_init);
module_exit(kbd_exit);
Please refer to the previous SO question for the probe function's use.
| usb kernel module does not load on demand but works fine with insmod and modprobe from the shell |
1,320,588,194,000 |
I'm working on writing a "mock GPU driver" for *nix-based systems. What I mean is that, simply, I want to write a driver (behind the X server, obviously) to answer X's API calls with some debugging messages.
In other words I want to fool *nix system about having an actual GPU. So I can make a test-bed for GUI-accelerated packages in console based systems.
Right now, if I execute a GUI-accelerated package on a *nix console-based system, it'll simply die due to the lack of a real GPU (or rather, of a GPU driver).
So I want to know:
Is it even possible? (Writing a GPU driver to fool *nix about having an actual GPU)
What resources do you recommend before getting my hands dirty in code?
Is there any similar projects around the net?
PS: I'm an experienced ANSI-C programmer but I don't have any clue in real Kernel/Driver development under *nix (read some tutorials about USB driver development though), so any resources about these areas will be really appreciated as well. Thanks in advance.
|
Given your requirements, it seems likely to me that you may not need to write your own driver. You may be able to use llvmpipe, which I believe conforms to your requirements. In particular, it is a "real driver" by some meanings of the word, and it does not require that X11 be running.
llvmpipe creates what might be called a virtual GPU that interprets OpenGL shaders by converting them on-the-fly to machine language for your CPU and runs it. It uses parts of the LLVM infrastructure to accomplish this.
However, it might not meet your needs, since what is actually going on is that llvmpipe is linked against by the binaries that call it. In other words, this is not a real, live, running-in-the-kernel driver. Instead, it provides an alternative libGL.so which interprets your OpenGL shaders.
If you're not able to compile your 3D graphics accelerated program from source, you probably cannot use llvmpipe to good effect. (But you want this to help with debugging your own programs, so that shouldn't be a problem.)
You might want to provide more information about what you need, in your question. In particular:
Why do you need to debug your code from the driver side?
Why can't you put the necessary debugging code in your program itself (or in your programs themselves)? Both X libraries and OpenGL libraries provide information about what went wrong when a call fails. Why can't you use that information--plus kernel messages--in your program to facilitate debugging?
And why would you expect that debugging information you get on the driver side, with a virtual driver implemented in the kernel, would correspond to what actually happens on real computers? More importantly, why would you assume that if your program produces low-level problems, those problems would be the same for different GPUs and drivers when it's run in the real world?
You may have perfectly good answers to these questions (plus maybe I'm missing something), but I think it would be easier to answer your question if you explained this.
(By the way, one interesting application of llvmpipe is to enable graphical user interfaces to be written only in 3D-accelerated versions, but still run on computers without 3D acceleration. Theoretically this should facilitate running GNOME Shell without 3D acceleration, though some development work might be necessary to make it actually work; I think GNOME Shell makes some assumptions relating to compositing that might not be automatically fulfilled. Also, there are apparently some performance problems. A real-world instance of this that actually works is Unity, which in Ubuntu 12.10 will come in just one version, and be able to run on top of llvmpipe instead of there being a separate "Unity 2D" implementation.)
| Writing a driver to fool *nix systems about having a GPU |
1,320,588,194,000 |
I have been looking through the documentation for /proc; the "stack" entry is a newish addition, and I have also looked through the kernel commit that created it. However, the documentation does not detail exactly what is in the /proc/self/stack file. I intuitively expected it to be the actual stack of the process, but the old pstack tool gives a different (and more believable) output.
So as an example of the stack for bash
$ cat /proc/self/stack
[<ffffffff8106f955>] do_wait+0x1c5/0x250
[<ffffffff8106fa83>] sys_wait4+0xa3/0x100
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
and, using pstack
$ pstack $$
#0 0x00000038cfaa664e in waitpid () from /lib64/libc.so.6
#1 0x000000000043ed42 in ?? ()
#2 0x000000000043ffbf in wait_for ()
#3 0x0000000000430bc9 in execute_command_internal ()
#4 0x0000000000430dbe in execute_command ()
#5 0x000000000041d526 in reader_loop ()
#6 0x000000000041ccde in main ()
The addresses are different, and obviously the symbols are not at all the same....
Does anybody have an explanation for the difference, and/or a document which describes what is actually shown in /proc/self/stack?
|
The file /proc/$pid/stack shows kernel stacks. On your system, memory addresses of the form ffffffff8xxxxxxx are in the space that's reserved for the kernel. There's not much documentation; you can check the source code. In contrast, the pstack program shows user-space stacks (using its knowledge of executable formats).
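You can see the user-space side for yourself without a debugger: the world-readable /proc/PID/maps file lists the mappings that pstack walks, including the user stack, whereas /proc/PID/stack itself may require root on current kernels. A quick Linux-only check for the current shell:

```shell
#!/bin/sh
# The [stack] line in /proc/PID/maps is the user-space stack mapping that
# tools like pstack walk; its addresses are user-space ones, unlike the
# ffffffff8xxxxxxx kernel addresses shown in /proc/PID/stack.
grep '\[stack\]' "/proc/$$/maps"
```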
| What is the difference between /proc/self/stack and output from pstack? |
1,320,588,194,000 |
I am using ssh to remotely access some machines. These machines have a custom kernel installed (based on the 2.6.28 source). However, whenever I try to reboot the machines using sudo reboot, the system uses kexec and loads the 2.6.28-19-generic kernel, which is also installed on the machine.
So how can I specify which kernel image to load after reboot?
EDIT:
I have ubuntu 9.04 installed on the machine, with grub 1.something.
The custom kernel is based on the 2.6.28 source with the name being 2.6.28.10-custom-1.1.
Two other kernels are installed on the machine 2.6.28-19-generic and 2.6.28-6-386. I have checked that after calling reboot, the machine does not actually reboot but uses kexec to load the 19-generic kernel, even if the current kernel was the custom one.
|
Normally, when you reboot, the machine will return to grub and either allow you to select a kernel via the keyboard, or boot the default configured kernel. However if you have kexec-tools installed, the reboot command will short circuit this behaviour and directly kexec into a kernel. You can disable this behaviour, and return to grub in reboot, by uninstalling kexec tools or editing the file
/etc/default/kexec
and setting:
LOAD_KEXEC=false
Alternatively, to keep kexec active and have it reboot into the kernel of your choice, try a command line like this to load your desired kernel:
kexec -l /boot/vmlinux --append=root=/dev/hda1 --initrd=/boot/initrd
then when 'kexec -e' is later run, the kernel configured by that kexec -l line will be booted. As I believe the reboot script eventually just calls 'kexec -e', the kernel change should take effect then.
| Which kernel does reboot load? |
1,320,588,194,000 |
I would like to debug a loaded kernel module I don't have the source code to; I suspect it's a virus. Is there a way to feed it into GDB for analysis?
|
From a debugging perspective, the kernel is a special "process", distinct from the user-space processes, which communicate with the kernel via a sort of RPC mechanism (syscalls) or mapped memory.
I don't think you can see the kernel's data structures simply by inspecting some random user process.
Another problem is that every user-space process (including the debugger) needs the kernel to run and to communicate with the user; I don't think you can just stop the kernel and expect the debugger to continue to run.
So you need to run GDB on a second machine, and that is what is called Kernel debugging.
Please refer to http://kgdb.linsyssoft.com/ and Documentation/sh/kgdb for more details.
| How to debug an inserted kernel module? |
1,320,588,194,000 |
I am trying to install kernel headers version 4.14.71-v6 (uname -r) for Kali Linux. I already did the common commands...
apt update
apt upgrade
apt dist-upgrade
apt install linux-headers-generic
apt install linux-headers-$(uname -r)
...with and without option -y
Also did reboots. I searched the repos with apt search 4.14. I took a look at http://http.kali.org/kali/pool/main/l/linux/, with no success at all.
I've seen that http://http.kali.org/kali/pool/main/l/linux/ has kernel headers for 4.18 and 4.19, but upgrading only gives me versions up to my current 4.14.x.
Does anybody have an idea what to do?
|
You are not finding the headers for your kernel version in the official distribution repository because you are dealing with a Kali setup using a custom-made kernel version.
Whilst we do not have all the data, your uname -r leads me to suspect it was made using these scripts/tools: https://github.com/Re4son/re4son-kernel-builder ; it also leads me to speculate, after a bit of detective work, that maybe you have a Raspberry Pi/ARMv6 device.
In this case, the easier option is either to reinstall a new version or, even better, to choose a more user-friendly Linux distribution.
| Kali Linux kernel headers for 4.14.71-v6 |
1,320,588,194,000 |
When installing a kernel in Ubuntu (e.g. http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/), what are the files for, respectively:
linux-headers-4.0.0-xxx_all.deb
linux-headers-4.0.0-xxx-generic_xxx_i386/amd64.deb
linux-image-4.0.0-xxx-generic_xxx_i386/amd64.deb
linux-headers-4.0.0-xxx-lowlatency_xxx_i386/amd64.deb
linux-image-4.0.0-xxx-lowlatency_xxx_i386/amd64.deb
|
Debian (and Ubuntu and other derivatives) divides Linux kernel packages into several parts:
linux-image-VERSION-PATCHLEVEL-FLAVOR contains the kernel image that is loaded by the bootloader, a file containing the symbol table (used by some system tools), a file containing the kernel configuration (informative for the system administrator), and modules that can be loaded dynamically. This is the package that is needed for normal use of the system.
linux-headers-VERSION-PATCHLEVEL-FLAVOR contains headers that are shipped with the kernel source or generated during the kernel compilation. These headers are needed to compile third-party kernel modules.
linux-libc-dev contains headers that are used to compile userspace programs. These headers are backwards compatible (unlike the headers used to compile kernel modules), so there is no need to install multiple versions.
linux-doc-VERSION contains kernel documentation. It is useful for people who write kernel modules or diagnose kernel behavior.
linux-source-VERSION contains the kernel sources. People who want to compile their own kernel can install this binary package and unpack the archive that it contains.
linux-tools-VERSION contains tools that depend on the kernel version. Currently there is only perf.
The packaging distinguishes VERSION (the upstream version) from PATCHLEVEL (incremented on each change that affects binary compatibility). A bug fix can affect binary compatibility, so modules need to be recompiled, so it must be possible to install multiple patchlevels of kernels (and headers and third-party modules), so that you can have both the files for the running kernel and the files for the kernel that you'll next reboot to installed at the same time. There's a single package per version for the documentation and the source because there's no need to have multiple copies of them for different patchlevels.
The different FLAVORs correspond to kernel compilation options. Some kernel options are compromises, for example to support computers with large physical memory (at the expense of an overhead in kernel memory) or only computers with small physical memory (less overhead but a smaller maximum amount of RAM).
In current versions of Ubuntu, there are only two kernel flavors: “generic” (suitable for most computers) and “lowlatency” (which makes programs more reactive at the cost of a little more CPU overhead, see https://askubuntu.com/questions/126664/why-to-choose-low-latency-kernel-over-generic-or-realtime-ones). Debian has many more, most of which only make sense on specific architectures.
In addition to the packages with full version numbers, there are metapackages with no version as part of the package name. That way, you can install e.g. linux-image-generic which always depends on the latest linux-image-VERSION-PATCHLEVEL-generic package. For example, linux-image-generic version 3.13.0.42 depends on linux-image-3.13.0-42-generic, linux-image-generic version 3.13.0.43 depends on linux-image-3.13.0-43-generic, etc. As the linux-image-generic package gets upgraded, newer kernel packages are pulled in.
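The VERSION/PATCHLEVEL/FLAVOR split described above is visible in the package name itself. A small sketch using only POSIX parameter expansion (the package name is just an example, not from a real system):

```shell
#!/bin/sh
pkg="linux-image-3.13.0-42-generic"   # example package name
rest=${pkg#linux-image-}     # -> 3.13.0-42-generic
version=${rest%%-*}          # -> 3.13.0   (upstream VERSION)
rest=${rest#*-}              # -> 42-generic
patchlevel=${rest%%-*}       # -> 42       (PATCHLEVEL, the ABI number)
flavor=${rest#*-}            # -> generic  (FLAVOR)
echo "$version $patchlevel $flavor"
```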
| What are the deb files for installing a kernel for? |
1,320,588,194,000 |
I have a machine running Debian stable, and I'd like to update the stock kernel to the latest kernel available in the backports repository. There are many kernel-related packages and virtual packages--what is the right way to do this and get the appropriate packages upgraded?
|
Use a metapackage (e.g. linux-image-2.6-686), choosing the one that matches your hardware. This is so that if the real package name changes (which happens very often), you won't lag behind. To determine which metapackage to use, have a look at the name of the kernel package you are running. Also, do read the package descriptions of the metapackages that seem like the right ones, just to make sure.
Supposing that you have updated your "/etc/apt/sources.list" file (instructions), run this (as root):
apt-get -t squeeze-backports install linux-image-2.6-686
Once you have done that, there is no need to keep re-running the command to check if a newer package is available, because your normal apt-get update && apt-get upgrade will do the check for you and upgrade it. Note that this behavior is new (since Squeeze); in the older days, you had to use APT pinning.
| How to update Debian kernel to latest in backports |
1,320,588,194,000 |
I am looking for a formal description of the following boot parameters in the linux kernel:
real_root
cdroot
I have problems tuning them to create my own bootable LiveUSB system. Are they specific to my distribution (Gentoo)?
They do not appear in the gitweb kernel documentation.
|
root is the device you want to be mounted as the root file system when the kernel first starts. This is pretty self-explanatory, but it gets complicated because this can actually change over time. The usual reason that happens is when the kernel doesn't have the modules it needs to mount the root file system. In that case a system called initrd is used. An initrd image is basically a small compressed file system with a few goodies, like drive controller or network modules, that the kernel is going to need to read the real root file system and continue booting. In this case the initrd image becomes root, and...
real_root is going to be the actual root partition matching your entry in /etc/fstab. If you don't use initrd, this option can be omitted in favor of just using root. As long as we're on the topic, there is also nfsroot which is specifically for situations where the root file system will be an NFS mounted remote file system and networking needs to be initiated before the final root file system can be mounted.
cdroot I don't recognize, but it probably has to do with the special way your Live distro is setup and would denote where to find the LiveCD/Image as opposed to the virtual file system or that is the root of the running live distro. In searching it seems to show up mostly with Gentoo LiveUSB/CD builds, so it may be proprietary. It often does not have an argument, so it might simply be a flag to denote that the root media is a CD so that later processes can know.
| description of linux kernel boot parameters 'real_root' and 'cdroot' |
1,320,588,194,000 |
We have a Linux DB server running Red Hat 7.2.
We notice many messages like the ones below, about all the disks that are mounted,
in /var/log/messages.
What we need to understand is whether this behavior indicates a hardware problem.
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4980*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4981*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4982*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4983*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4984*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4985*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4986*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4987*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4988*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4989*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4990*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4991*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4992*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4993*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4994*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4995*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4996*
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block *N4997*
we also seen this messages
Mar 27 09:18:08 server_DB smartd[1734]: Monitoring 0 ATA and 26 SCSI devices
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:00/0000:00*CO*/0000:02*CO*': not supported by any plugin
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:00/0000:00*CO*/0000:02*CO*': not supported by any plugin
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:00/0000:00*CO*/0000:01*CO*': not supported by any plugin
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:00/0000:00*CO*/0000:01*CO*': not supported by any plugin
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:80/0000:80*CO*/0000:81*CO*': not supported by any plugin
Mar 27 09:18:08 server_DB ModemManager[1755]: <warn> Couldn't find support for device at '/sys/devices/pci0000:80/0000:80*CO*/0000:81*CO*': not supported by any plugin
I have also checked the disk:
smartctl -a -d megaraid,0 /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor: SEAGATE
Product: ST600MM0238
Revision: BS04
User Capacity: 600,127,266,816 bytes [600 GB]
Logical block size: 512 bytes
Formatted with type 2 protection
Logical block provisioning type unreported, LBPME=0, LBPRZ=0
Rotation Rate: 10000 rpm
Form Factor: 2.5 inches
Logical Unit id: 0x5000c500a0f28343
Serial number: W0M0LYD2
Device type: disk
Transport protocol: SAS
Local Time is: Wed Mar 27 10:51:30 2019 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Current Drive Temperature: 24 C
Drive Trip Temperature: 60 C
Manufactured in week 45 of year 2017
Specified cycle count over device lifetime: 10000
Accumulated start-stop cycles: 50
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 177
Elements in grown defect list: 0
Vendor (Seagate) cache information
Blocks sent to initiator = 412242328
Blocks received from initiator = 3213595579
Blocks read from cache and sent to initiator = 312462212
Number of read and write commands whose size <= segment size = 31915885
Number of read and write commands whose size > segment size = 0
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 3178.45
number of minutes until next internal SMART test = 12
|
This I/O error message is written to warn about a hardware error with sdb. It could be with the disks or with the cable, for example.
I suppose it is less likely to be an error in the disks themselves, if you have a large number of disks all showing errors at the same time :-). It could be an error in the disk controller.
If you see "Buffer I/O error" but no specific messages about ATA or SCSI error codes, or about retry attempts in general, maybe that gives some hint. But I do not really know :-).
Of course, a software error could cause any messages whatsoever :-).
To give an example of a software error, although I know this is not the same error: I have seen a kernel bug where "Buffer I/O error" was shown, without any error messages about ATA or SCSI or retry attempts. Fedora bug 1553979.
The "Buffer" part just means that it happened during a request for file data which is cacheable in the page cache. For historical reasons, people sometimes call these requests "buffered IO".
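One way to gauge whether a single disk or the whole controller is affected is to count the errors per device in the log. A sketch over an inline sample (the sample lines are made up for illustration; on a real system you would read /var/log/messages instead):

```shell
#!/bin/sh
# Inline sample standing in for /var/log/messages
log='Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block 4980
Mar 29 13:28:22 server_DB kernel: Buffer I/O error on device sdb, logical block 4981
Mar 29 13:28:23 server_DB kernel: Buffer I/O error on device sdc, logical block 12'
# Extract the device name from each error line, then count errors per device
summary=$(echo "$log" \
  | sed -n 's/.*Buffer I\/O error on device \([a-z0-9]*\),.*/\1/p' \
  | sort | uniq -c)
echo "$summary"
```

If all devices behind one controller show errors at the same timestamps, that points at the controller or cabling rather than individual drives.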
| "kernel: Buffer I/O error on device" - Does my server have a hardware problem? |
1,320,588,194,000 |
Suppose I called kmalloc and didn't free that memory before rmmod was called on the module, what happens to that memory? Is it a memory leak and it is completely unusable until restart, or does the kernel free that memory automatically?
thanks
|
It won't be freed automatically. Memory allocated with kmalloc() must be released with kfree(); otherwise it stays allocated (leaked) until the system is rebooted.
[...] didn't free that memory before rmmod was called on the module [...]
When you run rmmod, the function registered with module_exit() is executed; that is the place to free any memory the module still holds when it is unloaded. For example:
#include <linux/module.h>
#include <linux/slab.h>

static void *z;

static int __init x(void)
{
	z = kmalloc(128, GFP_KERNEL); /* allocated at load time */
	return z ? 0 : -ENOMEM;
}
static void __exit y(void)
{
	kfree(z); /* freed at unload time */
}
module_init(x);
module_exit(y);
| What happens to memory that wasn't freed in a kernel module after unloading? |
1,320,588,194,000 |
I am trying to understand the logs of sysdig. Besides the standard file descriptors 0 (standard input), 1 (standard output) and 2 (standard error), it lists descriptors such as 3, 6, 7 and -2.
If these are indexes of the files a process has open, why are there negative numbers?
The structure of the events is like this:
*%evt.num %evt.time %evt.cpu %proc.name (%thread.tid) %evt.dir %evt.type %evt.args
58650327 12:56:29.887941337 0 clear_console (5527) > open
58650328 12:56:29.887948371 0 clear_console (5527) < open fd=-2(ENOENT) name=/dev/tty0 flags=3(O_RDWR) mode=0
58650329 12:56:29.887949853 0 clear_console (5527) > open
58650330 12:56:29.887954188 0 clear_console (5527) < open fd=-13(EACCES) name=/dev/console flags=3(O_RDWR) mode=0
58650331 12:56:29.887954835 0 clear_console (5527) > open
58650332 12:56:29.887956940 0 clear_console (5527) < open fd=-13(EACCES) name=/dev/console flags=1(O_RDONLY) mode=0
58650333 12:56:29.887957474 0 clear_console (5527) > open
58650334 12:56:29.887959911 0 clear_console (5527) < open fd=-13(EACCES) name=/dev/console flags=2(O_WRONLY) mode=0
58650363 12:56:29.888201994 0 bash (5506) > open
58650390 12:56:29.912662138 0 bash (5506) < open fd=-2(ENOENT) name=/etc/bash.bash_logout flags=1(O_RDONLY) mode=0
58650395 12:56:29.912720036 0 bash (5506) > open
58650396 12:56:29.912735157 0 bash (5506) < open fd=3(<f>/home/ubuntu/.bash_history) name=/home/ubuntu/.bash_history flags=10(O_APPEND|O_WRONLY) mode=0
58650426 12:56:29.953271487 0 bash (5506) > open
58650427 12:56:29.953303756 0 bash (5506) < open fd=3(<f>/home/ubuntu/.bash_history) name=/home/ubuntu/.bash_history flags=1(O_RDONLY) mode=0
58650541 12:56:29.962503103 0 sshd (5495) > open
58650542 12:56:29.962537862 0 sshd (5495) < open fd=6(<f>/etc/passwd) name=/etc/passwd flags=4097(O_RDONLY|O_CLOEXEC) mode=0
58650559 12:56:29.962636515 0 sshd (5495) > open
58650560 12:56:29.962646634 0 sshd (5495) < open fd=6(<f>/var/run/utmp) name=/var/run/utmp flags=4097(O_RDONLY|O_CLOEXEC) mode=0
58651059 12:56:29.997560921 0 sshd (5495) > open
58651060 12:56:29.997629170 0 sshd (5495) < open fd=7(<f>/var/run/utmp) name=/var/run/utmp flags=4099(O_RDWR|O_CLOEXEC) mode=0
58651091 12:56:29.997727995 0 sshd (5495) > open
58651092 12:56:29.997768935 0 sshd (5495) < open fd=6(<f>/var/log/wtmp) name=/var/log/wtmp flags=2(O_WRONLY) mode=0
58651991 12:56:30.016524060 0 sshd (5495) > open
58651992 12:56:30.016573912 0 sshd (5495) < open fd=4(<f>/etc/login.defs) name=/etc/login.defs flags=1(O_RDONLY) mode=0
58652240 12:56:30.053254470 0 sshd (5495) > open
58652241 12:56:30.053280905 0 sshd (5495) < open fd=4(<f>/etc/passwd) name=/etc/passwd flags=4097(O_RDONLY|O_CLOEXEC) mode=0
|
When you use open() to open a file (see man 2 open), you get a file descriptor back for it (it's an int in C). The standard streams are associated with descriptors 0, 1 and 2, and any other open file stream will have a separate descriptor associated with it.
There's a limit to how many files you can have open at once, usually somewhere around 512 or 1024 (see ulimit -Hn for the hard upper limit), and each of those open files will have a file descriptor associated with them.
Conceptually, it's just an index into an array maintained by the kernel. Apart from the three standard ones, there is no fixed association between the file descriptors and any other stream.
In the log that you have added to the question, you can see that the "negative file descriptors" are associated with error codes (ENOENT and EACCES). At the system-call level, open() returns a negative errno value on failure (the C library wrapper turns this into -1 and sets errno), and that raw return value is what sysdig reports.
See man errno for a description of these error codes.
The file descriptors are per process, so file descriptor 6 in process A is not the same stream as file descriptor 6 in process B.
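A quick way to see this for yourself in a shell on Linux (the /proc path is Linux-specific, and /etc/passwd is just a convenient file that is almost always present): descriptors 0-2 are already taken, so the next file opened gets 3, while a failed open never produces a descriptor at all — the kernel returns the error code instead, which sysdig prints negated:

```shell
# Ask the shell to open an extra read-only descriptor on fd 3
exec 3< /etc/passwd

# The process's open descriptors: 0, 1, 2 and now 3
ls /proc/$$/fd

# Close it again
exec 3<&-

# A nonexistent file: open() fails with ENOENT, no fd is created
cat /no/such/file 2>&1; echo "exit status: $?"
```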
| What are file descriptors other than 0, 1 and 2 |
1,320,588,194,000 |
My board boots via U-Boot and AFAIK that bootloader does not support device tree overlays, so I'm probably forced to generate a single, static .dtb with all relevant overlays (and settings??) already applied to it. In principle that would be okay for me, but how do I do that?
Is there some command line tool that takes .dtb and .dtbo files resp. .dts and .dtsi files and combines them into a single .dtb / .dts?
dtc doesn't seem to do that job.
The ultimate goal is to get I²C working on a Raspberry B+ that boots via U-Boot.
|
You don't need to do this.
With this change, overlays are in u-boot!
https://github.com/u-boot/u-boot/commit/e6628ad7b99b285b25147366c68a7b956e362878
Enjoy :)
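For reference, applying an overlay from the U-Boot prompt with that support looks roughly like this — the addresses, storage device and file names are illustrative and board-specific, and a Raspberry Pi B+ (32-bit ARM) would boot with bootz rather than booti:

```
=> load mmc 0:1 ${fdt_addr_r} base.dtb
=> fdt addr ${fdt_addr_r}
=> fdt resize 8192
=> load mmc 0:1 ${loadaddr} i2c-overlay.dtbo
=> fdt apply ${loadaddr}
=> bootz ${kernel_addr_r} - ${fdt_addr_r}
```

The fdt resize step grows the loaded blob so the overlay's nodes fit before fdt apply merges them in.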
| how to merge device tree overlays to a single .dtb at build time? |
1,320,588,194,000 |
My board keeps displaying the message below, and the terminal no longer accepts any input.
What do the fields in the message mean (t, g, c, q, ...)?
What is the cause of this behaviour?
How can I fix it?
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=3936547 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=3972552 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4008557 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4044562 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4080567 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4116572 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4152577 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4188582 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4224587 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4260592 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4296597 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4332602 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4368607 jiffies, g=367023708, c=367023707, q=1511)
|
You probably have a real-time application that is consuming all CPU time (some bad implementation), and because of its real-time scheduling priority the system doesn't have enough resources left for other tasks.
I suggest that you remove the real-time priority from your applications, check which one is consuming a lot of CPU and, after correcting the problem, put it back to real-time priority.
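To spot such a process, one way (assuming a procps-style ps) is to list everything that has a real-time priority set, then demote the culprit:

```shell
# Show processes whose RTPRIO field is set (anything but "-")
ps -eo pid,cls,rtprio,comm | awk '$3 != "-"'

# Demote a PID back to the normal SCHED_OTHER policy (illustrative PID, needs root)
# chrt -o -p 0 1234
```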
| "rcu_preempt detected stalls on CPUs / tasks" message appears to continue |
1,436,241,099,000 |
I have updated the Linux kernel on my system to Linux 3.11-2-amd64 #1 SMP Debian 3.11.10-1 (2013-12-04) x86_64 GNU/Linux. Having done that, the sound no longer works.
Distribution info (cat /etc/*-release):
PRETTY_NAME="Debian GNU/Linux jessie/sid"
NAME="Debian GNU/Linux"
ID=debian
ANSI_COLOR="1;31"
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support/"
BUG_REPORT_URL="http://bugs.debian.org/"
When I run the command lspci | grep -i audio it prints the following:
00:01.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Wrestler HDMI Audio
00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD] FCH Azalia Controller (rev 01)
When I try to do a test of my sound with the command:
LC_ALL=C speaker-test -l 3 -t sine -c 1
I get the following:
speaker-test 1.0.27.2
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
Sine wave rate is 440.0000Hz
ALSA lib pulse.c:243:(pulse_connect) PulseAudio: Unable to connect: Access denied
Playback open error: -111,Connection refused
What should I do?
When I run the command alsactl init it prints:
Found hardware: "HDA-Intel" "ATI R6xx HDMI" "HDA:1002aa01,00aa0100,00100200" "0x1025" "0x0740"
Hardware is initialized using a generic method
So when I try to run: pulseaudio -D it prints:
E: [pulseaudio] main.c: Daemon startup failed.
aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: Generic_1 [HD-Audio Generic], device 0: ALC271X Analog [ALC271X Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
aplay -L
default
Playback/recording through the PulseAudio sound server
hdmi:CARD=Generic,DEV=0
HD-Audio Generic, HDMI 0
HDMI Audio Output
sysdefault:CARD=Generic_1
HD-Audio Generic, ALC271X Analog
Default Audio Device
front:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
Front speakers
surround40:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
4.0 Surround output to Front and Rear speakers
surround41:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC271X Analog
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
dmesg | grep sound
[ 8.838666] input: HD-Audio Generic HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.1/sound/card0/input7
[ 9.395442] input: HD-Audio Generic Headphone as /devices/pci0000:00/0000:00:14.2/sound/card1/input10
[ 9.395704] input: HD-Audio Generic Mic as /devices/pci0000:00/0000:00:14.2/sound/card1/input11
lsmod
Module Size Used by
cpufreq_userspace 12525 0
cpufreq_stats 12777 0
cpufreq_powersave 12454 0
cpufreq_conservative 14184 0
parport_pc 26300 0
ppdev 12686 0
lp 17074 0
parport 35749 3 lp,ppdev,parport_pc
bnep 17431 2
rfcomm 36903 0
bluetooth 215917 10 bnep,rfcomm
binfmt_misc 16949 1
nfsd 255063 2
auth_rpcgss 51036 1 nfsd
oid_registry 12419 1 auth_rpcgss
nfs_acl 12511 1 nfsd
nfs 143940 0
lockd 79321 2 nfs,nfsd
fscache 45230 1 nfs
sunrpc 211258 6 nfs,nfsd,auth_rpcgss,lockd,nfs_acl
fuse 78616 1
loop 26609 0
uvcvideo 78960 0
videobuf2_vmalloc 12816 1 uvcvideo
videobuf2_memops 12519 1 videobuf2_vmalloc
videobuf2_core 35029 1 uvcvideo
videodev 105100 2 uvcvideo,videobuf2_core
media 18303 2 uvcvideo,videodev
joydev 17063 0
acer_wmi 30174 0
snd_hda_codec_realtek 41059 1
arc4 12536 2
sparse_keymap 12818 1 acer_wmi
ath9k 94801 0
radeon 1166155 3
ath9k_common 12687 1 ath9k
ath9k_hw 390315 2 ath9k_common,ath9k
ath 21417 3 ath9k_common,ath9k,ath9k_hw
mac80211 416244 1 ath9k
ttm 69419 1 radeon
drm_kms_helper 35647 1 radeon
drm 227730 5 ttm,drm_kms_helper,radeon
snd_hda_codec_hdmi 35769 1
snd_hda_intel 39672 7
snd_hda_codec 142551 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
kvm 354353 0
i2c_piix4 12623 0
cfg80211 377915 3 ath,ath9k,mac80211
snd_hwdep 13148 1 snd_hda_codec
snd_pcm 84096 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
snd_page_alloc 17114 2 snd_pcm,snd_hda_intel
pcspkr 12595 0
psmouse 82028 0
evdev 17445 24
serio_raw 12849 0
k10temp 12618 0
snd_seq 48834 0
snd_seq_device 13132 1 snd_seq
snd_timer 26614 2 snd_pcm,snd_seq
i2c_algo_bit 12751 1 radeon
snd 60869 23 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_hda_codec,snd_hda_intel,snd_seq_device
wmi 17339 1 acer_wmi
rfkill 18978 6 cfg80211,acer_wmi,bluetooth
i2c_core 24084 6 drm,i2c_piix4,drm_kms_helper,i2c_algo_bit,radeon,videodev
acpi_cpufreq 17299 0
mperf 12411 1 acpi_cpufreq
soundcore 13026 1 snd
processor 28326 3 acpi_cpufreq
video 17844 1 acer_wmi
ac 12668 0
battery 13101 0
button 12944 0
ext4 457329 1
crc16 12343 2 ext4,bluetooth
mbcache 13034 1 ext4
jbd2 82560 1 ext4
sg 29971 0
sd_mod 44300 3
crc_t10dif 12348 1 sd_mod
microcode 30309 0
ehci_pci 12472 0
thermal 17468 0
thermal_sys 27268 3 video,thermal,processor
ohci_pci 12808 0
ohci_hcd 25977 1 ohci_pci
ahci 25096 2
libahci 27121 1 ahci
ehci_hcd 44263 1 ehci_pci
xhci_hcd 89949 0
libata 169120 2 ahci,libahci
usbcore 154086 6 uvcvideo,ohci_hcd,ohci_pci,ehci_hcd,ehci_pci,xhci_hcd
scsi_mod 178166 3 sg,libata,sd_mod
usb_common 12440 1 usbcore
r8169 60070 0
mii 12675 1 r8169
grep -i pulse /var/log/syslog
Dec 20 13:43:39 Demian pulseaudio[3565]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:43:39 Demian pulseaudio[3565]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:43:41 Demian pulseaudio[3631]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:43:41 Demian pulseaudio[3631]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:46:52 Demian pulseaudio[3731]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:46:52 Demian pulseaudio[3731]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:46:54 Demian pulseaudio[3735]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 13:46:54 Demian pulseaudio[3735]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 15:59:58 Demian pulseaudio[3736]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 15:59:58 Demian pulseaudio[3736]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 15:59:59 Demian pulseaudio[3739]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 15:59:59 Demian pulseaudio[3739]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 16:00:57 Demian pulseaudio[3945]: [pulseaudio] pid.c: Daemon already running.
Dec 20 16:11:57 Demian pulseaudio[3762]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 16:11:57 Demian pulseaudio[3762]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 16:11:59 Demian pulseaudio[3767]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 16:11:59 Demian pulseaudio[3767]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 16:13:47 Demian pulseaudio[3973]: [pulseaudio] pid.c: Daemon already running.
Dec 20 18:06:15 Demian pulseaudio[3911]: [pulseaudio] module-x11-publish.c: PulseAudio information vanished from X11!
Dec 20 20:15:22 Demian pulseaudio[3726]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:15:22 Demian pulseaudio[3726]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:15:24 Demian pulseaudio[3731]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:15:24 Demian pulseaudio[3731]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:20:36 Demian pulseaudio[3673]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:20:36 Demian pulseaudio[3673]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:20:37 Demian pulseaudio[3705]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 20:20:37 Demian pulseaudio[3705]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 21:36:34 Demian pulseaudio[3643]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 21:36:34 Demian pulseaudio[3643]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 21:36:36 Demian pulseaudio[3713]: [pulseaudio] authkey.c: Failed to open cookie file '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 21:36:36 Demian pulseaudio[3713]: [pulseaudio] authkey.c: Failed to load authorization key '/var/lib/gdm3/.config/pulse/cookie': No existe el fichero o el directorio
Dec 20 21:37:22 Demian pulseaudio[3950]: [pulseaudio] pid.c: Daemon already running.
|
Well, I found the solution:
rm -r /home/user/.pulse*
Then change /etc/libao.conf from (old):
default_driver=alsa
quiet
to (new):
default_driver=pulse
quiet
and restart your system.
| After upgrading the kernel, the sound no longer works |
1,436,241,099,000 |
I have a problem booting my Debian Linux server. After a system update, GRUB loads the initrd and the system should ask for the password, but it doesn't. Instead, I get dropped to BusyBox. After trying to mount the encrypted volume manually with cryptsetup luksOpen, I get this error:
device-mapper: table: 254:0: crypt: Error allocating crypto tfm
device-mapper: reload ioctl failed: Invalid argument
Failed to setup dm-crypt key mapping for device /dev/sda3
Check that the kernel supports aes-cbc-essiv:sha256 cipher (check syslog for more info).
|
Your kernel lacks support for aes-cbc-essiv:sha256. “Error allocating crypto tfm” refers to the kernel's cryptographic subsystem: some necessary cryptographic data structure couldn't be initialized. Your support for cryptographic algorithms comes in modules, and you have a module for the AES algorithm and a module for the SHA-256 algorithm, but no module for the CBC mode. You will not be able to mount your encrypted device without it.
If you compiled your own kernel, make sure to enable all necessary crypto algorithms. If your kernel comes from your distribution, this may be a bug that you need to report. In either case, there must be a module /lib/modules/2.6.32-5-amd64/kernel/crypto/cbc.ko. If the module exists, then your problem is instead with the initramfs generation script.
In addition to the cbc module, you need other kernel components to tie the crypto together. Check that CRYPTO_MANAGER, CRYPTO_RNG2 and CRYPTO_BLKCIPHER2 are set in your kernel configuration. Debian's initramfs building script should take care of these even if they're compiled as modules. As the crypto subsystem is rather complex, there may be other vital components that are missing from the initramfs script. If you need further help, read through the discussion of bug #541835, and post your exact kernel version, as well as your kernel configuration if you compiled it yourself.
You will need to boot from a rescue system with the requisite crypto support to repair this. Mount your root filesystem, chroot into it, mount /boot, and run dpkg-reconfigure linux-image-… to regenerate the initramfs.
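From the rescue system (or any running kernel) you can check whether the pieces are there — a rough sketch; note that the module may also be built into the kernel, in which case modprobe is a no-op and no cbc.ko file exists:

```shell
# Algorithm names the running kernel currently provides
grep '^name' /proc/crypto | sort -u

# Try to load the CBC mode module explicitly (needs root)
modprobe cbc 2>/dev/null || echo "modprobe cbc failed (built-in, or not root?)"

# Is the module file present for this kernel?
ls /lib/modules/"$(uname -r)"/kernel/crypto/cbc.ko 2>/dev/null \
  || echo "cbc.ko not found (may be built-in)"
```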
| Booting encrypted root partion fails after system update |
1,436,241,099,000 |
Since linux 2.6.30, filesystems are mounted with "relatime" by default. In this discussion, Ingo Molnar says he has added the CONFIG_DEFAULT_RELATIME kernel option, which:
makes 'norelatime' the default for all mounts without an extra kernel
boot option.
I don't really get it: does that mean that without CONFIG_DEFAULT_RELATIME in .config, a kernel will not use relatime as the default mount option?
How can one enable or disable CONFIG_DEFAULT_RELATIME in make menuconfig? (I don't find anything related to relatime.)
And finally, I can't even find CONFIG_DEFAULT_RELATIME in the kernel sources.
Can someone enlighten me?
|
Ingo Molnar proposed a patch, but this patch wasn't accepted into the kernel tree. Linus Torvalds made relatime the default setting in 2.6.30, unconditionally, and this is still true in 3.0. If you want relatime to default off in the kernel, you need to apply Ingo Molnar's patch in your copy of the source.
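If the goal is merely to get the old atime behaviour back on a particular filesystem, no kernel patch is needed: atime semantics can be overridden per mount, e.g. via /etc/fstab (device and mount point are illustrative):

```
# /etc/fstab -- override the kernel's relatime default for this mount
UUID=0123-4567  /data  ext4  defaults,strictatime  0  2
```

The same can be done at runtime with mount -o remount,strictatime /data (or noatime to suppress atime updates entirely).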
| How to configure CONFIG_DEFAULT_RELATIME to disable relatime |
1,436,241,099,000 |
The description of the bzImage in Wikipedia is really confusing me.
The above picture is from Wikipedia, but the line next to it is:
The bzImage file is in a specific
format: It contains concatenated
bootsect.o + setup.o + misc.o +
piggy.o.
I can't find the others (misc.o and piggy.o) in the image.
I would also like to get more clarity on these object files.
The info in this post about why we can't boot a vmlinux file directly is also really confusing me.
Another doubt is regarding the System.map. How is it linked to the bzImage? I know it contains the symbols of vmlinux before creating bzImage. But then at the time of booting, how does bzImage get attached to the System.map?
|
Till Linux 2.6.22, bzImage contained:
bbootsect (bootsect.o):
bsetup (setup.o)
bvmlinux (head.o, misc.o, piggy.o)
Linux 2.6.23 merged bbootsect and bsetup into one (header.o).
At boot-up, the kernel needs to run some initialization sequences (see the header file above) that are only necessary to bring the system into a usable state. At runtime, those sequences don't matter anymore (so why keep them in the running kernel?).
System.map relates to vmlinux; bzImage is just the compressed container out of which vmlinux gets extracted at boot time (so bzImage doesn't really care about System.map).
Linux 2.5.39 introduced CONFIG_KALLSYMS. If enabled, the kernel keeps its own map of symbols (/proc/kallsyms).
System.map is primarily used by user-space programs like klogd and ksymoops for debugging purposes.
Where to put System.map depends on the user-space programs that consult it.
ksymoops tries to get the symbol map either from /proc/ksyms or /usr/src/linux/System.map.
klogd searches in /boot/System.map, /System.map and /usr/src/linux/System.map.
Removing /boot/System.map caused no problems on a Linux system with kernel 2.6.27.19.
| More doubts in bzImage |
1,436,241,099,000 |
I'm trying to mirror a website locally. However, I've been running into a segmentation fault at some consistent point in the download, on a different domain than the site I'm targeting (probably due to --page-requisites).
2018-04-09 04:58:32 (346 KB/s) - './not-website.com/2017/06/28/xyz/index.html' saved [145810]
29247 Segmentation Fault (core dumped) wget --directory-prefix="${DL_ROOT}" --recursive --page-requisites --span-hosts --tries="${TRIES_NUM}" --timeout="${TIMEOUT_NUM}" --reject="*.tar" --convert-links --adjust-extension --continue --no-check-certificate "http://website.com/"
As a result, I assume that the segmentation fault happens while wget is trying to download from some specific site and failing.
However, the error message doesn't seem to tell me what address wget is failing on. It only tells me the last successful download. How can I figure out where/why wget fails with this segfault?
There is a 55M core file that the error seems to reference in (core dumped), but it's not in plain text. Is the information I need in there, and how do I extract that?
I have tested this across distros (Solaris, Debian, Raspbian), and this segfault is consistent, and always after the same address (not-website.com/... in the error message above).
I'm using the command:
$ wget \
--directory-prefix="${DL_ROOT}" \
--recursive \
--page-requisites \
--span-hosts \
--tries="${TRIES_NUM}" \
--timeout="${TIMEOUT_NUM}" \
--reject="*.tar" \
--convert-links \
--adjust-extension \
--continue \
--no-check-certificate \
"http://website.com/"
Additional Information
It's a big site, with quite a bit of media. At the point of failure, the downloaded directory size is about 252M.
Tested on:
GNU Wget 1.18 built on solaris2.10.
-cares +digest -gpgme +https +ipv6 -iri +large-file -metalink -nls
+ntlm +opie -psl +ssl/openssl
and
GNU Wget 1.18 built on linux-gnu.
-cares +digest -gpgme +https +ipv6 +iri +large-file -metalink +nls
+ntlm +opie +psl +ssl/gnutls
and
GNU Wget 1.16 built on linux-gnueabihf.
+digest +https +ipv6 +iri +large-file +nls +ntlm +opie +psl +ssl/gnutls
|
Segmentation Fault means the program, in this case wget, tried to access an invalid memory address and was therefore terminated by the kernel. This typically happens due to a program bug, so while it is quite likely it is being triggered by a specific website or web page (considering you can reproduce it quite consistently, on multiple platforms, at the same point), you have most likely exposed a bug in wget itself.
In order to find where in wget the segmentation fault is happening, you can use the gdb program (GNU debugger) to get a stack trace of wget at the time it crashed, which is possible since you have a core file. (A core dump is a copy of the image of the running program at the time it was terminated due to an invalid operation such as a Segmentation Fault.)
In order to do so, use the following command:
$ gdb wget core
Which will start the debugger on the wget binary (from the path) and restore the core file (in the current directory) as the image of the running program.
gdb will then print some information about the program and give you a prompt:
$ gdb wget core
GNU gdb (GDB) 7.9
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
...
Core was generated by `wget --directory-prefix=... --recursive --page-requisites --span-hosts --tries=... --timeout=... --reject=*.tar --convert-links --adjust-extension --continue --no-check-certificate http://website.com/'.
Program terminated with signal SIGSEGV, Segmentation Fault.
(gdb) _
At that point, you can use the command bt (short for "backtrace") to show you what was being executed at the time the program crashed. Which is usually a good place to start looking for the bug.
For instance, you might see something like this:
(gdb) bt
#0 0x00007f5371206363 in __select_nocancel () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x0000559e5acbf21c in select_fd ()
#2 0x0000559e5acf0bde in wgnutls_poll ()
#3 0x0000559e5acbf3a2 in poll_internal ()
#4 0x0000559e5acbf6ed in fd_peek ()
#5 0x0000559e5ace423d in fd_read_hunk ()
#6 0x0000559e5acd5ef9 in gethttp ()
#7 0x0000559e5acd9b26 in http_loop ()
#8 0x0000559e5ace53c8 in retrieve_url ()
#9 0x0000559e5ace273b in retrieve_tree ()
#10 0x0000559e5acbe67d in main ()
You can then quit gdb with the q (for "quit") command:
(gdb) q
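For a bug report, the same backtrace can also be captured non-interactively in one shot. This sketch assumes the core file sits in the current directory and is guarded so it degrades gracefully when gdb or the core is missing:

```shell
# Dump a full backtrace from the core without an interactive session
if command -v gdb >/dev/null 2>&1 && [ -f core ]; then
    gdb -batch -ex "bt full" "$(command -v wget)" core > wget-backtrace.txt
else
    echo "gdb or core file not available" > wget-backtrace.txt
fi
```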
It's usually helpful if you have the "debug symbols" installed. This is information generated by the compiler for debugging binaries, and it is usually stripped from binaries installed on a system so they're smaller. That information can be saved to an alternate location (typically under /usr/lib/debug) that can be located by gdb while trying to debug a binary.
With that information present, your backtraces will typically have more information attached to them, such as the name of all internal functions.
On Debian, you can install the debug info for wget with the following command:
$ sudo apt-get install wget-dbgsym
You might also want to install the debug symbols for glibc:
$ sudo apt-get install libc6-amd64-dbgsym
Having said that, before you start looking at why wget crashed, you might want to try the latest version of wget, which at the time of writing appears to be 1.19.4 and can be downloaded here. That is a source package, so you might need to build from sources to get it to work on your system.
This is because a segmentation fault is typically caused by a bug, and it's quite possible this bug was already fixed in wget and the fix is present in the latest version.
In case you get the same problem in the latest version, consider getting a core file and using gdb to get a backtrace, then report the bug to wget maintainers so they have a chance to address it.
In case it's fixed in the latest wget but still exists in the version Debian ships, consider reporting this to Debian, so they have a chance to backport the patch to their version of wget.
There's also a new project called wget2; it looks like an effort to replace wget with a new codebase. You might want to check whether that one works or not... It seems recent Debian releases ship it under the name "wget2".
I hope these pointers are helpful too!
| Wget segfault---how do I know which site is causing this? |
1,436,241,099,000 |
I was trying to change a Linux process's priority using chrt. I changed the priority of one process from SCHED_OTHER to SCHED_FIFO and could see some improvement in performance. I run the Angstrom Linux distribution on my embedded system.
So if I use SCHED_FIFO for one process, how will other processes be affected? What precautions should be taken? I couldn't notice an apparent change in processor utilization. Thanks in advance.
|
As explained in sched_setscheduler(2), SCHED_FIFO is a real-time policy, meaning that it will preempt any and all SCHED_OTHER (i.e. "normal") tasks whenever it decides it wants to run.
So you should be absolutely sure the process is well written and yields control periodically by itself, because if it decides it wants all the CPU time, the rest of your system will come to a complete halt until your RT process decides to sleep (which may be "never").
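chrt can both inspect and change a task's policy. A quick illustration on the current shell — switching to SCHED_FIFO requires root (or CAP_SYS_NICE), so those lines are shown commented out, with an illustrative PID:

```shell
# Show the scheduling policy and priority of this shell
chrt -p $$

# Give a process SCHED_FIFO priority 10 (root only):
# chrt -f -p 10 1234

# ...and revert it to the default SCHED_OTHER policy:
# chrt -o -p 0 1234
```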
| SCHED_FIFO and SCHED_OTHER |
1,436,241,099,000 |
I first found this by investigating parameters for earlycon but found that the options for console look almost identical. Both are present below and were taken from this source:
From documentation for console we have:
console= [KNL] Output console device and options.
tty<n> Use the virtual console device <n>.
ttyS<n>[,options]
ttyUSB0[,options]
Use the specified serial port. The options are of
the form "bbbbpnf", where "bbbb" is the baud rate,
"p" is parity ("n", "o", or "e"), "n" is number of
bits, and "f" is flow control ("r" for RTS or
omit it). Default is "9600n8".
See Documentation/serial-console.txt for more
information. See
Documentation/networking/netconsole.txt for an
alternative.
uart[8250],io,<addr>[,options]
uart[8250],mmio,<addr>[,options]
uart[8250],mmio16,<addr>[,options]
uart[8250],mmio32,<addr>[,options]
uart[8250],0x<addr>[,options]
Start an early, polled-mode console on the 8250/16550
UART at the specified I/O port or MMIO address,
switching to the matching ttyS device later.
MMIO inter-register address stride is either 8-bit
(mmio), 16-bit (mmio16), or 32-bit (mmio32).
If none of [io|mmio|mmio16|mmio32], <addr> is assumed
to be equivalent to 'mmio'. 'options' are specified in
the same format described for ttyS above; if unspecified,
the h/w is not re-initialized.
hvc<n> Use the hypervisor console device <n>. This is for
both Xen and PowerPC hypervisors.
If the device connected to the port is not a TTY but a braille
device, prepend "brl," before the device type, for instance
console=brl,ttyS0
For now, only VisioBraille is supported.
From documentation for earlycon we have:
earlycon= [KNL] Output early console device and options.
When used with no options, the early console is
determined by the stdout-path property in device
tree's chosen node.
cdns,<addr>
Start an early, polled-mode console on a cadence serial
port at the specified address. The cadence serial port
must already be setup and configured. Options are not
yet supported.
uart[8250],io,<addr>[,options]
uart[8250],mmio,<addr>[,options]
uart[8250],mmio32,<addr>[,options]
uart[8250],mmio32be,<addr>[,options]
uart[8250],0x<addr>[,options]
Start an early, polled-mode console on the 8250/16550
UART at the specified I/O port or MMIO address.
MMIO inter-register address stride is either 8-bit
(mmio) or 32-bit (mmio32 or mmio32be).
If none of [io|mmio|mmio32|mmio32be], <addr> is assumed
to be equivalent to 'mmio'. 'options' are specified
in the same format described for "console=ttyS<n>"; if
unspecified, the h/w is not initialized.
pl011,<addr>
pl011,mmio32,<addr>
Start an early, polled-mode console on a pl011 serial
port at the specified address. The pl011 serial port
must already be setup and configured. Options are not
yet supported. If 'mmio32' is specified, then the
driver will use only 32-bit accessors to read/write
the device registers.
meson,<addr>
Start an early, polled-mode console on a meson serial
port at the specified address. The serial port must
already be setup and configured. Options are not yet
supported.
msm_serial,<addr>
Start an early, polled-mode console on an msm serial
port at the specified address. The serial port
must already be setup and configured. Options are not
yet supported.
msm_serial_dm,<addr>
Start an early, polled-mode console on an msm serial
dm port at the specified address. The serial port
must already be setup and configured. Options are not
yet supported.
smh Use ARM semihosting calls for early console.
s3c2410,<addr>
s3c2412,<addr>
s3c2440,<addr>
s3c6400,<addr>
s5pv210,<addr>
exynos4210,<addr>
Use early console provided by serial driver available
on Samsung SoCs, requires selecting proper type and
a correct base address of the selected UART port. The
serial port must already be setup and configured.
Options are not yet supported.
lpuart,<addr>
lpuart32,<addr>
Use early console provided by Freescale LP UART driver
found on Freescale Vybrid and QorIQ LS1021A processors.
A valid base address must be provided, and the serial
port must already be setup and configured.
armada3700_uart,<addr>
Start an early, polled-mode console on the
Armada 3700 serial port at the specified
address. The serial port must already be setup
and configured. Options are not yet supported.
An example of the usage is:
earlycon=uart8250,0x21c0500
My questions are:
Why is there a reference to the 8250/16550 physical hardware? Has this old implementation evolved into an interface specification for modern designs? That is, are we still using the drivers for UARTs that were compatible back when these comms devices were external to the SoC?
If MMIO is Memory Mapped IO, what is "normal" IO referring to in this context?
What is the <addr> parameter? Is this the beginning of the UART configuration registers for the specific SoC you are running this kernel on? Do most UART configuration registers conform to a specific register layout such that a generic UART driver may appropriately configure the hardware?
|
I'm sure someone is still doing this, but back in the days before stuff like iLO/DRAC/etc. became cheap and ubiquitous, the best way to get "out of band" access to the console in case of emergencies or an oops was over the serial port. You would mount a terminal server in the rack, then run cables to the serial port of your servers. Some BIOSes supported console redirection to the serial port (for example VA Linux and SuperMicro servers in the 1999+ timeframe).
The 8250/16550 UARTs were some of the most popular serial-port chips at the time, meaning they were the best supported under Linux, and all of them used the 8250 kernel driver (there were many more models in that series that all used the same driver).
I suspect that a lot of SoC designs intended to run Linux built 8250/16550-compatible UARTs into them because it was the easy option: well documented, with a well-tested driver. Hopefully they built in the later multibyte-buffer versions, although even "slow" processors by today's standards can service a serial interrupt far more often than a 115k serial port can deliver one. IIRC the Mac had a serial port used for LocalTalk/AppleTalk (can't remember which was the protocol and which was the hardware) that did 248k, and that was back when CPUs ran at 60 MHz.
This is probably the best explanation of the difference between MMIO and port I/O: https://en.wikipedia.org/wiki/Memory-mapped_I/O I don't understand that level well enough to boil it down.
The above link will probably also answer what <addr> is for these purposes, but basically it's a memory address.
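As a concrete illustration of how the two parameters cooperate (the address reuses the documentation's own example and is otherwise hypothetical), a boot line for an SoC with an 8250-compatible UART might be:

```
earlycon=uart8250,mmio32,0x21c0500 console=ttyS0,115200n8
```

earlycon prints the very first kernel messages in polled mode, then hands over to the full ttyS0 driver once it is up.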
| Kernel parameters "console" and "earlycon" refer to old hardware? |
1,436,241,099,000 |
How do I activate extra verbosity during boot, on Debian? I removed the quiet parameter and tried to add debug but it didn't help.
My problem is that my keyboard takes 2-3 min to activate, which slows my startup tremendously since I need to unlock a partition. I would like to see the message that pops up the moment my keyboard is activated, but removing quiet doesn't print it.
|
There is a list of all possible boot parameters. I've never used it, but try adding ignore_loglevel:
Ignore loglevel setting - this will print /all/ kernel messages to the console. Useful for debugging. We also add it as printk module parameter, so users could change it dynamically, usually by /sys/module/printk/parameters/ignore_loglevel.
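A sketch of how that might look on Debian (the existing GRUB_CMDLINE_LINUX_DEFAULT contents are assumed; run update-grub afterwards):

```
# /etc/default/grub — drop "quiet", add ignore_loglevel:
GRUB_CMDLINE_LINUX_DEFAULT="ignore_loglevel"

# Or toggle it at runtime (as root), as the quoted documentation notes:
#   echo 1 > /sys/module/printk/parameters/ignore_loglevel
```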
I previously mentioned verbose, but actually that only applies to other specific kernel options like acpi, as you can read above.
With an understanding of your exact hardware, which it seems you have, you can go even deeper with arch/arm/Kconfig.debug
| How can I activate extra-verbose mode (debugging mode) during Debian boot? |
1,436,241,099,000 |
I have a Xilinx FPGA PCIe end-point on the PCI Bus.
Linux picks up the device just fine and everything in lspci looks perfect.
My question is about PCI access options from user-space and what would be good/bad.
Option 1: Direct access via /sys/.../resource0
(only one I have managed to make work so far)
I can open say /sys/bus/pci/devices/XXXX:XX:XX.X/resource0, then mmap it and read/write. I just need to fix permissions first.
My question is, is this a good or bad approach?
It feels like this might not be the preferred approach of accessing PCI address space?
Option 2: using uio_pci_generic
I've managed to configure my FPGA so that this driver actually binds, but the fact that it requires interrupts is really annoying.
And it seems this gives access to nothing except interrupts and the configuration memory space?
This doesn't seem very useful to me. Am I missing something?
Option 3: Write my own uio driver
This might be a reasonable option perhaps?
I'm not really sure how difficult this is.
One possible advantage of this is that I might get access to DMA and therefore speed things up quite a bit.
Option 4: Write a completely custom linux PCI driver
I would like to avoid this option if possible
My question is about what is the best approach and what are the down sides specifically of option 1. Or are there any other approaches I should consider?
(I'm running debian with kernel 3.14.15 rt patched)
|
Option 1 (direct access via /sys/.../resource0)
Good for testing and nothing wrong with it functionally, although you can't do anything advanced and there is no driver-layer abstraction. I find the way the user program interacts with sysfs in this method icky, but that might be my personal opinion.
Option 2 (using uio_pci_generic)
I don't know what uio_pci_generic does in detail, but it seems to add little functionality beyond allowing your user program to handle PCI legacy interrupts. Which is bad, because MSI is preferred anyhow.
Option 3 (custom uio driver)
I didn't try this, but I suspect it is a bit of a waste of time compared to option 4.
Option 4 (custom kernel driver)
This is really the best solution and the only way to do things properly. You need a driver to be able to handle things like DMA and MSI properly and to be able to provide any amount of abstraction via a character device. There is, however, ample documentation online on how to write drivers for PCI cards, and the kernel provides a lot of support for managing things.
| Linux Userspace PCI driver options? (uio_pci_generic) |
1,436,241,099,000 |
I'm using an Ubuntu Server (13.04) minimal installation (with the Xubuntu metapackage as a desktop environment, if that matters) on x86_64 on my Samsung notebook. I'm currently forced to use the no-longer-maintained version of the proprietary AMD graphics driver (fglrx-legacy), as the open-source "radeon" driver runs the card 15°C hotter at idle.
That's why I'd really like to try kernel 3.11 with the new power management features for AMD cards. The problem is, once I install a mainline kernel the system freezes after selecting the kernel in Grub with the messages:
Loading Linux 3.11.0-laptop ...
Loading initial ramdisk ...
And nothing happens. How can I find out what's wrong? Are there any logs from that early in the boot process stored somewhere?
Some more information: The system works perfectly with the Ubuntu Raring Kernel self-compiled from Git (which is based on 3.8). It doesn't work with mainline Kernel 3.9 or 3.10 (same problem). I also tried a pre-compiled "generic" version of 3.10, doesn't work either. I have 4 partitions on the hard drive: /boot (unencrypted), /, /home and swap (all LUKS/dm-crypt encrypted). The notebook is a Samsung NP-R522H. The GPU is a Mobility Radeon HD 4650.
|
Sorry, I totally forgot this question.
The solution back then was to use the Saucy (Ubuntu 13.10) Kernel, which is based on 3.11, instead of the vanilla/mainline one. Some changes probably broke compatibility, at least with my combination of hardware and software.
| Kernel freezes on loading ramdisk. How to find out what's wrong? |
1,436,241,099,000 |
I am writing a kernel module which has initialize and exit functions. I want one more function, and I want to call it from a user-space process at any time I want.
Is that possible? If so, how?
I am working on CentOS 5.2 and custom kernel, patched from linux 2.6.18.
EDIT:
To make it clear: I want to write a function into the kernel module and call this function from a regular source.c file in user space.
|
Doing a kernel module that can use the /proc filesystem sounds like it might work for you. IBM developerWorks has an article on that topic. I worked through the code a few years ago, and it worked back then. The article is dated 2006, and seems to apply to Linux 2.6 kernels.
The problem I can foresee with using "files" in the /proc filesystem to get your module to do its work is that an open/read/close style API probably doesn't match what you want to do. You might have to use an open() on a /proc file to mean "execute WBINVD" or something unobvious like that.
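For illustration, here is a minimal sketch of that approach against a 2.6.18-era kernel (names like mymod_trigger and do_work are made up; error handling is trimmed). Any write to the /proc file invokes the module function:

```c
/* Hypothetical sketch: expose a module function through /proc (2.6-era API). */
#include <linux/module.h>
#include <linux/proc_fs.h>

static void do_work(void)
{
    printk(KERN_INFO "mymod: do_work() called from userspace\n");
}

static int mymod_write(struct file *file, const char __user *buf,
                       unsigned long count, void *data)
{
    do_work();            /* any write triggers the function */
    return count;         /* consume all input */
}

static struct proc_dir_entry *entry;

static int __init mymod_init(void)
{
    entry = create_proc_entry("mymod_trigger", 0222, NULL);
    if (!entry)
        return -ENOMEM;
    entry->write_proc = mymod_write;
    return 0;
}

static void __exit mymod_exit(void)
{
    remove_proc_entry("mymod_trigger", NULL);
}

module_init(mymod_init);
module_exit(mymod_exit);
MODULE_LICENSE("GPL");
```

From user space, the trigger is then just `echo 1 > /proc/mymod_trigger`, or an open()/write() pair in your source.c.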
| kernel module function call |
1,436,241,099,000 |
I am working on a project that includes building an old kernel version of linux. This is fine, but I still need to patch the kernel of all previously found security vulnerabilities based on CVEs. I have the CVEs and have extracted the names of the vulnerable files mentioned in them, along with the kernel versions it affects.
So far, I have found about 150 potential vulnerabilities that could affect my build, but obviously some of them affect files relevant to graphics drivers that I don't use. So far, I have just gone through the list manually, checking if the files are included by using make menuconfig, and cating Kconfig in relevant folders. This has worked alright so far, but these methods don't show the actual file names (e.g. ipc/sem.c) so it takes more work than necessary.
Ideally, I would like to somehow print a list of all the files that will be included in my build, and then just write a script to check if vulnerable files are included.
How can I find the names of every source file (e.g. ipc/sem.c) that will be included in my build?
|
Do the build, then list the .o files. I think every .c or .S file that takes part in the build is compiled into a .o file with a corresponding name. This won't tell you if a security issue required a fix in a header file that's included in the build.
make vmlinux modules
find -name '*.o' -exec sh -c '
for f; do for x in c S; do [ -e "${f%.o}.$x" ] && echo "${f%.o}.$x"; done; done
' _ {} +
A more precise method is to put the sources on a filesystem where access times are recorded (mounted with strictatime; the default relatime may not update the access time on every read), and do the build. Files whose access time was not updated by the build were not used in that build.
touch start.stamp
make vmlinux modules
find -type f -anewer start.stamp
| How do I know which files will be included in a linux kernel before I build it? |
1,436,241,099,000 |
Is it possible to restart only the userspace? Like shutting down everything up to the kernel and then restarting from PID 1?
I would like to snapshot my root btrfs filesystem and quickly boot into that snapshot.
|
You are probably inspired by the macOS launchctl reboot userspace feature. Unfortunately systemd lacks this feature (systemd can re-launch itself with systemctl daemon-reexec, but this does not terminate all its children). However, you can get an even better result than macOS' userspace reboot by using the kexec feature of the Linux kernel.
What it does is terminate all userspace processes as well as all kernel threads, then restart the currently loaded (or even a user-specified alternate) kernel, effectively resulting in a start from the moment the bootloader would pass control to the kernel, minus going through the hardware reset stage that a full reboot would perform.
From the kexec(8) manual page:
DESCRIPTION
kexec is a system call that enables you to load and boot into another
kernel from the currently running kernel. kexec performs the function
of the boot loader from within the kernel. The primary difference be‐
tween a standard system boot and a kexec boot is that the hardware ini‐
tialization normally performed by the BIOS or firmware (depending on
architecture) is not performed during a kexec boot. This has the effect
of reducing the time required for a reboot.
There is a Ruby script that eases the use of kexec if you do not want to learn the kexec arguments; it parses the GRUB configuration files and lets you choose even a different kernel. Note, however, that it does not seem to understand Fedora's new bootloader-spec configuration files, in which case you must perform the actions with the bare kexec tool.
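If you end up using the bare tool, the sequence might look like this (paths follow Debian/Ubuntu naming; --reuse-cmdline copies the running kernel's command line):

```
# Stage the currently running kernel and initrd:
kexec -l /boot/vmlinuz-"$(uname -r)" \
      --initrd=/boot/initrd.img-"$(uname -r)" \
      --reuse-cmdline

# Clean shutdown into the staged kernel (syncs and unmounts via systemd):
systemctl kexec
# ...or jump immediately, skipping the shutdown sequence:
# kexec -e
```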
For reference, here is output from running kexec-reboot on Ubuntu 22.04 on EFI
$ sudo kexec-reboot -iv
Read GRUB configuration from /boot/grub/grub.cfg
Select a kernel to stage:
1: Ubuntu
2: Ubuntu, with Linux 5.19.0-35-generic
3: Ubuntu, with Linux 5.19.0-35-generic (recovery mode)
4: Ubuntu, with Linux 5.15.0-67-generic
5: Ubuntu, with Linux 5.15.0-67-generic (recovery mode)
6: Ubuntu, with Linux 5.15.0-58-generic
7: Ubuntu, with Linux 5.15.0-58-generic (recovery mode)
Your selection: 1
Staging Ubuntu
Staging kernel Ubuntu
Unloading previous kexec target, if any
Running /sbin/kexec -l /boot/vmlinuz-5.19.0-35-generic
--append='root=UUID=9e994b93-047b-46a6-9a71-51dfcb4e9598 ro intel_iommu=on iommu=pt
i915.enable_gvt=1 zswap.enabled=1 zswap.compressor=zstd resume=/dev/disk/by-
uuid/01b394fe-b29e-499c-a722-5f8d56cec3cd quiet splash $vt_handoff' --initrd=/boot
/initrd.img-5.19.0-35-generic
Add -r to also reboot
The API call itself states that filesystems are not unmounted; however, the kexec tool appears to also invoke a shutdown sequence minus the part where it resets the machine, thus syncing the filesystems, so your services and processes should terminate gracefully as in a normal shutdown.
| Reboot only userspace |
1,436,241,099,000 |
I need to upgrade my CentOS kernel to kernel-3.10.0-862 to address a security issue. When I run yum check-update | grep kernel, it shows only 693.21:
kernel.x86_64 3.10.0-693.21.1.el7 updates
I do see the updated kernel here :
http://mirror.centos.org/centos/7/updates/x86_64/Packages/
What is the correct method to install these update using yum command?
Thanks
SR
Update:
# rpm -qa kernel\*
kernel-3.10.0-693.11.1.el7.x86_64
kernel-headers-3.10.0-693.17.1.el7.x86_64
kernel-tools-libs-3.10.0-693.11.6.el7.x86_64
kernel-3.10.0-693.11.6.el7.x86_64
kernel-tools-3.10.0-693.11.6.el7.x86_64
# yum list installed | grep kernel
kernel.x86_64 3.10.0-693.11.1.el7 @updates
kernel.x86_64 3.10.0-693.11.6.el7 @updates
kernel-headers.x86_64 3.10.0-693.17.1.el7 @updates
kernel-tools.x86_64 3.10.0-693.11.6.el7 @updates
kernel-tools-libs.x86_64 3.10.0-693.11.6.el7 @updates
yum file for updates
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
|
Within /etc/yum.repos.d, you should see a repo file called CentOS-CR.repo, which will be disabled. Set enabled to 1, then run yum list available | grep kernel and you'll see the kernel packages for 3.10.0-862.
After that, you can yum update or yum install kernel* to get the new kernel packages.
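The whole sequence might look like this (yum-config-manager comes from the yum-utils package; alternatively edit the repo file by hand):

```
yum-config-manager --enable cr        # or set enabled=1 in /etc/yum.repos.d/CentOS-CR.repo
yum list available | grep kernel      # the 3.10.0-862 packages should now appear
yum update                            # or: yum install kernel
```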
I just ran yum update on my CentOS 7.4 test box and everything seems to be working okay with the usual tasks after 3.10.0-862 was installed.
| Centos 7 updated kernel to kernel-3.10.0-862 using YUM |
1,436,241,099,000 |
I have an Atmel SAM9X system running Angstrom. I am trying to make a recovery partition so when a user holds a button during boot up the recovery partition boots up.
I have modified the bootstrap so when a button is held on boot up, an alternate linux kernel is loaded. What I want is the alternate kernel to load linux from the recovery boot partition not the normal main partition.
Is this even possible? Or can I load the recovery partition without using two kernels?
The reason I want this is so if the main bootable partition gets corrupted the recovery partition will copy itself to the main partition (similar to those Dell or HP windows machines with the recovery partition) and the main bootable partition will be restored.
Edit: Gilles's suggestion did it. The bootstrap was setting the kernel command-line arguments; I just added root=/dev/mmcblk0p3 (boot from the 3rd SD partition) and it booted from the desired partition!
|
The kernel contains a default root partition setting, determined at compile time (you can change it in the binary image with the rdev command). You can pass an argument on the kernel command line to override this default at boot time, e.g. root=/dev/mmcblk9p42 to boot from MMC device 9 partition 42 instead of the default. The command line is passed to the kernel by the bootloader, so you need to change your bootloader configuration.
If there is an initrd or initramfs, it may override the root partition that was compiled in or passed by the bootloader.
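For instance, with a U-Boot-style bootloader this boils down to one environment variable (the partition numbers here are hypothetical; AT91 Bootstrap hard-codes the equivalent command line at build time):

```
setenv bootargs 'console=ttyS0,115200 root=/dev/mmcblk0p2 rootwait'   # normal boot
setenv bootargs 'console=ttyS0,115200 root=/dev/mmcblk0p3 rootwait'   # recovery boot
```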
| Making a recovery partition in embedded Linux |