1,624,558,154,000
I am considering building a small home server. I'd like to encrypt some folders on this server, so using the AES-NI instruction set supported by newer (mostly Intel) chips would be advantageous. Is there a way to use AES-NI with Debian, or is there at least an alternative kernel that supports it? [edit] Or is it already supported by default: http://kernel.alioth.debian.org/config/2.6.38-2/config_amd64_none_amd64?
It does look like it is configured in the config you listed (as CONFIG_CRYPTO_AES_NI_INTEL=m, meaning it is built as a module), but regardless, it is easy to build your own custom Debian kernels. See the Debian Kernel Handbook. You want version 1.10; the copy online is 1.09, which is out of date. The only downside of compiling a custom kernel is that you need to rebuild whenever there are security updates (and keep track of those updates yourself), whereas stock kernel updates arrive automatically via the package management system. Manoj Srivastava's kernel-package is also used for this, but the Debian kernel team use the procedures outlined in the handbook to build the stock kernels, so I think that is the better way to go.
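As a quick sanity check on a running system, you can verify both pieces: whether the CPU advertises AES-NI, and whether the option is set in the kernel config. The grep on a config file is shown here against a small sample file so it runs anywhere; the option name is the one from the question.

```shell
# On a real box you would also check the CPU flag:
#   grep -m1 -o aes /proc/cpuinfo
# And grep the running kernel's config, e.g. /boot/config-$(uname -r).
# Demonstrated on a sample config file:
cat > /tmp/config.sample <<'EOF'
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_NI_INTEL=m
EOF
grep CONFIG_CRYPTO_AES_NI_INTEL /tmp/config.sample
```

Since the option is built as a module, `lsmod | grep aesni` after boot tells you whether it is actually loaded.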
Use aes-ni in Debian
1,594,598,856,000
When you want to limit CPU time per process, you can do it via cgroups. There are two parameters that can do the job: cpu.cfs_period_us and cpu.cfs_quota_us. There's some info on the parameters here:

cpu.cfs_period_us: The duration in microseconds of each scheduler period, for bandwidth decisions. This defaults to 100000us or 100ms. Larger periods will improve throughput at the expense of latency, since the scheduler will be able to sustain a cpu-bound workload for longer. The opposite is true for smaller periods. Note that this only affects non-RT tasks that are scheduled by the CFS scheduler.

cpu.cfs_quota_us: The maximum time in microseconds during each cfs_period_us for which the current group will be allowed to run. For instance, if it is set to half of cfs_period_us, the cgroup will only be able to run for at most 50% of the time. One should note that this represents aggregate time over all CPUs in the system. Therefore, in order to allow full usage of two CPUs, for instance, one should set this value to twice the value of cfs_period_us.

Let's say I want to limit a process to 1 CPU core. This can be done in the following ways:

cpu.cfs_quota_us 1000000, cpu.cfs_period_us 1000000
vs.
cpu.cfs_quota_us 100000, cpu.cfs_period_us 100000
vs.
cpu.cfs_quota_us 10000, cpu.cfs_period_us 10000

What's the difference between the three options? Say I have a Firefox process: which cpu.cfs_period_us is better for it, longer or shorter, and why?
As the quote says, lower numbers give lower latency: a process does not have to wait long before it is scheduled, since every process gets a turn soon. However, there is more rescheduling overhead: every time the quota runs out and there are other processes ready to run, there is a reschedule. Rescheduling involves saving all registers on the stack, saving the stack pointer into the task control block, switching task control blocks, disabling/enabling parts of the virtual page table, reloading the stack pointer, and restoring the registers. It can also cause more cache misses. So, in short, things run slower. For long-running non-interactive tasks a longer scheduler period is better. The batch scheduler has a longer scheduler period, and runs at a lower priority than the standard interactive scheduler.
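Note that all three settings above grant the same CPU share, since what the bandwidth controller enforces is the ratio quota/period; only the granularity of enforcement differs. A quick sketch of the arithmetic (no cgroup filesystem needed):

```shell
# Each pair sets quota == period, i.e. exactly one CPU's worth of time;
# only the enforcement granularity (the period length) differs.
for period in 1000000 100000 10000; do
    awk -v q="$period" -v p="$period" \
        'BEGIN { printf "period=%dus quota=%dus -> %.1f CPUs\n", p, q, q/p }'
done
```

On a real system the values are simply echoed into the cpu.cfs_period_us and cpu.cfs_quota_us files of the target cgroup; the path depends on where the cpu controller is mounted.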
What are the benefits of using longer/shorter periods in cpu.cfs_period_us?
1,594,598,856,000
When I run a command like iostat -dkx 2 2 via ssh, I get the expected result, but the process on the local computer stays alive in status "interruptible sleep". Why is this happening? Is there a way to find out the reason for that behavior?

full command:
$ ssh -o ConnectTimeout=4 -o ChallengeResponseAuthentication=no -o PasswordAuthentication=no <user>@<host> iostat -dkx 2 2

ps output:
$ ps aux | grep 11893 && ps aux | grep PID
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 10273 0.0 0.0 103280 904 pts/0 S+ 12:09 0:00 grep PID
1000 11893 0.0 0.0 158732 3892 ? S Feb17 0:00 ssh -o ConnectTimeout=4 -o ChallengeResponseAuthentication=no -o PasswordAuthentication=no <user>@<host> iostat -dkx 2 2
1000 10285 0.0 0.0 103280 904 pts/0 S+ 12:09 0:00 grep 11893

strace:
$ strace -p 11893
Process 11893 attached - interrupt to quit
select(8, [5], [], NULL, NULL^C <unfinished ...>

wchan:
$ cat /proc/11893/wchan
poll_schedule_timeout

stacktrace:
$ cat /proc/11893/stack
[] poll_schedule_timeout+0x39/0x60
[] do_select+0x6bb/0x7c0
[] core_sys_select+0x18a/0x2c0
[] sys_select+0x47/0x110
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff
There seems to be nothing wrong. The process you are looking at (ssh) simply has nothing to do at the moment you are taking its process stat. As long as there is no output from the remotely started command, the select() call blocks and the process is put to sleep.
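The state is easy to reproduce with any blocked process; the third field of /proc/PID/stat is the state letter, and "S" is exactly the interruptible sleep shown for the ssh client above:

```shell
# Put a process to sleep and observe the same "S" state the ssh client shows.
sleep 2 &
pid=$!
sleep 0.2                                      # give it a moment to block
awk '{print "state:", $3}' "/proc/$pid/stat"   # S = interruptible sleep
wait "$pid"
```

The same /proc/PID/wchan and /proc/PID/stack files used in the question would show which kernel function the process is sleeping in.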
How to find out the reason why ssh processes are hanging?
1,594,598,856,000
I'm not sure at which level I am having a problem. System is a LeopardBoard DM368 running TI's own SDK / LSP / BusyBox kernel, the core Linux kernel is 2.6.x so using the serial_core.c driver model. By default the system has one UART enabled, UART0, mounted as /dev/ttyS0, which is also used/invoked via the bootargs console=ttyS0,115200n8 earlyprintk. We want to enable UART1 as /dev/ttyS1, so have gone through the low-level board initialisation code which sets up the pinmux, clocks, etc. On booting, the low-level init reports (via printk's I added in) that it's enabled the UART1, and the driver code reports happiness too:

[ 0.547812] serial8250.0: ttyS0 at MMIO 0x1c20000 (irq = 40) is a 16550A
[ 0.569849] serial8250.0: ttyS1 at MMIO 0x1d06000 (irq = 41) is a 16550A

However, the port does not appear in /dev/ (as /dev/ttyS1), and there are discrepancies with its status (flow control bits) which I suspect may be causing it to hang / never transmit:

cat /proc/tty/driver/serial
serinfo:1.0 driver revision:
0: uart:16550A mmio:0x01C20000 irq:40 tx:97998 rx:0 CTS|DSR
1: uart:16550A mmio:0x01D06000 irq:41 tx:0 rx:0 DSR

If I try to configure or modify it from the command line I get an error:

>: stty -F /dev/ttyS1
stty: can't open '/dev/ttyS1': No such file or directory

Bizarrely, if I change the bootargs to console=ttyS1,115200n8 earlyprintk, the port works perfectly, and ttyS0 is still initialised correctly and works too:

cat /proc/tty/driver/serial
serinfo:1.0 driver revision:
0: uart:16550A mmio:0x01C20000 irq:40 tx:0 rx:0 CTS|DSR
1: uart:16550A mmio:0x01D06000 irq:41 tx:11563 rx:0 RTS|DTR|DSR

Now, that would be fine, but our bootloader must use UART0, so it would be nice to keep all the console stuff on ttyS0 and have ttyS1 for our secondary comms. I inserted a couple of printk's into serial_core.c and it seems like uart_open() is never being called for ttyS1; I'm assuming it's something in the Linux init/startup sequence that needs modifying?
Edited: because I had fooled myself by doing an echo >/dev/ttyS1 which had created a file called /dev/ttyS1, which clouded matters somewhat. I'm now 99% sure /dev/ttyS1 is never created.
mknod /dev/ttyS1 c 4 65

(if /dev is read-only, use any writable directory mounted without the nodev option). If the node is created without errors, you can check whether your patch is working by reading/writing to the node, or with any terminal emulator.

Is the problem that the node isn't created? If you're using some auto-magic dynamic dev filesystem like devfs or udev, there is probably some registration problem in the middle (but I think not, as most of the code is the same as what brings up ttyS0, and I guess adding a serial port is like adding a configuration row in an array in some platform file). If you aren't using a dev fs like that, you probably have a MAKEDEV file somewhere in your build tree where you can manually add a line for your new device to be created statically. I've also seen a system where the dev nodes were created by an init script.
ttyS1/uart1 initialised but not accessible through /dev/ttyS1
1,594,598,856,000
I'm using Buildroot to create a Linux system for the NXP LPC3250 microcontroller. There are patches to the vanilla kernel to make it compatible with the LPC3250 controller: http://git.lpclinux.com/ I would like to build Kernel 2.6.39.2, but my Buildroot system always makes a 2.6.34 kernel! I have set the GIT repository to point to 2.6.39.2: You can see that I've specified a Defconfig for the system I'm building for: ea3250 I've edited my ea3250 defconfig as well: After doing a make clean all to clean everything and rebuild the system, looking inside the output kernel image shows it is still building 2.6.34: What am I doing wrong? Is there another menu I need to configure to get it to build 2.6.39.2?
Not really an answer, but it doesn't fit in a comment due to formatting. What happens when you do the following?

Clean everything:
cd /home/user/projects/buildroot
make clean
make distclean

Copy the target system's /proc/config.gz to the host's /tmp, then translate the existing kernel config to the new kernel version by answering the various questions:
gunzip -c /tmp/config.gz > ./.config
make oldconfig

Do some sanity checking on the version:
make menuconfig

Build the binaries:
make

Check the version of the kernel image built; don't load the kernel image in nano to search for a string, that is bad practice:
file ./buildroot/output/images/*
Buildroot ignoring configuration files - building wrong kernel
1,594,598,856,000
This seems like a very basic question, but after searching the web for 2 hours I couldn't find any real help on the subject and it's driving me nuts. It's brutally simple: I have a Radeon 4670 video card (rv770xt), use Arch Linux's repo kernel and have monitors on VGA and DVI. KMS is enabled and doing fine. The VGA monitor has a smaller resolution than the DVI monitor, and by default the screen is mirrored to both at the VGA monitor's resolution. Now I don't want to use the VGA monitor at all for the kernel framebuffer/console and want the kernel to use the (bigger) DVI monitor. Note: this is not about X.org dual-head, it's about the kernel framebuffer/console. Now here's the question: are there kernel parameters to specify a default output for my framebuffer and if so, which? The best way I have found so far seems to be using con2fb on startup to move all the VTs to the 2nd monitor, but I don't even know if the radeon driver creates 2 fbs, and it would only be a workaround anyway. Edit1: I checked, the driver just creates fb0, so con2fb is a no-go
You could use a udev rule and fbset to force the framebuffer resolution on both displays, which may go some way towards achieving what you are after. The udev rule would be along the lines of /etc/udev/rules.d/81-framebuffer-hack.rules:

KERNEL=="card0-DVI", SUBSYSTEM=="drm", ATTR{dpms}=="On", ATTR{enabled}=="enabled", ATTR{status}=="connected", RUN+="/usr/sbin/fbset -g 1920 1080 1920 1080 32"

You can read up on the specifics of udev rules on the Writing udev rules page.
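Alternatively, and this is a hedged suggestion since connector names vary by card, the KMS drivers accept per-connector video= options on the kernel command line, which can disable the VGA output for the console entirely; check /sys/class/drm/ for the actual connector names on your hardware. A sketch:

```
# 'd' disables a connector for the kernel console, 'e' force-enables one.
# Hypothetical additions to the kernel command line in the GRUB entry:
video=VGA-1:d video=DVI-I-1:e
```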
VGA and DVI, set default kernel console to one
1,594,598,856,000
Sometimes after I boot up the machine, I get no sound in Flash videos. At first I didn't know why it was happening, but after reading a few articles here and there, I found out it was because of my USB webcam (Logitech) - ALSA was messing something up and didn't know how to handle the sound drivers well (so they say). One of the suggestions I found was to force the (in my case) snd_hda_intel module to load before the snd_usb_audio module during the boot process. I thought it worked, but after a while the same problem appeared again. If I just unplug the webcam from my computer, sound works perfectly on each boot, but I don't want to reach down and plug the cam back in every time I need it... Does anyone have suggestions for what I should try to fix this?
You might be able to fix it by creating a ~/.asoundrc; see http://alsa.opensrc.org/.asoundrc Specifically, you want the opposite of the case here: http://alsa.opensrc.org/.asoundrc#Default_PCM_device So check aplay -L without the USB audio plugged in, and an .asoundrc like this might work:

pcm.!default front:Intel

If your card (like mine) gets named Intel by ALSA, just putting that in ~/.asoundrc should work (if you want it to affect things not run as your UID, put it in /etc/asound.conf instead). Note that none of this really applies if you're using PulseAudio... you need to do different things in that case.
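Another common way to make this stick, hedged because the module names are taken from the question and the right indexes depend on your hardware, is to pin the ALSA card order in modprobe configuration rather than relying on module load order:

```
# Hypothetical /etc/modprobe.d/alsa-order.conf
# Keep the onboard HDA chip as card 0 and push the webcam's audio to card 1.
options snd_hda_intel index=0
options snd_usb_audio index=1
```

With the onboard chip pinned to index 0, it stays the default device regardless of which module the kernel happens to load first.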
Sometimes no sound in flash videos after boot?
1,594,598,856,000
I have two x86_64 kernels compiled on the same machine against the same code (4.15.0 in Linus' source tree). The config files were produced by running make localmodconfig against that source, using different, larger original config files coming from different distros: Arch and Slackware respectively. I'll nickname them arch config and slk config for that reason. The issue: running cat /proc/meminfo consistently reports about 55-60 MB more in the MemTotal field for arch than for slk: MemTotal: 32600808 kB for arch vs MemTotal: 32544992 kB for slk I say 'consistently' because I've tried the experiment with the same config files against earlier versions of the source (a bunch of the 4.15-rc kernels, 4.14 before that, etc., rolling over from one source to the next with make oldconfig). This is reflected in the figures reported by htop, with slk reporting ~60MB less usage on bootup than arch. This is consistent with the htop dev's explanation of how htop's used memory figures are based on MemTotal. My question is: any suggestions for which config options I should look at that would make the difference? I of course don't mind the 60MB (the machine the kernels run on has 32 GB..), but it's an interesting puzzle to me and I'd like to use it as a learning opportunity. Memory reporting on Linux is discussed heavily on these forums and outside in general, but searching for this specific type of issue (different kernels / same machine => different outcome in memory reporting) has not produced anything I found relevant. Edit As per the suggestions in the post linked by @ErikF, I had a look at the output of journalctl --boot=# where # stands for 0 or -1, for the current and previous boots respectively (corresponding to the two kernels). 
These lines do seem to reflect the difference, so it is now a little clearer to me where it stems from:

arch (the one reporting larger MemTotal):
Memory: 32587752K/33472072K available (10252K kernel code, 1157K rwdata, 2760K rodata, 1364K init, 988K bss, 884320K reserved, 0K cma-reserved)

slk (the one reporting smaller MemTotal):
Memory: 32533996K/33472072K available (14348K kernel code, 1674K rwdata, 3976K rodata, 1616K init, 784K bss, 938076K reserved, 0K cma-reserved)

That's a difference of ~55 MB, as expected! I know the slk kernel is larger, as verified by comparing the sizes of the two vmlinuz files in my /boot/ folder, but the brunt of the difference seems to come from how much memory the two respective kernels reserve. I'd like to better understand what in the config files affects that to the extent that it does, but this certainly sheds some light.

Second edit: Answering the questions in the comment by @Tim Kennedy. Do you have a dedicated GPU, or use shared video memory? No dedicated GPU; it's a laptop with on-board Intel graphics. And do both kernels load the same graphics driver? Yes, i915. Also, compare the output of dmesg | grep BIOS-e820 | grep reserved - As you expected, it does not change. In all cases it's 12 lines, identical in every respect (memory addresses and all).

(Final?) edit: I believe it may just be as simple as this: the kernel reporting less MemTotal has much more of the driver suite built in; I just hadn't realized it would make such a noticeable difference. I compared: du -sh /lib/modules/<smaller-kernel>/modules.builtin returns 4K, while du -sh /lib/modules/<larger-kernel>/modules.builtin returns 16K. So in the end, I believe I was barking up the wrong tree: it won't be a single config option (or a handful), but rather the accumulated effect of many more built-in drivers.
I believe the picture is as in my last edit above; I just did not know enough about the process of reserving memory to realize it when I asked the question: it all boiled down to the larger kernel having more of the drivers built in rather than modularized. Incidentally, I did want to put this to some (remotely) practical use: I wanted as slim a kernel as I could get, but one that still boots without an initrd. I knew that the smaller kernel (moniker arch above) is very lightweight and fast to boot, but not without an initramfs; the larger kernel (slk) will boot without an initramfs, but takes longer. So I made a compromise that I think I'll stick with for now. I took the two config files, call them large and small, and made sure that every unset config option in small is reflected in large.

First,
awk '/^# CO/ {print}' small > staging
grabs all lines in the small config file that are of the form # CONFIG_BLAH is not set and dumps them in a staging text file. Then,
for i in $(awk '{print $2}' staging) ; do sed -i "s/^$i=./# $i is not set/" large ; done
takes all lines from the config file large that set (via =y or =m) an option contained in staging and unsets them.

I then ran make oldconfig on the resulting file large in the kernel source directory and compiled. It all went through all right: the new kernel boots some 3 seconds faster without an initramfs, and is much smaller in the sense that its /lib/modules/<kernel>/modules.builtin got cut down by half, from 16K to 8K. Those are not much to write home about, but as I said, this was puzzling me. I think I'm all set on the issue now. Presumably the more straightforward thing would have been to figure out once and for all precisely which drivers I need on this machine in order to boot without an initramfs, but I'll leave that for some other day. Besides, playing one config off against the other was a fun exercise in its own right.
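The two-step merge can be sanity-checked on toy files; a sketch using the same awk/sed idea (the file names small and large are just the nicknames from above, and the toy CONFIG_ options are made up for the demo):

```shell
# Toy versions of the two config files:
printf 'CONFIG_A=y\n# CONFIG_B is not set\n' > /tmp/small
printf 'CONFIG_A=y\nCONFIG_B=m\n' > /tmp/large
# Collect the "is not set" lines from small:
awk '/^# CONFIG_/ {print}' /tmp/small > /tmp/staging
# $2 of each staging line is the option name; unset it in large:
for i in $(awk '{print $2}' /tmp/staging); do
    sed -i "s/^$i=./# $i is not set/" /tmp/large
done
cat /tmp/large
# -> CONFIG_A=y
# -> # CONFIG_B is not set
```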
different kernels report different amounts of total memory on the same machine
1,424,774,855,000
It's clear that you need the kexec syscalls in the 'first' kernel. But does the kernel that should get loaded (with kexec_load and reboot) need to be compiled with kexec? If yes, what extra code needs to get executed inside the kernel to do a kexec boot instead of a normal boot?
I haven't seen an explicit statement about that and I haven't given it a try, but I would guess it is not necessary. In addition to my guess, the man page says: kexec performs the function of the boot loader from within the kernel. An adaptation of the kernel to be loaded would be necessary only if some black magic were involved instead of the normal boot loader procedure, but not if the running kernel does the same as the boot loader.
does the second kernel need kexec enabled
1,424,774,855,000
Is there any actual difference between iptables -P FORWARD DROP and net.ipv4.ip_forward = 0 ? I know that one is a firewall command while the other one is a kernel option. But: I don't know whether net.ipv4.ip_forward = 0 is enforced by netfilter or by the kernel directly. I don't know if there is any overhead associated with iptables -P FORWARD DROP compared to net.ipv4.ip_forward = 0. I couldn't find any reference clearly stating that these two options are actually identical in their effect. In short, is there any actual difference between these two commands?
When packet forwarding between interfaces is disabled (net.ipv4.ip_forward = 0), packets are never routed between interfaces at all, so the FORWARD chain is never traversed. So, in terms of performance - which is where your question is aimed - it makes no real difference. You can check this by watching the packet counters: iptables -L -vnx HTH
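For completeness, both knobs are a one-line read, which makes it easy to see what a given box is actually doing (the iptables line will print nothing if the tool isn't installed or you lack privileges):

```shell
# 0 here means the kernel will not route between interfaces at all,
# so netfilter's FORWARD chain is never even consulted:
cat /proc/sys/net/ipv4/ip_forward
# The chain policy, by contrast, only applies to packets being forwarded:
iptables -S FORWARD 2>/dev/null | head -n1    # e.g. "-P FORWARD DROP"
```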
Is there any difference between these two configuration options?
1,424,774,855,000
I think my Linux laptop has been hacked, for three reasons:

Whenever I saved files into the Home folder, the files wouldn't appear - not even in the other folders on my computer.
An unfamiliar .txt file has showed up in my Home folder. Having noticed it, I didn't open it. I immediately had a suspicion that maybe my laptop has been hacked.
When checking my Firewall status, it turned out that it was inactive.

Thus, I have taken the following steps:

I backed up all of my recent files using two USB sticks that aren't as important as other USB sticks which I own - so in case those USB sticks get infected with the potential malware, it wouldn't infect my other backed-up important files.

I've used ClamTK in order to scan the aforementioned suspicious file - but apparently, for some reason, it hasn't detected any threats.

I've used chkrootkit for another scan. This is the output (up until that point, nothing seemed to have been infected):

Searching for suspicious files and dirs, it may take a while...
The following suspicious files and directories were found:
/usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit
/usr/lib/debug/.build-id
/lib/modules/4.13.0-39-generic/vdso/.build-id
/lib/modules/4.13.0-37-generic/vdso/.build-id
/lib/modules/4.10.0-38-generic/vdso/.build-id
/lib/modules/4.13.0-36-generic/vdso/.build-id
/lib/modules/4.13.0-32-generic/vdso/.build-id
/lib/modules/4.13.0-38-generic/vdso/.build-id
/usr/lib/debug/.build-id
/lib/modules/4.13.0-39-generic/vdso/.build-id
/lib/modules/4.13.0-37-generic/vdso/.build-id
/lib/modules/4.10.0-38-generic/vdso/.build-id
/lib/modules/4.13.0-36-generic/vdso/.build-id
/lib/modules/4.13.0-32-generic/vdso/.build-id
/lib/modules/4.13.0-38-generic/vdso/.build-id

And also:

Searching for Linux/Ebury - Operation Windigo ssh...
Possible Linux/Ebury - Operation Windigo installetd

I tried - twice - to scan my laptop with F-PROT (fpscan), using the Ultimate Boot CD.
But when I tried getting into the PartedMagic section of the disc in order to use the tool, it just wouldn't work. Twice. So I was not able to use it whatsoever. When typing sudo freshclam, I got the following output:

ERROR: /var/log/clamav/freshclam.log is locked by another process
ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).

Then, I scanned the computer using rkhunter. These are the warnings I got:

/usr/bin/lwp-request [ Warning ]
Performing filesystem checks
Checking /dev for suspicious file types [ Warning ]
Checking for hidden files and directories [ Warning ]

And this is the summary:

System checks summary
=====================
File properties checks...
Files checked: 143
Suspect files: 1
Rootkit checks...
Rootkits checked : 365
Possible rootkits: 0
Applications checks...
All checks skipped
The system checks took: 1 minute and 10 seconds
All results have been written to the log file: /var/log/rkhunter.log
One or more warnings have been found while checking the system. Please check the log file (/var/log/rkhunter.log)

So, after all that, I do not have access to the rkhunter log file even as root:

n-even@neven-Lenovo-ideapad-310-14ISK ~ $ sudo su
neven-Lenovo-ideapad-310-14ISK n-even # /var/log/rkhunter.log
bash: /var/log/rkhunter.log: Permission denied

What should I be doing now? Help much appreciated! Thanks a lot.
Based on the details in your question, your system is clean.

You're making backups. OK.
clamav comes up clean. That's fine, too.
Based on your output of chkrootkit, your system is clean. Those files listed as suspicious are benign, and the Ebury/Windigo detection is a known false positive: https://github.com/Magentron/chkrootkit/issues/1
Some of the live discs you tried didn't work. That's OK.
The freshclam errors just mean there is probably already an updater running as a daemon.
You're trying to execute the log file. View it in a pager instead, like less /var/log/rkhunter.log.

From a logical standpoint, chkrootkit and rkhunter aren't of much use when used to scan the same system they execute on: they are not realtime scanners, so any decently packaged rootkit would have sabotaged the scanners before they were run. Also, both have heuristics that result in plenty of false positives. Saved files not appearing is rarely an indication of system compromise. Without knowing the contents of the "suspicious" .txt file you mention, no conclusion can be drawn from it. DEADJOE is a backup file created by the JOE text editor. The firewall in Linux Mint is disabled by default.

Edit: Added info on the DEADJOE file.
bash: /var/log/rkhunter.log: Permission denied (as root - Linux Mint 18.3)
1,424,774,855,000
Who are the authors of the pure Linux kernel from scratch, which was integrated with GNU tools and formed the full GNU/Linux Operating system in the 1990s? I have read some wiki articles but I haven't got any clear cut idea on the history.
The Wikipedia page has a fairly clear history. Linus Torvalds, then a student, wrote his own kernel in the summer of 1991 because he was unhappy with the available Unix kernels: Unix itself (with the Bell Labs code) was extremely expensive (even PC unices such as Xenix); there was Andrew Tanenbaum's MINIX, but it was only available to purchasers of Tanenbaum's book; and Torvalds was unaware of the effort led by the University of California, Berkeley to produce a free Unix (BSD) - and at the time, BSD didn't run on PCs anyway. Since then, thousands of people have contributed to the kernel, most of them in the form of drivers.
Who wrote the "Linux kernel" (Linus Torvalds and his team)?
1,424,774,855,000
Given the limited, fixed size of the kernel stack, my guess is that although theoretically we might have a recursive function as long as its recursion doesn't go too deep, pragmatism would suggest doing away with recursive functions altogether, to be on the safe side. After all, too much recursion can end up overwriting the thread_info struct at the bottom of the stack, with a kernel panic as the result.
Yes! Maybe some of the recursive calls are either documented or part of function names? Then a find/grep should reveal them. Here is a command to do it:

find /usr/src/linux/ -name "*.c" -exec grep recursive {} ";" -ls

Piping this through | wc -l gives me 270, which is, since -ls prints one additional line per file, at least 135 files+functions. Let's have a look at the first match: /usr/src/linux/fs/jfs/jfs_dmap.c. The match is a comment:

if the adjustment of the dmap control page, itself, causes its root to change, this change will be bubbled up to the next dmap control level by a recursive call to this routine, specifying the new root value and the next dmap control page level to be adjusted.

in front of the method

static int dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)

and in fact, line 2486 and its neighbours are:

if (dcp->stree[ROOT] != oldroot) {
	/* are we below the top level of the map. if so,
	 * bubble the root up to the next higher level.
	 */
	if (level < bmp->db_maxlevel) {
		/* bubble up the new root of this dmap control page to
		 * the next level.
		 */
		if ((rc = dbAdjCtl(bmp, blkno, dcp->stree[ROOT], alloc, level + 1))) {
			/* something went wrong in bubbling up the new
			 * root value, so backout the changes to the
			 * current dmap control page.
			 */

Since the question was whether there is any recursive function, we don't have to visit the next 135 or more matches, or search for recursions that aren't explicitly mentioned. The answer is: Yes!
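The same find/grep combination can be sanity-checked on a throwaway tree; the paths and the sample C file here are made up for the demo, echoing the jfs example above:

```shell
mkdir -p /tmp/src-demo
cat > /tmp/src-demo/demo.c <<'EOF'
/* bubbled up by a recursive call to this routine */
static int bubble_up(int level) { return level ? bubble_up(level - 1) : 0; }
EOF
# -l prints just the matching file name, one line per file:
find /tmp/src-demo -name "*.c" -exec grep -l recursive {} ";"
# -> /tmp/src-demo/demo.c
```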
Does the linux kernel (specifically 2.6 onwards) have any recursive function?
1,424,774,855,000
Just out of curiosity, I am interested in compiling the Linux kernel with both the clang and zapcc compilers, one at a time, but I can't find a guide to follow. Only GCC seems to be used to compile the Linux kernel. How do I compile the Linux kernel with other compilers?
The kernel build allows you to specify the tools you want to use; for example, to specify the C compiler, set the CC and HOSTCC variables: make CC=clang HOSTCC=clang The build is only expected to succeed with GCC, but there are people interested in using Clang instead, and it is known to work in some circumstances (some Android kernels are built with Clang).
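The mechanism at work is ordinary make variable precedence: a command-line assignment overrides an assignment made inside a Makefile. A toy demonstration of just that mechanism (the kernel build honors CC/HOSTCC the same way, though its Makefiles are far more involved):

```shell
mkdir -p /tmp/cc-demo && cd /tmp/cc-demo
printf 'CC = gcc\nshow:\n\t@echo "building with $(CC)"\n' > Makefile
make -s show              # -> building with gcc   (the Makefile's CC)
make -s CC=clang show     # -> building with clang (command line wins;
                          #    clang need not even be installed for this demo)
```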
How do I compile the Linux kernel with Clang? [closed]
1,424,774,855,000
I'm asking about functions like printf that a lot of processes might use and that also need the kernel's help for things like system calls. What is the step-by-step description, in detail, of what happens? Because I'm a little confused in this area, I have these questions:

Are the instructions for the printf function inside the kernel part of our user process? And when it tries to execute printf, do we JMP to that kernel location within the same user process, but in kernel mode? Or is there a context switch, with a kernel process executing this?

Do all of the processes that execute functions like printf map to the same physical memory location when they call printf in their virtual memory?

Overall, in what situations do non-kernel processes use the kernel part of virtual memory?
printf is implemented by the C library, it’s not part of the kernel. (The kernel does have its own equivalent, more or less, but that’s not available to user processes.) So a user process calling printf doesn’t call into the kernel immediately. If printf’s output gets written¹, that happens by calling write, which is handled by the kernel (well, there’s a small wrapper in the C library, but it’s minimal); the process invokes the corresponding system call, and control switches to the kernel, but still within the context of the same process. Code pages from executables or libraries are only loaded once into memory (for the same version of the underlying file), so yes, printf maps to the same physical address, if it’s provided by the same library. The kernel part of virtual memory is only accessible from kernel code. ¹ Strictly speaking, printf writes its output to a buffer, which might not be written anywhere.
What exactly happens in virtual memory when I call a function like printf in Linux? [closed]
1,424,774,855,000
I'm trying various ways of installing Linux (from ISO, flash, ISO on flash, kernel on flash, root FS in an ISO file on flash...) and want to understand what's going on. My question is: is it possible, given the built kernel and ramfs files from a distribution (vmlinuz and initrd), to find out where they are going to look for the "/" file system? Is it possible to configure this without recompiling the kernel? And one more: when the kernel loads the root filesystem from a loopback device created from an .iso filesystem, how can I configure this process? Thanks! EDIT: In fact, the GRUB configuration contains a GRUB root, which is not the real kernel root filesystem location, but just a folder that contains GRUB's belongings. The real root is configured in the init script in the initrd, as described here. That's how the Debian kernel finds an ISO file on the hard drive when booting from it - the initramfs finds it: http://www.debian.org/releases/stable/i386/apas02.html.en#howto-getting-images-hard-disk; note that the GRUB configuration doesn't contain any reference to the ISO location.
It is given at boot time by your bootloader, for example GRUB. To see which arguments your kernel was started with, do this:

$ cat /proc/cmdline

For me, this outputs:

BOOT_IMAGE=/vmlinuz-3.5.0-13-generic root=/dev/mapper/crypt-precise--root ro

So the initrd/initramfs will try to mount my /dev/mapper/crypt-precise--root (encrypted LVM) logical volume as /. You can reconfigure GRUB to load other operating systems from your hard drive using the same kernel (multi-boot), or edit this line at runtime by pressing e while the GRUB entry is selected (not yet booting). For recent Debian-based distributions, changing it permanently works like this (be careful, you may not be able to boot into your original operating system again!): in the file /etc/default/grub, set GRUB_CMDLINE_LINUX="root=/dev/mydevice" yourself and update GRUB by running update-grub. However, I recommend you configure multiboot instead; otherwise it's not easy to change or update your GRUB configuration again.
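Pulling the root= value out of a command line is a one-liner, shown here on the sample line from above rather than the live /proc/cmdline so it runs anywhere:

```shell
cmdline='BOOT_IMAGE=/vmlinuz-3.5.0-13-generic root=/dev/mapper/crypt-precise--root ro'
for word in $cmdline; do
    case $word in
        root=*) echo "${word#root=}" ;;   # strip the key, keep the device
    esac
done
# -> /dev/mapper/crypt-precise--root
```

On a running system, substitute `cmdline=$(cat /proc/cmdline)` to see what your own initramfs will mount as /.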
Given vmlinuz and initrd.gz, how do I find out, where the kernel is going to load / (root) file system from?
1,424,774,855,000
I saw in my syslog that kernel.perf_event_max_sample_rate got changed. I was wondering if I could write a quick script to log this variable every few minutes. Currently I check it with: sysctl -a | grep kernel.perf_event_max_sample_rate The sysctl man page says: sysctl - configure kernel parameters at runtime Does that mean that my script would get the parameter as it was set when the kernel starts? Would it pick up changes?
So one of the big things about learning Unix is reading the bloody man page: I'm not just being a get-off-my-lawn grumpy old man, there REALLY IS valuable information in there. In this case: DESCRIPTION sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs is required for sysctl support in Linux. You can use sysctl to both read and write sysctl data. So we can: $ sudo sysctl -a | grep kernel.perf_event_max_sample_rate kernel.perf_event_max_sample_rate = 50000 sysctl: reading key "net.ipv6.conf.all.stable_secret" sysctl: reading key "net.ipv6.conf.default.stable_secret" sysctl: reading key "net.ipv6.conf.enp3s0.stable_secret" sysctl: reading key "net.ipv6.conf.lo.stable_secret" sysctl: reading key "net.ipv6.conf.wlp1s0.stable_secret" By reading the man page we learn that -a is "display all values currently available", but we also can see: SYNOPSIS sysctl [options] [variable[=value]] [...] sysctl -p [file or regexp] [...] which means we can shorten the above command to: $ sudo sysctl kernel.perf_event_max_sample_rate kernel.perf_event_max_sample_rate = 50000 Or we can: $ more /proc/sys/kernel/perf_event_max_sample_rate 50000 So, TL;DR: Yes, you can write a script to log this variable every few minutes, but if it's going to show up in the logs when it changes, why would you? It would probably be more efficient to read the value right out of /proc/sys/kernel/perf_event_max_sample_rate than to use sysctl, and it would be more efficient to ask for the specific value from sysctl than to use grep.
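If you do want the periodic log anyway, a minimal sketch (the helper name log_value and the file names are made up for illustration, demonstrated on a stand-in file so it runs anywhere):

```shell
# Append "timestamp value" for a /proc/sys file to a log file.
log_value() {    # $1 = file to read, $2 = log file
    printf '%s %s\n' "$(date -u +%FT%TZ)" "$(cat "$1")" >> "$2"
}

# stand-in file for the demo; on a live system call e.g.:
# log_value /proc/sys/kernel/perf_event_max_sample_rate /var/log/rate.log
echo 50000 > demo_rate
log_value demo_rate demo_rate.log
cat demo_rate.log
```

A crontab line such as `*/5 * * * * /path/to/script` covers the "every few minutes" part without keeping a process running.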
View current kernel parameters?
1,424,774,855,000
I'm learning now how to compile and boot linux kernels. Is there a way to boot kernels in a virtual machine, rather than messing my system? I use VMWare Workstation on Windows 8. Can I use that to boot my linux kernel?
I'm learning now how to compile and boot linux kernels. Is there a way to boot kernels in a virtual machine, rather than messing my system? I use VMWare Workstation on Windows 8. I am assuming, based on your wording, that you don't have a UNIX-like working environment. To build your own kernel, you have to have one, so in this case you have a choice between the two: Create one, by installing a GNU/Linux distribution in a Virtual Machine under your hypervisor (that is, VMWare Workstation), or take the hacker's way and follow Linux From Scratch to create one for yourself (not advised for a beginner). After you have a working environment, compiling and testing your own kernel is as simple as doing (for example):

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.9.3.tar.xz
tar -xJf linux-3.9.3.tar.xz
cd linux-3.9.3
make menuconfig
make
make modules
make modules_install
make install

and then reboot (it may be slightly more involved, like making a ramdisk, therefore the above serves only as an example). Here are two guides on how to compile a linux kernel for Ubuntu and for Arch Linux
Boot my kernel in a Virtual Machine?
1,424,774,855,000
I've always wondered how the kernel passes control to third-party code, or specifically distribution-specific code, during boot. I've dug around in GRUB's configuration files, suspecting a special parameter to be passed to the kernel to let it know what to do once it has booted successfully, but unable to find anything. This leads me to suspect there may be certain files on the root partition that the kernel looks for. I'd be grateful if someone could shed some light upon this matter. How do distributions achieve this?
It's hardcoded, but you can override the defaults by the kernel parameter init=.... From init/main.c: if (execute_command) { run_init_process(execute_command); printk(KERN_WARNING "Failed to execute %s. Attempting " "defaults...\n", execute_command); } run_init_process("/sbin/init"); run_init_process("/etc/init"); run_init_process("/bin/init"); run_init_process("/bin/sh"); panic("No init found. Try passing init= option to kernel. " "See Linux Documentation/init.txt for guidance.");
How does the kernel "give up" control to distribution-specific initialization?
1,424,774,855,000
If I want to build a custom kernel for an ARM architecture, do I need to : a) Download the kernel from kernel.org, make changes to the kernel, build it using some cross compiler (like code sourcery or something) b) Find an ARM specific kernel from somewhere, find some patches, compile it using some ARM specific tool ? Can any custom kernel be built for the ARM architecture ? I have little knowledge about kernels in general.
The Linux kernel source tarball and git repository includes the code for all supported architectures, such as ARM. The subdirectory Documentation/arm/ contains some ARM related documents which you should probably have a look at before going further. The ARM specific code is located in the arch/arm/ subdirectory (some ARM specific drivers may be in the drivers/*/ subdirectories). Thus go ahead and download the normal kernel tarball from kernel.org and start by reading Documentation/arm/README which starts as follows: Compilation of kernel In order to compile ARM Linux, you will need a compiler capable of generating ARM ELF code with GNU extensions. GCC 3.3 is known to be ... It looks like after reading that file you will have many answers (and maybe also more questions but do not hesitate to ask them :).
Is there a different linux kernel for different architectures?
1,424,774,855,000
I have extremely low space in /boot and am wondering if I could get rid of a few things. This is the output of sudo ls -al /boot: total 216002 drwxr-xr-x 4 root root 3072 Feb 20 12:33 . drwxr-xr-x 23 root root 4096 Feb 20 12:30 .. -rw-r--r-- 1 root root 1271689 Oct 22 2015 abi-3.19.0-32-generic -rw-r--r-- 1 root root 1239577 Apr 18 2016 abi-4.4.0-21-generic -rw-r--r-- 1 root root 1244118 Jan 6 17:44 abi-4.4.0-59-generic -rw-r--r-- 1 root root 1244118 Jan 18 08:59 abi-4.4.0-62-generic -rw-r--r-- 1 root root 1245512 Feb 1 12:39 abi-4.4.0-63-generic -rw-r--r-- 1 root root 177790 Oct 22 2015 config-3.19.0-32-generic -rw-r--r-- 1 root root 189412 Apr 18 2016 config-4.4.0-21-generic -rw-r--r-- 1 root root 190047 Jan 6 17:44 config-4.4.0-59-generic -rw-r--r-- 1 root root 190047 Jan 18 08:59 config-4.4.0-62-generic -rw-r--r-- 1 root root 190247 Feb 1 12:39 config-4.4.0-63-generic drwxr-xr-x 5 root root 1024 Feb 2 19:56 grub -rw-r--r-- 1 root root 35945618 Jan 15 04:42 initrd.img-3.19.0-32-generic -rw-r--r-- 1 root root 40519001 Jan 15 08:48 initrd.img-4.4.0-21-generic -rw-r--r-- 1 root root 41067223 Feb 2 09:45 initrd.img-4.4.0-59-generic -rw-r--r-- 1 root root 41069127 Feb 2 19:56 initrd.img-4.4.0-62-generic drwx------ 2 root root 12288 Jul 6 2016 lost+found -rw-r--r-- 1 root root 182704 Jan 28 2016 memtest86+.bin -rw-r--r-- 1 root root 184380 Jan 28 2016 memtest86+.elf -rw-r--r-- 1 root root 184840 Jan 28 2016 memtest86+_multiboot.bin -rw------- 1 root root 3628149 Oct 22 2015 System.map-3.19.0-32-generic -rw------- 1 root root 3853719 Apr 18 2016 System.map-4.4.0-21-generic -rw------- 1 root root 3875594 Jan 6 17:44 System.map-4.4.0-59-generic -rw------- 1 root root 3875553 Jan 18 08:59 System.map-4.4.0-62-generic -rw------- 1 root root 3883990 Feb 1 12:39 System.map-4.4.0-63-generic -rw-r--r-- 1 root root 6572944 Jul 4 2016 vmlinuz-3.19.0-32-generic -rw------- 1 root root 7013968 Apr 18 2016 vmlinuz-4.4.0-21-generic -rw------- 1 root root 7069136 Jan 6 17:44 vmlinuz-4.4.0-59-generic -rw------- 1 root root 7070992 Jan 18 08:59 vmlinuz-4.4.0-62-generic -rw------- 1 root root 7087088 Feb 1 12:39 vmlinuz-4.4.0-63-generic If I am correct, I could delete everything with version numbers lower than the newest -generic. For example, of: vmlinuz-4.4.0-21-generic vmlinuz-4.4.0-59-generic vmlinuz-4.4.0-62-generic vmlinuz-4.4.0-63-generic I should be able to delete: vmlinuz-4.4.0-21-generic vmlinuz-4.4.0-59-generic vmlinuz-4.4.0-62-generic Please tell me if I can do this to clear space without damaging my system.
If you are running 4.4.0-63, you can delete all the vmlinuz-*, System.map-*, initrd.img-*, config-*, abi-* files, except those containing the string 4.4.0-63. However, for resilience, I would keep the previous version around, i.e. those files containing 4.4.0-62. You may have to update GRUB or whatever bootloader you use afterwards.
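To see what such a cleanup does before touching /boot, you can rehearse it on a throwaway directory. The keep-two-versions loop below is only a sketch of the idea, not a polished tool; the version numbers mirror the listing in the question:

```shell
# Build a throwaway /boot-like directory so no real files are at risk.
mkdir -p demo_boot
for v in 4.4.0-21 4.4.0-59 4.4.0-62 4.4.0-63; do
    for p in vmlinuz initrd.img System.map config abi; do
        : > "demo_boot/$p-$v-generic"
    done
done

keep_a=4.4.0-63   # the running kernel (uname -r)
keep_b=4.4.0-62   # one known-good fallback
for f in demo_boot/*; do
    case $f in
        *"$keep_a"*|*"$keep_b"*) ;;   # keep
        *) rm -- "$f" ;;              # older version: delete
    esac
done
ls demo_boot
```

On Ubuntu it is usually safer to let the package manager do this: the files belong to linux-image-* packages, so sudo apt-get autoremove (or purging the specific old linux-image packages) removes them and re-runs update-grub for you.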
wondering if I can delete what I think are old kernel images
1,424,774,855,000
I read that there are two modes called “kernel mode” and “user mode” to handle execution of processes. (Understanding the Linux Kernel, 3rd Edition.) Is that a hardware switch (kernel/user) that is controlled by Linux, or software feature provided by the Linux kernel?
Kernel mode and user mode are a hardware feature, specifically a feature of the processor. Processors designed for mid-to-high-end systems (PC, feature phone, smartphone, all but the simplest network appliances, …) include this feature. Kernel mode can go by different names: supervisor mode, privileged mode, etc. On x86 (the processor type in PCs), it is called “ring 0”, and user mode is called “ring 3”. The processor has a bit of storage in a register that indicates whether it is in kernel mode or user mode. (This can be more than one bit on processors that have more than two such modes.) Some operations can only be carried out while in kernel mode, in particular changing the virtual memory configuration by modifying the registers that control the MMU. Furthermore, there are only very few ways to switch from user mode to kernel mode, and they all require jumping to addresses controlled by the kernel code. This allows the code running in kernel mode to control the memory that code running in user mode can access. Unix-like operating systems (and most other operating systems with process isolation) are divided in two parts: The kernel runs in kernel mode. The kernel can do everything. Processes run in user mode. Processes can't access hardware and can't access the memory of other processes (except as explicitly shared). The operating system thus leverages the hardware features (privileged mode, MMU) to enforce isolation between processes. Microkernel-based operating systems have a finer-grained architecture, with less code running in kernel mode. When user mode code needs to perform actions that it can't do directly (such as access a file, access a peripheral, communicate with another process, …), it makes a system call: a jump into a predefined place in kernel code. When a hardware peripheral needs to request attention from the CPU, it switches the CPU to kernel mode and jumps to a predefined place in kernel code. This is called an interrupt. 
Further reading Wikipedia What is the difference between user-level threads and kernel-level threads? Hardware protection needed for operating system kernel
Are “kernel mode” and “user mode” hardware features or software features?
1,424,774,855,000
Many of my modules are missing /sys/module/*/parameters directory and I can't check the module loaded parameters. # printf "%s\n" /sys/module/*/parameters | wc -l 125 # lsmod | wc -l 151 # comm -13 <(printf "%s\n" /sys/module/*/parameters | xargs dirname | xargs basename -a | sort) <(lsmod | awk '{print $1}' | sort) | fmt Module aesni_intel at24 blake2b_generic bpf_preload btbcm btintel btmtk btrfs btrtl crc16 crc32_pclmul crc32c_generic crc32c_intel crct10dif_pclmul cryptd crypto_simd crypto_user dummy ecdh_generic fat gf128mul ghash_clmulni_intel i2c_smbus iTCO_vendor_support iTCO_wdt intel_cstate intel_pmc_bxt intel_rapl_common intel_rapl_msr intel_uncore ip6_tables ip6t_REJECT ip6table_filter ip6table_mangle ip6table_nat ip6table_raw ip_tables ipt_REJECT iptable_filter iptable_mangle iptable_nat iptable_raw irqbypass joydev ledtrig_audio libcrc32c lpc_ich mac_hid mei mei_hdcp mei_me mei_pxp mei_wdt nf_conntrack_broadcast nf_conntrack_netlink nf_conntrack_pptp nf_defrag_ipv4 nf_defrag_ipv6 nf_log_syslog nf_nat nf_nat_amanda nf_nat_ftp nf_nat_h323 nf_nat_irc nf_nat_pptp nf_nat_sip nf_nat_snmp_basic nf_nat_tftp nf_reject_ipv4 nf_reject_ipv6 nfnetlink nfnetlink_log nvidia nvme_common parport polyval_clmulni polyval_generic ppdev raid6_pq rapl sha512_ssse3 snd_hda_codec_conexant snd_hda_codec_generic snd_hda_core snd_hwdep ts_kmp tun uas vboxnetadp vboxnetflt vfat vmd x_tables xhci_pci xhci_pci_renesas xor xt_CT xt_LOG xt_NFLOG xt_addrtype xt_comment xt_conntrack xt_hashlimit xt_mark xt_multiport xt_tcpudp Most notably I was interested in: # lsmod | grep nvidia nvidia_drm 77824 20 nvidia_modeset 1515520 40 nvidia_drm nvidia_uvm 2891776 0 video 69632 1 nvidia_modeset nvidia 61472768 2179 nvidia_uvm,nvidia_modeset # ls /sys/module/nvidia/parameters ls: cannot access '/sys/module/nvidia/parameters': No such file or directory But also dummy doesn't have parameters, which cmon, it's a dummy: # modprobe dummy numdummies=12 # lsmod | grep dummy dummy 16384 0 root@leonidas 
/root # ls /sys/module/dummy/parameters ls: cannot access '/sys/module/dummy/parameters': No such file or directory # ip a | grep dummy | wc -l 12 I found https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1774731 about also missing dummy/parameters directory. How to enable that directory so that I can get kernel module parameters?
In order for modules to have their parameters visible in /sys/module/*/parameters, the module needs to provide a module_param_cb() callback function for each parameter. Those functions will have to "know where to look" for the current setting of the respective parameter, so the sysfs filesystem driver can use them to figure out the parameters and their states on demand. If the module uses the module_param(name, type, perm) or module_param_named(name, value, type, perm) macros to declare their parameters, the callback function is generated automatically unless perm is specified as 0. The dummy module declares its module parameter like this: module_param(numdummies, int, 0); MODULE_PARM_DESC(numdummies, "Number of dummy pseudo devices"); explicitly specifying the perm as 0, which makes the parameter not visible in sysfs. The main nvidia module declares its own NV_MODULE_PARAMETER(x) and NV_MODULE_STRING_PARAMETER(x) macros (in common/inc/nv-linux.h within the NVidia driver package), which use module_param() with the perm parameter set to 0: #define NV_MODULE_PARAMETER(x) module_param(x, int, 0) #define NV_MODULE_STRING_PARAMETER(x) module_param(x, charp, 0) Other modules in the driver package, like nvidia_modeset, nvidia_drm and nvidia_uvm do use module_param() in a more normal fashion, and those modules do have their parameters in /sys/module/*/parameters as expected. Apparently, the nvidia module internally handles its parameters as something called "registry keys" (see nvidia/nv-reg.h in the driver package). Perhaps this is some kind of attempt to provide a cross-platform interface for NVidia driver parameters, that is at least in some sense similar between Windows and Linux? Note also that the nvidia module provides its own /proc/driver/nvidia/params pseudo-file, which provides all the parameters in one virtual file. So, in a nutshell: I would recommend you to look at /proc/driver/nvidia/params instead and to see if it fits your needs. 
If not, and if you are willing to build a custom version of the NVidia driver, you might (at your own risk) try changing the definitions of the NV_MODULE_PARAMETER() and NV_MODULE_STRING_PARAMETER() macros to have a perm value other than 0, e.g.: #define NV_MODULE_PARAMETER(x) module_param(x, int, 0400) #define NV_MODULE_STRING_PARAMETER(x) module_param(x, charp, 0400) to make all parameters declared this way readable using the /sys/module/*/parameters interface, by the root user only. If it works, you might then send NVidia an enhancement request.
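For comparison, this is what a parameter declaration looks like in a module that does show up under /sys/module/*/parameters. This is a minimal hypothetical module with made-up names ("demo", "rate"), not taken from the NVidia sources:

```c
/* Sketch of a sysfs-visible module parameter; build against kernel headers
 * as an out-of-tree module. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int rate = 50000;
/* perm 0444 (world-readable) instead of 0, so after loading,
 * /sys/module/demo/parameters/rate exists and can be read */
module_param(rate, int, 0444);
MODULE_PARM_DESC(rate, "demo rate parameter");

static int __init demo_init(void) { return 0; }
static void __exit demo_exit(void) { }

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```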
modules are missing /sys/module/*/parameters directories, how to enable them?
1,424,774,855,000
When we run sysctl -p on our RHEL 7.2 server1 we get: sysctl -p fs.file-max = 500000 vm.swappiness = 10 vm.vfs_cache_pressure = 50 sysctl: cannot stat /proc/sys/pcie_aspm: No such file or directory net.core.somaxconn = 1024 # ls /proc/sys/pcie_aspm ls: cannot access /proc/sys/pcie_aspm: No such file or directory But when we run sysctl -p on the other server2, we get good results without errors: sysctl -p fs.file-max = 500000 vm.swappiness = 10 vm.vfs_cache_pressure = 50 net.core.somaxconn = 1024 The file /proc/sys/pcie_aspm does not exist on this server either (server2), so why does sysctl -p fail on server1?
As revealed in the comments, there’s a pcie_aspm=off line in one of the files which sysctl -p reads. This causes sysctl to attempt to write to /proc/sys/pcie_aspm; if that doesn’t exist (and it won’t, it’s not a valid sysctl entry, it’s a kernel boot parameter), you’ll get the error shown in your question.
sysctl -p failed on /proc/sys/pcie_aspm
1,424,774,855,000
I am trying to compile the 5.4 kernel with the latest stable PREEMPT_RT patch (5.4.28-rt19) but for some reason can't select the Fully Preemptible Kernel (RT) option inside make nconfig/menuconfig. I've compiled the 4.19 rt patch before, and it was as simple as copying the current config (/boot/config-4.18-xxx) to the new .config, and the option would show. Now I only see: No Forced Preemption (Server) Voluntary Kernel Preemption (Desktop) Preemptible Kernel (Low-Latency Desktop) And if I press F4 to "ShowAll", I do see the option: XXX Fully Preemptible Kernel (Real-Time) But cannot select it. I've tried manually setting it in .config with various PREEMPT options like: CONFIG_PREEMPT=y CONFIG_PREEMPT_RT_BASE=y CONFIG_PREEMPT_RT_FULL=y But it never shows. I just went ahead and compiled it with CONFIG_PREEMPT_RT_FULL=y (which is overwritten when saving in make nconfig), but it seems it's still not the fully preemptible kernel that is installed. With 4.19, uname -a would show something like: Linux 4.19.106-rt45 #2 SMP PREEMPT RT <date> or something like that, but now it will just say: Linux 5.4.28-rt19 #2 <date> Anyone know what I'm missing here? OS: CentOS 8.1.1911 Kernel: 4.18.0-147.8.1 -> 5.4.28-rt19
Please enable EXPERT mode after launching make nconfig/menuconfig. Then you'll be able to select Fully Preemptible Kernel (RT) option.
Trouble selecting "Fully Preemptible Kernel (Real-Time)" when configuring/compiling from source
1,424,774,855,000
We are currently working on building a system for data visualisation for different sensors. To make development of the Linux application possible, we would need to emulate the behaviour of the different character devices, as the device drivers and the hardware design aren't done yet. So is there a way to receive the system calls (open(), read(), write()...) on a specific file inside a userspace program (for instance, a C program)? read() (Userspace Application/Database) <========= (~/mydev) <===== (dummy_driver)
You could use cuse Character Device in Userspace which is a part of the fuse library, available as a package in most systems. An example "driver" is cuse.c. When you compile and run this example as: sudo ./cuse -f --name=mydevice it creates /dev/mydevice and receives all the open, close, read, write, ioctl calls on it. To "unmount" the device (in fuse terminology), just kill the process. The example is probably not distributed, so to compile, download (or git clone) the zip, change to the libfuse/example directory, and compile as shown in the C file: gcc -Wall cuse.c $(pkg-config fuse --cflags --libs) -o cuse -I. You may need to install a fuse-devel package or similar for this to work. If you need to implement more ioctl's, check out this link given as a comment to the answer of this stackexchange question. Simpler alternatives to consider are a pseudo-tty pty, or tty0tty which is a kernel module that joins two serial ports together.
Emulating a character device from userspace
1,424,774,855,000
Can you help me understand, why my SATA hotplug doesn't work? When I plug sata disk, lsblk doesn't change. There is only my system disk /dev/sda. I have linux: $ uname -a Linux Z170-D3H 4.9.0-3-amd64 #1 SMP Debian 4.9.25-1 (2017-05-02) x86_64 GNU/Linux Kernel settings: $ cat /boot/config-4.9.0-3-amd64 | grep HOTPLUG CONFIG_MEMORY_HOTPLUG=y CONFIG_MEMORY_HOTPLUG_SPARSE=y # CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set CONFIG_HOTPLUG_CPU=y # CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set # CONFIG_DEBUG_HOTPLUG_CPU0 is not set CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y CONFIG_ACPI_HOTPLUG_CPU=y CONFIG_ACPI_HOTPLUG_MEMORY=y CONFIG_ACPI_HOTPLUG_IOAPIC=y CONFIG_HOTPLUG_PCI_PCIE=y CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI_ACPI=y CONFIG_HOTPLUG_PCI_ACPI_IBM=m CONFIG_HOTPLUG_PCI_CPCI=y CONFIG_HOTPLUG_PCI_CPCI_ZT5550=m CONFIG_HOTPLUG_PCI_CPCI_GENERIC=m CONFIG_HOTPLUG_PCI_SHPC=m CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT=512 # CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set $ cat /boot/config-4.9.0-3-amd64 | grep SATA CONFIG_SATA_ZPODD=y CONFIG_SATA_PMP=y CONFIG_SATA_AHCI=m # CONFIG_SATA_AHCI_PLATFORM is not set # CONFIG_SATA_INIC162X is not set CONFIG_SATA_ACARD_AHCI=m CONFIG_SATA_SIL24=m CONFIG_SATA_QSTOR=m CONFIG_SATA_SX4=m # SATA SFF controllers with BMDMA # CONFIG_SATA_DWC is not set CONFIG_SATA_MV=m CONFIG_SATA_NV=m CONFIG_SATA_PROMISE=m CONFIG_SATA_SIL=m CONFIG_SATA_SIS=m CONFIG_SATA_SVW=m CONFIG_SATA_ULI=m CONFIG_SATA_VIA=m CONFIG_SATA_VITESSE=m $ cat /boot/config-4.9.0-3-amd64 | grep AHCI CONFIG_SATA_AHCI=m # CONFIG_SATA_AHCI_PLATFORM is not set CONFIG_SATA_ACARD_AHCI=m SATA controller: $ lspci | grep SATA 00:17.0 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
Problem was in wrong BIOS configuration. Solution (for my motherboard Z170-D3H) is go to BIOS > Peripherals > SATA Configuration and here enable Hot Plug option for each SATA port. Then save settings and restart computer. Now everything works properly!
Sata hotplug doesn't work
1,424,774,855,000
Which file in /proc gets read by the kernel during the boot up process? This was a question on my LPIC 101 test that I think I might have answered wrong. I searched on Google and some other places but wasn't able to find an answer. Hoping one of you could provide one. Thanks!
My question is, which file in /proc gets read by the kernel during the boot up process? This was a question on my LPIC 101 test... Sounds like a trick question. The files in /proc aren't real files on disk (this is why they have a size of 0) and the nodes don't exist until the kernel mounts a procfs file system there and populates it. Procfs and sysfs files are kernel interfaces. When you read a file in /proc, you are asking the kernel for information and it will supply it. That information is not stored in that file -- nothing is. When you write to a file in /proc, you are sending the kernel information, but again, the information will not be stored in that file. This is possible because the kernel is the gatekeeper to file access generally. All file access involves system calls, i.e., they must pass through the kernel. So I would say the answer here is that it does not read any files in /proc at boot or at any other time. This would be like dialing your own phone number.
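You can see this property directly on any Linux system: stat reports a size of 0 for procfs nodes, yet reading them returns data, because the kernel generates the content at read time:

```shell
# procfs nodes store nothing on disk: size is 0, content is generated on read
stat -c '%s' /proc/version
head -c 13 /proc/version
echo
```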
Which file in /proc gets read by the kernel during the boot up process?
1,424,774,855,000
I'm curious: What, exactly are the benefits to statically linking modules into the kernel rather than loading through rc.conf, etc? For example: To add Linux emulation, I could add linux_enable="YES" to /etc/rc.conf, or I could link it into the kernel by adding options COMPAT_LINUX to my kernel config. Is there actually an advantage to this? If so, what?
Statically linking used to be the only way to load a module, which I think is the primary reason for having options like COMPAT_LINUX. Also, before loader existed, it used to be the only way to get FreeBSD to load the modules necessary for the drivers needed to mount the root file system and boot FreeBSD. Nowadays, I don't think there is any significant benefit to statically linking in a module if it can be easily loaded at runtime. I don't think you will see any benefit in performance by statically linking Linux compatibility support, but some users still swear by it. I would avoid it just because of the inconvenience of recompiling a kernel for little to no perceived performance gain.
Difference between rc.conf, loader.conf and static kernel linking in FreeBSD
1,424,774,855,000
Why are copy_from_user() and copy_to_user() needed, when the kernel is mapped into the same virtual address space as the process itself? Having developed a few (toy) kernel modules for learning purposes, I quickly realized that copy_from_user() and copy_to_user() were needed to copy data from/to user-space buffers; otherwise errors related to invalid addresses resulted in crashes. But if 0x1fffff is a virtual address pointing to a user-space buffer, then why isn't that address valid in the kernel? The kernel is in the same virtual address space, so 0x1fffff would be mapped to the same physical memory.
The address space mapping is the same on some (not all!) architectures, but even on architectures where they are the same, the protection levels aren’t. copy_from_user etc. serve three main purposes: they check that the permissions on the memory to be read from or written to would allow the process running in user space to read from or write to it — this ensures that processes can’t trick the kernel into accessing memory the process shouldn’t be able to; they allow for specific error-handling so that protection faults don’t crash the kernel, for example if the requested addresses aren’t currently mapped (think of zero pages or swapped-out pages); they ensure that the kernel doesn’t trip over its own protection, e.g. SMAP or kernel-specific address spaces (S/390). Some architectures use memory layouts which allow these functions to take shortcuts, e.g. using a direct mapping of physical memory, but you can’t assume that to be the case, and it doesn’t handle all situations anyway (swapped-out pages aren’t present in physical memory).
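Point 2 is why kernel code must always check the return value: copy_from_user() returns the number of bytes it could not copy, and the handler turns any shortfall into -EFAULT instead of dereferencing the user pointer directly. A hedged sketch of a write() handler showing the pattern (the names are illustrative, not from any specific driver):

```c
/* Illustrative write() file-operation handler; build as part of a module. */
#include <linux/fs.h>
#include <linux/uaccess.h>

static char kbuf[64];

static ssize_t demo_write(struct file *f, const char __user *ubuf,
                          size_t len, loff_t *off)
{
        if (len > sizeof(kbuf))
                len = sizeof(kbuf);
        /* copy_from_user() returns the number of bytes NOT copied;
         * nonzero means the user range was invalid or partially unmapped */
        if (copy_from_user(kbuf, ubuf, len))
                return -EFAULT;
        return len;
}
```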
Why are `copy_from_user()` and `copy_to_user()` needed, when the kernel is mapped into the same virtual address space as the process itself?
1,424,774,855,000
When configuring the Linux kernel, what are the advantages and disadvantages of enabling UTS namespaces? Would the new system be harmed if UTS namespaces were disabled?
UTS Namespaces are per-process namespaces allowing a process to have different namespaces for different resources. For example, a process could have a set of namespaces for the following: mountpoints PID numbers network stack state IPC - inter process communications NOTE: the use of namespaces was limited only to root up until version 3.8+ of the Linux Kernel. unshare You can use the command unshare to disassociate a parent's namespace from a child process. $ unshare --help Usage: unshare [options] <program> [args...] Run program with some namespaces unshared from parent -h, --help usage information (this) -m, --mount unshare mounts namespace -u, --uts unshare UTS namespace (hostname etc) -i, --ipc unshare System V IPC namespace -n, --net unshare network namespace For more information see unshare(1). compiler option CONFIG_UTS_NS Support uts namespaces. This allows containers, i.e. vservers, to use uts namespaces to provide different uts info for different servers. If unsure, say N.
Enabling UTS Namespaces in the Linux Kernel
1,424,774,855,000
While reading about the Linux kernel I came across the notion of kernel data structures. I tried to find more information via Google, but couldn't find anything. What are kernel data structures? What are their requirements, usage, and access? What's the organization of data structures inside the kernel? Examples of kernel data structures might be file_operations or cdev.
The kernel is written in C. "Kernel data structures" would just refer to various formations (trees, lists, arrays, etc.) of mostly compound types (structs and unions) defined in the source, which C code is normally filled with stuff like that. If you don't understand C, they will not be meaningful to you. Data structures structure the storage of information in memory or address space. There is nothing particularly special about the ones used by the linux kernel. Some of them can/must be used if you are writing a kernel module, but their use is completely internal to the kernel. Kernel memory is only accessed by the kernel and its structure has no relevance to anything else.
What are "kernel data structures"?
1,424,774,855,000
A few questions about FreeBSD: Where/how should I obtain the source code (e.g. through the terminal, or a download off the website)? How (on Ubuntu) should I build it? Before I build it, can I customize it (in other words, is that possible)?
You can check the source for FreeBSD out of version control here. The developer's handbook answers a lot of questions about developing FreeBSD. Why aren't you building it from FreeBSD itself? It seems kind of... odd to be building from Ubuntu.
FreeBSD source and how to build
1,587,375,393,000
I can see my initrd occupies almost 90 MB of disk, but after extracting it via cpio, it contains only a 30 KB microcode file: $ cpio -it < initrd.img-5.4.0-18-generic . kernel kernel/x86 kernel/x86/microcode kernel/x86/microcode/AuthenticAMD.bin 62 blocks I know that there should be a lot of files and tools which are needed by the kernel in the first stage of booting, but I cannot find anything useful in it. $ file initrd.img-5.4.0-18-generic initrd.img-5.4.0-18-generic: ASCII cpio archive (SVR4 with no CRC) I took a look at here and here and this question but these are too old and don't work for me. My initrd.img is not a gzip archive. How do I extract that file properly? I use kernel v5.4.0. Thanks.
initramfs images contain multiple cpio archives; the name of your file suggests you’re using a Ubuntu derivative, so the simplest option for you to list the full contents is to use lsinitramfs: lsinitramfs initrd.img-5.4.0-18-generic To extract the contents, use unmkinitramfs: unmkinitramfs initrd.img-5.4.0-18-generic initramfs This will extract all the files to the initramfs directory.
Problem extracting the "initrd" archive in kernel 5.4
1,587,375,393,000
Background: I recently read about a freedesktop.org bug which allowed executing any systemctl command for uid > INT_MAX. Thus I ran: root@host$> useradd -u 4000000000 largeuiduser root@host$> su largeuiduser largeuiduser@host$> systemctl ["whatever"] [bug exists, and "whatever" gets executed] largeuiduser@host$> exit root@host$> userdel largeuiduser Looking for a cleaner way, I later found root@host$> setpriv --reuid 4000000000 systemctl ["whatever"] [bug exists, and "whatever" stuff gets executed] showing that exploiting the bug requires no (temporary) username. It also made apparent that I was not quite sure how essential usernames actually are. Question My question hence is: how dispensable are usernames, from the kernel (Linux/POSIX) perspective? Are they needed? Can they be used? My suspicion is that the username is only a sort of "amenity" used exclusively in userspace. A good answer would attempt to shed some light on this, by providing information on in what settings usernames become "necessary", and in which settings they are "expendable".
In the Linux and BSD worlds, the kernel operates almost wholly in terms of numeric user and group IDs. There is a standardized C library interface to a user database, where names can be looked up from IDs and vice versa, which is how application software gets from names supplied by humans to the IDs needed in system calls, and back again. Process credentials and access control list entries all operate in terms of the IDs.

The one exception is the (non-standardized) setlogin() API function in the BSD world, which operates in terms of a string, a user name, rather than in terms of a numeric user ID. The kernel places no interpretation upon this string, however.

One could write application programs that operated entirely in terms of the numeric IDs, presenting those to human beings. But that is not the way that most software is written, for the simple reason that human beings work better with named accounts.

The kernel also has no notion of non-existent accounts. All IDs (aside from a few reserved values) are valid as far as the kernel is concerned. You could (as the superuser) start a process running as UID 24394, and create filesystem objects owned by that UID (in places where it has permission to, of course), and the kernel would not complain.

The PolicyKit bug is not really about UIDs, note. It is about a program named pkttyagent abending …

ERROR:pkttyagent.c:156:main: assertion failed: (polkit_unix_process_get_uid (POLKIT_UNIX_PROCESS (subject)) >= 0)

… and the authorization mechanism failing open rather than failing closed in such circumstances, returning that the user is authorized across the Desktop Bus.

Further reading

- "User database". Base Definitions. Single UNIX Specification. IEEE 1003.1. 2018. The Open Group.
- getpwnam(). System Interfaces. Single UNIX Specification. IEEE 1003.1. 2018. The Open Group.
- getpwuid(). System Interfaces. Single UNIX Specification. IEEE 1003.1. 2018. The Open Group.
- Jonathan de Boyne Pollard (2018). setlogin. nosh Guide. Softwares.
- Jonathan de Boyne Pollard (2018). setuidgid. nosh Guide. Softwares.
- What are the side effects of having several UNIX users share one UID?
- https://news.ycombinator.com/item?id=18605607
- https://superuser.com/a/706578/38062
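The division of labour described above (numeric IDs in the kernel, names in the userspace account database) is easy to see with Python's standard pwd module, which wraps the getpwnam()/getpwuid() interfaces listed under further reading:

```python
import pwd

# getpwuid: numeric ID -> account database entry (name, home, shell, ...)
root_entry = pwd.getpwuid(0)

# getpwnam: name -> entry; the kernel itself only ever sees the
# numeric pw_uid field of this record.
same_entry = pwd.getpwnam(root_entry.pw_name)
print(root_entry.pw_name, same_entry.pw_uid)

# An ID absent from the database is still a perfectly valid UID to the
# kernel; only the userspace lookup fails.
try:
    pwd.getpwuid(4000000000)
    has_entry = True
except KeyError:
    has_entry = False
print(has_entry)
```

The try/except at the end mirrors the question's setpriv experiment: nothing stops a process from running with UID 4000000000, but any software that tries to display a name for it will come up empty.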
How essential are usernames (compared to uids)?
1,587,375,393,000
I know there is a syscall convention, but what do you call the calling convention that precedes it, the one you use when you call int 0x80 rather than syscall, like this:

mov rax,4       ; system call number (sys_write)
mov rbx,1       ; file descriptor (stdout)
mov rcx,hello   ; message to write
mov rdx,12      ; message length
int 0x80        ; call kernel

I read here that the arguments after rdx are esi, edi, ebp (or for x64 rsi, rdi, rbp). I don't see it documented in Wikipedia's page for calling conventions, but int80h seems to indicate that Windows also uses this convention? What is this convention named? Where in the Linux kernel source can I see it defined? And where is the table that resolves rax to the procedures when you call int 0x80? For syscall, sys_write is rax=1.
Your question covers a number of topics; I'll try to address them all.

I'm not sure there's a single, canonical term for the way in which system calls are invoked, even less so for a specific way in which system calls are invoked (interrupt 0x80 as opposed to SYSENTER or SYSCALL). On x86-64, the documented system call interface, using SYSCALL, is described in the System V x86-64 ABI, but that's informative only, not normative. Likewise, while most people would understand what you were talking about if you referred to it as the "i386 Linux kernel ABI" (replacing "i386" with whatever architecture you're talking about), that could be confusing too, since "kernel ABI" has another meaning (in the context of kernel modules), and again that's not limited to interrupt 0x80.

In practice most people shouldn't be concerned about the specifics down to this level of detail anyway, especially since they can evolve: interrupt 0x80, SYSCALL etc. as you mention, but also the vDSO, which introduces its own subtleties and is the preferred entry point for all system calls on x86 nowadays... Of course this doesn't mean that there can't be a term to refer to a specific calling convention, but I'm not sure it would be all that useful.

Windows also supports using an interrupt for its system call interface, 0x2E, but its "calling convention" is quite different: arguments are pushed on the stack, the requested system call is given by EAX, and EBX points to the arguments on the stack.

Current x86 kernels define the system call interface in arch/x86/entry:

- entry_32.S contains the i386 interface,
- entry_64.S the x32 and x86-64 interface,
- entry_64_compat.S the 32-bit x86-64 interface (for backward compatibility),
- syscalls/syscall_32.tbl the i386 system call table,
- syscalls/syscall_64.tbl the x32 and x86-64 system call table.

The comments in those files document the interface, in particular how arguments are passed:

- for 32-bit calls, EAX contains the system call number, and its parameters are placed in EBX, ECX, EDX, ESI, EDI, and EBP (for interrupt 0x80, EBP holds the sixth parameter itself; for SYSENTER, it holds a pointer to the user stack from which the sixth parameter is read);
- for 64-bit calls, RAX contains the system call number, and its parameters are placed in RDI, RSI, RDX, R10, R8, and R9 (see also Why did the system call registers and order change from Intel 32bit to 64bit?).

There's a nice summary with diagrams in calling.h.

As a side note, historical comparisons often refer to the MS-DOS call interface, which primarily used interrupt 0x21; it also included the multiplex interrupt, 0x2F, which provided an extensible mechanism for adding system services (typically involving TSRs; device drivers mostly used a different interface).
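For illustration, libc's syscall(2) wrapper exercises the 64-bit convention described above: it puts the number in RAX and the arguments in RDI, RSI, RDX before executing SYSCALL. This sketch assumes an x86-64 Linux system, where write is number 1 (on i386's int 0x80 table it would be 4):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

SYS_write = 1  # x86-64 table entry for write; the i386 int 0x80 table uses 4

r, w = os.pipe()
buf = b"hello\n"
# The wrapper loads RAX with SYS_write and RDI/RSI/RDX with the three
# arguments (fd, buffer, count), then executes the SYSCALL instruction.
n = libc.syscall(SYS_write, w, buf, len(buf))
os.close(w)
data = os.read(r, 64)
os.close(r)
print(n, data)
```

On any other architecture the numbers (and registers) differ, which is exactly why portable code goes through the libc wrappers instead of hard-coding them.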
What do you call the calling convention behind `int 0x80`?
1,587,375,393,000
I am using CentOS and I have enabled the USB driver for GSM and CDMA modems using make menuconfig. But how does it work? After changing things in menuconfig, does the modification take effect immediately, or do I have to compile the whole kernel in order to get this configuration?
With make menuconfig you only change the configuration file .config, which is used in the compilation process. You don't even need to use the menuconfig tool: there are other scripts for that, and one can even edit .config by hand (although this is error-prone and thus not recommended).

So in order to finish the task you've started, you need to compile the kernel with the new settings, copy that kernel to /boot (or wherever your boot loader reads from), optionally update the /usr/src/linux link to point to the correct source, and add a line with the new kernel to GRUB (or whatever bootloader you use). After that, just reboot, select the new line in the GRUB menu, and voilà.
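As the answer says, make menuconfig only rewrites the plain-text .config file. As a sketch of that idea (the option name below is just an illustrative example, not necessarily the one for GSM/CDMA modems), toggling an option programmatically is simple text substitution:

```python
import re

def set_config_option(config_text: str, name: str, value: str) -> str:
    """Set kernel config option `name` to `value` ('y', 'm', ...), replacing
    either an existing assignment or a '# CONFIG_FOO is not set' line."""
    assignment = f"{name}={value}"
    pattern = re.compile(
        rf"^(?:{re.escape(name)}=.*|# {re.escape(name)} is not set)$",
        re.MULTILINE,
    )
    if pattern.search(config_text):
        return pattern.sub(assignment, config_text)
    # Option not mentioned yet: append it.
    return config_text.rstrip("\n") + "\n" + assignment + "\n"

sample = "# CONFIG_USB_SERIAL_OPTION is not set\nCONFIG_USB=y\n"
updated = set_config_option(sample, "CONFIG_USB_SERIAL_OPTION", "m")
print(updated)
```

The kernel tree ships a real helper for this (scripts/config); either way, nothing takes effect until you rebuild with the changed file.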
When are menuconfig changes performed?
1,587,375,393,000
I've got OS X 10.9.5. I want to build NetBSD (or at least its kernel) on my OS X box. I've tried building the 'config' program, but I can't find 'build.sh'. Is this possible, and what would be the steps to do it? Thanks.
NetBSD has documentation here: Part V. Building the system

You might be able to compile a NetBSD kernel on OS X. At a minimum, you would need:

- the kernel source tree, i.e., /sys (which at least does not conflict with OS X's system directories);
- whatever tools are needed; you probably have to download source and compile those. build.sh is only a small part of that. But to the extent that it used the OS X header files, it may or may not even compile.

You might consider cross-compiling: Chapter 31. Crosscompiling NetBSD with build.sh
Can a bootable NetBSD be built on OSX?
1,587,375,393,000
I recently installed my hard drive in a new computer, and after some fiddling around I noticed my iptables rules were dropping DNS responses for some reason. It turns out they were configured to allow traffic on eth0, but eth8 is used in this computer, so everything was being dropped (not just DNS queries).

Anyhow, I was using Wireshark concurrently to see if the DNS servers were responding to the queries and found out that they did. But I had just noticed iptables was dropping said packets. How come Wireshark can see the packets if they are being dropped?

Script used to generate the ruleset:

# Flush all rules
iptables -F
iptables -X

# Allow unlimited traffic on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow incoming traffic from established and related connections
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Policy: Allow outgoing, deny incoming and forwarding
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP
iptables -P FORWARD DROP
Wireshark uses libpcap to fetch data from the NIC before it is handled by the OS. See the libpcap tutorial for an introduction to libpcap.
How can Wireshark see packets dropped by iptables?
1,587,375,393,000
I'm trying to add a system call using Linux kernel 4.1.6, but all the documentation I can find is for older versions. Does anyone know how it's done in the newer kernels, or have any good references? There are supposed to be 3 steps:

1. Add to the system call table. I've worked out that they now use arch/x86/syscalls/syscall_64.tbl instead of entry.S, so I've put something in there.

2. Add to the asm/unistd.h file. Apparently the unistd.h file is generated automatically now, so we don't have to update it manually? So I've done nothing for this step, as the file doesn't exist. (https://stackoverflow.com/questions/10988759/arch-x86-include-asm-unistd-h-vs-include-asm-generic-unistd-h)

3. Compile the syscall into the kernel. I've added the actual system call code to kernel/sys.c, as suggested in a book based on kernel 2.6 (Linux Kernel Development by Robert Love).

I've compiled the kernel again. I then wrote a client program as suggested in the book, but it says unknown type name 'helloworld' when I try to compile it. My program is different from the book's, but the structure is the same.

#include <stdio.h>

#define __NR_helloworld 323

__syscall0(long, helloworld)

int main()
{
    printf("I will now call helloworld syscall:\n");
    helloworld();
    return 0;
}

The Internet (and available books) seem to be seriously lacking this information, or Google is not as smart as it would like to think. Anyway, any help is appreciated. Thanks.
According to the _syscall(2) man page, the _syscall0 macro may be obsolete and requires #include <linux/unistd.h>; indeed, Linux 4.x doesn't have it.

However, you might install musl-libc and use its _syscall function. And you could simply use the indirect syscall(2) in your user code. So your testing program would be:

#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

#define __NR_helloworld 323

static inline long mysys_helloworld(void)
{
    return syscall(__NR_helloworld, NULL);
}

int main(int argc, char **argv)
{
    printf("will do the helloworld syscall\n");
    if (mysys_helloworld())
        perror("helloworld");
    return 0;
}

The above code is untested!
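Incidentally, what the indirect syscall(2) wrapper does with a number the running kernel doesn't implement (as __NR_helloworld would be on a stock kernel) can be seen without writing C; this sketch uses Python's ctypes to make the same kind of call with a deliberately bogus number:

```python
import ctypes
import errno

libc = ctypes.CDLL(None, use_errno=True)

NR_BOGUS = 1000000  # far beyond any real syscall table entry
# syscall(2) with an unimplemented number returns -1 and sets errno
# to ENOSYS ("Function not implemented").
ret = libc.syscall(NR_BOGUS)
err = ctypes.get_errno()
print(ret, errno.errorcode.get(err))
```

On a kernel that actually implements the call (e.g. after a successful helloworld patch), the same invocation would return the syscall's result instead of -1/ENOSYS.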
How to add a system call in linux kernel 4.x
1,587,375,393,000
I have a new system with Debian (OMV), an SSD for the OS and a software RAID 6 for the data. I only noticed now that I have very regular exceptions in my syslog. I'm worried about what could cause those exceptions. Is it a software problem, or is some hardware actually faulty? Can you actually read anything from those logs?

There are more exceptions in the syslog, but here is an excerpt:

Jul 19 07:48:51 msa-nas1 kernel: [485174.166986] ata5.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Jul 19 07:48:51 msa-nas1 kernel: [485174.168522] ata5.01: failed command: WRITE MULTIPLE EXT
Jul 19 07:48:51 msa-nas1 kernel: [485174.170003] ata5.01: cmd 39/00:00:00:cc:89/00:04:08:00:00/f0 tag 0 pio 524288 out
Jul 19 07:48:51 msa-nas1 kernel: [485174.170003]          res 51/84:00:00:cd:89/84:03:08:00:00/f0 Emask 0x10 (ATA bus error)
Jul 19 07:48:51 msa-nas1 kernel: [485174.172996] ata5.01: status: { DRDY ERR }
Jul 19 07:48:51 msa-nas1 kernel: [485174.174500] ata5.01: error: { ICRC ABRT }
Jul 19 07:48:51 msa-nas1 kernel: [485174.176003] ata5: soft resetting link
Jul 19 07:48:51 msa-nas1 kernel: [485174.355492] ata5.00: configured for UDMA/33
Jul 19 07:48:51 msa-nas1 kernel: [485174.364550] ata5.01: configured for PIO0
Jul 19 07:48:51 msa-nas1 kernel: [485174.364574] ata5: EH complete
Jul 19 07:48:57 msa-nas1 kernel: [485180.175794] ata5.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Jul 19 07:48:57 msa-nas1 kernel: [485180.177436] ata5.01: failed command: WRITE MULTIPLE EXT
Jul 19 07:48:57 msa-nas1 kernel: [485180.179037] ata5.01: cmd 39/00:00:00:34:8a/00:04:08:00:00/f0 tag 0 pio 524288 out
Jul 19 07:48:57 msa-nas1 kernel: [485180.179037]          res 51/84:00:00:37:8a/84:01:08:00:00/f0 Emask 0x10 (ATA bus error)
Jul 19 07:48:57 msa-nas1 kernel: [485180.182279] ata5.01: status: { DRDY ERR }
Jul 19 07:48:57 msa-nas1 kernel: [485180.183907] ata5.01: error: { ICRC ABRT }
Jul 19 07:48:57 msa-nas1 kernel: [485180.185524] ata5: soft resetting link
Jul 19 07:48:57 msa-nas1 kernel: [485180.380318] ata5.00: configured for UDMA/33
Jul 19 07:48:57 msa-nas1 kernel: [485180.389391] ata5.01: configured for PIO0
Jul 19 07:48:57 msa-nas1 kernel: [485180.389407] ata5: EH complete
Jul 19 07:48:58 msa-nas1 kernel: [485180.939900] ata5.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Jul 19 07:48:58 msa-nas1 kernel: [485180.941736] ata5.01: failed command: WRITE MULTIPLE EXT
Jul 19 07:48:58 msa-nas1 kernel: [485180.943533] ata5.01: cmd 39/00:00:00:3c:8a/00:04:08:00:00/f0 tag 0 pio 524288 out
Jul 19 07:48:58 msa-nas1 kernel: [485180.943533]          res 51/84:00:00:3e:8a/84:02:08:00:00/f0 Emask 0x10 (ATA bus error)
Jul 19 07:48:58 msa-nas1 kernel: [485180.947169] ata5.01: status: { DRDY ERR }
Jul 19 07:48:58 msa-nas1 kernel: [485180.948998] ata5.01: error: { ICRC ABRT }
Jul 19 07:48:58 msa-nas1 kernel: [485180.950814] ata5: soft resetting link
Jul 19 07:48:58 msa-nas1 kernel: [485181.128420] ata5.00: configured for UDMA/33
Jul 19 07:48:58 msa-nas1 kernel: [485181.137482] ata5.01: configured for PIO0
Jul 19 07:48:58 msa-nas1 kernel: [485181.137505] ata5: EH complete

Thanks for any help with this.

EDIT: Alright, I exchanged the cable of one of the drives where I thought it was ata5; now I realize there are two ata5 drives:

lrwxrwxrwx 1 root root 0 Jul 27 19:26 sde -> ../devices/pci0000:00/0000:00:14.1/ata5/host4/target4:0:0/4:0:0:0/block/sde
lrwxrwxrwx 1 root root 0 Jul 27 19:26 sdf -> ../devices/pci0000:00/0000:00:14.1/ata5/host4/target4:0:1/4:0:1:0/block/sdf

The second one is an SSD connected directly to the mainboard. Any idea what options I have? I did smartctl checks on both drives; both came back without any errors.

EDIT2: Assuming it's not the SSD causing the trouble, I exchanged the other drive and its SATA cable with parts that work without errors in another system. I still get the errors. How can a driver problem be identified? Could the mainboard be faulty?

EDIT3: Found something in the SMART log of the SSD drive:

212 SATA_PHY_Error 0x0032 100 100 --- Old_age Always - 426

What does the SATA PHY Error stand for?
The steps I took to fix it:

- updated the BIOS;
- in the BIOS, disabled the SATA IDE Combined Mode (with this help);
- read the kernel documentation about kernel parameters, since every solution online was about adding parameters there;
- found out that my SSD actually only supports SATA speed 3.0Gbps, with a good shell script:

for i in `grep -l Gbps /sys/class/ata_link/*/sata_spd`; do
    echo Link "${i%/*}" Speed `cat $i`
    echo " " Device `cat "${i%/*}"/device/dev*/ata_device/dev*/id |
        perl -nE 's/([0-9a-f]{2})/print chr hex $1/gie' |
        strings | cut -f 1-3`
done

- in the GRUB configuration, set the SATA port of the SSD drive to maximum speed 3.0Gbps:

vi /etc/default/grub

changed the parameter in this line to allow only 3Gbps for SATA port 7 (my SSD):

GRUB_CMDLINE_LINUX_DEFAULT="libata.force=7:3.0G quiet"

- updated GRUB and rebooted:

update-grub
reboot

The solution to this has come a long, long way for me. I basically approached the whole problem from scratch every other day. The problems I found on the way were:

- I checked my SMART stats every day and compared. The error count didn't increase even though the exceptions kept being thrown.
- My SSD was actually the one causing the kernel exceptions; this script helped me a lot in understanding which ATA device was actually which hard drive in the case.
- My SSD and two other drives were on a completely wrong speed setting (UDMA):

root@msa-nas1:~# sudo hdparm -I /dev/sd{a,b,c,d,e,f,g} | grep -i udma
   DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
   DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
   DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
   DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
   DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 udma5 udma6
   DMA: mdma0 mdma1 mdma2 udma0 *udma1 udma2 udma3 udma4 udma5 udma6
   DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 udma5 udma6

- The dmesg log showed some strange messages about 40-wire cables, even though those don't really exist anymore; I bought two different NEW cables, nothing helped.

[ 1.193091] ata5.01: ATA-8: SanDisk SD6SF1M128G1022I, X231200, max UDMA/133
[ 1.193095] ata5.01: 250069680 sectors, multi 1: LBA48 NCQ (depth 0/32)
[ 1.193743] ata5.00: limited to UDMA/33 due to 40-wire cable
[ 1.193746] ata5.01: limited to UDMA/33 due to 40-wire cable

- A funny driver was loaded for the last two drives: pata_atiixp. I was expecting the AHCI driver.

[ 1.022724] scsi4 : pata_atiixp
[ 1.022834] scsi5 : pata_atiixp
[ 1.022887] ata5: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf100 irq 14
[ 1.022888] ata6: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf108 irq 15

- I checked the power consumption and compared whether it exceeded the power unit's capacity; it did not. Not even close.
- I replaced the SSD with exactly the same model from another machine. Exactly the same model. Still the same errors.
- The SSD was in fact incredibly slow, so the hdparm output about UDMA was actually correct:

root@msa-nas1:~# hdparm -t -T /dev/sdf
/dev/sdf:
 Timing cached reads: 2144 MB in 2.00 seconds = 1072.18 MB/sec
 Timing buffered disk reads: 8 MB in 3.60 seconds = 2.22 MB/sec

- I tried reaching out to SanDisk (it was their drive giving me the exceptions), without any success.

I could really not find anyone with the exact same problem, but many people with similar problems. In the end I tried a few of those suggested solutions, and it turned out to be a mix of a few things. Now it all makes perfect sense to me; afterwards everyone knows better, I guess.
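A note on reading the hdparm output above: the entry marked with * is the transfer mode currently selected. A little parser (fed sample lines like those above) makes the good and bad cases easy to spot at a glance:

```python
def selected_mode(dma_line: str) -> str:
    """Return the transfer mode hdparm marks with '*' (the active one)."""
    for token in dma_line.split():
        if token.startswith("*"):
            return token[1:]
    raise ValueError("no active mode marked")

good = "DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6"
bad = "DMA: mdma0 mdma1 mdma2 udma0 *udma1 udma2 udma3 udma4 udma5 udma6"
print(selected_mode(good))  # → udma6
print(selected_mode(bad))   # → udma1
```

Anything stuck at udma1/udma2 on a modern SATA drive, as in my case, is a strong hint that the link negotiated the wrong mode.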
What causes the ATA exceptions in my syslog, and how do I solve them?
1,587,375,393,000
I was compiling a custom linux kernel for a newly installed machine, and after booting into the new kernel (3.12), the init process fails to find a root device, which I traced to the system getting an unknown partition table error on the device in question (/dev/sda). The generic kernel boots up and mounts the root partition just fine. I cannot seem to find anything that looks relevant in the kernel config, what could it be missing?
There are a bunch of options mostly named CONFIG_.*_PARTITION; you probably didn't set the one you need. These may only show up if you answer yes to CONFIG_PARTITION_ADVANCED (Advanced partition selection).

You're going to want (on a PC) at least:

CONFIG_MSDOS_PARTITION=y   # traditional MS-DOS partition table
CONFIG_EFI_PARTITION=y     # EFI GPT partition table

and maybe:

CONFIG_LDM_PARTITION=y     # Windows logical (dynamic) disks

You may also want a few more (such as CONFIG_MAC_PARTITION and CONFIG_BSD_DISKLABEL) to read partition tables from other operating systems' disks you may actually run into. You can see all of the partition table options in your kernel source tree (in block/partitions/Kconfig) or at the Linux Cross Reference.
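As a quick sanity check before building, you can scan your .config for the options listed above; a small sketch checking only the two must-have PC options (the sample configs here are made up):

```python
REQUIRED = ("CONFIG_MSDOS_PARTITION", "CONFIG_EFI_PARTITION")

def missing_partition_options(config_text: str) -> list:
    """Return required partition-table options not enabled (=y) in a .config."""
    enabled = {
        line.split("=", 1)[0]
        for line in config_text.splitlines()
        if line.endswith("=y")
    }
    return [opt for opt in REQUIRED if opt not in enabled]

broken = "CONFIG_PARTITION_ADVANCED=y\nCONFIG_MSDOS_PARTITION=y\n"
fixed = broken + "CONFIG_EFI_PARTITION=y\n"
print(missing_partition_options(broken))  # → ['CONFIG_EFI_PARTITION']
print(missing_partition_options(fixed))   # → []
```

A config that trips this check would show exactly the "unknown partition table" symptom from the question on a GPT-partitioned root disk.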
"unknown partition table" - misconfigured kernel
1,587,375,393,000
I'm trying to build kexec as a module, but I'm running into a weird problem. My obj-m is: obj-m += kexec.o machine_kexec.o relocate_kernel.o When I run the makefile, it complains that there's "no rule to make target relocate_kernel.c, needed by relocate_kernel.o" How should I be telling it to include the assembly file? I've looked in the kernel Makefile, and while I'm not very good with them, it DOES appear that there's a rule for .S > .o. Am I wrong about this?
As always, RTFM. Answering this and leaving it up to help others that may come across this.

Per the Linux Documentation Project, I was using obj-m wrong:

Sometimes it makes sense to divide a kernel module between several source files. Here's an example of such a kernel module. [ Source files ... ] And finally, the makefile:

Example 2-10. Makefile

obj-m += hello-1.o
obj-m += hello-2.o
obj-m += hello-3.o
obj-m += hello-4.o
obj-m += hello-5.o
obj-m += startstop.o
startstop-objs := start.o stop.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

(Linux Kernel Module Programming Guide: 2.7. Modules Spanning Multiple Files)

So, my Makefile should have read:

obj-m += kexecmod.o
kexecmod-objs := kexec.o machine_kexec.o relocate_kernel.o

which compiled relocate_kernel.S into relocate_kernel.o.

(To anyone stumbling across this trying to compile kexec as a module: I still haven't found all the dependencies, but this is a start.)
Assembly files in obj-m list when building kernel modules
1,587,375,393,000
Some articles say that modules/drivers belong to kernel space, as they take part in forming the kernel (reference: http://www.freesoftwaremagazine.com/articles/drivers_linux). Others say that only ring 0 (which interacts directly with hardware) can be called kernel space, excluding modules/drivers, as they run at ring 2 (reference: http://jaseywang.me/2011/01/04/vfs-kernel-space-user-space-2/). Could anybody tell me which point of view is correct?
On AMD64 and clones, and ix86, Linux uses only ring 0 and 3. No other common architecture has the "rings" anyway, so using them fully would be totally non-portable. Besides, Linux is monolithic. The whole ring idea is to be able to run the microkernel on ring 0, and have service processes run on higher rings so they can't mess up the microkernel, and finally have userspace run in the highest ring, where it can't do much damage.
Do Linux modules/drivers belong to kernel space or user space?
1,587,375,393,000
I've built a kernel with loadable module support for various reasons, one of them the possibility to compile modules and load them without rebooting. This is supposed to be useful when I need a module that I had not enabled in the kernel config. Now, with drivers like nouveau, it's as easy as going to the source directory, and running make M=drivers/gpu/drm/nouveau. How can I build an updated iptables module without compiling a whole kernel and rebooting? Is it even possible?
Just go to your kernel source directory, make the changes you want, then run make, then make modules_install. That's all it takes.

If you want to build only one specific module, use:

make M=path/to/module/directory

For instance (from the kernel toplevel directory):

make M=fs/ext4
make M=fs/ext4 modules_install

To activate the changed modules, you must unload and then re-insert them. If the module was not previously loaded, nothing special needs to be done.

Note that you cannot change something from built-in to module this way (that requires a reboot), and some modules may have dependencies that require changes in built-in configuration; you'll need to reboot for that too.
How do I build the iptables kernel module for a loaded kernel?
1,587,375,393,000
I've read that I can boot the kernel without an initrd, and I've also read that additional modules are loaded during the initrd stage, as I understand it for necessary drivers that weren't included in the kernel. If I build the kernel with make defconfig && make, on what kind of hardware can I expect the kernel to boot? A reasonably modern desktop? VirtualBox? When would I really need an initrd/initramfs? I'm trying to put together a minimal system to test out on VirtualBox, and if possible, I'd like to keep things simple and not use an initrd.
You can boot without an initrd on any hardware. I never use one myself on desktops/laptops and home servers, because it just adds to boot time. The only situation where I've found it really necessary so far is when your root filesystem is on LVM (but I may be in error; there might be some way to go about this also).

If you want your setup to be fast and simple, you should first of all try to remove all the unnecessary stuff from your kernel configuration. There are two general ways you can do that while configuring your kernel:

- strip down: try to get rid of all the unnecessary modules and options, or
- build up: take a minimal config and add in just the stuff you need.

I personally recommend the second option, simply because it takes less time and you avoid being overwhelmed by uncertainty about all the options. For a great starting point, you can pick an adequate Pappy's kernel seed. You can find more information about those on his webpage. With this approach, a general tip from my side is to first run lspci -knn, which will tell you what modules are currently used by most of your hardware.
When would an initrd be necessary?
1,587,375,393,000
I tried compiling a kernel from sources that I got from kernel.org (mainline) with make allyesconfig and make allmodconfig, but both builds resulted in a kernel that won't boot. I was thinking that by compiling everything in, it should work on close to any hardware. What am I doing wrong, and how do I compile a working kernel?
One thing you can do is boot a working kernel, run lsmod, and make sure that all the modules listed are turned on in your config (either built-in or as modules). It's easiest to start with a working config, and then tweak it. If you're lucky, your distribution ships the config file along with the kernel. For example, in Ubuntu you'll find it in /boot/config-version. Copy that file into your new kernel directory and name it .config. If it's for an older kernel, you can try make oldconfig to be asked only about new options. In general, accept the default answer for everything unless you know what it is.
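The cross-check suggested above (compare lsmod output against your config) can be roughly automated. This is only a sketch with made-up sample data, and note the caveat in the docstring: real Kconfig option names don't always map one-to-one to module names, so treat its output as a starting point for manual review:

```python
def modules_not_configured(lsmod_output: str, config_text: str) -> list:
    """Modules from `lsmod` output with no =y/=m entry in the .config.

    Naive heuristic: assumes the option is CONFIG_<MODULE_NAME_UPPERCASED>,
    which is often but not always true.
    """
    configured = set()
    for line in config_text.splitlines():
        if line.endswith("=y") or line.endswith("=m"):
            configured.add(line.split("=", 1)[0])
    missing = []
    for line in lsmod_output.splitlines()[1:]:  # skip the header row
        module = line.split()[0]
        if f"CONFIG_{module.upper()}" not in configured:
            missing.append(module)
    return missing

lsmod_sample = "Module  Size  Used by\next4  700000  1\nnouveau  2000000  0\n"
config_sample = "CONFIG_EXT4=m\n"
print(modules_not_configured(lsmod_sample, config_sample))  # → ['nouveau']
```

Anything it flags is worth searching for in make menuconfig before you build.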
How to compile a decent kernel from kernel.org?
1,587,375,393,000
So I found a bug, fixed it, and want to send the patch for it. I followed the "Write and submit your first Linux kernel patch" YouTube video, set up git send-email, formatted everything, etc., and want to send it. My issue: I don't want to send it alone but with an accompanying message and sample userspace code that gets fixed by the patch. Both seem ill-suited for the commit message. Do I send those via a separate email? Does that need to be somehow specially formatted too?
When you extract your commit, add --cover-letter to the git format-patch options. This will extract your commit with a 0001 prefix, and create a cover letter template with a 0000 prefix. You can edit that to contain your accompanying message and sample code, then send both with a single git send-email invocation: your cover letter will be sent first, with your actual patch in reply to it.
How do I send a patch with an accompanying message to the Linux kernel?
1,587,375,393,000
I am trying kernel patching for the first time. I am not sure whether the following ran into an error, and whether I am doing it correctly. All the tutorials and videos show files with a .patch extension, but I have a .xz file.

Downloaded stable release 5.12.1 from https://www.kernel.org:

root@learn:/usr/local/src# wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.12.1.tar.xz
root@learn:/usr/local/src# mkdir Linux-Kernel-5.12.1
root@learn:/usr/local/src# tar xvf linux-5.12.1.tar.xz -C Linux-Kernel-5.12.1/ --strip-components=1
root@learn:/usr/local/src# cd Linux-Kernel-5.12.1/
root@learn:/usr/local/src/Linux-Kernel-5.12.1# cp /boot/config-$(uname -r) ./.config

Downloaded the patch .xz file from https://www.kernel.org/ to the directory:

root@learn:/usr/local/src/Linux-Kernel-5.12.1# wget https://cdn.kernel.org/pub/linux/kernel/v5.x/patch-5.12.1.xz

When applying:

root@learn:/usr/local/src/Linux-Kernel-5.12.1# patch -p1 < patch-5.12.1
patching file Makefile
Reversed (or previously applied) patch detected!  Assume -R? [n]

What does that mean? What am I supposed to do at that point? Also, for Ubuntu/Debian, is downloading the stable kernel and its patch from https://www.kernel.org/ the right way, or does it have its own source URL other than kernel.org?
This error message

Reversed (or previously applied) patch detected!  Assume -R? [n]

means that the patch command detected that your patch has already been applied to the sources. It suggests you use patch -R, but that's not what you want, since it would unapply the patch and thus you would get an earlier version of the Linux sources.

This is due to a misunderstanding of yours. Look at the first lines of the patch:

--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 12
-SUBLEVEL = 0
+SUBLEVEL = 1
 EXTRAVERSION =
 NAME = Frozen Wasteland

What this chunk does is change the 4th line of Makefile so that SUBLEVEL goes from 0 to 1. In effect, this patch changes the Linux version from 5.12.0 to 5.12.1, the version you already have (hence the error message). So, this is not the right patch.

What you want is the 5.12.2 patch. But if you take a look at it (like above), you will realize it applies to the 5.12.0 source tree, not the 5.12.1 one:

--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 12
-SUBLEVEL = 0
+SUBLEVEL = 2
 EXTRAVERSION =
 NAME = Frozen Wasteland
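The version check performed by eye above (reading the SUBLEVEL lines of the patch's Makefile hunk) can be scripted before running patch(1); a small sketch:

```python
import re

def sublevel_change(patch_text: str):
    """Return (applies_to, produces) SUBLEVEL values from a kernel patch."""
    old = re.search(r"^-SUBLEVEL = (\d+)$", patch_text, re.MULTILINE)
    new = re.search(r"^\+SUBLEVEL = (\d+)$", patch_text, re.MULTILINE)
    if not (old and new):
        raise ValueError("no SUBLEVEL hunk found")
    return int(old.group(1)), int(new.group(1))

hunk = """\
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 VERSION = 5
 PATCHLEVEL = 12
-SUBLEVEL = 0
+SUBLEVEL = 1
"""
print(sublevel_change(hunk))  # → (0, 1)
```

For kernel.org stable patches, the "applies to" sublevel is always 0, which is the whole point of the answer: every patch-5.12.x must be applied to a pristine 5.12 tree.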
Kernel patching prompts “Reversed (or previously applied) patch detected! Assume -R? [n]”
1,587,375,393,000
Updated my packages today on my Lenovo Thinkpad X1 Carbon (6th Gen). I didn't expect anything to happen, but it did: the mute indicator LEDs on the F1 and F4 keys stopped working. I actually know this will be fixed soon; I'm just posting a question so that I can answer it, in case anyone else is looking for a solution.
Situation

A Lenovo Thinkpad X1 Carbon running Arch Linux updated its packages on December 15th, having last updated on October 22nd. After a reboot, the LED indicators of the Mute Mic and Mute Speakers keys stopped working.

Investigation

I started my investigation by simply searching for phrases like "Mute LED not working", but couldn't find anything recent. I knew it had to be recent, since the symptoms emerged exactly after an update. Other buttons worked fine (Caps Lock, Fn Lock); only the Mute buttons didn't work.

Finding the responsible package

In /var/log/pacman.log I checked the packages that were updated. There were a whole lot, but nothing that could interfere with my Thinkpad buttons, except for an update to PulseAudio, ALSA, and the Linux kernel. I decided I would check the kernel first. To downgrade the kernel to the previous version I executed:

~ # pacman -U /var/cache/pacman/linux-4.18.16.arch1-1-x86_64.pkg.tar.xz

After rebooting, the lights worked again. I now knew for certain that the problem came from an update to the Linux kernel.

Finding the responsible kernel version

I now knew kernel 4.18.16 was working, and I knew 4.19.8 was not! From the Arch Linux Package Archive (https://archive.archlinux.org/) I downloaded versions 4.19 through 4.19.8, knowing the kernel broke somewhere in between. Because I was on 4.18.16, I upgraded to 4.19.4. 4.19.4 worked like 4.18.16 did, so the bug was introduced after 4.19.4 but no later than 4.19.8. Next up was 4.19.6. This version also worked fine, so I then knew the bug was introduced in 4.19.7. After upgrading once more, sure enough, 4.19.7 was the first release where this "regression" (as they call it) took place.

Finding the commit responsible

Thanks to Linux being open source, you can look up the changelog of every Linux release on https://kernel.org

Here is the changelog for version 4.19.7: https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.19.9

Warning, it is very big!
To find some indication of where the problem had begun, I decided to Ctrl+F some keywords in the file. First I tried "led", but there were no commits that looked promising. Then I searched for "mute", but again, no hits. After a few other keywords I tried "carbon", and I found a commit named:

dcd51305cd41e77bf775992e6d6cee52f83426b7
ALSA: hda/realtek - fix the pop noise on headphone for lenovo laptops

My first thought was "Oh great, they fixed that!", but since this was also the only commit mentioning Lenovo, and this was the changelog of the regressed kernel, my best bet was to investigate. Thankfully this commit included a BugLink to launchpad.net: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1805079

I thought that there I could ask the developer if they too had a problem with their LED lights. But I didn't even need to ask, since another user had already described the problem I was having (Link):

I'm on Lenovo ThinkPad X1 Carbon 6th, model 20KH006JGE. After upgrading to Linux 4.19.7, the audio mute and microphone mute LEDs (on F1 and F4 keys) stopped working.

The creator of the commit had already responded, and had even provided a solution! Great stuff, but now what? I don't want to compile my own kernel... Yuck! I need to know when this fix will be implemented.

Finding the repairing commit

Luckily the whole Linux kernel is on GitHub: https://github.com/torvalds/linux

I dove into the commit history and Ctrl+F'd for "LED", and sure enough, on page 4 I found the fixing commit (link):

6ba189c5c1a4bda70dc1e4826c58b0246068bb8d
ALSA: hda/realtek - Fix the mute LED regresion on Lenovo X1 Carbon

Awesome. It seems like this commit has already been posted and reviewed, so where is it now? When is the fix going to be released?

Finding out when the fix will be released

Linux's stable releases are maintained by Greg Kroah-Hartman. You might have seen his name on top of the changelog we looked at earlier.
Every few days he gathers useful commits from the repository and bundles them into a new stable release. You can track the progress and discussion of the release cycle on the stable mailing list from kernel.org. If you don't want to subscribe, but just read, you can find an archive right here: https://www.spinics.net/lists/stable/ There I simply pulled out my trusty Ctrl+F once again and searched for "LED", and sure enough: [PATCH 4.19 140/142] was the commit I was looking for. Greg has included the commit we wanted and is currently reviewing its release. It is only a couple of days before he closes discussion, followed by the release of 4.19.10.

Finally: The solution

To fix the issue, downgrade the kernel to at most version 4.19.6. You can also wait a couple of days, since version 4.19.10 will fix the regression, and it is expected to be released on December 16th or 17th. I hope my journey was mildly interesting to read, and can help you troubleshoot your own issues in the future. I learned about changelogs, commits, repos, releases, mailing lists and a whole lot more, so I just had to share. Kind regards and have fun!
Lenovo Thinkpad Mute LED Stopped working after update
1,587,375,393,000
This is a follow-up question to my previous question. Based on the answer, a system call is an example of when we jump into the kernel part of our process's virtual memory. What are other examples of a normal (non-kernel) process using this part of virtual memory besides system calls? For example, is there any function call that directly jumps into this kernel part? When we jump into this section of memory, does the processor automatically set the kernel-mode bit to 1 so that our process can access this part, or is there no need to set this bit? Does all of the execution inside this kernel part happen without any need for context switching to a kernel process? (I didn't want to ask these follow-up questions in comments, so I opened another thread.)
Processes running in user mode don’t have access to the kernel’s address space, at all. There are a number of ways for the processor to switch to kernel mode and run kernel code, but they are all set up by the kernel and happen in well-defined contexts: to run a system call, to respond to an interrupt, or to handle a fault. System calls don’t involve calling into kernel code directly; they involve an architecture-specific mechanism to ask the CPU to transfer control to the kernel, to run a specific system call, identified by its number, on behalf of the calling process. LWN has a series of articles explaining how this works: Anatomy of a system call part one, part two, and additional content. If a process attempts to access memory in the kernel’s address space, it will switch to kernel mode, but as a result of a fault; the kernel will then kill the process with a segmentation violation (SIGSEGV). On 32-bit x86, there is a mechanism to switch to kernel mode using far calls, call gates; but Linux doesn’t use that. (And they rely on special code segment descriptors rather than calling into kernel addresses.) See above: you can’t jump into kernel memory. In the circumstances described above, when transitioning to kernel mode, the CPU checks that the transition is allowed, and if so, switches to kernel mode using whichever mechanism is appropriate on the architecture being used. On x86 Linux, that means switching from ring 3 to ring 0. Transitioning to kernel mode doesn’t involve a change of process, so yes, all this happens without a context switch (as counted by the kernel).
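One related aside worth seeing for yourself: the kernel does map a few special helper pages (the vDSO and its data page) into every process's address space, so that a handful of frequent "system calls" such as gettimeofday() can be serviced without any mode switch at all. These are the only kernel-provided mappings a process may legitimately touch, and they show up in /proc/self/maps. A sketch, assuming a Linux system with /proc mounted:

```shell
# List the kernel-provided special mappings in this process's address space.
# [vdso] and [vvar] are pages the kernel maps into userspace; code in them
# runs in user mode, so no transition to kernel mode is needed to use it.
grep -E '\[(vdso|vvar|vsyscall)\]' /proc/self/maps
```

On a typical x86-64 system this prints one line per special mapping; everything else in the kernel's own address range remains inaccessible, as described above.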
When do we jump into kernel part of our process virtual memory other than when we use system calls? (In Linux)
1,587,375,393,000
I want to add a public key, from the keypair that I used to sign my kernel module, into the system_keyring. However, there's a problem: with the command cat /proc/keys | grep system_keyring, I got the entry (ID) of the system_keyring. However, when trying to add my public key with this command: keyctl padd asymmetric "" 0xXXXXXXXX</test/signing_key.x509, I get a "Permission denied" error. I think it is due to the restriction described in "module_signing.txt" https://01.org/linuxgraphics/gfx-docs/drm/admin-guide/module-signing.html :

Note, however, that the kernel will only permit keys to be added to .system_keyring if the new key's X.509 wrapper is validly signed by a key that is already resident in the .system_keyring at the time the key was added.

However, I cannot find any document describing how to sign the "X.509 wrapper" with a key already resident in the .system_keyring. Also, I think the keys in that keyring are only public keys, so I don't even think it would work, even if I could extract a public key out of the keyring and sign the "X.509 wrapper" with it. Anyway, I need some help here. Alternatively, can someone give me a hint on how to submit my kernel module to RedHat so that it can be signed by RedHat and installed on users' systems without rebuilding the kernel?
The system keyring gets its contents from five sources:

- keys embedded in the kernel at compile time (obviously not changeable without recompiling)
- UEFI Secure Boot variable db - depending on your firmware, you might or might not be able to change this
- UEFI Secure Boot variable dbx - like the previous one, but this is a blacklist, so you would not want to add your key here anyway
- keys embedded in shim.efi - not changeable without recompiling, and you would probably have to get the shim re-signed afterwards unless you have taken control of your Secure Boot PK = too much of a hassle
- UEFI variable MOK (used by shim.efi) - this might be your best hope.

To import your key into MOK, you should first ensure that shim.efi is involved in your boot process (see efibootmgr -v). Then have the key/certificate that your module is signed with in DER format, and start the import process using the mokutil command:

mokutil --import your_signing_key.pub.der

The command will require you to set a new import password: this password will be used in the next step, and is not any password that exists beforehand. As usual when setting a new password, mokutil will require you to type it twice. Then, the next time you reboot the system, shim.efi will see that a new MOK key is ready for import, and it will require you to type the import password you set in the previous step. After you've done this once, the new key will be stored in the UEFI MOK variable persistently, and the kernel will automatically include it in the system keyring.

If you are not using UEFI, you cannot add new keys to the system keyring without recompiling the kernel. But on the other hand, if Secure Boot is not enabled, the kernel will allow loading of kernel modules with no signature or with an unverifiable signature - it just sets one of the kernel's taint flags to mark that a non-distribution kernel module has been loaded.

Source: RHEL 7 Kernel Administration Guide, Chapter 2.8 "Signing Kernel Modules for Secure Boot"
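If your certificate is in PEM format rather than the DER format that mokutil expects, openssl can convert it. A sketch (the file names are illustrative, and the self-signed key/certificate pair here is just a throwaway stand-in for a real module signing key):

```shell
# Create a throwaway key/certificate pair (stand-in for a real module
# signing key), then convert the certificate to DER for mokutil --import.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Example module signing key" \
  -keyout MOK.key -out MOK.pem
openssl x509 -in MOK.pem -outform DER -out your_signing_key.pub.der
```

After this, the mokutil --import step proceeds as described above.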
How to add a public key into system keyring for kernel without recompile?
1,587,375,393,000
I found zstd in /drivers/block/zram/zcomp.c, but I can't find anything zstd-related in /crypto. So is zstd for zram actually available in Linux 4.15 or not?
It’s supposed to be available in 4.15, as long as the CONFIG_CRYPTO_ZSTD setting is enabled. The implementation lives in lib/zstd. However the zram integration expects to find zstd via the crypto API, as you discovered, and that part’s missing — which explains why there’s no way to actually enable CRYPTO_ZSTD, and why there’s no code registering zstd with the crypto framework.
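As an aside, a quick way to see whether a given compressor actually got registered with the kernel's crypto API on a running system is to look in /proc/crypto (a sketch; whether zstd appears there depends on the kernel build):

```shell
# Look for a zstd entry among the algorithms registered with the crypto API.
grep -i 'zstd' /proc/crypto || echo "zstd is not registered with the crypto API"
```

On a 4.15 kernel this prints the fallback message, matching the missing integration described above.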
Is zstd for zram actually available in Linux 4.15?
1,587,375,393,000
I want to use ULOG and send firewall logs to ulogd2.

iptables -A INPUT -i eth0 -j ULOG

gives me the following error:

iptables: No chain/target/match by that name

I have these LOG-related options enabled in my kernel:

CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_LOG_COMMON=y
CONFIG_NETFILTER_XT_TARGET_LOG=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NF_LOG_IPV4=y

What else do I need for ULOG to work? I don't see any ULOG options (nothing is found when I search for ULOG). My kernel is 4.4.
ULOG has been deprecated, and if you don't have module ipt_ULOG you should move on to the newer NFLOG target. ulogd handles both of these, even though it is still called "ulog". Check out man iptables-extensions.
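The ULOG rule from the question would then become something like the following (the group number is illustrative, and must match the netlink group ulogd2 is configured to listen on):

```
iptables -A INPUT -i eth0 -j NFLOG --nflog-group 1
```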
iptables: No chain/target/match ULOG
1,587,375,393,000
There are some trivial troubles that always obsess me. My Gentoo always complains 'Could not find the root block device in UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900' and gets stuck once I try to boot. However, if I type the same device name /dev/sda2 in, the system goes on. I don't know why. My Gentoo is installed on one partition, /dev/sda2, with / mounted on it.

I have also found some posts on the internet. Most posts say it is caused by the kernel config, and that compiling the corresponding filesystem as built-in into the kernel, not as a module, can solve it. Some say rootfs should be specified in grub after the kernel command, or that the device name after the root command in grub should be substituted by the UUID. I did it all, but those didn't work.

Here is my configuration in grub:

menuentry 'Gentoo (on /dev/sda2)' --class gentoo --class linux-gnu --class os $menuentry_id_option 'osprober-chain-225E1F815E1F4D43' {
    insmod part_msdos
    insmod ext4
    set root='hd0,msdos2'
    if [ x$feature_platform_search_hint = xy ]; then
        search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos2 --hint-efi=hd1,msdos2 --hint-baremetal=ahci1,msdos2 5f7c7e13-2a46-4ae4-a8c0-f77f84e80900
    else
        search --no-floppy --fs-uuid --set=root 5f7c7e13-2a46-4ae4-a8c0-f77f84e80900
    fi
    echo 'Loading Linux x86_64-4.4.39-gentoo ...'
    linux /boot/kernel-genkernel-x86_64-4.4.39-gentoo root=UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900 ro
    echo 'Loading initial ramdisk ...'
    initrd /boot/initramfs-genkernel-x86_64-4.4.39-gentoo
    boot
}

The Gentoo coexists with Ubuntu. My /etc/fstab:

# /etc/fstab: static file system information.
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed); notail increases performance of ReiserFS (at the expense of storage
# efficiency). It's safe to drop the noatime options if you want and to
# switch between notail / tail freely.
#
# The root filesystem should have a pass number of either 0 or 1.
# All other filesystems should have a pass number of 0 or greater than 1.
#
# See the manpage fstab(5) for more information.
#

# <fs>                                      <mountpoint>  <type>  <opts>             <dump/pass>

# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900   /             ext4    noatime            0 1
UUID=B66EAE686EAE215B                       /mnt/D/       ntfs    errors=remount-ro

The UUIDs of the corresponding devices:

/dev/sda2: UUID="5f7c7e13-2a46-4ae4-a8c0-f77f84e80900" TYPE="ext4" PARTUUID="000e21f3-02"
/dev/sda4: UUID="B66EAE686EAE215B" TYPE="ntfs" PARTUUID="000e21f3-04"

Does anyone have any ideas? Thanks.
Finally, I figured it out after several days. It was caused by a driver problem. My Gentoo is installed on an external hard disk connected to my laptop by a USB cable. However, the USB Mass Storage Support option wasn't marked built-in when I built my kernel, hence it always got stuck that way. If you are in the same boat as me, and you are sure you have compiled all the referenced filesystems as built-in, please check whether the following options are built into your kernel:

Device Driver --> USB Support --> USB Mass Storage Support
Device Driver --> USB Support --> xHCI HCD (USB 3.0) support
Device Driver --> USB Support --> EHCI HCD (USB 2.0) support
Device Driver --> USB Support --> UHCI HCD (most Intel and VIA) support
Device Driver --> USB Support --> Support for Host-side USB

If they aren't, enable them.
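If you prefer checking the kernel's .config file directly, the menu entries above correspond roughly to these symbols (a sketch; exact symbol names can vary between kernel versions), which should read =y (built in) rather than =m (module):

```
CONFIG_USB_SUPPORT=y
CONFIG_USB=y
CONFIG_USB_STORAGE=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_UHCI_HCD=y
```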
Could not find the root block device (in Gentoo)
1,587,375,393,000
It's interesting to look at the entries in dmesg, but how can I find out what they all mean? I did man dmesg, but I can't find anything about decoding the messages themselves. I wonder: Is there a way to drill down and find out the meaning and origin of each entry? For example, which driver wrote it (if it was a driver), and what the message means in detail? Example of dmesg output:

[101466.656676] Read(10): 28 00 00 07 c4 25 00 00 01 00
[101466.656706] end_request: I/O error, dev sr0, sector 2035860
[101466.656722] Buffer I/O error on device sr0, logical block 508965
[101471.444586] sr 1:0:0:0: [sr0] Unhandled sense code
[101471.444607] sr 1:0:0:0: [sr0]
[101471.444616] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[101471.444627] sr 1:0:0:0: [sr0]
[101471.444634] Sense Key : Medium Error [current]
[101471.444649] sr 1:0:0:0: [sr0]
[101471.444657] Add. Sense: No seek complete
[101471.444668] sr 1:0:0:0: [sr0] CDB:
[101471.444675] Read(10): 28 00 00 07 c4 24 00 00 01 00
[101471.444705] end_request: I/O error, dev sr0, sector 2035856
[101471.444721] Buffer I/O error on device sr0, logical block 508964
There's no easy way. These messages are intended for kernel developers and experienced system administrators, not for ordinary users. There's no general structure to them (apart from the number in brackets, which is the number of seconds since the kernel booted). You can look for the message text in the kernel source code. That can provide useful information even if you don't know the C programming language — at least finding which file the message is in can tell you which driver is responsible. Either keep a local copy (most distributions have a package with the source of the kernel, e.g. apt-get install kernel-source-X.XX && cd /usr/src && sudo tar xf linux-source-X.XX.tar.xz on Debian and derivatives), or use an online browser such as LXR at Free Electrons or LXR at linux.no (better search but often down). When searching, keep in mind that messages do not appear in the source code literally. They are often composed from a template and parameters. For example, the second line comes from the blk_update_request function in block/blk-core.c:

printk_ratelimited(KERN_ERR "end_request: %s error, dev %s, sector %llu\n",
                   error_type, req->rq_disk ? req->rq_disk->disk_name : "?",
                   (unsigned long long)blk_rq_pos(req));

The first %s in the template is replaced by the value of error_type, the second %s is replaced by req->rq_disk->disk_name (or a ? if this is not set), and the %llu is replaced by the integer returned by blk_rq_pos(req). Given the file the message is in, it concerns a block device. The disk name tells you which device: sr0. If you look at the standard device names, that's “First SCSI CD-ROM” (actually, first optical drive that talks a SCSI-like protocol, including most IDE/SATA and USB drives). You can continue exploring the messages, but there's an evident pattern here: they're all related to sr. All of them are caused by the same problem: an error reading the DVD, around sector 2035860 (i.e. about 1 GB in — a sector is 512 bytes).
The computer was suddenly told that there was no disk present (or an unreadable disk), and when it tried moving to another sector, reading that one failed as well. This could be a speck of dust, or a scratched or otherwise damaged disc. Other problems could cause read errors, such as a damaged drive or a bad cable, but those would affect reading all the time, not just a particular area of a particular disk.
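As a small aside, the bracketed prefix (seconds since boot) mentioned above can be turned into wall-clock time by adding it to the boot time. A sketch, assuming a Linux /proc and GNU date (the stamp value is taken from the example output; newer util-linux versions also offer dmesg -T for the same purpose):

```shell
# Convert a dmesg [seconds-since-boot] stamp into a wall-clock date.
stamp=101466.656706
now=$(date +%s)
uptime_s=$(cut -d' ' -f1 /proc/uptime)
# boot time = current time minus uptime
boot=$(awk -v n="$now" -v u="$uptime_s" 'BEGIN { printf "%d", n - u }')
date -d "@$(awk -v b="$boot" -v s="$stamp" 'BEGIN { printf "%d", b + s }')"
```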
How can I find out what the entries in dmesg means?
1,587,375,393,000
I have a system freeze issue, and I found this discussion on GitHub, where one of the participants suggests adding 5 patches. When I type: patch -p1 < 0001-PM-autocomplet.patch it asks File to patch, and I don't know what to fill in. How can I proceed and apply these patches?
To apply a patch of this form: diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h index 0f65d36..ff59753 100644 --- a/include/linux/pm_qos.h with patch -p1, several conditions have to be met. You have to be in the top-level directory of a kernel source tree. The -p1 option will strip one path component from the file names, so that a/include/linux/pm_qos.h will be treated as include/linux/pm_qos.h. This is a relative path which has to resolve from your current working directory. The file include/linux/pm_qos.h has to exist in the version of the kernel that you're trying to patch. If 1 and 2 are not met, then patch will not find the file to apply and interactively ask you to supply the path name. Then of course: The include/linux/pm_qos.h file has to be "sufficiently similar" to the one from which the patch was produced, otherwise the patch will fail to apply. The same remarks apply separately to all other files that are mentioned in the patch. If you're trying to patch a different version of the kernel from the one against which the patch was produced, I'm afraid you're "in over your head"; that requires some level of understanding of kernel development (depending on how complex are the adjustments required in the patch for it to apply). Sometimes we find that kernel files have just been renamed; a patch will apply fairly cleanly if the files mentioned in it are renamed to the new names. On the opposite end, in the worst cases, you have to actually understand what the patch is doing (possibly by looking at the original kernel where it was made), and then implement the same logic from scratch in the target kernel. In cases of "intermediate difficulty", you just have to deal with issues like variable names, function names and struct member names having been renamed; the patch will apply if it just follows new names.
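The path-stripping behaviour of -p1 can be seen in a tiny self-contained experiment (the directory, file contents and patch below are all made up for illustration):

```shell
# Build a miniature "source tree" and a one-line patch against it.
mkdir -p demo/include/linux
printf 'old line\n' > demo/include/linux/pm_qos.h
cat > demo/fix.patch <<'EOF'
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -1 +1 @@
-old line
+new line
EOF
# -d first changes into the tree's top level (same effect as cd'ing there),
# then -p1 strips the leading "a/" so the header is found relative to it.
patch -d demo -p1 < demo/fix.patch
```

If the file were missing, or the tree layout different, patch would prompt "File to patch", exactly as in the question.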
apply patch when asked `File to patch`, what should I do?
1,587,375,393,000
I have a CentOS 7 headless system with no serial ports. I sometimes want to access the server using a serial cable, so I plug in a USB serial cable (to my laptop's serial port) but I can't get a console/BASH from the connection. Is there something I have to do to tell the kernel to always create a serial console on appearance of a USB serial port?
EDIT: This won't work if you have a recent udev version, because udev prevents you from starting long-lived background processes in RUN scripts. You may or may not be able to get around this by prefixing the getty command with setsid but in any case it's discouraged if not outright disallowed. If you have a system which uses systemd then there is another way to achieve this, which I hope someone will supply with another answer. In the meantime, I'm leaving this answer here in case it works for you. You cannot use a USB serial port as a console because USB is initialized too late in the boot sequence, long after the console needs to be working. You can run getty on a USB serial port to allow you to log in and get a shell session on that port, but it will not be the system's console. To get getty to start automatically, try this udev rule: ACTION=="add", SUBSYSTEM=="tty", ENV{ID_BUS}=="usb", RUN+="/usr/local/sbin/usbrungetty" Put that in a rules file in /etc/udev/rules.d and then create this executable script /usr/local/sbin/usbrungetty: #!/bin/sh /sbin/getty -L "$DEVNAME" 115200 vt102 &
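On systemd-based systems, the usual approach (sketched here from memory, so treat the details as assumptions to verify against your systemd and udev versions) is to let udev tag the device so that systemd starts its stock serial-getty@.service template on it, instead of launching getty from a RUN key:

```
# /etc/udev/rules.d/99-usb-serial-getty.rules (illustrative file name)
ACTION=="add", SUBSYSTEM=="tty", ENV{ID_BUS}=="usb", TAG+="systemd", ENV{SYSTEMD_WANTS}="serial-getty@%k.service"
```

Here %k expands to the kernel device name (e.g. ttyUSB0), so plugging in the adapter pulls in serial-getty@ttyUSB0.service.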
Create serial console on plug in USB serial device
1,587,375,393,000
I am trying to get a conceptual understanding of the purposes kernel modules can have. The motivation for this question was the realization that the interface to actual hardware goes through multiple kernel modules. For example, the USB gadget driver has multiple kernel modules, where only one is used to actually communicate with the hardware. ( http://www.linux-usb.org/gadget/ ) What is the reason for implementing this "kernel module stack" structure? Doesn't it just complicate the process of getting a hardware device to work (you have to worry that you have 3 modules running instead of one)?
What is the reason of implementing this "kernel module stack" structure? This is the way pretty much all software is written, in modular stacks. Consider your GUI: there's all the kernel space stuff involved including a driver stack, then in userspace you have the X server, and on top of that a window manager, and on top of that probably a desktop environment, and on top of that (e.g.) your browser. That's a software stack. The reason is fairly straightforward: consider the situation if there were no such userspace stack. Every GUI application would have to write its own code interfacing with the kernel to access the screen, etc. Staying organized in relation to other GUI applications would be completely voluntary (read: a serious mess), the system's memory would be completely filled with redundant things, and almost no one would bother writing anything because of the immense amount of work involved. The situation is exactly the same with kernel modules. A piece which can be put to more than one purpose must be an independent piece. So with USB devices, rather than every driver having to build into itself a driver for the USB controller as well, you have one driver for the controller and individual device drivers interface with that. Doesn't it just complicate the process? No, it greatly simplifies it. True, you may have 3 modules involved instead of one, but if there were just one, it would have to implement the things the other two implement anyway. There are many benefits to modular design, and it is a fundamental tenet of software engineering. Strong modularity is essential to avoiding the sin of excessive coupling. I am sure that if you ask anyone who has spent a lot of time programming, as they have gotten better at it and become more competent with larger and more complex projects, they have become more and more modular in their work -- i.e., they have grown to write smaller and more discrete pieces.
This amounts to realizing that you will be better off with 3 parts instead of 1 whenever it makes sense to do so (finding "where it makes sense" is part of the skill -- the more places you can see, the better).1 If that seems counter-intuitive, consider what happens if a module, bigfoobar, misbehaves indicating a bug. Figuring out where it is will be much simpler if it is actually composed of three smaller parts, because you can independently test big, foo, and bar to determine which one is the culprit. Furthermore, foo may have a general use elsewhere (e.g., as part of altfoothing, but note that naming conventions don't really work that way). The more places foo is used, the more contexts it is subjected to and the more robust (functional, efficient, bug-free) it is likely to end up. 1. The further you look into a stack the more you will recognize it is composed of a regress of other stacks on a smaller and smaller scale. 90% (don't quote me) of the userspace code your CPU executes is actually part of the native C library, which is a relatively small executable. This is part of what makes it possible to run a wide variety of complex software efficiently -- because everything is made from the same few little pieces. Think about lego and the difference between having 5 big blocks or 50 smaller ones.
Using multiple layer of kernel modules for interfacing a hardware device?
1,587,375,393,000
In Ubuntu 13.10 on my (Dual Core i5 Lenovo G570) laptop, I recently discovered the wonders of indicator-cpufreq, so I can extend my battery life dramatically by setting it to the 'ondemand' or 'powersave' governor (screenshot of its menu omitted). I was wondering whether I could implement this in the other half of my dual boot on my laptop, Fedora 20. However, after looking at this documentation and installing the kernel-tools package, when I run the command to list the available governors, on Fedora I get:

wilf@whm1:~$ cpupower frequency-info --governors
analyzing CPU 0:
  powersave performance

On Ubuntu I get:

wilf@whm2:~$ cpupower frequency-info --governors
analyzing CPU 0:
  conservative ondemand userspace powersave performance

So can I get the conservative, ondemand, & userspace governors in Fedora? Mainly the ondemand one.

Fedora System Info

Kernel: Linux whm1 3.12.10-300.fc20.i686+PAE #1 SMP Thu Feb 6 22:31:13 UTC 2014 i686 i686 i386 GNU/Linux
Version: Fedora release 20 (Heisenbug), Kernel 3.12.10-300.fc20.i686+PAE on an i686
/proc/cpuinfo, relevant /etc/default/grub (Fedora manages Grub, not Ubuntu):

#GRUB_CMDLINE_LINUX="acpi_osi=Linux acpi_backlight=vendor pcie_aspm=force"
GRUB_CMDLINE_LINUX="vconsole.font=latarcyrheb-sun16 $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet acpi_osi=Linux acpi_backlight=vendor pcie_aspm=force"

Ubuntu System Info

Kernel: Linux whm2 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:25:07 UTC 2014 i686 i686 i686 GNU/Linux
/proc/cpuinfo, relevant /etc/default/grub (I think this is loaded by Fedora's Grub):

GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
This is related to a new driver introduced in Fedora 20 that does not need more than those two governors. See this thread, CPU Governors - where is ONDEMAND?, for details. To get the missing governors, you should boot with the kernel parameter intel_pstate=disable. To do so, in the GRUB boot screen, choose to edit the boot command line and add this to the line which starts with kernel. You can also add it permanently to the grub config file. Note that normally you should not need other governors than those proposed by the new driver, which does its job perfectly.
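To make it permanent, the usual route (paths shown are typical for Fedora with BIOS boot, and are assumptions to check on your system) is to append the parameter in /etc/default/grub and regenerate the config:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_pstate=disable"

# then, as root:
# grub2-mkconfig -o /boot/grub2/grub.cfg
```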
How to get ondemand governor on fedora
1,587,375,393,000
How do I check that packet socket support has been compiled into my kernel? I'm running Crunchbang, a Debian-based distribution.
Most Linux distributions include the config parameters used to compile the kernel in /boot/config-<kernel-version>. So grep -x 'CONFIG_PACKET=[ym]' "/boot/config-$(uname -r)" should tell you if AF_PACKET socket support is included (m meaning as a module). Otherwise, you can just try to create a socket in the AF_PACKET family (using socket(2); see packet(7) for how to do it) and check if it reports an error.
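If your distribution doesn't ship /boot/config-* files, the running kernel may expose its own configuration instead, provided it was built with CONFIG_IKCONFIG_PROC. A sketch:

```shell
# Fall back to /proc/config.gz when no /boot/config-$(uname -r) exists.
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep -x 'CONFIG_PACKET=[ym]'
else
    echo "/proc/config.gz not available on this kernel"
fi
```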
How do I check if I have packet socket support enabled in my distro's kernel?
1,587,375,393,000
I've added few modules into mkinitcpio.conf. Are they automatically loaded at kernel boot or with udev if I don't specify them in modules list in rc.conf?
The modules listed in /etc/mkinitcpio.conf are included in the initrd when it is generated with mkinitcpio -p linux. This loads the temporary filesystem into memory, and needs to include the modules necessary to create this successfully, depending on your setup. An example would be adding raid1 to your MODULES line in /etc/mkinitcpio.conf to assemble a Raid1 array. In your MODULES line in /etc/rc.conf you would only need to include modules that are not automatically loaded by udev but that you may require to run specific applications once your filesystems are mounted, such as fuse or loop. Note: in the case of a Raid array, you would also include USEDMRAID="yes" in your /etc/rc.conf
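For the Raid1 example above, the relevant fragments would look something like this (illustrative; after editing mkinitcpio.conf, regenerate the image with mkinitcpio -p linux):

```
# /etc/mkinitcpio.conf -- needed while the initramfs assembles the array:
MODULES="raid1"

# /etc/rc.conf -- modules udev won't autoload but your applications need:
MODULES=(fuse loop)
USEDMRAID="yes"
```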
Do I have to list modules in both `/etc/mkinitcpio.conf` and `/etc/rc.conf`?
1,587,375,393,000
After I updated my kernel to version 3.0, I always get this line when my Arch Linux system is booting up: Loading User-specified Modules [BUSY] [FAIL] I have no idea what could cause this to happen. My MODULES array in the /etc/rc.conf file looks like this: MODULES=(fuse wl !b43 !ssb !usblp vboxdrv vboxnetflt) I checked the modules which are loaded (using modprobe) and they all load just fine. My idea was that one of the blacklisted modules in my MODULES array got kicked out of the kernel or renamed, and now the system can't find it (to block it). So I checked all available kernel modules using: ls -R /lib/modules/3.0-ARCH/kernel/ | grep <module-name> I found all of the modules in the array except for the last two (from VirtualBox). However, trying to manually load them with modprobe works, and lsmod shows that they are properly loaded after boot. Also, I checked the /var/log/kernel.log logfile (nothing obvious there). So, I need ideas on what could cause this to happen, or where I can find the corresponding logfile (since there is a daemons.log file, but no modules.log file).
Blacklisting of modules in the MODULES array is deprecated, as announced here. Perhaps this is the cause.
Problem at boot-time: "Loading user-defined modules [FAIL]"
1,587,375,393,000
I am currently running POP_OS 20.04 (LTS). When I open the terminal and run the command dpkg --list | grep linux-image, it returns a list of apparently installed images, including my current (6.0.12) and most recent (5.17.5), about nine images from version set 5.0 and six from version set 5.3, plus one from 4.18 and one from 5.4. Previously I have used Synaptic to remove older kernels from the system, but for some reason when I search for linux-image (and sort installed images to the top), only my current and most recent show as installed (the box is green). These older 5.0/5.3 images don't show in the search results. The Stacer uninstaller also cannot find them. Can anyone tell me why these images show in the console but not in Synaptic? What are they doing on the system, and are they even really on the system? If they are indeed taking up space on my home partition, how can I safely remove them? One consideration that comes to mind is that this system started out at POP_OS version 18.10 and has been upgraded with each release up to 20.04 (LTS), although I'm not sure if that has anything to do with it. Another thought is that maybe this has something to do with the recovery partition, but I have no idea how or why it would need so many versions of the kernel, or why they would accumulate in such an untidy manner.
The rc marker at the start of each line indicates that the packages are removed, but configured — i.e. all their contents have been removed, apart from configuration files. Packages in this state don’t appear in Synaptic by default. You can remove them with sudo dpkg --purge or sudo apt purge, listing the packages you want to remove. sudo apt autoremove --purge should purge them automatically without having to list them.
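A sketch of finding and purging everything in the rc state in one go (the awk filter simply selects lines whose dpkg status field is rc):

```shell
# List package names whose dpkg status is "rc" (removed, config remains).
dpkg --list 2>/dev/null | awk '/^rc/ { print $2 }'
# To purge them all at once (needs root):
# sudo apt purge $(dpkg --list | awk '/^rc/ { print $2 }')
```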
How to remove old kernel images
1,587,375,393,000
I use a Measurement Computing DAQ in Ubuntu to perform continuous analog reads and writes from another system connected to the board. I have been using Ubuntu 16.04 (which went up to Linux kernel 4.15) for about five years now. I was recently exploring upgrading the system to Ubuntu 20.04 - 22.04, and each of those operating systems ships with Linux kernel 5.10 - 5.15. I am noticing what appear to be periodic interrupts that are quite drastic (about 50 milliseconds) on every kernel 5.10 or higher. So something appears to have changed from the 5.9 kernel to the 5.10 kernel that is affecting system read() and write() calls with the A/D board. The differences can be seen in my data acquisition software, and also in an average loop time program I have (that loops through successive read and write calls, along with some math in between); screenshots of both are omitted here. Note how the maximum times I am seeing go from about 43 microseconds for Linux kernel 5.9 and below to 50 milliseconds for Linux kernel 5.10 and above. I obviously would like to fix this problem, but I am not sure what was changed that could have caused it. Does anyone have any idea what the culprit is, and if it could be fixed by perhaps changing a kernel parameter in the GRUB bootloader? Any pointers at all would certainly be appreciated. Thanks! EDIT: I have implemented a minimal example where we continuously call write system commands to update the DAC outputs. At minimum, the DAC write command is calling "get_user" to obtain data from user space, and calling "outw" to write the data into the DAC register. Now when we are executing the minimal example, we're doing back-to-back write system commands and we're noticing this 50 millisecond delay. However, when we add a 1 microsecond delay between the write system commands, the 50 millisecond delay vanishes. Is this possibly an issue with trying to access the user-space information or writing from the kernel to the device too quickly?
Is there a way to analyze what the kernel is doing between accessing user-space and writing data from the kernel to the device?
I was having the same kind of problem with real-time threads when updating the kernel on my computer. For me sysctl -w kernel.sched_rt_runtime_us=-1 helped. The change that broke this was: Disable RT_RUNTIME_SHARE by default
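If that setting fixes it for you too, it can be made persistent across reboots with a sysctl drop-in file. A sketch (the file name below is arbitrary):

```
# /etc/sysctl.d/99-rt-runtime.conf  (hypothetical file name)
# Restore the old behaviour of unthrottled RT runtime
kernel.sched_rt_runtime_us = -1
```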
What changed from Linux Kernel 5.9 to 5.10?
1,587,375,393,000
I use the kernel.org docs to read about kernel functions. Now I am trying to make it possible to read manuals for kernel-mode functions, such as printk, with man 9 printk. Section 9 is used for this purpose; from the man man section descriptions: 9 - Kernel routines [Non standard]. Running make mandocs in /usr/src/linux/ throws this: make: *** No rule to make target 'mandocs'. Stop. (I think mandocs is obsolete or was removed.) I cannot find any man9 package on my Gentoo GNU/Linux system. I also tried to install those manpages on a Debian 11 virtual machine, but that fails too. How can I install/make/download those non-standard manpages for kernel functions, so I can run man printk to get docs without having to search documentation online or browse header files? I know that similar questions were already asked (this and this), but they are outdated (there is no make mandocs now).
During May 2017, Linux kernel documentation migrated to use ReST instead of DocBook (commit). During the final steps of that migration, the make mandocs target was removed from kernel Makefile system (commit). Apparently nobody has missed the manpage format enough to submit patches for a process that would build the kernel functions man pages from the new ReST documentation source format. Note that you can run make htmldocs, make latexdocs, make pdfdocs or make epubdocs to get a local version of the kernel documentation in HTML, LaTeX, PDF or EPUB formats.
Installing man pages for section 9 (kernel routines)
1,587,375,393,000
I'm working on an embedded system (based on a Cortex-A8 CPU) running Linux kernel 4.19, OpenSSH_8.3p1, OpenSSL 1.1.1h, glibc 2.32, compiled with GCC 10.2 using buildroot. When a client tries to connect over ssh, the following message is logged to the console, and the client gets disconnected: [ 120.954119] audit: type=1326 audit(1599913110.890:2): auid=4294967295 uid=1001 gid=1001 ses=4294967295 pid=430 comm="sshd" exe="/usr/sbin/sshd" sig=31 arch=40000028 syscall=407 compat=0 ip=0xb6b5b080 code=0x0 [ 120.979667] audit: type=1701 audit(1599913110.910:3): auid=4294967295 uid=1001 gid=1001 ses=4294967295 pid=430 comm="sshd" exe="/usr/sbin/sshd" sig=31 res=1 After adding the audit package, ausearch -i has the following output: type=SECCOMP msg=audit(09/12/20 12:32:13.500:4) : auid=unset uid=sshd gid=sshd ses=unset pid=369 comm=sshd exe=/usr/sbin/sshd sig=SIGSYS arch=armeb syscall=unknown-syscall(407) compat=0 ip=0xb6b3f080 code=kill ---- type=ANOM_ABEND msg=audit(09/12/20 12:32:13.510:5) : auid=unset uid=sshd gid=sshd ses=unset pid=369 comm=sshd exe=/usr/sbin/sshd sig=SIGSYS res=yes When I attach strace to the running sshd process by running strace -y -p $(pgrep sshd), I get the following output: [pid 2248] write(5<socket:[8970]>, "\0\0\0\16ssh-connection\0\0\0\0", 22 <unfinished ...> [pid 2244] read(6<socket:[8971]>, <unfinished ...> [pid 2248] <... write resumed>) = 22 [pid 2244] <... read resumed>"\0\0\0\27", 4) = 4 [pid 2248] clock_gettime(CLOCK_BOOTTIME, <unfinished ...> [pid 2244] read(6<socket:[8971]>, <unfinished ...> [pid 2248] <... clock_gettime resumed>{tv_sec=1838, tv_nsec=947294512}) = 0 [pid 2244] <... read resumed>"\4\0\0\0\16ssh-connection\0\0\0\0", 23) = 23 [pid 2248] clock_nanosleep_time64(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=22439932944646645}, <unfinished ...> [pid 2244] poll([{fd=6<socket:[8971]>, events=POLLIN}, {fd=7<pipe:[8972]>, events=POLLIN}], 2, -1 <unfinished ...> [pid 2248] <... clock_nanosleep_time64 resumed> <unfinished ...>) = ? 
[pid 2244] <... poll resumed>) = 1 ([{fd=7, revents=POLLHUP}]) [pid 2244] read(7<pipe:[8972]>, <unfinished ...> [pid 2248] +++ killed by SIGSYS +++ The issue is also present when I build the system using GCC 9.3 and glibc 2.31. Is there a way to find out what this unknown syscall would be? Is there something missing from the kernel?
As user414777 commented, the missing syscall is clock_nanosleep_time64. This was originally added to the kernel on the 5.6 branch as part of the solution to the Year 2038 problem, and it was backported to every branch starting with 5.1. The GNU C Library started utilising these 64-bit time functions in v2.31, and the issue I encountered with OpenSSH is mentioned in the release notes: System call wrappers for time system calls now use the new time64 system calls when available. On 32-bit targets, these wrappers attempt to call the new system calls first and fall back to the older 32-bit time system calls if they are not present. This may cause issues in environments that cannot handle unsupported system calls gracefully by returning -ENOSYS. Seccomp sandboxes are affected by this issue. To resolve my issue, I could either: Update the kernel to at least 5.1 Downgrade glibc to 2.30 Patch glibc 2.32 to omit the time64 system calls Compile OpenSSH with a different sandbox I decided to go with the kernel update path, as this one seemed to be the most future-proof.
sshd disconnects after unknown syscall
1,587,375,393,000
I just stumbled upon a random question: does the Linux kernel swap out any memory pages even when there is still free memory available? I thought it does not in principle, but Linux distributions still usually ask for a dedicated swap partition at install time. To put this another way: can I leave out the swap partition entirely and still get a stable Linux system, given that I have a sufficient amount of main memory that I can never exhaust?
does the Linux kernel swap out any memory pages even when there are still some available memory spaces Check out /proc/sys/vm/swappiness (on Ubuntu at least). This value controls how aggressively the kernel swaps, and a high enough setting can mean that pages are swapped out even when free memory is still available. The real reasoning to find an optimal value for this heavily depends on the way the OS works, the available memory, and the processor itself. (My swappiness is set to 60.) From what I see in newer releases, Linux automatically creates a /swapfile (which grows in size to 1-2 GB) using available storage space, if no swap partition is specified. This does not explicitly exhaust your secondary storage but just makes your computer run more smoothly. Look at the output of ubuntu@ubuntu:/home/ubuntu$ swapon. Mine is: NAME TYPE SIZE USED PRIO /swapfile file 1.1G 123.5M -2 /dev/sda6 partition 2G 1.7G 1 This means you can pretty much "get the stable Linux system" without a swap partition. The only exception is that a swap partition makes reloading semi-saved (or unsaved) information easier when your OS hibernates or crashes and you switch to another OS in between. (I am unsure, but I think this is because the swap partition holds the hibernation image.)
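For reference, the current value can be read from /proc/sys/vm/swappiness, and a lower tendency to swap can be persisted with a sysctl fragment like the following (the file name and the value 10 are purely illustrative):

```
# /etc/sysctl.d/99-swappiness.conf  (hypothetical file name)
vm.swappiness = 10
```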
Does page swapping happen when the main memory is still available?
1,587,375,393,000
I have written my own init process (PID 1) for my system. To improve security, I decided to add hidepid=2 when mounting procfs at /proc (procfs is not mounted by default). After mounting procfs I ran the mount command to check that everything mounted fine with the given mount option, and I noticed hidepid=2 was not listed in the options. After some time I found that hidepid=2 only shows up in the list after a remount. I also confirmed the behaviour on the command line, like below: initially /proc was not mounted with procfs; executed mount -t proc -o hidepid=2 proc /proc; executed mount, which showed proc on /proc type proc (rw,relatime); executed mount -t proc -o remount,hidepid=2 proc /proc; executed mount, which showed proc on /proc type proc (rw,relatime,hidepid=2). Can anyone kindly explain why I was not able to mount procfs with hidepid=2 in a single attempt?
There is a commit in the Linux kernel (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=69879c01a0c3f70e0887cfb4d9ff439814361e46) that says: In addition removing the unnecessary complexity of the kernel mount fixes a regression that caused the proc mount options to be ignored. Now that the initial mount of proc comes from userspace, those mount options are again honored. This fixes Android's usage of the proc hidepid option. So it seems this is a bug in the Linux kernel and currently (https://github.com/torvalds/linux/commit/69879c01a0c3f70e0887cfb4d9ff439814361e46), it is fixed only in the release candidate tags of the v5.7 version (v5.7-rc4, v5.7-rc3, v5.7-rc2 and v5.7-rc1).
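On a kernel that contains the fix, the option is honored at the initial userspace mount, so an fstab entry like the following should be enough (a sketch; adjust it to however your init mounts filesystems):

```
# /etc/fstab
proc  /proc  proc  defaults,hidepid=2  0  0
```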
Why procfs mount option only working on remount?
1,562,858,941,000
It seems open-vm-tools-dkms was deprecated in Debian 9 (Stretch) and is no longer available in Debian 10 (Buster). What does this mean — is the functionality obsolete or has it moved to some other package?
The package provided the vmxnet driver, which is obsolete and replaced by the vmxnet3 driver which is part of the kernel. This is alluded to in the package description in Stretch: This package provides the source code for the vmxnet module, which was superseded by vmxnet3. You should only install this package if you know that you need the legacy vnxnet module. Kernel source or headers are required to compile it using DKMS. vmxnet3 is provided in the kernel packages in Debian, you don’t need a separate DKMS package to get it.
What has happened to the package open-vm-tools-dkms in Debian?
1,562,858,941,000
I cannot launch any program in my desktop environment. I get these errors in dmesg: traps: terminator[3670] trap int3 ip:374dda71261 sp:388624bbec0 error:0 traps: pcmanfm[3685] trap int3 ip:380699ca261 sp:3e15d350150 error:0 traps: audacious[3687] trap int3 ip:3636d699261 sp:3a18365ccb0 error:0 What do those messages mean, and how can I fix it? Everything worked until recently. I suspect it broke after a regular update (apt-get upgrade). I am using Debian Stretch and LXDE as my desktop environment.
This looks like a problem caused by filesystem or HDD failure.
unable to launch any program: trap int3 ip error
1,562,858,941,000
I'm trying to manually create my own custom USB drive, with a bunch of ISO files on it and a partition for data. I used the instructions I put here to create my key, but to sum up, I created: a partition /dev/sda1 for data; a partition /dev/sda2 that has GRUB installed; a partition /dev/sda3 that contains my ISO files in the folder linux-iso/. I put the following in the file grub2/grub/conf (on /dev/sda2): insmod loopback insmod iso9660 menuentry 'XUbuntu 16.04 "Xenial Xerus" -- amd64' { set isofile="/linux-iso/xubuntu-16.04.1-desktop-amd64.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/casper/vmlinuz.efi locale=fr_FR bootkbd=fr console-setup/layoutcode=fr iso-scan/filename=$isofile boot=casper persistent file=/cdrom/preseed/ubuntu.seed noprompt ro quiet splash noeject -- initrd (loop)/casper/initrd.lz } menuentry 'Debian 9.3.0 amd64 netinst test 3' { set isofile="/linux-iso/debian-9.3.0-amd64-netinst.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/install.amd/vmlinuz priority=low config fromiso=/dev/sdb3/$isofile initrd (loop)/install.amd/initrd.gz } This way, when I load Ubuntu everything works great... But when I load Debian it fails at the step "Configure CD-ROM", with the error: Incorrect CD-ROM detected. The CD-ROM drive contains a CD which cannot be used for installation. Please insert a suitable CD to continue with the installation." I also tried to mount /dev/sdb3 at /cdrom, but in that case I get an error on the next step, Load installer components from CD: There was a problem reading data from the CD-ROM. Please make sure it is in the drive. Failed to copy file from CD-ROM. Retry?" Do you know how to solve this problem? Thank you!
It seems that it isn't GRUB-related and that your conf is not at fault; it seems to be Debian-related, based on this article, citing textually: Now the first time I tried to boot the most recent Debian installer this way, I ran into a bit of a problem. It turns out that the initrd that comes on the ISO itself does not contain the installer scripts you need to install from an ISO on a hard drive. It assumes you will boot only off a DVD or USB disk. Because of that, I discovered I had to download a different Debian installer initrd and put it on the rescue disk for things to work. I was able to find an initrd that worked here. By here it means this file, but in your case it should be this other file. I suggest reading the full article and the parts about the issue. Good luck
Manually create grub entry for iso debian file : cannot copy cdrom
1,562,858,941,000
Suppose I passed the kernel a parameter that it doesn't understand, for example blabla or eat=cake. What would the kernel do with these unknown parameters? The traditional behaviour would be to pass any unknown parameter on to init. In case the Linux kernel starts with an early user space (initramfs), would it pass the parameter to /init in the initramfs?
From the kernel documentation: The kernel parses parameters from the kernel command line up to --; if it doesn't recognize a parameter and it doesn't contain a ., the parameter gets passed to init: parameters with = go into init's environment, others are passed as command line arguments to init. Everything after -- is passed as an argument to init. This also applies to /init on an initramfs. In the source code, both the initramfs's /init and the final root's /sbin/init (or other locations) are invoked via run_init_process which uses the same arguments (apart from argument 0 which is the path to the executable). I can't find it stated in the documentation but kernel interfaces are stable so this won't change. Note that this does not apply to /linuxrc on an initrd. This one is invoked with no arguments, but with the same environment as /init and /sbin/init. It can mount the proc filesystem and read /proc/cmdline to see the kernel command line arguments.
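The splitting rule can be illustrated with a small shell sketch. This is a toy model, not the kernel's actual parser, and it ignores the dotted module-parameter case; the command line below deliberately contains only unknown parameters:

```shell
cmdline="blabla eat=cake -- extra"

init_env=""
init_args=""
after_dashes=0
for word in $cmdline; do
    if [ "$after_dashes" -eq 1 ]; then
        init_args="$init_args $word"        # everything after -- goes to init verbatim
    elif [ "$word" = "--" ]; then
        after_dashes=1
    elif [ "${word#*=}" != "$word" ]; then
        init_env="$init_env $word"          # name=value goes into init's environment
    else
        init_args="$init_args $word"        # bare words become command-line arguments
    fi
done
echo "env:$init_env"                        # prints: env: eat=cake
echo "args:$init_args"                      # prints: args: blabla extra
```

So blabla and extra would show up as arguments to /init, while eat=cake would appear in its environment.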
What does the Linux kernel do with unknown kernel parameters?
1,562,858,941,000
I was revising Linux system calls. I found that a few calls are unimplemented system calls. For example: afs_syscall. I don't understand why they are included in the man pages if they are not yet available. They are not implemented in the kernel. So who will implement them? Will they be available in future kernel releases? Or does the user have to implement them? Or will distributions implement them? Are they really necessary? What is the use of unimplemented system calls? If someone implemented these calls, how can I know they are implemented, what arguments do I have to pass, and what will they return?
Most of them used to be implemented at some point in Linux kernel history, but some, like at least vserver, are still implemented in specific kernels. The majority of these calls are now essentially obsolete, but their slots remain and contain a stub whose role is not to break old code, and to allow a re-implementation in a specialized or new kernel should it be needed.
What are unimplemented system calls?
1,562,858,941,000
I have checked out the Linux kernel git repository: git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git I know how to use git log, git show and similar commands to see changes/commits in the main kernel tree. For my particular purpose, however, I am only interested in changes in the 3.18 kernel tree. How can I see only changes relating to 3.18? How can I see, for example, which files have been changed between 3.18.6 and 3.18.7?
I would rather clone this repository and then do git diff --stat: $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git $ cd linux-stable/ $ git diff --stat v3.18.6 v3.18.7
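To list just the changed paths rather than the diffstat, git diff --name-only works between any two tags. Here is a self-contained sketch using a throwaway repository, where two local tags and a hypothetical file stand in for v3.18.6, v3.18.7 and a real kernel source file:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m 'base release'
git tag v3.18.6                                     # stand-in for the older tag
echo 'bug fix' > drivers_net_fix.c                  # hypothetical changed file
git add drivers_net_fix.c
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q -m 'fix driver'
git tag v3.18.7                                     # stand-in for the newer tag
changed=$(git diff --name-only v3.18.6 v3.18.7)     # just the file names
echo "$changed"                                     # prints: drivers_net_fix.c
```

In the real linux-stable clone, the same git diff --name-only v3.18.6 v3.18.7 prints the list of files changed between those releases.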
git: show which files have changed between kernel 3.18.6 and 3.18.7
1,562,858,941,000
While I was reading the dmesg log just to check that everything is fine, I came across: [ 18.956187] [drm] Wrong MCH_SSKPD value: 0x16040307 [ 18.956190] [drm] This can cause pipe underruns and display issues. [ 18.956192] [drm] Please upgrade your BIOS to fix this. It looks like it does not cause problems on my laptop, but what does this message stand for? What can it cause? Where can I read more about MCH_SSKPD?
Dissecting the acronym, I get that MCH stands for 'Memory Controller Hub', which is an older name for the northbridge. This chip is part of your I/O controller hub. As for SSKPD, there is not much information I can find other than what is in various Intel manuals. Here is a snippet from one of them: SSKPD — Sticky Scratchpad Data Register This register holds 64 writable bits with no functionality behind them. It is for the convenience of BIOS and graphics drivers. Unfortunately this doesn't give much information on what it is. According to Wikipedia, a scratchpad is a "special high-speed memory circuit used to hold small items of data for rapid retrieval." Another piece of information is the log from the commit that added the warning: drm/i915: detect wrong MCH watermark values Some early bios versions seem to ship with the wrong tuning values for the MCH, possible resulting in pipe underruns under load. Especially on DP outputs this can lead to black screen, since DP really doesn't like an occasional whack from an underrun. Unfortunately the registers seem to be locked after boot, so the only thing we can do is politely point out issues and suggest a BIOS upgrade. Arthur Runyan pointed us at this issue while discussion DP bugs - thus far no confirmation from a bug report yet that it helps. But at least some of my machines here have wrong values, so this might be useful in understanding bug reports. v2: After a bit more discussion with Art and Ben we've decided to only the check the watermark values, since the OREF ones could be be a notch more aggressive on certain machines. So seemingly the value of the register has some meaning on some processors. There isn't anything I can find on the internet at this time which explains exactly what could go wrong by it having the wrong value, but I think this gives a good overall idea. If you really want to dig further, you could email one of the guys who wrote or reviewed the commit.
What is MCH_SSKPD warning in dmesg?
1,562,858,941,000
Is there any proper way to build a minimal kernel for FreeBSD? The FreeBSD Handbook lacks information about this. By default the /boot/kernel directory is pretty big - around 450 MB. I want to minimize the kernel footprint and remove all unnecessary kernel modules and options. Should I use the "NO_MODULES" option in /etc/make.conf? Or use C compilation flags?
There are a number of things you can do to reduce the size and number of files in /boot/kernel. Possibly the best space saving is to be had by setting WITHOUT_KERNEL_SYMBOLS in /etc/src.conf (if this file doesn't already exist, just create it), and the next time you installkernel, the debug symbol files won't be installed. It's safe to delete them now, if you need the space immediately (rm /boot/kernel/*.symbols) There are a few make.conf settings that control what modules are built: NO_MODULES - disable building of modules completely MODULES_OVERRIDE - specify the modules you want to build WITHOUT_MODULES - list of modules that should not be built The NO_MODULES option is probably a bit too heavy-handed, so a judicious combination of the other two is a better choice. If you know exactly which modules you want, you can simply set them in MODULES_OVERRIDE. Note that WITHOUT_MODULES is evaluated after MODULES_OVERRIDE, so any module named in both lists will not be built. If you really want to suppress building of all modules, you can use NO_MODULES, and ensure that all required drivers and modules are statically compiled into the kernel. Each driver's manpage shows the appropriate lines to add to your kernel config file, so you should be able to figure out what you need. If you still find that space is a problem, or if you just want to strip down the kernel as much as possible, you can edit your kernel config to remove any devices and subsystems your machine doesn't support, or which you are sure you won't want to use. The build system is pretty sensible, and if you inadvertently remove a module required by one still active in the config, you will get a failed build and an error message explaining what went wrong. Although it can be extremely tedious, the best approach is to take small steps, removing one or two things at a time and ensuring that the resultant configuration both builds and boots correctly. 
Whatever you do, though, I highly recommend you make a copy of /usr/src/sys/<arch>/conf/GENERIC, and edit the copy. If you ever get so muddled that the only recourse is to start from the default config, you'll be glad you've still got the GENERIC file on your system! In order to build your custom kernel, you can either pass the name of the config on the command line as make KERNCONF=MYKERNCONF buildkernel, or you can set KERNCONF in /etc/make.conf. Make sure you place the custom config file in /usr/src/sys/<arch>/conf and the build system will be able to find it.
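Putting the pieces together, the relevant fragments might look like this (the module names are only examples; pick the ones your hardware actually needs):

```
# /etc/src.conf
WITHOUT_KERNEL_SYMBOLS=yes

# /etc/make.conf
KERNCONF=MYKERNCONF
MODULES_OVERRIDE=if_em usb      # build only these modules (hypothetical list)
```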
How to properly build a minimal FreeBSD kernel?
1,562,858,941,000
I have found a number of examples of how to prevent a kernel module from being loaded. One of them is about the USB storage module. The following command was provided: echo "blacklist usb-storage" | sudo tee -a /etc/modprobe.d/blacklist.conf Then I decided to check the name with lsmod, so I plugged a USB stick in and found it to be different: Module Size Used by usb_storage 62209 1 My question is: what spelling should I use in the blacklist, usb-storage or usb_storage? I wonder whether it was one name for an earlier kernel, and changed to another one for a later kernel. Currently I am running kernel version 3.13.0-30-generic
Module names may contain both - and _. The two symbols can be used interchangeably with modprobe and lsmod, and also in the conf files in /etc/modprobe.d/. That means you can use either usb_storage or usb-storage for blacklisting.
Which kernel module name is currently correct "usb-storage" or "usb_storage"?
1,562,858,941,000
I am trying to set the oom_adj value for the out of memory killer, and each time I do (regardless of the process) I get back exactly one less than I set (at least for positive integers. I haven't tried negative integers since I want these processes to be killed by OOM Killer first). [root@server ~]# echo 10 > /proc/12581/oom_adj [root@server ~]# cat /proc/12581/oom_adj 9 [root@server ~]# echo 9 > /proc/12581/oom_adj [root@server ~]# cat /proc/12581/oom_adj 8 [root@server ~]# echo 8 > /proc/12581/oom_adj [root@server ~]# cat /proc/12581/oom_adj 7 [root@server ~]# echo 7 > /proc/12581/oom_adj [root@server ~]# cat /proc/12581/oom_adj 6 [root@server ~]# Is this expected behavior? If not, why is this happening?
oom_adj is deprecated and provided for legacy purposes only. Internally Linux uses oom_score_adj which has a greater range: oom_adj goes up to 15 while oom_score_adj goes up to 1000. Whenever you write to oom_adj (let's say 9) the kernel does this: oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE; and stores that to oom_score_adj. OOM_SCORE_ADJ_MAX is 1000 and OOM_DISABLE is -17. So for 9 you'll get oom_adj=(9 * 1000) / 17 ~= 529.411 and since these values are integers, oom_score_adj will hold 529. Now when you read oom_adj the kernel will do this: oom_adj = (task->signal->oom_score_adj * -OOM_DISABLE) / OOM_SCORE_ADJ_MAX; So for 529 you'll get: oom_adj = (529 * 17) / 1000 = 8.993 and since the kernel is using integers and integer arithmetic, this will become 8. So there... you write 9 and you get 8 because of fixed point / integer arithmetic.
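The round trip can be reproduced with plain integer arithmetic, using the constants quoted above from the kernel source:

```shell
OOM_SCORE_ADJ_MAX=1000
OOM_DISABLE=-17

oom_adj=9
# write path: what the kernel stores in oom_score_adj
oom_score_adj=$(( (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE ))
# read path: what you get back when reading oom_adj
readback=$(( (oom_score_adj * -OOM_DISABLE) / OOM_SCORE_ADJ_MAX ))
echo "wrote $oom_adj, stored $oom_score_adj, read back $readback"
# prints: wrote 9, stored 529, read back 8
```

The truncation happens in both directions, which is why writing 9 reads back as 8. Writing to oom_score_adj directly avoids the lossy conversion entirely.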
OOM Killer value always one less than set
1,562,858,941,000
I'm learning Unix from the book on Unix architecture by Maurice J. Bach. My confusion is with the concept of the kernel. I understand it's the operating system and that it is a process. But when my teacher teaches, he says a system call results in a process going from user mode to kernel mode. What actually happens in a system call? Does the user process go to sleep while the operating system executes the call on its behalf and returns the value to the user process, or does the user process itself execute in kernel mode? If the latter is correct, what does that mean?
What is a kernel? In the sense of your question, it is a single large program that runs at a special privilege level on the processor. It provides all of the core operating system facilities: multitasking, IPC, file systems, etc. It is also the process that runs the device drivers, which in turn control the computer's hardware on behalf of the kernel. "I understand it's the operating system" Actually, no. An operating system is much more than just the kernel. Even back in the days when Maurice Bach was writing his book, the OS included shells, compilers, utilities, text editors, etc. Over time, the term OS has come to include even more things, like the GUI subsystem. It's a personal decision where you draw the line between the OS and normal user programs. Most people would agree that a GUI word processor is not part of the OS even if it was installed along with the OS proper. But many would also agree that the plain text editor that came with the OS is part of the OS. Many in that camp would also agree that the markup processors that come with the OS — troff, TeX, etc. — are also considered OS facilities these days. But combine a text editor and a markup processor, and you have something indistinguishable from a word processor in some ways. Drawing a stark line that everyone can agree on is impossible. "it is a process" Not really, no. A microkernel architecture is as close as you're going to get to making that statement true. Even then, the kernel is a collection of processes, one of which is special in that it is the one that can run all of the other processes. So even in that case, there is still a core — a tiny kernel — that cannot itself be said to be a normal process. In the case of a monolithic kernel, the kernel is in a special position, and it runs all of the processes. "What actually happens in a system call?" Read the rest of Bach's book.
You will notice that this answer has many Wikipedia links, and most of the articles I link to are long and complicated, with many more links leading off. This is because you've basically asked us to distill a very complicated topic into a simple answer. There is not a simple answer, so I have tried to provide a guide to the answers, plural. Does the user process go to sleep and the operating System Executes it on behalf of User process and returns the value to user process…? In a classic monolithic kernel as discussed by Bach, yes. Modern systems fuzz this simple picture, though. First, the "operating system" doesn't execute the system call, the kernel does. I am not just being pedantic. Since the huge bulk of a modern OS is composed of assorted user-space programs, and modern OSes are multitasking, you can't say that the OS just stops and runs the system call. The OS may be doing many things at the same time, one of which is handling a single user program's system call. But second, and this is much more important, modern OS kernels are no longer single-tasking programs that handle one system call at a time. An OS may be in the middle of many system calls at once. A single-tasking user space program that makes a system call may perceive that the world stops until the system call finishes, but the kernel may be doing many other things while that system call proceeds. Even in the case of an old-style single-tasking kernel, you had things like driver top- and bottom-halves, which allowed the kernel to go off and handle things like disk I/O in order to provide low-level service to a relatively high-level system call like open(2). or the user process executes in kernel Mode? You could look at it that way, but it's only true in the same sort of way that my web browser and Stack Exchange are the same program because they are interoperating to provide a single cohesive experience.
what is a Kernel? [closed]
1,562,858,941,000
Here is my problem. My laptop has an NVidia GeForce 310M graphics card, and since kernel 2.6 I have had problems with the display. I had to use the nomodeset option in GRUB; if not, my screen would be off when Xorg started even though the computer was running. Fortunately, this problem was fixed in early 3.0 kernels. It was fine until kernel 3.4.x, in which the problem came back. So I've been using the old kernel (3.3.x) from my GRUB list, and reported a bug at bugzilla, but they never replied to my questions about the bug. Now, the real problem is that the update manager says there are software updates, and it wants to remove the old kernel and install some 3.4.x kernel, which I'm not sure has the problem fixed. If I don't get a display, I will lose my Fedora and will have to re-install some other OS, or an OS with kernel 3.3.x. What should I do?
I think this is a tricky question... There's no way we could recommend updating or not without being in the same situation as you are right now. @darnir made a good suggestion, and my approach would be really close to it: upgrade, but keep your old kernel close enough so you can always go back to it, either by downloading the RPM file or by backing up the kernel itself, its initramfs and its modules. Also have a look at how you can protect a package from being removed: http://docs.fedoraproject.org/en-US/Fedora/14/html/Software_Management_Guide/ch04s08s07.html Also... I was just wondering, which driver are you using? Open source (nouveau) or closed source (NVidia)? Have you tried both? Maybe your bug is only hit on one of them. I agree with @darnir, your bug report might be helpful.
Should I update my Fedora or not?
1,562,858,941,000
From here: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot A lot of these bugs end up being a broken initrd due to bugs in mkinitrd. Get the user to attach their initrd for their kernel to the bz, and also their /etc/modprobe.conf, or have them examine the contents themselves if they are capable of that. Picking apart the initrd of a working and failing kernel and doing a diff of the init script can reveal clues. To take apart an initrd, do the following .. mkdir initrd cd initrd/ gzip -dc /boot/initrd-2.6.23-0.104.rc3.fc8.img | cpio -id I wish to understand what exactly is being done here. What has initrd to do with anything? Where are we supposed to create the directory initrd?
An initrd (short for "initial RAM disk") is a filesystem that's mounted when the Linux kernel boots, before the "real" root filesystem. This filesystem is loaded into memory by the bootloader, and remains in memory until the real boot. The kernel executes the program /linuxrc on the initrd; its job is to mount the real root, and when /linuxrc terminates the kernel runs /sbin/init. A bug somewhere in the initrd can explain why the system doesn't boot. So the document you link to recommends that you compare your initrd with an official one if you have trouble booting. In the provided instructions, initrd is just some temporary directory; you can call it anisha_initrd or fred if you like. The initrd is stored in the file /boot/initrd-SOMETHING.img as a gzipped cpio archive; the instructions unpack that archive in the temporary directory you created. After unpacking, you can compare it with an official initrd (unpack the official initrd and run a command like diff -ru /path/to/official_initrd /path/to/anisha_initrd).
Kernel Panic - Can't find root filesystem / error mounting /dev/root
1,562,858,941,000
Can anyone tell me why, on a preemptive kernel, PAE would not work? This is an exam question; however, I haven't got a clue why it would not work.
The clue likely lies here, from O'Reilly's Understanding the Linux Kernel: "Some real-time operating systems feature preemptive kernels, which means that a process running in Kernel Mode can be interrupted after any instruction, just as it can in User Mode. The Linux kernel is not preemptive, which means that a process can be preempted only while running in User Mode; nonpreemptive kernel design is much simpler, since most synchronization problems involving the kernel data structures are easily avoided (see the section "Nonpreemptability of Processes in Kernel Mode" in Chapter 11, Kernel Synchronization)." I'm betting it's difficult to keep page tables in proper order when user processes can interrupt kernel processes.
Preemptive kernel and Physical Address Extension
1,562,858,941,000
As I've already written in the title, I compiled a new kernel with make defconfig; the bzImage is where it should be, and so is vmlinux.bin. I've installed the modules with make modules_install. Now, what is the next step? Should I rename bzImage to my liking and put it into /boot? And how do I create an initramfs? vmlinux.bin is executable; is that my kernel? I'm using GRUB, and I'm quite familiar with using and configuring it. But I'm having a hard time putting the kernel together.
Once you've run make modules_install, the next steps are:

make install

This will take care of moving the bzImage, System.map and .config to /boot with the right names, e.g. config-2.6.39-rc1, System.map-2.6.39-rc1, etc.

The next step is to build the initramfs. That depends on the distro. On a Debian-like distro, it would be mkinitramfs -c -k 2.6.39-rc1; on a RH-like distro, it would be mkinitrd /boot/initrd-2.6.39-rc1.img 2.6.39-rc1.

Add the new kernel to your boot loader; on a modern distro, that would be a simple update-grub.

Note: make defconfig may generate a kernel that lacks the proper drivers for your hardware. Safer alternatives would be to either copy the .config of your currently running kernel (look in /boot or /proc/config.gz), or to manually determine the necessary drivers by hand and run make xconfig.

Note 2: -rc1 is very fresh; expect it to contain bugs.
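The sequence above can be put together as a dry-run shell sketch. KVER is an assumed example version string (on a real build it comes from make kernelrelease); every command is only echoed here, so drop the echo and run as root to actually install:

```shell
# Dry-run sketch of the steps after `make modules_install`, for a
# Debian-like system. KVER is an assumed example version; each command
# is echoed rather than executed, since all of them need root.
KVER="2.6.39-rc1"
echo "make install"                 # copies bzImage, System.map, .config to /boot
echo "mkinitramfs -c -k $KVER"      # Debian-like; RH-like: mkinitrd /boot/initrd-$KVER.img $KVER
echo "update-grub"                  # regenerate the GRUB menu
```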
Compiled a Kernel (2.6.39-rc1), where is the corresponding initramfs?
1,562,858,941,000
nc -l -u 6666 on the receiving machine gets no messages from netconsole. Tested by doing "echo test > /dev/kmsg". I am able to connect with netcat by doing "nc -u 10.0.0.192 6666" on the netconsole machine. "sudo tcpdump -i wlp170s0 -n -e port 6666" outputs nothing on the listening machine.

netconsole options:

modprobe netconsole [email protected]/enp8s0,[email protected]/54:14:f3:52:82:94 oops_only=0

ifconfig on netconsole machine:

enp8s0    Link encap:Ethernet  HWaddr 70:85:C2:D7:65:F3
          inet addr:10.0.0.42  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1032 errors:0 dropped:0 overruns:0 frame:0
          TX packets:791 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:95338  TX bytes:230456

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Bcast:0.0.0.0  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0  TX bytes:0

ifconfig on listening machine:

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING MULTICAST  MTU:65536  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:400 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:46289  TX bytes:46289

nfs       Link encap:(hwtype unknown)
          inet addr:10.8.0.3  P-t-P:10.8.0.3  Mask:255.255.255.0
          UP POINTOPOINT RUNNING NOARP  MTU:1420  Metric:1
          RX packets:7418 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22098 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1806372  TX bytes:26188072

wlp170s0  Link encap:Ethernet  HWaddr 54:14:F3:52:82:94
          inet addr:10.0.0.192  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2771549 errors:0 dropped:54 overruns:0 frame:0
          TX packets:1029444 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3024953926  TX bytes:153598327
It turns out you have to increase the verbosity (the console log level) with dmesg -n 8
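A plausible reading of why this works: netconsole is registered as a console, and the console log level decides which messages reach consoles at all, so low-priority messages are silently filtered before they ever hit the network. A small runnable sketch (with a hard-coded fallback in case /proc/sys/kernel/printk is unreadable in the current environment):

```shell
# The first field of /proc/sys/kernel/printk is the current console
# log level; only messages with a priority number below it reach
# consoles, including netconsole. `dmesg -n 8` raises it to the
# maximum so that everything, down to debug messages, is forwarded.
levels=$(cat /proc/sys/kernel/printk 2>/dev/null || echo "4 4 1 7")
console_loglevel=$(echo "$levels" | awk '{print $1}')
echo "current console log level: $console_loglevel"
echo "to forward everything over netconsole, run (as root): dmesg -n 8"
```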
Unable to get output from netconsole
1,562,858,941,000
Is linux-next going to become the mainline kernel later? If yes, is it the one that first turns into RCs and eventually the stable kernel version?
Not quite; linux-next is described as the holding area for patches aimed at the next kernel merge window. It gives an indication of what’s likely to be in the next release, but a patch’s presence in linux-next doesn’t guarantee that Linus will merge it into the main tree, nor does a patch’s absence mean that it won’t be included in the main tree.

If you compare the linux-next and mainline trees, you’ll see that merges into the former are done by a number of developers, whereas merges into the latter are done by Linus Torvalds. linux-next serves two major purposes: it allows subsystem maintainers to minimise the risk of merge conflicts during the merge window, and it allows developers to base their work on the very latest code queued for the forthcoming release.

Mainline isn’t built from linux-next; Linus merges pull requests from maintainers during the merge windows, from the maintainers’ trees. (This is finishing up right now for 5.19.) When the merge window closes, the tip of mainline becomes rc1.
Linux-next kernel version of the kernel tree
1,562,858,941,000
After a few weeks of not having time for games, today I am unable to launch Steam. I do not remember some bigger change in the system = Linux Mint 20 than upgrading my Mint today from version 20.2 to 20.3. Here is dmesg snippet: $ \dmesg --human --color=auto --ctime [Fri Jan 7 19:41:33 2022] ------------[ cut here ]------------ [Fri Jan 7 19:41:33 2022] kernel BUG at drivers/gpu/drm/drm_gem.c:154! [Fri Jan 7 19:41:33 2022] invalid opcode: 0000 [#1] SMP PTI [Fri Jan 7 19:41:33 2022] CPU: 6 PID: 5273 Comm: steam Tainted: P OE 5.4.0-92-generic #103-Ubuntu [Fri Jan 7 19:41:33 2022] Hardware name: Dell Inc. Inspiron 7577/0J8HMF, BIOS 1.15.0 10/08/2021 [Fri Jan 7 19:41:33 2022] RIP: 0010:drm_gem_private_object_init+0xa2/0xb0 [drm] [Fri Jan 7 19:41:33 2022] Code: 00 31 c0 c1 e9 03 f3 48 ab 48 c7 43 18 00 00 00 00 48 c7 83 c0 00 00 00 00 00 00 00 5b 41 5c 5d c3 4c 89 a3 f0 00 00 00 eb b2 <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 55 48 89 [Fri Jan 7 19:41:33 2022] RSP: 0018:ffffa720444a7d08 EFLAGS: 00010206 [Fri Jan 7 19:41:33 2022] RAX: ffff906ae37ca568 RBX: ffff906ae37ca558 RCX: 0000000000000200 [Fri Jan 7 19:41:33 2022] RDX: 0000000000000200 RSI: ffff906ae37ca400 RDI: ffff906b2c174000 [Fri Jan 7 19:41:33 2022] RBP: ffffa720444a7d30 R08: ffff906ae29f7908 R09: ffff906ae29f7908 [Fri Jan 7 19:41:33 2022] R10: ffff906b12cf4008 R11: 0000000000000001 R12: ffff906ae37ca400 [Fri Jan 7 19:41:33 2022] R13: 0000000000000200 R14: ffff906b2c174000 R15: ffff906ae239f800 [Fri Jan 7 19:41:33 2022] FS: 0000000000000000(0000) GS:ffff906b30380000(0063) knlGS:00000000f7821740 [Fri Jan 7 19:41:33 2022] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033 [Fri Jan 7 19:41:33 2022] CR2: 00000000587a3000 CR3: 0000000823210004 CR4: 00000000003606e0 [Fri Jan 7 19:41:33 2022] Call Trace: [Fri Jan 7 19:41:33 2022] ? nv_drm_gem_object_init+0x54/0x60 [nvidia_drm] [Fri Jan 7 19:41:33 2022] nv_drm_gem_import_nvkms_memory_ioctl+0xa7/0x100 [nvidia_drm] [Fri Jan 7 19:41:33 2022] ? 
nv_drm_dumb_create+0x1b0/0x1b0 [nvidia_drm] [Fri Jan 7 19:41:33 2022] drm_ioctl_kernel+0xae/0xf0 [drm] [Fri Jan 7 19:41:33 2022] ? _nv038665rm+0xac/0x1a0 [nvidia] [Fri Jan 7 19:41:33 2022] drm_ioctl+0x24a/0x3f0 [drm] [Fri Jan 7 19:41:33 2022] ? nv_drm_dumb_create+0x1b0/0x1b0 [nvidia_drm] [Fri Jan 7 19:41:33 2022] ? __check_object_size+0x13f/0x150 [Fri Jan 7 19:41:33 2022] ? nvidia_ioctl+0x39b/0x8d0 [nvidia] [Fri Jan 7 19:41:33 2022] drm_compat_ioctl+0xcb/0xe0 [drm] [Fri Jan 7 19:41:33 2022] __ia32_compat_sys_ioctl+0x194/0x220 [Fri Jan 7 19:41:33 2022] do_fast_syscall_32+0x9d/0x260 [Fri Jan 7 19:41:33 2022] entry_SYSENTER_compat+0x7f/0x91 [Fri Jan 7 19:41:33 2022] RIP: 0023:0xf7efdb49 [Fri Jan 7 19:41:33 2022] Code: c4 8b 04 24 c3 8b 14 24 c3 8b 1c 24 c3 8b 34 24 c3 8b 3c 24 c3 90 90 90 90 90 90 90 90 90 90 90 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00 [Fri Jan 7 19:41:33 2022] RSP: 002b:00000000ffa2ea18 EFLAGS: 00000282 ORIG_RAX: 0000000000000036 [Fri Jan 7 19:41:33 2022] RAX: ffffffffffffffda RBX: 0000000000000011 RCX: 00000000c0206441 [Fri Jan 7 19:41:33 2022] RDX: 00000000ffa2ead4 RSI: 0000000000000001 RDI: 000000005877af90 [Fri Jan 7 19:41:33 2022] RBP: 00000000f696eba4 R08: 0000000000000000 R09: 0000000000000000 [Fri Jan 7 19:41:33 2022] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 [Fri Jan 7 19:41:33 2022] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 [Fri Jan 7 19:41:33 2022] Modules linked in: rfcomm ccm xt_multiport xt_owner ip6_tables cmac algif_hash algif_skcipher af_alg bnep xt_limit ipt_REJECT nf_reject_ipv4 xt_conntrack xt_tcpudp xt_length xt_comment xt_u32 iptable_filter iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_mark xt_cgroup iptable_mangle iptable_raw bpfilter binfmt_misc nvidia_uvm(OE) nls_iso8859_1 joydev mei_hdcp intel_rapl_msr nvidia_drm(POE) nvidia_modeset(POE) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel 
dell_laptop kvm nvidia(POE) dell_smm_hwmon dell_wmi snd_hda_codec_hdmi iwlmvm mac80211 snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio libarc4 dell_smbios dcdbas rapl snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_seq_midi snd_seq_midi_event snd_rawmidi uvcvideo btusb videobuf2_vmalloc videobuf2_memops btrtl videobuf2_v4l2 btbcm snd_seq btintel videobuf2_common bluetooth videodev intel_cstate snd_seq_device mc cdc_acm ecdh_generic ecc snd_timer mxm_wmi [Fri Jan 7 19:41:33 2022] hid_multitouch iwlwifi input_leds serio_raw dell_wmi_descriptor wmi_bmof intel_wmi_thunderbolt cfg80211 snd processor_thermal_device soundcore intel_rapl_common ucsi_acpi mei_me typec_ucsi mei intel_soc_dts_iosf intel_pch_thermal typec int3403_thermal intel_hid int3402_thermal sparse_keymap int3400_thermal int340x_thermal_zone acpi_thermal_rel acpi_pad mac_hid sch_fq_codel msr parport_pc ppdev lp parport ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq libcrc32c usbhid hid_generic i915 i2c_algo_bit ahci crct10dif_pclmul drm_kms_helper crc32_pclmul ghash_clmulni_intel aesni_intel syscopyarea sysfillrect sysimgblt intel_lpss_pci i2c_hid fb_sys_fops crypto_simd intel_lpss nvme cryptd r8169 idma64 glue_helper psmouse drm nvme_core i2c_i801 libahci realtek virt_dma hid wmi video [Fri Jan 7 19:41:33 2022] ---[ end trace 34d885d0821661da ]--- [Fri Jan 7 19:41:33 2022] RIP: 0010:drm_gem_private_object_init+0xa2/0xb0 [drm] [Fri Jan 7 19:41:33 2022] Code: 00 31 c0 c1 e9 03 f3 48 ab 48 c7 43 18 00 00 00 00 48 c7 83 c0 00 00 00 00 00 00 00 5b 41 5c 5d c3 4c 89 a3 f0 00 00 00 eb b2 <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 55 48 89 [Fri Jan 7 19:41:33 2022] RSP: 0018:ffffa720444a7d08 EFLAGS: 00010206 [Fri Jan 7 19:41:33 2022] RAX: ffff906ae37ca568 RBX: ffff906ae37ca558 RCX: 0000000000000200 [Fri Jan 7 19:41:33 2022] RDX: 0000000000000200 RSI: ffff906ae37ca400 RDI: ffff906b2c174000 [Fri Jan 7 19:41:33 2022] RBP: ffffa720444a7d30 R08: 
ffff906ae29f7908 R09: ffff906ae29f7908 [Fri Jan 7 19:41:33 2022] R10: ffff906b12cf4008 R11: 0000000000000001 R12: ffff906ae37ca400 [Fri Jan 7 19:41:33 2022] R13: 0000000000000200 R14: ffff906b2c174000 R15: ffff906ae239f800 [Fri Jan 7 19:41:33 2022] FS: 0000000000000000(0000) GS:ffff906b30380000(0063) knlGS:00000000f7821740 [Fri Jan 7 19:41:33 2022] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033 [Fri Jan 7 19:41:33 2022] CR2: 00000000587a3000 CR3: 0000000823210004 CR4: 00000000003606e0 If I run steam from terminal, I get: $ steam steam.sh[7775]: Running Steam on linuxmint 20.3 64-bit steam.sh[7775]: STEAM_RUNTIME is enabled automatically setup.sh[7910]: Steam runtime environment up-to-date! steam.sh[7775]: Steam client's requirements are satisfied $ I did not change my kernel, but I remember updating it via regular updates. The only red line is: kernel BUG at drivers/gpu/drm/drm_gem.c:154! I am now running Nvidia drivers of 495.46 version. PS: On Windows 10 the Steam and games work.
The issue got resolved by reverting to Nvidia driver version 470.86. So, maybe the 495 driver branch is not game-ready yet. In any case, I am leaving the question here for other readers.
Unable to launch Steam on Nvidia 495 driver, 5.4 kernel, Linux Mint 20.3
1,562,858,941,000
Quoting the Linux kernel documentation for boot parameters:

pcie_bus_perf: Set device MPS to the largest allowable MPS based on its parent bus. Also set MRRS (Max Read Request Size) to the largest supported value (no larger than the MPS that the device or bus can support) for best performance.

I fail to understand why the MRRS should not be larger than the MPS "for best performance". I mean, if a device can do MPS=X and MRRS=4X, then read requests could be fewer in number, hence the bus less busy, compared to an MRRS=X situation, even if satisfying the request needs to be split into 4. Would the split induce some significant overhead somewhere? BTW, I know the concept of "fair sharing" and understand the impact of a large MRRS on that sharing, but I never understood fair sharing to be synonymous with best performance.
I hope you found the answer, but I found some information which could help. This kernel-mailing-list discussion and this article mention this issue, and the explanation is that by capping the MRRS you are ensuring that devices do not send out read requests whose completion packets are bigger than the MPS of the device sending the request. If you ensure that, every node is able to use the MPS of the node above as its own MPS (or the highest supported by the device, if that is lower than the MPS of the node above). So one node with a very low MPS cannot slow down the whole bus. This schematic from the discussion helped me a lot to understand the problem:

normal:

               root (MPS=128)
                     |
            ------------------
           /                  \
  bridge0 (MPS=128)     bridge1 (MPS=128)
      /                        \
  EP0 (MPS=128)           EP1 (MPS=128)

perf:

               root (MPS=256)
                     |
            ------------------
           /                  \
  bridge0 (MPS=256)     bridge1 (MPS=128)
      /                        \
  EP0 (MPS=256)           EP1 (MPS=128)

Here every node except EP1 is able to run with an MPS higher than 128 bytes.
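On a real system you can see the negotiated MPS and MRRS per device in the DevCtl lines of lspci -vv output. The sketch below parses a hard-coded sample line (the device name and values are illustrative, not from a real machine), so it runs without root or lspci; on a real system you would feed in lspci -vv instead:

```shell
# Parse MaxPayload (MPS) and MaxReadReq (MRRS) from lspci-style
# output. `sample` is an illustrative stand-in for `lspci -vv`.
sample='01:00.0 Ethernet controller: Example Vendor Example Device
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        MaxPayload 256 bytes, MaxReadReq 512 bytes'

mps=$(echo "$sample"  | grep -o 'MaxPayload [0-9]* bytes' | awk '{print $2}')
mrrs=$(echo "$sample" | grep -o 'MaxReadReq [0-9]* bytes' | awk '{print $2}')
echo "MPS=$mps bytes, MRRS=$mrrs bytes"
```

With pcie_bus_perf, you would expect the MRRS shown here never to exceed the MPS that the device's branch of the tree can support.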
pcie_bus_perf : Understanding the capping of MRRS
1,562,858,941,000
I am using macOS with SIP enabled, and I am trying to figure out why scripts run so slowly with SIP after a modification or creation. I found that if I modify a script with an editor like vim or nano and run it with ./script.bash, it takes about 1 second to finish the script the first time after each modification.

For example, if script.bash is:

#!/bin/bash
echo 1

and I change it to the following with vim, it takes far longer to run it the first time:

#!/bin/bash
echo 1
echo 2

bash-3.2$ time ./script.bash  # First time after modification by vim
1
2

real    0m0.884s
user    0m0.001s
sys     0m0.002s
bash-3.2$ time ./script.bash  # Second time after modification by vim
1
2

real    0m0.003s
user    0m0.001s
sys     0m0.002s

But if I instead append to the file with some command's output redirection, like echo "echo 3" >> script.bash, and still call the script with ./script.bash, the delay is gone.

bash-3.2$ echo "echo 3" >> script.bash
bash-3.2$ time ./script.bash  # First time after modification by echo
1
2
3

real    0m0.004s
user    0m0.001s
sys     0m0.002s
bash-3.2$ time ./script.bash  # Second time after modification by echo
1
2
3

real    0m0.002s
user    0m0.001s
sys     0m0.001s

So what's the difference between the two ways of writing a file? And why does the delay happen only with SIP enabled?
I found this article which I believe explains your issue. Apple has introduced notarization; setting aside the inconvenience this brings to us developers, it also results in a degraded user experience, as the first time a user runs a new executable, Apple delays execution while waiting for a reply from their server. This check for me takes close to a second. This is not just for files downloaded from the internet, nor is it only when you launch them via Finder; this is everything. So even if you write a one-line shell script and run it in a terminal, you will get a delay!

As for the notarization check, the result is cached, so the second invocation should be fast, but if you are a developer, you may update your scripts and binaries regularly, which triggers new checks (it appears caching is based on inode, so an update-in-place save may avoid triggering a new check), or you may have workflows that involve dynamically creating and executing scripts, whose performance now hinges on the responsiveness of Apple’s servers.

It seems that saving the file through an editor replaces it with a new inode, causing it to be checked again, whereas appending with a redirect writes to the existing inode and does not.
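The inode behaviour behind this is easy to demonstrate on any Unix. Appending with >> writes to the same inode, while an editor-style save ("write a temp file, then rename it over the original") produces a new inode, which is exactly what a per-inode cache would see as a brand-new file. (Vim's actual save strategy depends on its backupcopy setting, so this rename sequence is a stand-in for one common editor behaviour.)

```shell
# Demonstrate inode stability: append-in-place vs editor-style save.
tmp=$(mktemp -d)
printf '#!/bin/bash\necho 1\n' > "$tmp/script.bash"
inode_before=$(ls -i "$tmp/script.bash" | awk '{print $1}')

echo 'echo 2' >> "$tmp/script.bash"                  # append in place
inode_append=$(ls -i "$tmp/script.bash" | awk '{print $1}')

printf '#!/bin/bash\necho 1\necho 2\n' > "$tmp/script.bash.new"
mv "$tmp/script.bash.new" "$tmp/script.bash"         # editor-style save
inode_rename=$(ls -i "$tmp/script.bash" | awk '{print $1}')

[ "$inode_before" = "$inode_append" ]  && echo "append kept the inode"
[ "$inode_append" != "$inode_rename" ] && echo "rename produced a new inode"
```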
What's the difference between writing a file by editor like vim/nano and by output redirection in shell?
1,562,858,941,000
I'm familiar with iptables -F and other features of iptables. I need to disable iptables at the kernel level; of course, I would prefer to disable it via sysctl rather than recompiling the kernel.
If your kernel uses modules for iptables, which is the case in most distributions, you can blacklist the base module; create a file named, for example, /etc/modprobe.d/iptables-blacklist.conf, containing:

install ip_tables /bin/false

You can block other variants in a similar fashion (ip6_tables, ebtables, nf_tables etc.), or block x_tables to block ebtables, iptables, ip6tables and arptables in one go (but not nftables).
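As a sketch, creating the blacklist file looks like this. The target directory here is a throwaway stand-in for /etc/modprobe.d (writing the real file needs root); afterwards, a dry-run modprobe -n -v ip_tables on the real system should show the module being diverted to /bin/false:

```shell
# Create a modprobe "install" override that diverts the module load
# to /bin/false. demo_etc stands in for /etc/modprobe.d here.
demo_etc=$(mktemp -d)
cat > "$demo_etc/iptables-blacklist.conf" <<'EOF'
# Divert the base module so iptables cannot be loaded.
install ip_tables /bin/false
EOF
cat "$demo_etc/iptables-blacklist.conf"
echo "then verify on the real system with: modprobe -n -v ip_tables"
```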
How to disable iptables from kernel
1,562,858,941,000
The default Linux kernel config (at least in Ubuntu and Arch) is not as optimized for desktop responsiveness as linux-ck or Zen Kernel. The difference between those kernels and the default one is very noticeable when the CPU or disk load is high. Is there any way to make the default kernel better for UI responsiveness?
A/ It is not the kernel .config which makes your kernel less optimized for desktop responsiveness than linux-ck or Zen kernels. linux-ck and Zen kernels just use a different scheduler.

B/ The mainline kernel offers many ways to increase desktop responsiveness without the need to patch or reconfigure it.

Many tweaks can be activated using the cfs-zen-tweaks script, which you can download and just run, but I would advise you to read the very simple code and learn what each tweak does.

Don't hesitate to lower the priority of your CPU-bound processes (compilations, simulations...) and increase the priority of your interactive tasks with the renice command, and even change their scheduling policy using chrt.

Ultimately, you can always pin interrupts to dedicated CPUs (by setting the desired values in /proc/irq/[irq_id]/smp_affinity): one in charge of the keyboard and mouse, another for the graphics adapter, a third for the sound card, and a fourth housekeeping one for everything else.

Plenty of solutions are left open without changing a byte in your distro kernel. Have fun!
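The priority trick can be sketched as a runnable example. It starts a background task deprioritised with nice (the unprivileged direction) and reads the nice value back from /proc; the renice and chrt invocations for already-running tasks are only echoed, since boosting priority needs root:

```shell
# Start a deprioritised background task and verify its nice value.
# Field 19 of /proc/<pid>/stat is the process's nice value.
nice -n 10 sleep 30 &
pid=$!
sleep 1                                   # let nice(1) exec into sleep
nice_val=$(awk '{print $19}' "/proc/$pid/stat")
echo "background task runs at nice $nice_val"
kill "$pid" 2>/dev/null

# For running tasks (root needed to raise priority); dry run only:
echo "renice -n -5 -p <pid-of-interactive-task>   # boost an interactive task"
echo "chrt -o -p 0 <pid>                          # reset policy to SCHED_OTHER"
```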
Optimizing Linux for desktop responsiveness without thirdparty kernels
1,562,858,941,000
I am confused about which is the proper formatting to use within mkinitcpio.conf ... I have noticed sometimes double quotes are used and other times parentheses to close off the user's desired hooks, modules, etc. settings. Example:

HOOKS="base udev autodetect block filesystems"
HOOKS=(base udev autodetect block filesystems)

So which format is the right one to use?
This is the old style: HOOKS="base udev autodetect block filesystems" And this is the current new style: HOOKS=(base udev autodetect block filesystems) see also arrayize config vars in mkinitcpio.conf This change was done in 2017, so you should see the old style only in older installs. Both styles work so don't worry too much about it.
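The difference is visible from the shell itself: mkinitcpio.conf is sourced as shell code, so the old style is a single string while the new style is a real bash array. A quick runnable sketch:

```shell
# mkinitcpio.conf is sourced as shell code, so the two styles differ
# in type: a plain string vs a bash array.
HOOKS_OLD="base udev autodetect block filesystems"
HOOKS_NEW=(base udev autodetect block filesystems)

echo "old style: one string containing $(echo "$HOOKS_OLD" | wc -w) words"
echo "new style: array with ${#HOOKS_NEW[@]} elements; second element: ${HOOKS_NEW[1]}"
```

mkinitcpio accepts both, which is why the old quoted form keeps working on older installs.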
Which format is the correct format to use for mkinitcpio.conf?
1,562,858,941,000
Content of hostname file in my server is: # cat /etc/hostname sub.mysite.com But when I ping my CentOS 7 server it says: # ping sub.mysite.com 64 bytes from sub ... Even: # ping ns1.mysite.com 64 bytes from sub ... How can I tell my server to have the following output when pinging? 64 bytes from sub.mysite.com ... UPDATE For example on my client: user@host:~$ ping ns1.mysite.com PING ns1.mysite.com (x.x.x.x) 56(84) bytes of data. 64 bytes from sub (x.x.x.x): icmp_seq=1 ttl=56 time=7.88 ms 64 bytes from sub (x.x.x.x): icmp_seq=2 ttl=56 time=5.86 ms 64 bytes from sub (x.x.x.x): icmp_seq=3 ttl=56 time=4.99 ms 64 bytes from sub (x.x.x.x): icmp_seq=4 ttl=56 time=4.88 ms I want to have full hostname (sub.mysite.com) rather than sub.
ping doesn't use /etc/hostname to resolve IP-to-name mappings; it uses the Name Service Switch (NSS) to do these translations. Incidentally, /etc/hostname is part of systemd:

$ rpm -qf /etc/hostname
systemd-219-42.el7_4.10.x86_64

That short name you're seeing, sub, is coming from your /etc/hosts file via the Name Service Switch. If you use strace you can see how ping is finding sub:

$ strace -s 2000 ping -c1 www.google.com |& grep /etc/host
open("/etc/host.conf", O_RDONLY|O_CLOEXEC) = 4
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4

So the easy way to solve your problem is to put the name of your server, as you want ping to display it, in your /etc/hosts file.

Example

$ ping -c1 www.google.com
PING www.google.com (74.125.141.99) 56(84) bytes of data.
64 bytes from vl-in-f99.1e100.net (74.125.141.99): icmp_seq=1 ttl=63 time=109 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 109.903/109.903/109.903/0.000 ms

Now if we were to add that IP, 74.125.141.99, to your /etc/hosts file, we could manipulate ping into showing whatever we want for it.

Add this to /etc/hosts:

74.125.141.99 blah.blah.com

Now repeat our test:

$ ping -c1 www.google.com
PING www.google.com (74.125.141.99) 56(84) bytes of data.
64 bytes from blah.blah.com (74.125.141.99): icmp_seq=1 ttl=63 time=109 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 109.886/109.886/109.886/0.000 ms

Order of /etc/hosts

Keep in mind that the order the hosts are listed in /etc/hosts can cause the names to show up as you were seeing. For example, if we had this in our /etc/hosts:

74.125.141.99 blah blah.blah.com

the ping would show up as you were seeing, with a short name:

$ ping -c1 www.google.com
PING www.google.com (74.125.141.99) 56(84) bytes of data.
64 bytes from blah (74.125.141.99): icmp_seq=1 ttl=63 time=108 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 108.065/108.065/108.065/0.000 ms

References

/etc/resolv.conf order not respected by ping
How to change `ping from` value when pinging Linux server?
1,562,858,941,000
How do I get the value of PAGE_CACHE_SIZE that is mentioned in man mount? man mount: Mount options for tmpfs size=nbytes Override default maximum size of the filesystem. The size is given in bytes, and rounded up to entire pages. The default is half of the memory. The size parameter also accepts a suffix % to limit this tmpfs instance to that percentage of your physical RAM: the default, when neither size nor nr_blocks is specified, is size=50%. nr_blocks= The same as size, but in blocks of PAGE_CACHE_SIZE
Page cache is the place in RAM where file data is kept before being written to disk or after being read from disk. It reduces delays for I/O operations to/from SSDs, HDDs, CDs, etc. tmpfs is a filesystem that lives permanently in RAM, so tmpfs lives in the page cache.

So the page cache lives in RAM and consists of pages. A page is the minimum chunk of memory the OS can handle, and its size depends on the hardware (the MMU, memory management unit, in the CPU). Operations on memory are usually rounded to the page size.

Get the page size (one of the ways):

$ getconf PAGESIZE
4096

For nr_blocks in the mount command, each block is PAGE_CACHE_SIZE, which equals the page size, so nr_blocks is effectively a count of pages. (In the kernel, PAGE_CACHE_SIZE was always equal to PAGE_SIZE, and the macro was eventually removed in Linux 4.6.) It's easy to check:

# mkdir /mnt/trash
# mount -t tmpfs -o nr_blocks=1 tmpfs /mnt/trash/
$ mount | grep trash
tmpfs on /mnt/trash type tmpfs (rw,relatime,size=4k)
$ df -h | grep trash
tmpfs           4.0K     0  4.0K   0% /mnt/trash
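The size arithmetic can be checked without mounting anything: the tmpfs size is simply nr_blocks times the page size. This reproduces the 4k result from the mount example on a machine with 4096-byte pages (on hardware with a different page size, e.g. 16384, the product scales accordingly):

```shell
# nr_blocks counts pages, so tmpfs size = nr_blocks * page size.
page_size=$(getconf PAGESIZE)
nr_blocks=1
tmpfs_bytes=$((nr_blocks * page_size))
echo "page size: $page_size bytes; nr_blocks=$nr_blocks -> tmpfs size: $tmpfs_bytes bytes"
```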
How do I get the value of 'PAGE_CACHE_SIZE' that is mentioned in 'man mount'?
1,562,858,941,000
As the options make menuconfig and make nconfig provide a nice way to configure the kernel options, is there any way to get this hierarchical structure in a printable form? Something similar to the "tree" command output.
Thanks to the reply of @jeff-schaller I made a contribution to the project Kconfiglib, and now there is a new example script for this task. These are the steps to use it:

Inside the directory with the Linux source, clone the repo:

root@23e196045c6f:/usr/src/linux-source-4.9# git clone git://github.com/ulfalizer/Kconfiglib.git
Cloning into 'Kconfiglib'...
remote: Counting objects: 3367, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 3367 (delta 64), reused 89 (delta 50), pack-reused 3259
Receiving objects: 100% (3367/3367), 1.25 MiB | 1.79 MiB/s, done.
Resolving deltas: 100% (2184/2184), done.

Patch the makefile:

root@23e196045c6f:/usr/src/linux-source-4.9# patch -p1 < Kconfiglib/makefile.patch
patching file scripts/kconfig/Makefile

Configure as needed, basically to get a .config file:

root@23e196045c6f:/usr/src/linux-source-4.9# make menuconfig

Run the script with the config file:

root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=.config

======== Linux/x86 4.9.65 Kernel Configuration ========

[*] 64-bit kernel (64BIT)
    General setup
      () Cross-compiler tool prefix (CROSS_COMPILE)
      [ ] Compile also drivers which will not load (COMPILE_TEST)
      () Local version - append to kernel release (LOCALVERSION)
      [ ] Automatically append version information to the version string (LOCALVERSION_AUTO)
      -*- Kernel compression mode
            --> Gzip (KERNEL_GZIP)
                Bzip2 (KERNEL_BZIP2)
                LZMA (KERNEL_LZMA)
...

But the nice thing is that it is possible to pass different kernel configurations and compare the changes easily:

root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=/tmp/config1 > config1-list.txt
root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=/tmp/config2 > config2-list.txt

And finally, compare them with a diff tool.
Formatted print of linux kernel config
1,562,858,941,000
Let's say my userspace (packages) is compiled with gcc 4.7 and libc6 2.13 (Debian Wheezy). Can I compile the Linux kernel under a different development environment, such as gcc 6.3 and libc6 2.24 (i.e., under Debian Stretch)? I know that unlike packages, the kernel is not linked with any dynamic libraries. So theoretically, it should make no difference which gcc and libc it was compiled under. Is this true? Could I run into trouble when I do this? Could there perhaps be some incompatibilities caused by different gcc versions? On the other hand, newer gcc has some interesting features and better security. So perhaps the kernel should be compiled with the newest gcc?
As you point out, the C library being used has no impact on the kernel, the kernel doesn’t use the C library. (There’s an indirect impact, since it’s used to build tools the kernel uses during its build process, but that’s extremely unlikely to affect the end result.) The kernel can be built with a variety of different compiler versions; according to its documentation, it only needs GCC 3.2 or later. You’ll also find that it can take a while for the kernel to officially support the latest versions of GCC, and longer still for a distribution kernel to use it. For example, the Debian Linux kernel package uses GCC 6, and even has dedicated packages to provide the correct compiler version (linux-compiler-gcc-6-x86 on amd64 and i386). There is no connection between the compiler used for the kernel and the compiler used for userspace (nor is there necessarily any need to use the same compiler for all of userspace — old programs built with GCC 3 or even 2 still work on modern systems). Newer compiler versions do provide more security features, but GCC 6 is good enough for most if not all the security features used in the kernel.
Does kernel need to be compiled in same dev environment as userspace?
1,562,858,941,000
The Scenario

I'm trying kernel programming for Linux: I've created a module, inserted it, removed it, and checked its output in dmesg as well.

The Problem

When I insert the module, I can verify that it is inserted successfully by firing lsmod. Yet, when I fire modinfo, it returns an error saying the module isn't there. Follow the terminal snippets below:

INPUT
sudo insmod hello.ko
lsmod | head -2

OUTPUT
Module                  Size  Used by
hello                  16384  0

INPUT
sudo modinfo hello

OUTPUT
modinfo: ERROR: Module hello not found.

Questions

Am I doing something wrong? If not, what is the other way I can get it?
I tried finding it with the Nautilus file browser, which doesn't return any result. Where do I find its file after insertion?
Is it loaded only temporarily; is that why I can't find it?
modinfo by default searches in /lib/modules/<kernel-version>. So you would have to copy your kernel module into a subdirectory there, most likely /lib/modules/<kernel-version>/extra. After copying your module to the right place, you should also run depmod -a.
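As a dry-run sketch, the full sequence looks like this. Only the destination path is actually computed; the privileged commands are echoed rather than executed, and hello.ko is the example module name from the question:

```shell
# Compute where modinfo expects the module, then echo the (root-only)
# install commands as a dry run.
dest="/lib/modules/$(uname -r)/extra"
echo "sudo mkdir -p $dest"
echo "sudo cp hello.ko $dest/"
echo "sudo depmod -a"          # rebuild modules.dep so modinfo/modprobe see it
echo "modinfo hello"
```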
modinfo does not return self inserted module's information
1,562,858,941,000
To configure netconsole you should pass the IP address and MAC address of the destination host. If you don't pass the MAC as a parameter, netconsole packs the IP packet into an Ethernet frame with the broadcast address as the destination. Why does netconsole not look up the route to the host in the routing table? Is it a netpoll limitation? Or is it a fault-tolerance feature, in case something is wrong with the network stack? Or is it done for faster operation? Or is it just hard to implement? What is the main reason?
Netconsole was designed to work as soon as possible after a reboot. From the kernel documentation:

Netconsole was designed to be as instantaneous as possible, to enable the logging of even the most critical kernel bugs. It works from IRQ contexts as well, and does not enable interrupts while sending packets. Due to these unique needs, configuration cannot be more automatic, and some fundamental limitations will remain: only IP networks, UDP packets and ethernet devices are supported.

As you can see, netconsole is intended to be a debugging feature, not for daily use. For this purpose the designers wanted it to be as simple and robust as possible, even if configuring it is a bit crude. If the feature had been designed to automatically find out where to send the packet, the code would have to query the routing table to see if the destination host is in the same subnet, but routing is probably not set up when the first messages are to be sent. Even if we can assume the destination host is in the same subnet, without knowledge of the destination MAC address, the implementation would first have to make an ARP query. While waiting for the response, the kernel could crash and the crash message would be lost.
Why does netconsole not search for route to logging server?
1,562,858,941,000
After upgrading my CentOS 7 kernel from 3.10.0 to 4.8.7, while rebooting the system I get the following lines:

[    0.641455] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 0 (-19)
[    0.641734] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 1 (-19)
[    0.641873] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 2 (-19)
[    0.641956] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 3 (-19)
[    0.642048] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 4 (-19)
[    0.642048] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 5 (-19)
[    0.984906] sd 0:0:0:0: [sda] Assuming drive cache: write through

What are these failed policies and how should I fix them?
Are you using virtual machines or a hypervisor? If so, you should update your hypervisor host to the latest version so that it supports the new kernel version. CPUFreq stands for CPU frequency scaling, which enables the operating system to scale the CPU frequency up or down in order to save power. The error code -19 is ENODEV ("no such device"): the kernel could not find a frequency-scaling driver for those CPUs. I'm not sure why you're getting this error since there are many possible causes, but if you're using a hypervisor host - such as ESXi - and your OSes work fine after boot and you only see this error during boot, you need to update your hypervisor host, since it does not fully support the newly upgraded kernel version. If you still get the same error on the latest hypervisor version, or if you're not using virtual machines and it happens on your primary OS, you need to check whether your hardware is working correctly. Either way, this is not a CentOS or RHEL problem.
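You can check whether any scaling driver actually attached by inspecting sysfs directly. This sketch assumes the standard cpufreq sysfs layout; the per-CPU cpufreq directory simply does not exist when initialization failed as in the messages above:

```shell
# Check whether a cpufreq scaling driver bound to CPU 0.
# The cpufreq directory appears only after a driver initializes,
# so its absence matches the -19 (ENODEV) errors during boot.
cpufreq_dir=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$cpufreq_dir" ]; then
    driver=$(cat "$cpufreq_dir/scaling_driver")
else
    driver="none (no scaling driver bound)"
fi
echo "cpu0 scaling driver: $driver"
```

On bare metal you would typically see a driver such as intel_pstate or acpi-cpufreq reported here; inside a virtual machine it is normal for no driver to bind at all.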
Kernel 4.8.7 failure on cpufreq - CentOS 7
1,562,858,941,000
ACPI procfs is deprecated in new kernel versions. In sysfs, which is supposed to replace it, I don't know of any clean way to read the state of the lid button. What is the new way of doing this?
TL;DR: This exact feature is gone forever, because of laptops' poor-quality, buggy, precious, proprietary, NDA-ed firmware. But there is a workaround. According to this thread on the Linux kernel bug tracker, the firmware in way too many laptops initializes its internal lid-state variable to zero on boot - meaning closed - despite the fact that nobody can turn a laptop on with the lid closed (firmware always checks this before power-on). The result is an obvious mismatch between the real state and the firmware's reported state. For this reason the kernel completely disregards this statically reported state from the firmware. To maintain its own state, it relies solely on the firmware's ACPI interrupt events that report state changes after the device has booted, and assumes the lid is open unless proven otherwise. Root users can check the kernel's state directly by querying the appropriate input device. While the equivalent C code is simple, you can simply use the existing evtest utility, as shown in the answer by Stuart P. Bentley: evtest --query "/dev/input/EVENT_N" "EV_SW" "SW_LID" && echo "open" || echo "closed" Unprivileged users can query the state only via systemd's logind over D-Bus (since v240): dbus-send --system --print-reply=literal \ --dest="org.freedesktop.login1" "/org/freedesktop/login1" \ "org.freedesktop.DBus.Properties.Get" \ string:"org.freedesktop.login1.Manager" string:"LidClosed" | \ awk 'NR == 1 { print $3 == "true" ? "closed" : "open" }' or busctl get-property "org.freedesktop.login1" "/org/freedesktop/login1" \ "org.freedesktop.login1.Manager" "LidClosed" | \ awk 'NR == 1 { print $2 == "true" ? "closed" : "open" }'
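To fill in the EVENT_N placeholder in the evtest command, you can search the input device names exposed in sysfs. This is only a sketch: the device name "Lid Switch" is the usual one, but it is ultimately firmware-dependent.

```shell
# Locate the /dev/input/eventN node whose device name is "Lid Switch".
# The name varies with firmware; "Lid Switch" is merely the common case.
lid_dev=""
for name in /sys/class/input/event*/device/name; do
    [ -e "$name" ] || continue          # the glob may not match anything
    if grep -qx 'Lid Switch' "$name"; then
        d=${name%/device/name}
        lid_dev="/dev/input/${d##*/}"
    fi
done
echo "lid switch device: ${lid_dev:-not found}"
```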
sysfs alternative to /proc/acpi/button/lid/LID/state