| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,466,097,343,000 |
I am running Red Hat 6.7 on a VM, which I updated to the latest kernel, kernel-2.6.32-573.26.1.el6.x86_64. After this update I received the error below:
kernel panic-not syncing: VFS: unable to mount root fs on
unknown block(0,0)
The previous version was kernel-2.6.32-573.22.1.el6.x86_64.
|
I booted the system with the previous kernel, which works fine. While troubleshooting we found that there was no initramfs image on the system and no initramfs line in the grub.conf file.
I created the image with the command below and edited the grub.conf file:
# mkinitrd /boot/initramfs-2.6.32-573.26.1.el6.x86_64.img 2.6.32-573.26.1.el6.x86_64
After this change the system is working fine. This looks like a possible bug from Red Hat.
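For reference, a minimal sketch of what the repaired grub.conf stanza might look like (GRUB legacy syntax; the root device and kernel arguments are placeholders that will differ per system):

```
title Red Hat Enterprise Linux (2.6.32-573.26.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-573.26.1.el6.x86_64 ro root=/dev/mapper/vg-lv_root rhgb quiet
        initrd /initramfs-2.6.32-573.26.1.el6.x86_64.img
```

The missing initrd line was the key part: without it the kernel has no initramfs and cannot mount the root filesystem, which matches the panic above.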
| Unable to boot after updating the kernel rhel 6.7 |
1,466,097,343,000 |
I would like to boot Fedora 23 Workstation x86_64 with a kernel from QubesOS 3.1. I copied vmlinuz-4.1.13-9.pvops.qubes.x86_64 and initramfs-4.1.13-9.pvops.qubes.x86_64.img into the /boot directory and ran grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg to generate the menu entry.
menuentry 'Fedora (4.1.13-9.pvops.qubes.x86_64) 23 (Workstation Edition)'
--class fedora
--class gnu-linux
--class gnu
--class os
--unrestricted $menuentry_id_option
'gnulinux-4.1.13-9.pvops.qubes.x86_64-advanced-d43f46bc-7649-44ca-b02d-7599d115a8e8' {
load_video
insmod gzio
insmod part_gpt
insmod ext2
set root='hd0,gpt6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt6 --hint-efi=hd0,gpt6 --hint-baremetal=ahci0,gpt6 440e2ced-56a2-432f-95e0-c5f1c33941a9
else
search --no-floppy --fs-uuid --set=root 440e2ced-56a2-432f-95e0-c5f1c33941a9
fi
linuxefi /vmlinuz-4.1.13-9.pvops.qubes.x86_64 root=UUID=d43f46bc-7649-44ca-b02d-7599d115a8e8 ro rootflags=subvol=root00 rhgb quiet
initrdefi /initramfs-4.1.13-9.pvops.qubes.x86_64.img
}
I also tried modifying the original Fedora menu entry, changing only the vmlinuz* and initramfs* file names.
menuentry 'Modified original Fedora 23 menuitem'
--class fedora
--class gnu-linux
--class gnu
--class os
--unrestricted $menuentry_id_option
'gnulinux-4.4.8-300.fc23.x86_64-advanced-d43f46bc-7649-44ca-b02d-7599d115a8e8' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2
set root='hd0,gpt6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt6 --hint-efi=hd0,gpt6 --hint-baremetal=ahci0,gpt6 440e2ced-56a2-432f-95e0-c5f1c33941a9
else
search --no-floppy --fs-uuid --set=root 440e2ced-56a2-432f-95e0-c5f1c33941a9
fi
linuxefi /vmlinuz-4.1.13-9.pvops.qubes.x86_64 root=UUID=d43f46bc-7649-44ca-b02d-7599d115a8e8 ro rootflags=subvol=root00 rhgb quiet
initrdefi /initramfs-4.1.13-9.pvops.qubes.x86_64.img
}
In both cases, however, I see four large penguins at boot, and eventually Dracut tells me something failed and asks for the root password to fix it.
|
I had to copy the modules from Qubes and regenerate the initramfs using the sudo dracut -f command.
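A rough sketch of that fix, run as root on the Fedora install (the source path for the Qubes modules is an assumption; adjust it to wherever the Qubes system is mounted):

```
# copy the Qubes kernel's modules so dracut can find the needed drivers
cp -r /path-to-qubes-root/lib/modules/4.1.13-9.pvops.qubes.x86_64 /lib/modules/
# regenerate the initramfs for exactly that kernel version
dracut -f /boot/initramfs-4.1.13-9.pvops.qubes.x86_64.img 4.1.13-9.pvops.qubes.x86_64
```

Without the matching modules directory, dracut builds an initramfs that lacks the drivers for the root filesystem, which is what produces the emergency shell described above.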
| Dracut failure when trying to boot Fedora with Qubes kernel |
1,466,097,343,000 |
I am working on one of my workstations, which runs Scientific Linux 6, basically a quite old version of Red Hat Enterprise Linux. I need to use two screens, but the only outputs from my Intel IGP are two DisplayPorts and one VGA. I am unable to get the DisplayPort outputs working, I guess because the driver and kernel in use are too old.
Does anyone have an idea (besides using a dedicated GPU)?
lspci
00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
00:16.3 Serial controller: Intel Corporation Sunrise Point-H KT Redirection (rev 31)
00:17.0 SATA controller: Intel Corporation Device a102 (rev 31)
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
uname -a
Linux pcbe13615 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 17:13:03 CET 2016 x86_64 x86_64 x86_64 GNU/Linux
|
What you need to solve is the lack of support for the GPU in the kernel (and likely also in the X.Org video driver). Proper support for Skylake-based GPUs in the i915 kernel driver should be available from kernel 4.4 on. Then again, I myself still couldn't get an Intel GPU with device code 1912 working in Debian Jessie under 4.4.5, possibly because of the X.Org version in Jessie (I haven't tried any later kernel yet, though). So it'll be either a major upgrade of the system or a dedicated GPU.
Getting a used GPU from a well-known brand that your system already supports could be the easiest way out, but I'm not sure you could find one that specifically has DisplayPort.
If you don't want to upgrade the system, you could try just taking a recent kernel and compiling that manually with all the required options to support the GPU. The possible problem with this approach is that it might be hard to get the system to boot with the new kernel, as there might be some conflicts between the kernel and the base software of the system, udev being one possible issue. You'd also need to remember to include much of the deprecated stuff to be compatible with the older software which interfaces the kernel.
Intel does even provide sources for their graphics driver, so if you are willing to try every possible thing, you could try also compiling that.
In addition to compiling either the Linux kernel or just the Intel graphics driver, you'd still need a recent enough X.Org Intel video driver that also supports Skylake-based GPUs, so you'd probably end up compiling that (possibly the whole of X.Org) too. This might prove impossible without upgrading large parts of the rest of the system, due to conflicting version requirements for many other components. After all, there is a reason why most people rely on prebuilt distributions instead of trying to get things going from scratch :)
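The compile-it-yourself route could look roughly like this (a sketch only: the kernel version is illustrative, and the configuration step needs manual attention to keep the i915 options and the deprecated compatibility options enabled):

```
# on the SL6 machine, starting from the running kernel's configuration
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.4.tar.xz
tar xf linux-4.4.tar.xz && cd linux-4.4
cp /boot/config-$(uname -r) .config
make olddefconfig        # accept defaults for all options new since 2.6.32
make menuconfig          # check CONFIG_DRM_I915 and legacy interfaces by hand
make -j"$(nproc)" && make modules_install install
```

As noted above, even if this builds and boots, the old X.Org stack may still refuse to drive the new hardware.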
| How to use the proper video driver on Scientific Linux 6 for Display Port screens? |
1,466,097,343,000 |
Over the last few days I have made a couple of attempts to install and run sysdig on Armbian 5.0/Debian Jessie 8.0, on my Lamobo R1.
I installed it with:
apt-get install -t jessie-backports sysdig sysdig-dkms dkms
Running it then gives the following error:
# sysdig
Unable to load the driver
error opening device /dev/sysdig0. Make sure you have root credentials and that the sysdig-probe module is loaded.
On the first try a few days ago I noticed the module was not being placed in /lib/modules/4.4.1-sunxi/updates/dkms/sysdig-probe.ko, and I commented out the include of asm-offsets.h in /var/lib/dkms/sysdig/0.5.1/build/main.c.
I also had to run make scripts in the kernel directory /usr/src/linux-headers-4.4.1-sunxi.
After this, I ran /usr/lib/dkms/dkms_autoinstaller start and the module was compiled. However, when running sysdig the error is the same.
Running insmod says:
#insmod /lib/modules/4.4.1-sunxi/updates/dkms/sysdig-probe.ko
insmod: ERROR: could not insert module /lib/modules/4.4.1-sunxi/updates/dkms/sysdig-probe.ko: Invalid module format
Running modinfo:
modinfo /lib/modules/4.4.1-sunxi/updates/dkms/sysdig-probe.ko
Outputs:
filename: /lib/modules/4.4.1-sunxi/updates/dkms/sysdig-probe.ko
author: sysdig inc
license: GPL
depends:
vermagic: 4.4.1 SMP mod_unload ARMv7 p2v8
parm: max_consumers:Maximum number of consumers that can simultaneously open the devices (uint)
parm: verbose:Enable verbose logging (bool)
So the module is obviously built against the wrong kernel version.
Now even when installing, it says:
#apt-get install -t jessie-backports sysdig sysdig-dkms dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
sysdig is already the newest version.
The following NEW packages will be installed:
dkms sysdig-dkms
0 upgraded, 2 newly installed, 0 to remove and 9 not upgraded.
Need to get 0 B/137 kB of archives.
After this operation, 821 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Selecting previously unselected package dkms.
(Reading database ... 72251 files and directories currently installed.)
Preparing to unpack .../dkms_2.2.0.3-2_all.deb ...
Unpacking dkms (2.2.0.3-2) ...
Selecting previously unselected package sysdig-dkms.
Preparing to unpack .../sysdig-dkms_0.5.1-1~bpo8+1_all.deb ...
Unpacking sysdig-dkms (0.5.1-1~bpo8+1) ...
Processing triggers for man-db (2.7.0.2-5) ...
Setting up dkms (2.2.0.3-2) ...
Setting up sysdig-dkms (0.5.1-1~bpo8+1) ...
Loading new sysdig-0.5.1 DKMS files...
First Installation: checking all kernels...
Building only for 4.4.1-sunxi
Building initial module for 4.4.1-sunxi
Done.
sysdig-probe:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.1-sunxi/updates/dkms/
depmod....
DKMS: install completed.
And again, despite the message saying sysdig-probe.ko is being compiled for 4.4.1-sunxi, it is actually compiled for the 4.4.1 kernel, not 4.4.1-sunxi.
My uname -r output is 4.4.1-sunxi. I have neither the 4.4.1 kernel nor the 4.4.1 sources installed.
root@ruir:/usr/src# ls -la
total 16
drwxr-xr-x 4 root root 4096 Apr 3 11:06 .
drwxr-xr-x 11 root root 4096 Oct 23 21:04 ..
drwxr-xr-x 25 root root 4096 Mar 30 21:29 linux-headers-4.4.1-sunxi
drwxr-xr-x 2 root root 4096 Apr 3 11:06 sysdig-0.5.1
So my question is: is there any file or configuration item I can change to make it compile for 4.4.1-sunxi instead of 4.4.1?
|
In /lib/modules/4.4.1-sunxi/build I had to change the following occurrences of 4.4.1 to 4.4.1-sunxi:
include/generated/utsrelease.h:#define UTS_RELEASE "4.4.1"
include/config/auto.conf.cmd:ifneq "$(KERNELVERSION)" "4.4.1"
include/config/kernel.release:4.4.1
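The three edits can be scripted as below (a sketch; back up the files first, and note the paths come straight from the listing above):

```
cd /lib/modules/4.4.1-sunxi/build
sed -i 's/"4\.4\.1"/"4.4.1-sunxi"/' include/generated/utsrelease.h include/config/auto.conf.cmd
echo 4.4.1-sunxi > include/config/kernel.release
```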
After this I was able to install sysdig and compile sysdig-probe.ko with the correct version.
So it appears that while some scripts use uname -r (or accept another kernel version) and report the correct target, behind the scenes at least part of the module build consults these kernel version files to set the version of the compiled module.
| installing sysdig in ARM / Armbian Jessie - module compiled in wrong kernel version |
1,466,097,343,000 |
I am trying to cross-compile and boot a Linux 2.6 kernel for ARM using QEMU. I have basically followed the same instructions that are included in seemingly every single tutorial on the topic.
Specifically:
Download and compile kernel
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- versatile_defconfig
$ #Disabled loadable modules and enabled initramfs
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- all
Compile Busybox
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- arm
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- install
Create cpio archive from Busybox _install directory
$ cd $BUSYBOX/_install
$ find . | cpio -o -Hnewc | gzip > ../initramfs.gz
Boot using qemu-system-arm
$ qemu-system-arm -M versatilepb -m 200M -kernel $KERNEL/arch/arm/boot/zImage -initrd $BUSYBOX/initramfs.gz -append "root=/dev/ram0"
The result is that the boot fails: it looks like the kernel does not recognize the file system, but I don't know how to fix that. These are basically the exact steps that every single tutorial follows, and there is no such thing as a "cpiofs" to enable in the kernel source.
|
Success!
The solution, as suggested, was to embed the initrd image into the kernel by pointing CONFIG_INITRAMFS_SOURCE at my BusyBox _install directory. Many thanks to jc__ for that tip.
Also, for anyone else trying this, note that I needed to create the following device nodes in the BusyBox _install directory:
dev/console
dev/loop0
as mentioned in this:
https://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt
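Creating those nodes and rebuilding the archive can be sketched as follows (mknod needs root; the major/minor numbers are the standard ones for the console and loop0 devices):

```
cd $BUSYBOX/_install
mkdir -p dev
sudo mknod dev/console c 5 1
sudo mknod dev/loop0 b 7 0
find . | cpio -o -Hnewc | gzip > ../initramfs.gz
```

Without dev/console, init has no controlling terminal, so even a successfully mounted initramfs appears to hang silently.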
| What step am I missing to get my 2.6 ARM kernel running in QEMU? |
1,466,097,343,000 |
I am looking to generate raw Ethernet frames whose payload is preloaded into memory.
The Ethernet frames (10-60 full frames) must be generated at 1 ms intervals, with no exceptions.
What are my options for doing this? My concern is the real-time requirements of such an application: interrupts should be minimized, and the process should perhaps have a core dedicated to its execution. If Linux/software is not an option, the alternative is an FPGA.
Looking forward to hearing potential solutions.
|
1 ms is plenty of time to generate a few Ethernet frames, but on a typical Linux system you can't count on avoiding the occasional pause. Even if you make your process high-priority, I don't think you can expect to always meet a 1 ms deadline.
RTLinux combines a real-time operating system with Linux. Linux runs as a non-real-time-priority task in the real-time scheduler.
I lack experience with RTLinux, so I can't offer concrete advice, but it does include Ethernet drivers, so it looks suitable for your use case.
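If you do try it on plain Linux first, a common starting point is to pin the generator onto an isolated core and give it a real-time scheduling class. A sketch (the binary name is hypothetical; isolcpus must be set on the kernel command line beforehand):

```
# boot with isolcpus=3 on the kernel command line, then:
sudo taskset -c 3 chrt -f 80 ./frame_generator
```

taskset restricts the process to CPU 3 (which the scheduler otherwise leaves idle because of isolcpus), and chrt -f 80 runs it under SCHED_FIFO at priority 80. This reduces, but does not eliminate, the pauses mentioned above.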
| Generate raw Ethernet frames with memory preloaded payloads at < 1 ms intervals |
1,466,097,343,000 |
The only information I found about this topic is outdated, from 2011/02: it says there is no support for USB 3.0 devices "so far"!
According to the German Wikipedia, the Linux kernel has supported USB 3.0 since version 2.6.30, so I assume USB 3.0 should be supported since OpenWrt Backfire...
For example, the posted dmesg of this router includes some lines for an xHCI Host Controller, so maybe I'm right... but I'm not sure, because of the included "backports up to r47238", and I'm not that familiar with kernel development and commands.
It is clear that it may depend on the device/hardware/controller, but I'm asking about the general support!
Does anyone have better sources to clarify the USB support in OpenWrt, or at least personal experience using OpenWrt with a USB 3.0 device?
|
I use LEDE now instead of OpenWrt, but both support USB 3.0 (assuming your router's hardware does too).
What I'm curious about is whether it supports USB hubs. My router has 2 USB ports, but only one is 3.0.
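One practical way to check on a given device is to look for the xHCI host controller and the negotiated link speed (a sketch; lsusb comes from the usbutils package, which you may need to install on OpenWrt):

```
dmesg | grep -i xhci   # an xHCI controller present means USB 3.0 host support
lsusb -t               # a device showing 5000M is linked at SuperSpeed (USB 3.0)
```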
| Is there a general USB 3.0 support in OpenWrt? |
1,466,097,343,000 |
I want to upgrade the Linux kernel from 3.16 to 4.3. Unfortunately, when I run aptitude install linux-image-4.3.0-1-amd64 the installation fails due to lack of space on the rootfs partition: 117 MB left, 174 MB needed.
I have no old kernels to remove to free up more disk space (except the one that I'm using right now):
root@host:/# aptitude search linux-image | grep ^i
ip linux-image-3.16.0-4-amd64 - Linux 3.16 for 64-bit PCs
I tried to free up space with aptitude clean and apt-get autoremove, but it didn't help because /var is a separate partition. AFAIK those commands remove the contents of the /var/cache/apt/archives directory, so they cannot help here.
I considered to temporarily mount --bind / /home/rootfs (as suggested here), but rootfs probably cannot be safely remounted.
My file system disk space usage:
root@host:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 454M 310M 117M 73% /
udev 10M 0 10M 0% /dev
tmpfs 1,6G 19M 1,6G 2% /run
/dev/sda7 23G 13G 8,8G 59% /usr
tmpfs 3,9G 52M 3,9G 2% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
/dev/sda4 250G 187G 64G 75% /media/Windows/C
/dev/sda5 500G 428G 73G 86% /media/Windows/D
/dev/sda2 96M 25M 72M 26% /boot/efi
/dev/sda8 7,3G 966M 5,9G 14% /var
/dev/sda9 14G 37M 13G 1% /tmp
/dev/sda11 126G 95G 25G 80% /home
tmpfs 797M 20K 797M 1% /run/user/112
tmpfs 797M 24K 797M 1% /run/user/1000
du -mx / | sort -n result: link.
OS version:
root@host:/# cat /etc/debian_version
stretch/sid
/boot content:
root@host:/# du -sh /boot/*
156K /boot/config-3.16.0-4-amd64
25M /boot/efi
8,8M /boot/grub
16M /boot/initrd.img-3.16.0-4-amd64
16M /boot/initrd.img-3.16.0-4-amd64.old-dkms
2,6M /boot/System.map-3.16.0-4-amd64
3,0M /boot/vmlinuz-3.16.0-4-amd64
Is there any clever and safe way to free up rootfs partition or temporarily move current kernel to another partition?
Is it safe to move some rootfs content to another partition and create symbolic links pointing to them?
I know that there is plenty of similar problems, but most of them end up with removing old kernels which I don't have.
|
450MB is not much for a root+boot partition on a modern amd64 system. If you want to install multiple kernels, you're going to have to reorganize your partitions. Even if you don't, it's pretty tight.
Given the partitions you have now, I suggest moving the root partition to what is now /var. Since you're going to move the root partition, boot from rescue media (e.g. SystemRescueCD). Mount /dev/sda6 and /dev/sda8, say to /media/sda6 and /media/sda8. Then:
Create a /var directory: mkdir /media/sda8/var
Move everything in the old /var partition to this new subdirectory: mv /media/sda8/* /media/sda8/var (/var itself will be skipped)
Move everything except /boot from the old root partition to the old var partition: mv /media/sda6/[^bv]* /media/sda6/bin /media/sda8/
There should only be /boot and an empty /var on the old root partition. Move everything from /boot to the root of the partition: mv /media/sda6/boot/* /media/sda6
Remove the spurious directories: rmdir /media/sda6/boot /media/sda6/var and create one that's now needed: mkdir /media/sda8/boot
Edit the fstab file (now in /media/sda8/etc/fstab), remove the entry for /var, add one for /boot, and correct the entry for / if necessary.
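After that edit, the fstab might look roughly like this (UUIDs and filesystem types are placeholders; get the real values with blkid):

```
# /media/sda8/etc/fstab (sketch)
UUID=<sda8-uuid>   /      ext4   defaults   0   1
UUID=<sda6-uuid>   /boot  ext4   defaults   0   2
UUID=<sda7-uuid>   /usr   ext4   defaults   0   2
```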
Update the bootloader configuration. The easy way to get it right is to run both update-grub script (to regenerate grub.cfg) and grub-install (to regenerate the first-stage bootloader so that it knows where to find the rest, including grub.cfg). But to do that, you need to present it the right directory tree.
mount --rbind /dev /media/sda8/dev
mount --rbind /proc /media/sda8/proc
mount --rbind /sys /media/sda8/sys
mount --bind /media/sda6 /media/sda8/boot
chroot /media/sda8
mount /usr
update-grub
grub-install /dev/sda
Now reboot.
Alternatively, you could move /boot to /var; but it's a less common configuration, so you may have to tweak some bootloader configuration files.
These days, separating /usr from / is pretty pointless. Separating /var from / has never been really useful (they both need to be mounted read-write on most setups).
In the future, I recommend using LVM for Linux partitions. It's a lot more flexible.
| No space left to free up on rootfs partition to upgrade kernel |
1,466,097,343,000 |
The i915 Kernel module has several "module options" like the infamous enable_rc6.
However, for the xorg config file, there are further options such as TearFree.
I was wondering: why are there two ways to (seemingly?) set options for the same module? Why can't I pass the TearFree option to the kernel module? Is this a general pattern, true for many other modules as well?
Any link to a good explanation is welcome!
|
Because they're two different drivers, the kernel driver and the X.org driver, and each driver has its own specific options.
The i915 kernel driver talks to the hardware device (it does basic, low level stuff like set up resolution, map a framebuffer etc).
The xorg intel driver includes OpenGL, DRI, DDX etc for 2D/3D acceleration and communicates with the gpu via the kernel module. In fact, the i915 kconfig even says
This driver is used by the Intel driver in X.org 6.8 and XFree86 4.4
and above
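In practice the two kinds of options are set in two different places; a sketch of each (file names and option values are illustrative):

```
# kernel driver option: /etc/modprobe.d/i915.conf
options i915 enable_rc6=1

# X.org driver option: /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "TearFree" "true"
EndSection
```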
Further reading:
The Linux Graphics Stack
| Why are there module options AND driver options - e.g. for the i915 module? |
1,466,097,343,000 |
I'm using the 4.1.2 mainline kernel. The package with the image and all modules has an installed size of 207 MB.
I built a patched 4.1.6 kernel, taking the config from the current kernel, and now the drivers alone take 2.0 GB!
marcin@asus ~/4.1.6/lib/modules/4.1.6/kernel $ du -sh *
19M arch
16M crypto
2,0G drivers
213M fs
636K kernel
7,0M lib
240K mm
349M net
132M sound
marcin@asus ~/4.1.6/lib/modules/4.1.6/kernel $ du -sh /lib/modules/4.1.2-040102-generic/kernel/*
2,5M /lib/modules/4.1.2-040102-generic/kernel/arch
1,2M /lib/modules/4.1.2-040102-generic/kernel/crypto
155M /lib/modules/4.1.2-040102-generic/kernel/drivers
16M /lib/modules/4.1.2-040102-generic/kernel/fs
100K /lib/modules/4.1.2-040102-generic/kernel/kernel
700K /lib/modules/4.1.2-040102-generic/kernel/lib
668K /lib/modules/4.1.2-040102-generic/kernel/misc
16K /lib/modules/4.1.2-040102-generic/kernel/mm
17M /lib/modules/4.1.2-040102-generic/kernel/net
13M /lib/modules/4.1.2-040102-generic/kernel/sound
Why does the new build take so much more space in comparison with the old one? Did I do something wrong?
|
The kernel I had been using was stripped after building. My modules were not stripped; hence they were so big.
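For anyone hitting the same thing: kbuild can strip debug symbols at install time, or you can strip an already-installed module tree afterwards. A sketch:

```
# strip while installing
make INSTALL_MOD_STRIP=1 modules_install
# or strip in place afterwards
find /lib/modules/4.1.6 -name '*.ko' -exec strip --strip-debug {} +
```

The debug info is what accounts for nearly all of the size difference; the loaded modules behave identically either way.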
| Linux modules take a lot of space [closed] |
1,466,097,343,000 |
My machine (ThinkPad x230) hangs on wake up from s2disk or s2ram after a recent aptitude safe-upgrade.† The hang looks like a blank screen with a blinking cursor in the upper right after loading data pages completes.
Confirmed the same behavior after sudo hibernate, sudo pm-hibernate, and echo disk | sudo tee /sys/power/state.
I am on Debian testing, kernel 4.0.0-2-rt-amd64. Not sure where to go from here. Searching on Google produces many old results. Dump of /var/log/pm-suspend.log here.
† Confirmed s2ram works as intended on wake up from suspend. s2disk still broken.
|
Found a solution that works:
sudo blkid and copy your swap UUID to clipboard
sudo vim /boot/grub/menu.lst, search for resume= and add resume=UUID=xxxxx-xxxxx-xxx-xxxx-xxxx
update grub with sudo update-grub
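(For reference: on a system using GRUB 2 rather than a legacy menu.lst, the resume parameter usually goes into /etc/default/grub before running update-grub; the UUID below is a placeholder.)

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=xxxxx-xxxxx-xxx-xxxx-xxxx"
```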
That worked once and never again. Still looking.
Update: My problems went away once I switched from Linux kernel 4.0.0-2-rt-amd64 to 4.1.0-2-amd64. If you are experiencing a similar issue, first make sure you are not using the RT kernel without a good reason, and second, either upgrade or downgrade the kernel to see if that resolves the issue. Marking as solved.
| debian testing hangs on wakeup from s2disk / hibernate |
1,466,097,343,000 |
I have Windows 7 and Kali Linux on my computer; normally I don't use Windows anymore. But today I started Windows and it updated itself. I am writing my thesis and all my files are on Linux, so I really need to get it started; my deadline is near.
In the GRUB bootloader I chose Kali Linux, and on the next screen it says
early console in decompress_kernel
Decompressing Linux.... Parsing ELF.... done.
Booting the kernel.
Then nothing happens.
What can I do? I really need my files and can't just format the computer.
|
Assuming that your Kali Linux install uses ext4 (or an older version, which will work even better), you may also be interested in installing ext4 drivers for your Windows 7 installation, so you can access your files from within Windows. I've used them with reasonable success (I run Windows 7 and Arch Linux). To be safe, I would disable writing to your ext4 partition and only enable reading (which should be enough to recover your thesis).
Link: http://www.ext2fsd.com/
| Linux fails with starting on a dualboot system after Windows update |
1,466,097,343,000 |
I'm reading some documentation about UNIX, but I don't understand two things:
Why is it important for the kernel to know the current working directory of the running process?
Why not keep the inode information in the directory?
|
The system needs to keep track of the current directory of all processes because otherwise processes couldn't use relative paths for anything (including, for example, file open or stat, and changing directories: what would chdir("..") mean if you didn't track where the process currently sits?).
There's also the matter that without tracking that info, the kernel wouldn't be able to check if a process is sitting inside a given mount point. So you'd be liable to accidentally unmount a filesystem from under a process, leading to inconsistent state.
For your second question: think about hard links. They would be much harder to implement correctly and safely if the inode data was in the directory "structure" itself. Much easier to have essentially pointers to the inodes in the directory structure, makes adding or removing links to a given inode pretty simple.
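The hard-link point is easy to see in practice. A small sketch (uses GNU coreutils stat; the demo path is arbitrary):

```shell
demo=/tmp/hardlink-demo
rm -rf "$demo" && mkdir -p "$demo"
echo hello > "$demo/a"
ln "$demo/a" "$demo/b"        # second directory entry pointing at the same inode
ls -i "$demo/a" "$demo/b"     # both names list the same inode number
stat -c '%h' "$demo/a"        # the inode's link count is now 2
```

Because both directory entries are just names referring to one inode, removing either name only decrements the link count; the data survives until the last name is gone. If the inode data lived inside the directory entry itself, two entries could not share it this cleanly.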
| Kernel current working directory and inode information placement |
1,466,097,343,000 |
aptitude search linux-headers
provides this:
p linux-headers-3.16.0-4-all - All header files for Linux 3.16 (meta-pack
p linux-headers-3.16.0-4-all-amd6 - All header files for Linux 3.16 (meta-pack
i linux-headers-3.16.0-4-amd64 - Header files for Linux 3.16.0-4-amd64
i A linux-headers-3.16.0-4-common - Common header files for Linux 3.16.0-4
p linux-headers-amd64 - Header files for Linux amd64 configuration
but I need to get the 32-bit kernel headers.
How can I do that with apt?
|
It required modifying /etc/apt/sources.list,
adding i386 to the downloadable architectures like this:
deb [arch=amd64,i386] http://httpredir.debian.org/debian/ jessie main contrib
After that you need to run:
apt-get update
dpkg --add-architecture i386
apt-get update
and to install package for i386 architecture:
apt-get install linux-headers-3.16.0-4-686-pae:i386
| how to get 32 bit kernel headers in 64 bit debian installation [duplicate] |
1,466,097,343,000 |
On my laptop, when I run ifup wlan0, I sometimes get the following call trace in dmesg. Other times the log looks perfectly normal and there are no kernel Oops messages.
In both cases (with errors and without), my network seems to work fine.
But I would like to understand what is happening. Could somebody please help me interpret the meaning of the call trace messages?
kernel: [181794.747548] ------------[ cut here ]------------
kernel: [181794.747560] WARNING: CPU: 2 PID: 8430 at net/wireless/reg.c:1806 0xffffffff8147a677()
kernel: [181794.747570] CPU: 2 PID: 8430 Comm: kworker/2:0 Tainted: G W 3.16.6 #1
kernel: [181794.747574] Hardware name: Dell Inc. Latitude E7440/03HFCG, BIOS A09 05/01/2014
kernel: [181794.747579] Workqueue: events 0xffffffff8147a740
kernel: [181794.747584] 0000000000000000 0000000000000009 ffffffff814c43fb 0000000000000000
kernel: [181794.747592] ffffffff81058185 0000000000011500 ffffffff8147a677 ffff88040d4cb300
kernel: [181794.747598] ffff8804082e7e00 0000000000000003 ffff88040d440240 ffff8804082e7e00
kernel: [181794.747605] Call Trace:
kernel: [181794.747611] [<ffffffff814c43fb>] ? 0xffffffff814c43fb
kernel: [181794.747615] [<ffffffff81058185>] ? 0xffffffff81058185
kernel: [181794.747619] [<ffffffff8147a677>] ? 0xffffffff8147a677
kernel: [181794.747623] [<ffffffff8147a677>] ? 0xffffffff8147a677
kernel: [181794.747627] [<ffffffff8147a7ae>] ? 0xffffffff8147a7ae
kernel: [181794.747630] [<ffffffff81067c27>] ? 0xffffffff81067c27
kernel: [181794.747634] [<ffffffff8106806f>] ? 0xffffffff8106806f
kernel: [181794.747638] [<ffffffff81067d7d>] ? 0xffffffff81067d7d
kernel: [181794.747642] [<ffffffff8106cc2d>] ? 0xffffffff8106cc2d
kernel: [181794.747645] [<ffffffff8106cb68>] ? 0xffffffff8106cb68
kernel: [181794.747649] [<ffffffff814cb16c>] ? 0xffffffff814cb16c
kernel: [181794.747653] [<ffffffff8106cb68>] ? 0xffffffff8106cb68
kernel: [181794.747657] ---[ end trace 269bc2d623c15a61 ]---
kernel: [181794.747661] cfg80211: Calling CRDA for country: DE
kernel: [181794.790714] CPU: 2 PID: 8430 Comm: kworker/2:0 Tainted: G W 3.16.6 #1
kernel: [181794.790717] Hardware name: Dell Inc. Latitude E7440/03HFCG, BIOS A09 05/01/2014
kernel: [181794.790720] Workqueue: events 0xffffffff8147a740
kernel: [181794.790723] 0000000000000000 0000000000000009 ffffffff814c43fb 0000000000000000
kernel: [181794.790728] ffffffff81058185 0000000000011500 ffffffff8147a677 ffff88040d4cb300
kernel: [181794.790732] ffff8804082e7e00 0000000000000003 ffff88040d440240 ffff8804082e7e00
kernel: [181794.790736] Call Trace:
kernel: [181794.790740] [<ffffffff814c43fb>] ? 0xffffffff814c43fb
kernel: [181794.790743] [<ffffffff81058185>] ? 0xffffffff81058185
kernel: [181794.790745] [<ffffffff8147a677>] ? 0xffffffff8147a677
kernel: [181794.790747] [<ffffffff8147a677>] ? 0xffffffff8147a677
kernel: [181794.790750] [<ffffffff8147a7ae>] ? 0xffffffff8147a7ae
kernel: [181794.790752] [<ffffffff81067c27>] ? 0xffffffff81067c27
kernel: [181794.790754] [<ffffffff8106806f>] ? 0xffffffff8106806f
kernel: [181794.790757] [<ffffffff81067d7d>] ? 0xffffffff81067d7d
kernel: [181794.790759] [<ffffffff8106cc2d>] ? 0xffffffff8106cc2d
kernel: [181794.790761] [<ffffffff8106cb68>] ? 0xffffffff8106cb68
kernel: [181794.790763] [<ffffffff814cb16c>] ? 0xffffffff814cb16c
kernel: [181794.790766] [<ffffffff8106cb68>] ? 0xffffffff8106cb68
kernel: [181794.790768] ---[ end trace 269bc2d623c15a62 ]---
kernel: [181794.790771] cfg80211: Calling CRDA for country: DE
kernel: [181794.972483] ------------[ cut here ]------------
For completeness, I might add that this is a new laptop, but I have replaced the original Wi-Fi card (Intel Dualband Wireless-AC 7260) with my own card (Intel Centrino Ultimate-N 6300).
I am using Debian Wheezy with my own kernel, 3.16.6.
|
Turns out to be a false warning.
To quote from the (now closed) kernel bug report:
Alfred Krohmer: If I unterstand your patch correctly it just removes the warning, but it won't actually fix the driver crash this bug report was submitted for. So why mark it as resolved?
Emmanuel Grumbach: There is no real bug. The commit message explains this.
Well, I see the same error in dmesg but my Wi-Fi connection works fine. (I'm using a Lenovo ThinkPad W540.) Interestingly, the Arch Linux bug report is still open, and some say that they have connection problems.
| ifup wlan0 causes kernel Oops |
1,466,097,343,000 |
My embedded device stopped working after a firmware update.
I therefore tried to update the firmware using U-Boot. I could successfully get a U-Boot console via a serial connection, but the firmware update failed due to my lack of knowledge about firmware updates with U-Boot.
In the end, I corrupted not only the Linux kernel but also U-Boot itself while modifying the device's flash memory (the U-Boot commands support flash memory modification). Booting the device no longer gives a U-Boot console; it just stops (as I can see through the serial connection).
In this situation, how can I recover (or update) the firmware on my device?
|
If available (i.e. if there's a JTAG header on your board), you can connect using a JTAG cable.
Remember: before using it you might need to enable JTAG via the Test Mode Select input (sometimes a jumper somewhere).
You can then use that connection to upload new firmware to your device.
| How to recover firmware on a embedded device with corrupted u-boot and kernel image? [closed] |
1,466,097,343,000 |
The Debian guide for compiling a kernel says:
Do not forget to select “Kernel module loader” in “Loadable module support” (it is not selected by default). If not included, your Debian installation will experience problems.
However, I have downloaded the 3.12.22 kernel, run make xconfig and searched for the “Kernel module loader” option without finding it. Has such option been discontinued, included by default, or not needed anymore?
Thank you.
|
Parts of this guide are seriously out of date.
“Loadable module support” is the name of the option that enables kmod, the kernel component that calls modprobe to load modules with a symbolic name based on hardware identification. You can see these symbolic names in /lib/modules/VERSION/modules.alias; they're automatically extracted from the kernel sources. For example the line alias pci:v00001002d00005147sv*sd*bc*sc*i* radeonfb means that when the kernel requests a module whose name is of the form pci:v00001002d00005147sv*sd*bc*sc*i* then modprobe will look for a file called radeonfb.ko. The symbolic name corresponds to a particular PCI identifier which is sent by the PCI peripheral (in this case, a video card).
The thing is, “loadable module support” is the name of the option in kernel 2.4.x. In 2.6, the option was renamed “Automatic kernel module loading” (for the internal name CONFIG_KMOD). In version 2.6.27, the kmod feature became a compulsory part of module support, and the option was removed soon after since it was ignored.
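To make the alias lookup concrete, here is a small sketch that copies the radeonfb line quoted above into a throwaway file and resolves the PCI identifier to a module name roughly the way modprobe does (the real file lives at /lib/modules/$(uname -r)/modules.alias):

```shell
# One sample line mirroring the radeonfb example above; the real file is
# /lib/modules/$(uname -r)/modules.alias
printf 'alias pci:v00001002d00005147sv*sd*bc*sc*i* radeonfb\n' > /tmp/demo.alias
# Resolve the PCI identifier prefix to a module name, roughly as modprobe does
awk '$2 ~ /^pci:v00001002d00005147/ {print $3}' /tmp/demo.alias
# → radeonfb
```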
| Debian + Linux kernel 3.12.22: “Kernel module loader” option is not available |
1,466,097,343,000 |
I just upgraded Linux Mint Debian Edition with the up7 package.
The update manager failed initially (can't remember where...) so I ran sudo apt update, sudo apt dist-upgrade, sudo apt upgrade and sudo dpkg --configure -a.
Situation now: when I reboot and select the new kernel (3.10-2-amd64) from the grub menu, the following happens:
This:
early console in decompress_kernel
Decompressing kernel... Parsing ELF ... done.
Booting the kernel.
[<...> Could not configure common clock.
Linux Mint splash screen appears.
The NVIDIA Screen appears.
I get this error message:
FAIL: startpar: service(s) returned failure: plymouth ... failed!
The system quits.
Sometimes the first message will be shown twice with a login prompt in between, after which the system will quit.
Any ideas? I will be happy to boot into the old kernel and provide any output!
|
Does sudo apt-get -f install help? Also have a look at the LMDE updates forum: they seem to have screwed up with UP7, and a lot of people are reporting problems. I am currently updating myself; we'll see how it goes.
A common source of problems is the nvidia proprietary driver. This is compiled against your current kernel and can cause problems if the kernel changes. The normal recommendation is to remove it, then upgrade, then reinstall it.
Since you've already upgraded, I would try disabling the nvidia driver and switching to the open source nouveau. If that lets you boot, you can then reinstall the nvidia driver for your new kernel. Try these steps (adapted from here; I recommend you read that):
1. Boot into the old kernel, switch to a tty (Ctrl+Alt+F1) and log in.
2. Stop the mdm service and edit your /etc/X11/xorg.conf to use nouveau. Depending on your setup you might not be using the xorg.conf but /etc/X11/xorg.conf.d/20-nvidia.conf. If so, delete /etc/X11/xorg.conf.d/20-nvidia.conf and create a /etc/X11/xorg.conf.d/20-nouveau.conf with these contents:
Section "Device"
    Identifier "Nvidia card"
    Driver "nouveau"
EndSection
3. Un-blacklist nouveau. Find where you have blacklisted it (which you have almost certainly done) and comment out the appropriate line:
$ grep nouv /etc/modprobe.d/*
/etc/modprobe.d/nvidia-kernel-common.conf:blacklist nouveau
So, on my system, I have it blacklisted in /etc/modprobe.d/nvidia-kernel-common.conf so I had to change that line to:
#blacklist nouveau
4. Reboot. If that solves things and you can now boot normally, reinstall the nvidia driver for the new kernel:
sudo apt-get install nvidia-kernel-dkms linux-headers \
nvidia-settings nvidia-xconfig
sudo nvidia-xconfig
| Linux Mint Debian quits on boot |
1,466,097,343,000 |
I created a virtual mouse driver according to the Essential Linux Device Drivers book. After I write coordinates with echo x y > /sys/ ... /coordinates into the sysfs node, my program generates event packets through the event interface /dev/input/event5 (I checked this). This event interface is attached to GPM with gpm -m /dev/input/event5 -t evdev. But the mouse doesn't move.
Maybe this will help: in Xorg.0.log i see the following:
[ 666.521] (II) config/udev: Adding input device (/dev/input/event5)
[ 666.521] (II) No input driver/identifier specified (ignoring)
It seems that the code is OK, but something external is interfering with my module's operation.
|
I spent a huge amount of time resolving this issue, and I would like to help other people who run into this problem. I think some external X11 feature interfered with my module's operation. After disabling GDM it now works fine (runlevel 3). Working code can be found here: http://fred-zone.blogspot.ru/2010/01/mouse-linux-kernel-driver.html (working distro: Ubuntu 11.04, GDM disabled).
| Virtual mouse driver, possible X11 problems |
1,466,097,343,000 |
I'm trying to debug a linux driver and a particular piece of code is behaving very strangely. In order to see what's going on I've filled the code with printk statements so I can see exactly what the variables I'm interested in do as the code executes. Unfortunately, when printing the ring buffer with dmesg lots of lines appear to be randomly missing. Google tells me this is because I'm writing too much data to the ring buffer at once. I've tried increasing the ring buffer size to its maximum (1 << 21) and I've tried inserting udelays to slow the writing down but I'm still having the same problem.
What else can I try?
|
As far as I can tell, klogd uses a blocking read() to read from /proc/kmsg. It might help if you boost its priority via renice. You could also try writing the kernel logs to a ramfs/tmpfs to save some disk overhead, either via syslog, or with klogd's -f option to write directly to a file.
Otherwise, plan B is ftrace and trace_printk(): http://lwn.net/Articles/365835/
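For the renice route, here is a minimal sketch. The daemon name, the -10 priority, and a running klogd are all assumptions; the helper prints the command as a dry run so nothing needs root, and the decision logic is kept in a function so it can be tested in isolation:

```shell
#!/bin/sh
# Sketch: raise the priority of the daemon draining /proc/kmsg (klogd here;
# the name and the -10 value are assumptions, not prescriptions).
boost() {
    # $1: pid of klogd, or empty if it is not running
    if [ -n "$1" ]; then
        echo "renice -n -10 -p $1"    # drop the echo to actually apply it
    else
        echo "klogd not running"
    fi
}
boost "$(pgrep -x klogd 2>/dev/null || true)"
```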
| How to avoid overflowing the kernel printk ring buffer? |
1,354,782,873,000 |
I have a 3rd party device driver which I am trying to cross-compile. When I build the driver everything goes smooth but I don't see any driver.ko file, however driver.o file is generated fine and I don't see any error during the build process. I have also tried with the option V=1 and I see following error
echo;
echo " ERROR: Kernel configuration is invalid.";
echo " include/generated/autoconf.h or include/config/auto.conf are missing.";
echo " Run 'make oldconfig && make prepare' on kernel src to fix it.";
echo;
But my kernel configuration is correct, and I have tried a simple hello world module with this configuration; in that case I can build my module but I still see this error message. Also, I can see both files, include/generated/autoconf.h and include/config/auto.conf, in the kernel sources. Still, why am I unable to build my driver module?
I am cross-compiling the driver for an ARM platform, so /lib/modules will not help in this environment. Secondly, here is the output of the build.
LD [M] /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.o
Building modules, stage 2.
MODPOST 0 modules
make[2]: Leaving directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6'
As you can see above, ar6000.o is built properly without any error, but ar6000.ko is not being built; otherwise it would report "MODPOST 1 modules".
Since ar6000.ko is not being built, at the end of the complete build process I also get the following error:
cp: cannot stat `/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.ko': No such file or directory
2404 make[1]: *** [install] Error 1
Which is obvious. My problem is why I am not getting ar6000.ko in the first place. Searching Google, I found someone else who faced this issue and mentioned that running make with sudo resolved it, but that brought no luck for me!
I am wondering whether there is a problem in my kernel configuration (i.e. the driver looks for some configuration setting which I haven't enabled in my kernel, but in that case it should give a compiler error for the missing #define), or whether there is a problem with the driver's build system, which I am trying to figure out. My cross-compile environment is good, as I am seeing exactly the same issue while building the same driver for my (Ubuntu x86) machine!
Update # 1
Its a 3rd party driver package which also build other utilities along with the driver module. Here is the output of the driver module build process
make CT_BUILD_TYPE=MX6Q_ARM CT_OS_TYPE=linux CT_OS_SUB_TYPE= CT_LINUXPATH=~/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linu x-2.6.38-imx6 CT_BUILD_TYPE=MX6Q_ARM CT_CROSS_COM PILE_TYPE=~/bin/mgc/CodeSourcery/Sourcery_CodeBench_for_ARM_GNU_Linux/bin/arm-none-linux- gnueabi- CT_ARCH_CPU_TYPE=arm CT_HC_DRIVERS=pci_std/ CT_MAKE_INCLUDE_OVERRIDE= CT_BUILD_OUTPUT_OVERRIDE=/home/far shad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553 /imx6build/host/.output/MX6Q_ARM-SDIO/image -C /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux _release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/sdiostack/src default
make[3]: Entering directory `/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olc a3.1RC_553/imx6build/host/sdiostack/src'
make -C ~/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6 SUBDIRS=/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca 3.1RC_553/imx6build/host/sdiostack/src ARCH=arm CROSS_COMPILE=~/bin/mgc/CodeSourcery/Sourcery_CodeBench_for_ARM_GNU_Linux/bin/arm-none-linux-gnueabi- EXTRA_CFLAGS="-DLINUX -I/home/farshad/Work/CSP/board s/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/sdiostack/src/include -DDEBUG" modules
make[4]: Entering directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6'
Building modules, stage 2.
MODPOST 0 modules
make[4]: Leaving directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linu x-2.6.38-imx6'
Here is the Makefile of the driver module.
ifdef CT_MAKE_INCLUDE_OVERRIDE
-include $(CT_MAKE_INCLUDE_OVERRIDE)
else
-include localmake.$(CT_OS_TYPE).inc
-include localmake.$(CT_OS_TYPE).private.inc
endif
export CT_OS_TYPE
export CT_OS_SUB_TYPE
export CT_OS_TOP_LEVEL_RULE
export CT_PASS_CFLAGS
export CT_SRC_BASE
export CT_BUILD_SUB_PROJ
# this makefile can only be invoked from the /EMSDIO/src base
CT_SRC_BASE :=$(shell pwd)
# export flags for which HCDs to build. Set the hcd driver name in hcd/ in your localmake.*.inc file.
export CT_HC_DRIVERS
export PDK_BUILD
export HDK_BUILD
export ALL_BUILD
export ATHRAW_FD_BUILD
export BUS_BUILD
# For Linux
ifeq ($(CT_OS_TYPE),linux)
#make a copy for linux 2.4
EXTRA_CFLAGS += -DLINUX -I$(CT_SRC_BASE)/include
ifneq ($(CT_RELEASE),1)
EXTRA_CFLAGS += -DDEBUG
endif
export EXTRA_CFLAGS
CT_SRC_OUTPUT :=$(CT_SRC_BASE)/../output
ifdef CT_BUILD_OUTPUT_OVERRIDE
_CT_COMPILED_OBJECTS_PATH :=$(CT_BUILD_OUTPUT_OVERRIDE)
_MAKE_OUTPUT_DIR :=
_CLEAN_OUTPUT_DIR :=
else
_CT_COMPILED_OBJECTS_PATH := $(CT_SRC_OUTPUT)/$(CT_BUILD_TYPE)
_MAKE_OUTPUT_DIR := mkdir --parents $(_CT_COMPILED_OBJECTS_PATH)
_CLEAN_OUTPUT_DIR := rm -R -f $(CT_SRC_OUTPUT)
endif
ifeq ($(CT_OS_SUB_TYPE),linux_2_4)
CT_PASS_CFLAGS := $(EXTRA_CFLAGS)
_CT_MOD_EXTENSION :=o
ifeq ($(ALL_BUILD),1)
subdir-m += busdriver/ lib/ hcd/ function/
else
ifeq ($(BUS_BUILD),1)
subdir-m += busdriver/ lib/ hcd/
else
ifeq ($(PDK_BUILD),1)
subdir-m += function/
else
ifeq ($(HDK_BUILD),1)
subdir-m += hcd/ function/
endif
endif
endif
endif
# add in rules to make modules
CT_OS_TOP_LEVEL_RULE :=$(CT_LINUXPATH)/Rules.make
include $(CT_OS_TOP_LEVEL_RULE)
else
#2.6+
_CT_MOD_EXTENSION :=ko
ifeq ($(ALL_BUILD),1)
obj-m += busdriver/ lib/ hcd/ function/
else
ifeq ($(BUS_BUILD),1)
obj-m += busdriver/ lib/ hcd/
else
ifeq ($(PDK_BUILD),1)
obj-m += function/
else
ifeq ($(HDK_BUILD),1)
obj-m += hcd/ function/
endif
endif
endif
endif
endif
ifdef CT_BUILD_SUB_PROJ
_CT_SUBDIRS=$(CT_BUILD_SUB_PROJ)
else
_CT_SUBDIRS=$(CT_SRC_BASE)
endif
ifdef CT_CROSS_COMPILE_TYPE
CT_MAKE_COMMAND_LINE=$(CT_OUTPUT_FLAGS) -C $(CT_LINUXPATH) SUBDIRS=$(_CT_SUBDIRS) ARCH=$(CT_ARCH_CPU_TYPE) CROSS_COMPILE=$(CT_CROSS_COMPILE_TYPE)
else
CT_MAKE_COMMAND_LINE=$(CT_OUTPUT_FLAGS) -C $(CT_LINUXPATH) SUBDIRS=$(_CT_SUBDIRS)
endif
makeoutputdirs:
$(_MAKE_OUTPUT_DIR)
default: makeoutputdirs
echo " ************ BUILDING MODULE ************** "
$(MAKE) $(CT_MAKE_COMMAND_LINE) EXTRA_CFLAGS="$(EXTRA_CFLAGS)" modules
echo " *** MODULE EXTENSION = $(_CT_MOD_EXTENSION)"
$(CT_SRC_BASE)/../scripts/getobjects.scr $(CT_SRC_BASE) $(_CT_COMPILED_OBJECTS_PATH) $(_CT_MOD_EXTENSION)
ifeq ($(CT_OS_SUB_TYPE),linux_2_4)
# on 2.4 we can't invoke the linux clean with SUBDIRS, it will just clean out the kernel
clean:
find $(_CT_SUBDIRS) \( -name '*.[oas]' -o -name core -o -name '.*.flags' -o -name '.ko' -o -name '.*.cmd' \) -type f -print \
| grep -v lxdialog/ | xargs rm -f
$(_CLEAN_OUTPUT_DIR)
else
clean:
$(MAKE) $(CT_MAKE_COMMAND_LINE) clean
find $(_CT_SUBDIRS) \( -name '*.[oas]' -o -name core -o -name '.*.flags' \) -type f -print \
| grep -v lxdialog/ | xargs rm -f
$(_CLEAN_OUTPUT_DIR)
endif
endif
# For QNX
ifeq ($(CT_OS_TYPE),qnx)
LIST=VARIANT
EARLY_DIRS=lib
##ifndef QRECURSE
QRECURSE=./recurse.mk
##ifdef QCONFIG
###QRDIR=$(dir $(QCONFIG))
##endif
##endif
include $(QRDIR)$(QRECURSE)
endif
|
OK, I have figured out the problem. I had a square bracket character "[" in the module source directory path:
LD [M] /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.o
Removing this from the path worked well and I got my kernel module object files. I have renamed
ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553
to
ar6003,
and also tested with
ar6003_3.1_RC_Linux_release_posted_2011_8_19_olca3.1RC_553
Both worked fine. I was building on Ubuntu 10.04. A colleague of mine built from the same sources, with the "[" character in his path, on Ubuntu 11.04, and the kernel module object file built nicely; this also suggests changed behaviour between versions of grep, find, awk or some such utility that the kernel build system uses, resulting in this issue.
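For what it's worth, here is one plausible failure mode (with made-up names): "[" opens a character class in shell globs, so an unquoted path containing it can silently expand to a different name in any script that relies on pathname expansion:

```shell
cd "$(mktemp -d)"
mkdir 'dir_[v1]'                # directory name literally contains brackets
touch dir_v 'dir_[v1]/ar6000.o'
# Unquoted, dir_[v1] is a glob matching the 5-character name dir_v,
# not the directory we created
echo dir_[v1]
# → dir_v
```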
Regards,
Farrukh Arshad.
| Building kernel module |
1,354,782,873,000 |
I'm making a simple kernel auto-build script. Right now I need to detect whether there is a new config option that needs to be answered by the user (I don't want to use the default value); if so, the script launches make menuconfig first, otherwise it skips that part.
Normally the kernel build just asks me to pick between N, Y, and M.
Is it possible?
|
Check the output of make listnewconfig.
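A minimal sketch of how the auto-build script could use it. The make invocations assume you run this from a configured kernel source tree; the decision logic is kept in a function so it can be tested outside one:

```shell
#!/bin/sh
# Sketch: run menuconfig only when `make listnewconfig` reports new options.
needs_menuconfig() {
    # $1: output of `make -s listnewconfig` (empty means nothing new)
    [ -n "$1" ]
}

new_opts=$(make -s listnewconfig 2>/dev/null || true)
if needs_menuconfig "$new_opts"; then
    printf 'New config options found:\n%s\n' "$new_opts"
    make menuconfig
else
    echo "No new config options; skipping menuconfig."
fi
```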
| How can I tell if a kernel has some "new" config? |
1,354,782,873,000 |
I'm using Arch Linux, kernel 3.5.3 on a MacBook. I'm trying to get the keyboard backlight working. I found the driver in the AUR: http://aur.archlinux.org/packages.php?ID=25467, but I'm having trouble compiling it.
When I run makepkg i get:
==> Making package: nvidia-bl 0.16.7-1 (Sat Sep 1 03:23:25 UTC 2012)
==> Checking runtime dependencies...
==> Installing missing dependencies...
Password:
target not found: kernel26<2.6.34
==> ERROR: 'pacman' failed to install missing dependencies.
I'm a new Linux user. Does this mean that I need to downgrade my Linux version to use this driver?
|
Looks like your PKGBUILD is outdated (0.16.7-1, but the current is 0.17.3-5). Try downloading the nvidia-bl tarball again and building it.
| Can't install driver in Arch |
1,354,782,873,000 |
Do the ext4 / cifs filesystems need kernel NLS support? I'm not sure whether it's handled by user-space programs (decoding / encoding)?
|
NLS allows normalization of the character sets used for filenames across the whole system, so you can have different charsets on two different systems and still get correct mappings.
So yes, it's necessary, especially for CIFS, which AFAIK uses Unicode by default on newer servers, while your local system might have different settings (usually UTF-8 these days, fortunately).
Unfortunately, applications don't handle that (and why should they?).
| Is Native Language Support (NLS) kernel support still necessary? |
1,354,782,873,000 |
I'm suffering from a kernel bug in a production environment. The problem isn't causing a complete outage, but it's degrading service.
These are soft lockups.
I'd like to try a newer kernel, however Squeeze only has 2.6.32.5, but kernel.org has 2.6.32.59. I've compiled from source in the past, but should this really be necessary? I'd really rather not put other admins through manual tracking of security bugs.
Backports seems to no longer have 2.6 kernels, they're all 3.2. 3.2 is a PITA to install because it pulls in a mountain of dependencies. E.g., linux-base > 3
http://packages.debian.org/squeeze-backports/linux-image-3.2.0-0.bpo.2-686-pae
Those dependencies could break my abilty to boot back in 2.6, cutting out my backout plan and potentially leading to a very long production outage.
There are references online of backports having a newer 2.6.32 kernel. Where did they go?
Thanks,
|
Squeeze currently has 2.6.32-59. For details have a look at the changelog entry for 2.6.32-42.
What exactly is your problem and why do you think upgrading the kernel may help? As you can always downgrade packages there should be no problem using the newest Linux kernel in backports.
| install 2.6.32.59 on squeeze |
1,354,782,873,000 |
I've been asked to investigate adding extra memory (> 16GB) to a RHEL 4 server and moving to a hugemem kernel. The server is used for Oracle RAC.
Is it just a case of installing the hugemem kernel and booting into it? Or is it more complex than that?
Any tips / gotchas?
|
I use a BIGMEM kernel on Debian squeeze. I've used a non-BIGMEM kernel before as well. I'm not aware of any issues or gotchas, and no changes to the system are necessary when switching between a BIGMEM and non-BIGMEM kernel. Of course, you only need to use a BIGMEM kernel if you are running a 32 bit kernel. If you have any specific concerns, mention them in the question.
| What are the Implications of moving to Huge mem kernel on RHEL4 |
1,354,782,873,000 |
I have an issue with Fedora 15's available memory. The laptop it is installed on has 4GB of SDRAM, but Fedora only sees 2GB.
[njozwiak@calvin xpmc6720]$ free -m
total used free shared buffers cached
Mem: 2193 1994 198 0 59 1405
-/+ buffers/cache: 529 1663
Swap: 4255 0 4255
I realize F15 is a 32-bit OS, but I should still be able to access more than 2193MB. Any ideas?
[njozwiak@calvin xpmc6720]$ uname -a
Linux calvin 2.6.38.6-27.fc15.i686 #1 SMP Sun May 15 17:57:13 UTC 2011 i686 i686 i386 GNU/Linux
|
What kernel are you using? The kernel needs to have BIGMEM support.
Fedora 15 offers BIGMEM support in PAE kernels. So install (for example)
the Fedora 15 2.6.38 PAE kernel.
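If you want to confirm the CPU itself supports PAE before switching kernels, the flag shows up in /proc/cpuinfo. A quick sketch (on non-x86 hardware the flag is simply absent):

```shell
# Check whether this CPU advertises PAE; the flags line comes from
# /proc/cpuinfo on x86 systems
if grep -qw pae /proc/cpuinfo 2>/dev/null; then
    echo "PAE supported: a PAE kernel can address >4GB of RAM on 32-bit"
else
    echo "PAE flag not found"
fi
```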
| Fedora 15 - Limited RAM Access |
1,354,782,873,000 |
I have a new Sandy Bridge i5-2500 and an Intel H67 motherboard. As the onboard video didn't work, I put in an older 8600 GTS graphics card. However, S3 suspend won't work (and I can't test without the dedicated card). I've had this experience with all my other desktops (all of which have nVidia cards).
Any help diagnosing what might be the problem would be appreciated. I've been trying a few s2ram parameters from en.opensuse.org/SDB:Suspend_to_RAM, but this is very time consuming and so far fruitless. Namely:
Of course, if you have experience with a H67 or P67 motherboard, that would be most useful.
If you have owned Intel desktop motherboards, how often does S3 work?
same with nVidia graphics cards
Does the power supply ever make a difference?
|
It seems to have been fixed now, I'm using OpenSuSE 11.4. My uname string is
Linux linux-sdia 2.6.37.6-0.7-desktop #1 SMP PREEMPT 2011-07-21 02:17:24 +0200 x86_64 x86_64 x86_64 GNU/Linux
and the nVidia driver is 275.21.
| S3 sleep problems -- nVidia or Intel H67 (Sandy Bridge motherboard) issue? |
1,354,782,873,000 |
I am trying to compile the mainline Linux kernel with a custom config. This one!
Running on a 64 bit system.
At the last step, when linking the Kernel, it fails because it goes OOM (error 137).
[...]
DESCEND objtool
INSTALL libsubcmd_headers
CALL scripts/checksyscalls.sh
LD vmlinux.o
Killed
make[2]: *** [scripts/Makefile.vmlinux_o:61: vmlinux.o] Error 137
make[2]: *** Deleting file 'vmlinux.o'
[...]
ulimit -a says that per process memory is unlimited.
I have tried make, make -j1, and make -j4; no difference whatsoever.
Same results with gcc as compiler instead of clang.
Does anyone have a freaking clue on why the compilation eats up so much RAM? It's getting unaffordable to develop Linux :\
|
It's getting unaffordable to develop Linux
I am afraid it has always been.
32GB RAM is common on kernel devs desktops.
And yet some of them started encountering ooms when building their allyesconfig-ed kernel.
Lucky you… who are apparently not allyesconfig-ing… you should not need more than 32G… ;-)
On a side note, since CONFIG_HAVE_OBJTOOL=y appears in your .config file, you might get some benefit from the patches submitted as part of the discussion linked above.
Does anyone have a freaking clue on why the compilation eats up so
much RAM?
You are probably the only one who could tell precisely (after considering the size of the miscellaneous *.o files in each top-level directory of the kernel source tree, since compilation itself completed successfully).
From the information you provide (the kernel .config file) I can only venture a priori guesses:
A/ every component of your kernel will be statically linked :
(since I notice that all your selected OPTION_* are marked "=y")
There is nothing wrong with this per se since there can be many good reasons for building everything in-kernel but this will definitely significantly increase the RAM needed when linking all this together.
=> You probably should consider building kernel parts as modules wherever possible.
B/ a good number of CONFIG_DEBUG_* options appear to be set.
Once again there is nothing wrong with that per se; however, it is likely to significantly increase the RAM needed to link the different parts, if not more, since it implies CONFIG_KALLSYMS_*=y.
On a side note, considering the debugging features selected, in addition to CONFIG_HZ_100=y, I would assume that you are not chasing the best possible latencies / performance.
=> I would then consider the opportunity to prefer CONFIG_CC_OPTIMIZE_FOR_SIZE
| Linux build with custom config using all RAM (8GB)? |
1,354,782,873,000 |
Can someone please explain how to apply a patch file to an Ubuntu server kernel? I'm trying to apply this patch file, which enables the tcp_collapse_max_bytes option in the TCP communication options on an Ubuntu server. I followed this answer and tried to apply the .patch file but got the same error.
Here are my steps:
first, I changed the directory to the kernel source folder:
cd /usr/src/linux-headers-5.15.0-58-generic
Then I run the command patch -p0 ~/file.patch
But I got the following info and it keeps asking me to enter the file to patch
can't find file to patch at input line 44
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
...
...
File to patch:
I think I am working in wrong directory but I am not sure.
|
I found that I had to download the kernel source code and patch that downloaded source. For some reason that I don't know exactly, the code in /usr/src/ was not the exact code of the official Linux kernel. (Probably my VPS provider modified it.)
I followed these steps and applied the patch after downloading the codes, then compiled the kernel and installed it.
So, the following steps helped me:
Download the corresponding linux kernel source code, from official websites such as kernel.org
Extract the kernel and change the directory, i.e. cd linux-5.15.**
Apply the patch patch -p1 < path/to/patch/0014-add-a-sysctl-to-enable-disable-tcp_collapse-logic.patch
Compile the patched kernel and install it. This step may be slightly different between distributions and requires some dependencies, but usually involves the following commands:
make menuconfig
make
Note that step 4 may differ between Linux distributions and may require installing other packages before you can compile and install the kernel.
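To see why -p1 is the right strip level for kernel-style patches (which prefix paths with a/ and b/), here is a self-contained demo; the file names are made up for the illustration:

```shell
# Build a tiny tree, write a patch with a/ b/ prefixes (as kernel patches
# have), and apply it with -p1, which strips one leading path component.
cd "$(mktemp -d)"
mkdir -p net/ipv4
echo "old" > net/ipv4/tcp.c
cat > fix.patch <<'EOF'
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1 +1 @@
-old
+new
EOF
patch -p1 < fix.patch     # a/net/ipv4/tcp.c -> net/ipv4/tcp.c
cat net/ipv4/tcp.c
# → new
```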
| Applying patch file to ubuntu server |
1,354,782,873,000 |
This is a follow-up question to dentry/inode caching on multi-cpu machines / memory allocator configuration, but here I try to put the question differently.
My problem is that I have a dual socket machine, and memory for the kernel caches (dentry/inode/buffer) are allocated from bank0 (cpu0's memory bank), and that eventually gets consumed. However, bank1 is never used for caches, so there is plenty of free memory in the overall system. So in this state the memory allocator gets memory from bank1, regardless of where my process is running (even if I set memory affinity). Due to the different memory latency when accessing memory from different banks, this means that my process (which is somewhat memory access bound with a low cache-hit ratio) will run much slower when scheduled on the cores in cpu0 than when scheduled on the cores in cpu1. (I'd like to schedule two processes, one for each cpu, and a process should use all cores of its cpu. I don't want to waste half the cores.)
What could I do to ensure that my process can get memory from the local bank, no matter on which cpu it gets scheduled on?
I tried playing with the kernel VM parameters, but they don't really do anything. After all, half the memory is free! These caches in kernel simply do not seem to take NUMA issues into account. I tried to look into cgroups, but as far as I can understand, I can't really control the kernel that way. I did not really find anything that would address my issue :-(.
I can, of course, drop all caches before starting my processes, but that is a bit heavy handed. A cleaner solution would be, for example, to limit the total cache size (say 8GB). True, cpu0 would still have a bit less memory than cpu1 (I have 64GB in both banks), but I can live with that.
I'd be grateful for any suggestions... Thanks!
|
"What is going on with kernel caching on NUMA architecture" under your linux-3.10 is governed by the zone_reclaim_mode sysctl, which lets you select the action to be taken when a zone runs out of memory.
In other words, it determines whether the page allocator will reclaim easily reusable pages before allocating off-node pages, or the opposite. (Refer to the official documentation linked above for more details.)
Several patches regarding the default settings arrived during the linux-4 series; then, in the linux-5 series, that global (node-wide) setting became a per-node setting: node_reclaim_mode.
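A quick way to inspect (and, with root, change) the policy, assuming your kernel still exposes the classic global sysctl:

```shell
# Read the current policy: 0 means the allocator falls back to other nodes
# before reclaiming locally; nonzero enables local zone reclaim
cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || echo "not available"
# To prefer local reclaim over off-node allocation (needs root):
# echo 1 > /proc/sys/vm/zone_reclaim_mode
```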
| NUMA aware caching on linux |
1,672,002,091,000 |
Is it possible to check the different status of video drivers when it is on, is off, in error, in no-signal?
Example: monitor off - some state 0-, monitor no-signal - some state not connected and so on?
|
Try this: grep . /sys/class/drm/card0-*/{status,enabled,dpms}.
If this doesn't satisfy your needs, just leave a comment below.
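To make the expected output shape concrete, here is the same grep run against a mocked-up tree (the connector name card0-HDMI-A-1 and the values are made up; on a real system point it at /sys/class/drm instead):

```shell
# Simulate the DRM connector layout and read it back the same way
base=$(mktemp -d)
mkdir -p "$base/card0-HDMI-A-1"
echo connected > "$base/card0-HDMI-A-1/status"
echo enabled   > "$base/card0-HDMI-A-1/enabled"
echo On        > "$base/card0-HDMI-A-1/dpms"
# grep with multiple files prints "path:value" for each node
grep . "$base"/card0-*/status "$base"/card0-*/enabled "$base"/card0-*/dpms
```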
| How to detect display driver info |
1,672,002,091,000 |
My goal is to read the CPU cycle counter via the ARMv8 registers. Both the kmod and my test program work fine on my ARM laptop. However, on my DB410c board, after loading the kmod, executing the binary that accesses pmccntr_el0 gives Illegal instruction.
Could anyone help me understand why, given that I have already enabled user-mode access via the PMUSERENR_EL0_EN bit?
This is the example code for the testing program
#include <stdio.h>
#include <stdint.h>
static inline uint64_t
read_pmccntr(void)
{
uint64_t val;
asm volatile("mrs %0, pmccntr_el0" : "=r"(val));
return val;
}
int main(){
uint64_t counter = read_pmccntr();
printf("value:%lu\n",counter);
return 0;
}
os version:
laptop: Linux debian-gnu-linux-10 4.19.0-18-arm64
db410c: Linux linaro-developer 5.15.0-stm-qcomlt-arm64
|
I found out the reason: the user-mode access bit (PMUSERENR_EL0.EN) needs to be properly enabled before pmccntr_el0 can be read from user space.
| userspace program gets "Illegal instruction" when reading pmccntr_el0 (with user mode access enabled) |
1,672,002,091,000 |
I am sending a synchronous control message to a USB device with the call usb_control_msg. It is taking 0.25 second to complete. Is this normal/expected? The USB port is USB 3.0. The device is a Cypress FX3 module. A similar test done on a Windows system (same port, device, FX3 firmware) returns every message in much less time. With Linux, I noticed that the first message sent takes 10 microseconds to complete, then the next 19 or so each take 0.25 second to complete. Then there is another message that completes quickly, followed by another 19 or so slow messages. Also, I cannot send control messages with setup data longer than 8 bytes. I am going to try to implement asynchronous messages, but it would still be good to know whether this behaviour can be improved for synchronous calls.
ktime_t start_time = ktime_get();
int ret = usb_control_msg(device, usb_sndctrlpipe(device, 0), 0, 0x40, 0, 0, &command_data_payload, 8, 5000);
if (ret < 0)
printk(KERN_ERR "Messaged failed: %d\n", ret);
else
printk("message took: %llu\n", ktime_get() - start_time);
Correction: I do not know whether the Windows app I used in the comparison uses sync or async calls. I will definitely try to implement the test with async calls.
Update: Using async calls, the messages are sent out much quicker, but it still takes 0.25 second for the completion callback to be called. For 20 messages sent out sequentially, 1 completed in very little time and the others each took 0.25 seconds. Maybe the delay is a function of the FX3 USB device module. Also, on closer inspection on Windows, the messages also mostly take 0.25 second each to complete, with a few which complete more quickly.
|
I am going to guess that the delay was caused by the Cypress FX3 module, because async messages sent out sequentially go out quickly, yet the completion callbacks and the responses from the USB device still take 0.25 second to occur. I am also including the code for the async messages.
static struct usb_ctrlrequest ctrl_request;
static ktime_t last_time = 0;
static void write_control_callback(struct urb *urb)
{
ktime_t now_time = ktime_get();
if (last_time != 0)
printk("write_control_callback: %llu\n", now_time - last_time);
last_time = now_time;
}
static long send_command(uint8_t * command_data_payload)
{
int ret;
struct urb * cUrb;
void * buf;
cUrb = usb_alloc_urb(0, GFP_KERNEL);
if (!cUrb)
return 1;
buf = usb_alloc_coherent(device, 32, GFP_KERNEL, &cUrb->transfer_dma);
if (!buf)
return 1;
memcpy(buf, command_data_payload, 32);
ctrl_request.bRequest = 0;
ctrl_request.bRequestType = 0x40;
ctrl_request.wValue = 0;
ctrl_request.wIndex = 0;
ctrl_request.wLength = 32;
usb_fill_control_urb(cUrb, device, usb_sndctrlpipe(device, 0), (unsigned char*)(&ctrl_request),
buf, 32, write_control_callback, NULL);
ret = usb_submit_urb(cUrb, GFP_KERNEL);
if (ret)
printk("could not submit urb\n");
}
Warning!!!: I do not release the allocated buffer correctly in this code.
| Why does usb_control_msg take 0.25 second to complete |
1,672,002,091,000 |
So I'm building a custom Linux-based OS, and I chose to run it as a RAM disk (initramfs). Unfortunately, I keep getting a Kernel Panic during boot.
RAMDISK: gzip image found at block 0
using deprecated initrd support, will be removed in 2021.
exFAT-fs (ram0): invalid boot record signature
exFAT-fs (ram0): failed to read boot sector
exFAT-fs (ram0): failed to recognize exfat type
exFAT-fs (ram0): invalid boot record signature
exFAT-fs (ram0): failed to read boot sector
exFAT-fs (ram0): failed to recognize exfat type
List of all partitions:
0100  4096 ram0   (driver?)
0101  4096 ram1   (driver?)
0102  4096 ram2   (driver?)
0103  4096 ram3   (driver?)
0104  4096 ram4   (driver?)
0105  4096 ram5   (driver?)
0106  4096 ram6   (driver?)
0107  4096 ram7   (driver?)
0108  4096 ram8   (driver?)
0109  4096 ram9   (driver?)
010a  4096 ram10  (driver?)
010b  4096 ram11  (driver?)
010c  4096 ram12  (driver?)
010d  4096 ram13  (driver?)
010e  4096 ram14  (driver?)
010f  4096 ram15  (driver?)
No filesystem could mount root, tried: vfat msdos exfat ntfs ntfs3
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
Any chance this is something missing in my kernel build?
Here's how I've designed the OS:
Component   | My Choice
----------- | ---------------------------------------
Init Daemon | initrd
Commands    | busybox 1.35.0
Kernel      | Linux 5.15.12
filesystem  | msdos, fat, exfat, ext2, ext3, or ext4
Bootloader  | syslinux or extlinux
NOTES: I tried each file system one at a time, and all provide the same response, which leads me to believe that it is not an issue with the filesystem itself. I also tried both syslinux and extlinux for testing purposes.
Here's how I've structured my disk:
/media/vfloppy
└── [ 512 Jan 3 08:06] boot
├── [ 36896 Jan 3 08:06] initramfs.cpio.gz
├── [ 512 Jan 3 08:06] syslinux
│ ├── [ 283 Jan 3 08:06] boot.msg
│ ├── [ 120912 Jan 3 08:06] ldlinux.c32
│ ├── [ 60928 Jan 3 08:06] ldlinux.sys
│ └── [ 173 Jan 3 08:06] syslinux.cfg
└── [ 939968 Jan 3 08:06] vmlinux
Here is my syslinux.cfg:
DISPLAY boot.msg
DEFAULT linux
label linux
KERNEL /boot/vmlinux
INITRD /boot/initramfs.cpio.gz
APPEND root=/dev/ram0 init=/init loglevel=3
PROMPT 1
TIMEOUT 10
F1 boot.msg
I've also enabled the following filesystem options in my kernel's .config file:
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_FS_MBCACHE=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_PROC_FS=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_RD_GZIP=y
CONFIG_DECOMPRESS_GZIP=y
|
The issue in this instance was directly related to the CPIO archive I created. Although I was using the correct cpio and gzip commands, I was piping incorrectly due to a typo in the TinyCore book. Using the following commands, I was able to create a cpio archive that was readable:
cd fs_folder
sudo find | sudo cpio -o -H newc > ../fs.cpio
gzip -2 ../fs.cpio
advdef -z4 ../fs.cpio.gz
| Custom Build - Unable to mount filesystem |
1,672,002,091,000 |
I'm using Arch on a very old computer with Chrome and it crashes pretty often, plus the CPU consumption is very high.
I read that using cfs-zen-tweaks could improve the responsiveness.
Which is better, using cfs-zen-tweaks or a linux-zen kernel? What is the difference?
|
I have an older laptop with low processing power as well; I use it on the go as it's very light and the battery lasts quite long. It wasn't unstable, but it got unresponsive whenever I ran an update in the background. I run Arch on it.
I moved from the vanilla kernel to the cfs-zen tweaks, which made a noticeable difference in responsiveness. I wanted even more and installed the zen kernel; I can't really tell the difference from the cfs patches alone, but in general it responds really well even in stressful situations.
Just give it a go and test it out; on most distributions it's quite easy to switch back and forth between the zen kernel and the vanilla kernel.
| linux-zen VS cfs-zen-tweaks |
1,672,002,091,000 |
I'm trying to boot CentOS, already installed on an appliance.
This appliance has only two input connectors:
1 USB port, where I plugged in a USB hub, to which I connected a keyboard
1 Male 9 Pin Serial port / RS232
It also has 2 Gigabit Ethernet ports, one of which is used for remote administration with Intel BMC and Intel SOL (Serial-Over-LAN)
I'm connected to the server via the Intel SOL.
Now the problem:
when I boot the server, I successfully reach the GRUB loader and I can select the entries, but after that the output shows only the � character and nothing more happens, whichever entry I select.
I tried to boot from a live CentOS image (using a USB pen drive connected to the USB hub), but in that case too I can see the GRUB loader and select the desired entry, and then it gets stuck on the � character.
I also tried to:
disable the quiet mode from the kernel flags
add init=/bin/bash to the kernel lines
add console=tty0 console=ttyS0 console=tty1 console=ttyS1 to the kernel lines
in Grub Rescue mode, chain load and boot the grubx64.efi from the OS already installed and from the USB Pendrive
I'm honestly out of options now, any suggestion would be greatly appreciated.
|
Adding console=ttyS0,115200 console=ttyS1,115200 to the linux command and dropping rhgb solved the problem.
| Missing console output after grub |
1,630,600,137,000 |
I am trying to declare a new variable in vvar.h and define it near my new vDSO function, so that I can use the variable in my vDSO function.
I am having trouble with VVAR. According to the description in arch/x86/include/asm/vvar.h, when I declare a new variable here as DECLARE_VVAR(0, int, count), I should use DEFINE_VVAR(type, name) to define the variable somewhere else.
The problem is that after I define the variable somewhere else, like DEFINE_VVAR(int, count), assigning an integer value to this variable count fails. This is because after kernel version 5.2, #define DEFINE_VVAR(type, name) was changed from #define DEFINE_VVAR(type, name) type name to #define DEFINE_VVAR(type, name) type name[CS_BASES]. Now the variable count is an array of integers instead of a single integer, so I can't assign an integer value to it. Do you know how to fix this?
VVAR.h: https://elixir.bootlin.com/linux/v5.12/source/arch/x86/include/asm/vvar.h#L43
|
DECLARE_VVAR and DEFINE_VVAR are arch-specific vDSO implementation details, and you shouldn’t use them to add new vDSO data.
To add a vDSO variable, you should modify struct vdso_data in include/vdso/datapage.h (you'll see the relevance of the array construction there; it's tied to clock sources, hence the reference to CS_BASES), or, if it's arch-specific, the relevant include/asm/vdso/data.h file (currently, s390 is the only architecture with arch-specific vDSO data in such a file; other architectures take different approaches, see PowerPC for one).
| how to declare a new variable in vvar.h |create a vdso in linux |
1,630,600,137,000 |
When I listed the running processes,
I could see that several processes such as 'chrome', 'notepad', 'intellij' and 'sublime editor' have "tty = ?".
How, then, are they able to read input from a keyboard?
Is TTY always related to terminals/cli?
|
XWindows applications receive keyboard and mouse input from the X Server, and display things through the X Server. It is unusual for these applications to have a controlling terminal (the tty column) unless they are started from a command line that has a controlling terminal.
There are really only a few XWindows applications that need a controlling terminal or input or output redirection. Amongst them are xclip, xev, xprop, xwininfo, xkill, xlsfonts, xlsclients and xlsatoms. Of those, only one displays a window and three can temporarily change the cursor.
It is actually more common for an XWindows application to host a tty. xterm and every other terminal application provides a tty to the shell or other programs running "inside" it.
Finally, note that the X Server itself usually DOES (at least in Linux) have a terminal associated with it. This is more so that it can fit into the virtual consoles than anything else, but it does allow the keyboard and mouse to be switched between the X server and the other virtual consoles.
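As a quick check, the "?" that ps prints in the TTY column comes straight from the kernel's tty_nr field in /proc/&lt;pid&gt;/stat, where 0 means "no controlling terminal". A small sketch, assuming a Linux /proc:

```shell
# tty_nr is the 5th field after the ") " that closes the comm field;
# stripping up to ") " first avoids being fooled by spaces in comm.
# (/proc/self here is the sed process, which inherits the shell's tty.)
tty_nr=$(sed 's/^[^)]*) //' /proc/self/stat | awk '{print $5}')
if [ "$tty_nr" -eq 0 ]; then
    echo "no controlling terminal (ps would show '?')"
else
    echo "controlling terminal device number: $tty_nr"
fi
```

Run from an xterm it reports a device number; run from a daemon or X session startup it reports 0, matching the "?" those GUI processes show.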
| TTY = ?, How input is read |
1,630,600,137,000 |
I'm trying to create a file with the following specifications.
The file "myfile" in a debugfs sub-directory is to be read only by any user, and when read,
should return the current value of the jiffies kernel timer.
I have written the following line of code in the function my_init that is directly called through module_init(my_init).
if (!debugfs_create_u32("myfile", 0444, handle, (u32*)&myfile))
goto fail;
When I run make, I get the following error:
error: invalid use of void expression
if (!debugfs_create_u32("myfile", 0444, handle, (u32*)&myfile))
It would be great if somebody could help me clear this error.
Thanks in advance :)
|
The return value was removed from debugfs_create_u32 at 2b07021a940ce1cdec736ec0cacad6af77717afc which went into v5.7:
debugfs: remove return value of debugfs_create_u32()
No one checks the return value of debugfs_create_u32(), as it's not
needed, so make the return value void, so that no one tries to do so in
the future.
so you can just get rid of that if.
The problem for me is that I wanted to remove the debugfs file afterwards at module unload, and my entry was on the debugfs toplevel with NULL, so I can't just debugfs_remove_recursive.
So I think I will just get the dentry with:
debugfs_lookup("basename", NULL)
Maybe we are just not supposed to create debugfs entries on toplevel, as it pollutes namespaces too much if you have more than one.
It is a bit inconsistent that some very similar calls still return the file however, e.g.:
struct dentry *debugfs_create_ulong
so I think this was just an API mistake. Are kernel devs not the perfect gods I once thought? :-O
| creating a debugfs file that is used to read/write u32 value |
1,592,330,370,000 |
I am trying to create my own PID 1 init script, to be called from the boot cmdline with init=/myscript. How can I make it work on a real filesystem, with any kernel?
When it runs in an initrd, it works fine and can mount things, etc. - but when I use it on my filesystem without an initrd, it fails to mount things, because:
mount: only root can do that (effective UID is 1000)
When I strace any command that fails, it inevitably issues geteuid32() and that returns 1000. Why? How can I run as euid 0?
|
There's no special treatment for init on initrd, so there must be some other issue.
If run as root, the euid will match the owner of the binary if the setuid bit is set.
Check the ownership on /bin/mount.
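To see the mechanism concretely without touching the real mount binary, here is a minimal demonstration using a throwaway temp file (so it runs without root); the same checks can then be pointed at "$(command -v mount)":

```shell
# chmod 4xxx sets the setuid bit; ls shows it as 's' in the
# user-execute position, and test -u detects it in scripts.
f=$(mktemp)
chmod 4755 "$f"
ls -l "$f"                       # permissions read -rwsr-xr-x
[ -u "$f" ] && echo "setuid bit is set"
rm -f "$f"
```

If `[ -u "$(command -v mount)" ]` is false on your system, that explains the "effective UID is 1000" error when mount is run outside the initrd.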
| Why does a shebang script run as init= have an euid of 0 when run from an initrd, but not otherwise? |
1,592,330,370,000 |
I am relatively new to Linux and I was reading a book for LPIC-1. While going through the modules chapter, I was checking my modules and noticed that some of them don't have a description:
ac97_bus, autofs4, cdrom, crc32_pclmul, crypto_simd, glue_helper, hid, jbd2, sunrpc, usb_common, usbcore
I tried Google but I didn't find any answer there, or on this site either.
So, can you tell me why those modules don't have a description?
Best regards,
me
|
It's not mandatory for a Linux kernel module to have a description filled in. If one does have it, you can find it in the module source code as a MODULE_DESCRIPTION declaration like this:
MODULE_DESCRIPTION("Intel HDA driver");
which you can inspect via modinfo on the .ko object:
$ modinfo snd_hda_intel
filename: /lib/modules/5.3.0-40-generic/kernel/sound/pci/hda/snd-hda-intel.ko
description: Intel HDA driver
license: GPL
But to make it short - as that's just for documentation purposes, some modules simply don't have it.
| Linux Module descriptions missing |
1,592,330,370,000 |
I am loading a kernel module at boot time, I added it to a config file in /etc/modules-load.d/, the module is loading correctly.
In my module I am using the wait_for_random_bytes() function from linux/random.h, so my module can have some delay in loading.
Are the modules loaded sequentially? Could this module of mine delay the loading of other modules?
Thanks!
|
What does the OS do?
On my Debian system (but I would bet that your CentOS does just the same), the module-loading part of initialization is done by /etc/init.d/kmod.
Below is an extract of that script:
files=$(modules_files)
if [ "$files" ] ; then
grep -h '^[^#]' $files |
while read module args; do
[ "$module" ] || continue
load_module "$module" "$args"
done
fi
Where:
modules_files is a shell function that parses various files and directories (including /etc/modules-load.d) and builds a list of modules to be loaded.
load_module is a shell function that does the modprobe work, plus some logging if the verbose flag is set.
So I would say that yes, modules are loaded sequentially, and if one blocks, it will block the other ones...
but ...
What does the kernel do?
Reading the source code of kernel/module.c, we can see that:
The syscall is probably implemented by the function load_module(). It performs a lot of work (initialization, memory allocation, sanity checks, signature checks, etc.) and returns with return do_init_module(mod); (line 3927).
The do_init_module() function performs, at line 3574, the following operation and, if everything went OK, returns 0:
if (mod->init != NULL)
ret = do_one_initcall(mod->init);
if (ret < 0) {
goto fail_free_freeinit;
}
My conclusion is that the syscall will return only when:
1. The module has been loaded into memory.
2. Its init() function has run successfully.
So if your call to wait_for_random_bytes() is part of your module's init function, then yes, it may block the loading of other modules.
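To see the sequential behaviour concretely, here is a runnable sketch of the kmod loop with modprobe swapped for echo; the file name modules.conf and the module names are made up for illustration:

```shell
# Stand-in for the real load_module, which calls modprobe.
load_module() { echo "loading $1 args='$2'"; }

cat > modules.conf <<'EOF'
# comment lines are skipped
loop max_loop=8
dummy
EOF

# Same parse-and-load loop as /etc/init.d/kmod: one module at a
# time, in file order, each waiting for the previous one to finish.
grep -h '^[^#]' modules.conf |
while read -r module args; do
    [ "$module" ] || continue
    load_module "$module" "$args"
done
```

If load_module were replaced by a modprobe that sleeps (e.g. inside wait_for_random_bytes()), every entry after it in the file would simply wait its turn.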
| Loading a kernel module at boot time is blocking? |
1,592,330,370,000 |
I have a Linux system with two interfaces lo and eth0, I have some iptables rules which will block some tcp ports.
Is it possible to test my own firewall rules by implementing a probing service that port-scans my own ports? The idea is to port-scan my external interface eth0, but from the inside.
I have made some basic implementations in C++ with raw sockets, but in the end the packets always go through the lo interface; I don't see anything on eth0.
It seems the kernel is taking a shortcut because the destination address equals the source address. Any thoughts or ideas on this problem?
|
You can use a network namespace, and link it to your system with a pair of veth interfaces. That way you'll really have two network stacks: the host's and the additional network namespace's. This will very cheaply create a host to scan from (far cheaper than a VM, since only the network is separated). The easiest command to manage this is ip netns (which is designed to keep network namespaces alive by mounting a reference to them even when no process is using them):
ip netns add scanner
ip link add name veth0 type veth peer netns scanner name veth0
ip address add 192.0.2.2/24 dev veth0
ip -n scanner address add 192.0.2.3/24 dev veth0
ip link set veth0 up
ip -n scanner link set lo up #not really needed, but can avoid a few surprising results
ip -n scanner link set veth0 up
ip netns exec scanner /path/to/my/probing_service ...
You now can scan 192.0.2.2 from 192.0.2.3, and this won't use lo. Of course it all depends on your firewall rules (you'd have to adapt IPs, interface names, both or a few other things).
| Test iptables from localhost |
1,592,330,370,000 |
I am using the Linux's traffic control (tc) utility, which to my understanding is used to configure the Linux kernel packet scheduler.
I am also using the netem command in tc to add delay, drop, or corrupt traffic.
My main question is, does the netem modify transport layer datagrams, IP packets, or Link layer frames (like Ethernet)?
I found this page which explains the network communication flow in the Linux kernel. It mentions that the shaping and queuing disciplines are made in the "Layer 2: Link layer (e.g. Ethernet)". Does this mean that netem adds its corruption, loss, or delay on frames (layer 2)?
But since tc filter allows you to apply traffic rules to a certain IP:port pair, does that mean it operates on the transport layer datagrams (layer 4)?
|
tc affects the queuing discipline, i.e. the order in which outgoing "packets" are sent to the hardware. The implementation operates on sk_buff structs, and
the documentation for sk_buff seems to imply that the packet format is whatever the particular network interfaces uses (e.g. Ethernet packets for an Ethernet interface).
So I'd assume netem corrupt adds corruption on this layer, which should be discovered through the layer-2 checksum (whatever layer 2 is for a particular interface).
In addition, sk_buff contains pointers to the higher layer payloads, which explains why tc filter can act on them.
| Does the Linux traffic control utility modify datagrams, IP packets, or frames? |
1,563,892,725,000 |
I am trying to connect a custom device to this Linksys router. The device has a firmware file, which I copy into the /lib/firmware folder. The issue I am facing is that if the device is connected at boot, I get an error saying the firmware file was not present in the /lib/firmware folder. But the device works fine if I connect it after boot-up.
I believe the issue is the way I am copying the firmware file. By default, the Linksys OpenWrt image uses squashfs which, upon further reading, is a read-only filesystem that uses overlayfs for writes; this might be the reason for the error, but I could be wrong.
What would be the correct way to put the firmware file into the router's filesystem so that the device works at boot?
|
Custom files can be "installed" by use of cp on a running system (which will add them to the overlay), or by use of the ./files/ directory in the build system (which will add them to the ROM)
You'll see that it is common to copy files needed by the wireless drivers during boot and that it works quite well. See, for example, /etc/hotplug.d/firmware/11-ath10k-caldata
| Custom Firmware Boot Issue For Linksys EA6350 |
1,563,892,725,000 |
My kernel driver needs to access battery's properties (get_property, set_property).
Problem: How to find the battery's struct power_supply?
I only find power_supply_get_by_name but there can be different names for the battery. I need to check the power_supply's type but this is where I am stuck.
A direct get_by_type or a get_all_power_supplies to check the type on my own or also a get_power_supply_names to pass to power_supply_get_by_name would be fine for me.
I want to avoid file accesses in the kernel so what is a better way to find the type="battery" power_supply?
I suspect I should grab the supply every time again because it may change or vanish / reappear? This driver can access the supply every couple of seconds in some situations so it would be nice to not spend a long time finding the battery.
|
Exactly a year late for the party. :) Here's the basic idea, which loops over all objects of the power-supply class.
#include <linux/power_supply.h>

static int power_supply_printer(struct device *dev, const void *data)
{
	struct power_supply *psy = dev_get_drvdata(dev);

	(void)data;
	printk(KERN_INFO "power-supply = %s\n", psy->desc->name); /* KERN_INFO, not KERN_INF */
	/* Return 1 if found, 0 if this is not valid. */
	return 0;
}

static int __init my_driver_init(void)
{
	struct device *dev;

	dev = class_find_device(power_supply_class, NULL, NULL /* data */, power_supply_printer);
	...
}
This function will iterate over all devices of the power_supply class. Note that as long as the callback function returns 0, it will check the next available device in that class.
| Find a power_supply of type |
1,563,892,725,000 |
I have been having problems compiling a router's kernel for QEMU. I have the router working in QEMU using an OpenWRT kernel, but networking does not work. This is why I want to compile the original kernel.
The command below is the problematic one that the (main) Makefile indirectly executes. I say indirectly because it doesn't explicitly choose to run the configure script; it runs it only because the script is in the directory of downloaded packages that are needed to compile the kernel.
PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm"
CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc"
GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc"
CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++"
CPP="/usr/bin/cpp"
CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2
-I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib
-L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1
PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config"
PKG_CONFIG_SYSROOT_DIR="/"
PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig"
PERLLIB="/home/debian/build-new/host/usr/lib/perl"
LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2
-I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib
-L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr"
--sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no
The flag that breaks the command is LDFLAGS.
LD="/usr/bin/ld" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib"
The output of running the command is:
debian@debian-i686:~/build-new/build/host-ccache-3.1.8$ PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm" CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++" CPP="/usr/bin/cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig" PERLLIB="/home/debian/build-new/host/usr/lib/perl" LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no
configure: WARNING: unrecognized options: --enable-shared, --disable-static, --disable-gtk-doc, --disable-doc, --disable-docs, --disable-documentation, --with-xmlto, --with-fop
configure: Configuring ccache
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... no
configure: error: in `/home/debian/build-new/build/host-ccache-3.1.8':
configure: error: C compiler cannot create executables
See `config.log' for more details
Removing only the LDFLAGS fixes the particular error, but then I have another error later.
debian@debian-i686:~/build-new/build/host-ccache-3.1.8$ PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm" CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++" CPP="/usr/bin/cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig" PERLLIB="/home/debian/build-new/host/usr/lib/perl" LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no
configure: WARNING: unrecognized options: --enable-shared, --disable-static, --disable-gtk-doc, --disable-doc, --disable-docs, --disable-documentation, --with-xmlto, --with-fop
configure: Configuring ccache
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... configure: error: in `/home/debian/build-new/build/host-ccache-3.1.8':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details
Removing all flags (to get the command below) allows the configure script to function perfectly.
./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no
The configure script is failing because it is trying to find the files path/to/lib/libc.so.0 and path/to/usr/lib/uclibc_nonshared.a. The problem is that the script looks for these libraries in /lib/ and /usr/lib/, even though the Makefile explicitly sets where the libraries are supposed to come from. Manually symlinking the libraries from where LDFLAGS points into /lib/ and /usr/lib/ only results in the message:
/usr/bin/ld: skipping incompatible /lib/libc.so.0 when searching for /lib/libc.so.0
/usr/bin/ld: cannot find /lib/libc.so.0
/usr/bin/ld: skipping incompatible /usr/lib/uclibc_nonshared.a when searching for /usr/lib/uclibc_nonshared.a
/usr/bin/ld: cannot find /usr/lib/uclibc_nonshared.a
Also, setting the LD flag to LD="/home/debian/build-new/host/usr/bin/mips-linux-ld" does not fix the problem.
How do I get the Makefile to compile properly? I left some logs and configs on Github's gist service.
Edit:
Using @filbranden's tip I have now reached the point of getting results such as the below output:
/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include
-isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include
-g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. -I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHADI_s.o -MT _fractHADI_s.o -MD -MP -MF _fractHADI_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_DI -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c
/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include
-isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include
-g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. -I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHATI_s.o -MT _fractHATI_s.o -MD -MP -MF _fractHATI_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_TI -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c
/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include
-isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include
-g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. -I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHASF_s.o -MT _fractHASF_s.o -MD -MP -MF _fractHASF_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_SF -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c
This compilation has been running for the past 17-18 hours now (and has not crashed or done anything else to indicate that an error may have occurred). It does seem a bit weird that it is still working on fixed-bit.c, but maybe that's normal?
|
Using @filbranden's comment, I was able to compile the kernel for my router (there are more errors still to be solved, but that is beyond the scope of this question).
I left a log and my config of what I did to compile the kernel on GitHub's gist service (new logs and config). The config is broken in ways that won't become apparent until the stage of actually compiling Linux, but those fixes are simple.
I ran the below commands to compile the kernel:
make menuconfig O=~/build-new/

RANLIB="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib" \
READELF="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf" \
OBJDUMP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump" \
AR="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar" \
AS="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-as" \
LD="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" \
NM="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-nm" \
CC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" \
GCC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/gcc" \
CXX="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/g++" \
CPP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-cpp" \
CPPFLAGS="-I/home/debian/build-new/host/usr/include" \
CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" \
CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" \
LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib" \
make autoconf O=~/build-new/

RANLIB="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib" \
READELF="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf" \
OBJDUMP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump" \
AR="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar" \
AS="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-as" \
LD="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" \
NM="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-nm" \
CC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" \
GCC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/gcc" \
CXX="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/g++" \
CPP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-cpp" \
CPPFLAGS="-I/home/debian/build-new/host/usr/include" \
CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" \
CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" \
LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" \
make O=~/build-new/
Given the error below, I still had to symlink some binaries before running the make command:
Checking for C compiler ... none found
ERROR: no C compiler found
Checking for linker ... '/home/debian/build-new/host/usr/bin/mips-buildroot-linux-uclibc-gcc' not found (user)
ERROR: no linker found
Checking for ar ... /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar ()
Checking for ranlib ... /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib ()
Checking for readelf ... none found
ERROR: no readelf found
Checking for objdump ... /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump ()
The commands I used to symlink the binaries are below. I put a pound sign in front of the ones I don't think need symlinking, because their paths could never be set by prefixing variables to the make command anyway.
mkdir -p /home/debian/build-new/host/usr/bin/
cp -r /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/* /home/debian/build-new/host/usr/bin/
cd /home/debian/build-new/host/usr/bin/
ln -s mips-linux-uclibc-gcc mips-buildroot-linux-uclibc-gcc
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib mips-buildroot-linux-uclibc-ranlib
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf mips-buildroot-linux-uclibc-readelf
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump mips-buildroot-linux-uclibc-objdump
The config and commands here are still broken when it comes to building the kernel itself, but they work well enough to get past this error. I managed to successfully compile the kernel this morning (it took more than 24 hours of pure compile time), but after booting it in QEMU and mounting the filesystem I had copied from my router, I realized I had chosen the wrong byte order (LSB instead of MSB, i.e. little endian instead of big endian).
Otherwise, I have successfully compiled the kernel using @filbranden's help.
| Why does my Makefile not compile and how can I fix it? |
1,563,892,725,000 |
When trying to mount ext2 I get this error:
Creating 4 MTD partitions on "MPC8313RDB Flash Map Info":
0x000000000000-0x000000100000 : "U-Boot"
0x000000100000-0x000000300000 : "Kernel"
0x000000300000-0x000000700000 : "JFFS2"
0x000000700000-0x000000800000 : "dtb"
List of all partitions:
1f00 1024 mtdblock0 (driver?)
1f01 2048 mtdblock1 (driver?)
1f02 4096 mtdblock2 (driver?)
1f03 1024 mtdblock3 (driver?)
No filesystem could mount root, tried: ext2
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
For some reason u-boot is not able to pass boot parameters to the kernel so I specified them directly by modifying boot_command_line in init/main.c
These are my arguments:
root=/dev/ram0 rw rootfstype=ext2 ramdisk_size=30000 ramdisk_blocksize=1024 console=ttyS0,115200
I thought (and still think) the problem is that the kernel does not have enough information about the initrd, so I went inside arch/powerpc/boot/of.c and manually set loader_info:
if (a1 && a2 && a2 != 0xdeadbeef) {
//loader_info.initrd_addr = a1;
//loader_info.initrd_size = a2;
loader_info.initrd_addr= 0x07c15000;
loader_info.initrd_size= 0x00386815;
}
I chose those values because that is the size and location u-boot reports
Loading Ramdisk to 07c15000, end 07f9b815 ... OK
If I do not specify rootfstype then it defaults to yaffs2 and this is the output:
yaffs: dev is 1048576 name is "ram0" rw
yaffs: passed flags ""
yaffs: dev is 1048576 name is "ram0" rw
yaffs: passed flags ""
yaffs: mtd is read only, setting superblock read only
------------[ cut here ]------------
WARNING: at mm/page_alloc.c:2544
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Not tainted 3.16.62 #116
task: c782c000 ti: c781a000 task.ti: c781a000
NIP: c006dcc0 LR: c006d754 CTR: 00000000
REGS: c781b890 TRAP: 0700 Not tainted (3.16.62)
MSR: 00029032 <EE,ME,IR,DR,RI> CR: 22002244 XER: 00000000
GPR00: c006d754 c781b940 c782c000 00000000 00000001 00000000 c781b8a8 00000041
GPR08: c02a46ab 00000000 00000001 00000000 22002242 00000000 c00041a0 00000000
GPR16: 00000000 00000000 00000000 00000041 00024050 c02a45ec c02bbb40 c02bbb3c
GPR24: 00000000 00000014 00000000 00000000 c02a45e8 c02b10e0 00004050 00000001
NIP [c006dcc0] __alloc_pages_nodemask+0x660/0x86c
LR [c006d754] __alloc_pages_nodemask+0xf4/0x86c
Call Trace:
[c781b940] [c006d754] __alloc_pages_nodemask+0xf4/0x86c (unreliable)
[c781ba10] [c007f51c] kmalloc_order+0x18/0x4c
[c781ba20] [c01128bc] yaffs_tags_marshall_read+0x22c/0x264
[c781bae0] [c0110650] yaffs2_checkpt_find_block+0x90/0x1a8
[c781bb50] [c011125c] yaffs2_checkpt_rd+0x200/0x228
[c781bbe0] [c0114dcc] yaffs2_rd_checkpt_validity_marker+0x24/0xa4
[c781bc10] [c0115b68] yaffs2_checkpt_restore+0x68/0x714
[c781bc80] [c010fe90] yaffs_guts_initialise+0x46c/0x868
[c781bcb0] [c0108810] yaffs_internal_read_super.isra.16+0x420/0x83c
[c781bd50] [c0108c48] yaffs2_internal_read_super_mtd+0x1c/0x3c
[c781bd60] [c00a6224] mount_bdev+0x194/0x1c0
[c781bdb0] [c00a6c60] mount_fs+0x20/0xb8
[c781bdd0] [c00be984] vfs_kern_mount+0x54/0x120
[c781bdf0] [c00c1a30] do_mount+0x1f0/0xb60
[c781be50] [c00c2770] SyS_mount+0xac/0x120
[c781be90] [c0275e94] mount_block_root+0x130/0x2a0
[c781bee0] [c027635c] prepare_namespace+0x1b8/0x200
[c781bf00] [c0275b48] kernel_init_freeable+0x1a8/0x1bc
[c781bf30] [c00041b8] kernel_init+0x18/0x120
[c781bf40] [c000e310] ret_from_kernel_thread+0x5c/0x64
Instruction dump:
2f890000 40beff90 89210030 2f890000 419efe3c 4bffff80 73ca0200 4082fab4
3d00c02a 390846ab 89480001 694a0001 <0f0a0000> 2f8a0000 419efa98 39400001
---[ end trace fbbfd1e0d42ac49d ]---
VFS: Mounted root (yaffs2 filesystem) readonly on device 1:0.
devtmpfs: error mounting -2
Freeing unused kernel memory: 112K (c0275000 - c0291000)
What is the source of this problem?
|
For some reason my U-boot is not letting my kernel know where the initrd is being loaded into ram so I manually set initrd_start and initrd_end in setup-common.c. I mapped the memory location in ram that the ramdisk was loaded to a virtual address space of the kernel. I had to remap because the PAGE_OFFSET was larger than the address of the ramdisk.
void __init check_for_initrd(void)
{
#ifdef CONFIG_BLK_DEV_INITRD
	/* Hard-coded to the load address/size U-Boot reports; ioremap()
	 * because the ramdisk's physical address lies below PAGE_OFFSET. */
	initrd_start = (unsigned long)ioremap(0x07c15000, (0x07f9b815 - 0x07c15000));
	initrd_end = initrd_start + (0x07f9b815 - 0x07c15000);
	printk("PAGE OFFSET: %lx\n", PAGE_OFFSET);
	DBG(" -> check_for_initrd() initrd_start=0x%lx initrd_end=0x%lx\n",
	    initrd_start, initrd_end);
#endif
}
| Kernel can't find initrd? |
1,563,892,725,000 |
In 4.19 the dentry_update_name_case function was removed.
What's the replacement for this?
|
There is no replacement; the function was removed because its user, ncpfs, had been removed (as you know only too well). If you’re working on restoring ncpfs, I would consider adding the function locally in ncpfs.
| kernel 4.19: replacement for dentry_update_name_case()? |
1,563,892,725,000 |
I can control backlight using /sys/class/backlight/.. but is there a way to identify which driver (module) is used internally to actually control the backlight intensity?
|
$ ls -ld /sys/class/backlight/intel_backlight
lrwxrwxrwx. 1 root root 0 Jun 3 10:08 /sys/class/backlight/intel_backlight
-> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight
$ ls -l /sys/devices/pci0000:00/0000:00:02.0/driver
lrwxrwxrwx. 1 root root 0 Jun 3 09:08 /sys/devices/pci0000:00/0000:00:02.0/driver
-> ../../../bus/pci/drivers/i915
$ ls -l /sys/bus/pci/drivers/i915/module
lrwxrwxrwx. 1 root root 0 Jun 4 17:04 /sys/bus/pci/drivers/i915/module
-> ../../../../module/i915
Don't ask me exactly how I guessed the level that holds the driver :). I think you're supposed to check each level, starting with the longest path and working down, but that's a bit tedious.
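The tedious part can be scripted: walk up from the device's resolved sysfs path until a directory with a `driver` entry is found. A minimal sketch (the helper is my own; the sysfs example path is the one from this answer):

```shell
#!/bin/sh
# find_up DIR NAME: climb from DIR toward / until a child named NAME
# exists, then print that child's path.  For the case above:
#   d=$(find_up "$(readlink -f /sys/class/backlight/intel_backlight)" driver)
#   readlink "$d"           # -> .../bus/pci/drivers/i915
#   ls -l "$d"/module       # the owning kernel module, if it is modular
find_up() {
    dir=$1
    while [ -n "$dir" ] && [ "$dir" != "/" ]; do
        if [ -e "$dir/$2" ]; then
            printf '%s\n' "$dir/$2"
            return 0
        fi
        dir=${dir%/*}
    done
    return 1
}
```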
| Find driver used for backlight control |
1,563,892,725,000 |
How can I uninstall a kernel which is now being used by my laptop.
I have two kernel installed in my manjaro linux laptop
4.16*
4.14*
When I boot my system, 4.16 always runs by default. To run 4.14 I go to advanced options and then select 4.14. I want to get rid of 4.16; I like 4.14. Can I remove 4.16 while it is running and then run update-grub? Or if there is an alternative way, I'd prefer that.
|
From the advanced options of the GRUB menu, boot the 4.14 kernel, then remove the 4.16 version.
To prevent the kernel from being upgraded, edit your /etc/pacman.conf and add the following line:
IgnorePkg = linux
Then update grub.
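A small sketch of that pacman.conf edit (the helper name and the in-place sed are my own illustration, not from the answer; it assumes GNU sed):

```shell
#!/bin/sh
# add_ignorepkg FILE PKG: append PKG to an existing IgnorePkg line in a
# pacman.conf-style file, or create one under [options] if none exists.
#   add_ignorepkg /etc/pacman.conf linux
add_ignorepkg() {
    if grep -q '^IgnorePkg' "$1"; then
        sed -i "s/^IgnorePkg.*/& $2/" "$1"
    else
        sed -i "/^\[options\]/a IgnorePkg = $2" "$1"
    fi
}
```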
| can i uninstall a kernel which is now running in a manjaro linux |
1,520,323,320,000 |
I would like the default for certain input pins to be a weak pulldown. I am using a sama5d36 running a Debian 4.12.8 kernel. I modified the dts file as follows:
ahb {
abp {
pinctrl@fffff200 {
board {
pinctrl_inputs: input_pins {
atmel,pins =
<AT91_PIOC 26 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOC 27 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOA 30 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOA 31 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>;
};
};
};
};
};
myInputs {
compatible = "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_inputs>;
};
Just wanted to add that I do see PULL_DOWN in /sys/kernel/debug/pinctrl/ahb:apb:pinctrl@fffff200/pinconf-pins:
pin 30 (pioA30): PULL_DOWN|DRIVE_STRENGTH_MED
pin 31 (pioA31): PULL_DOWN|DRIVE_STRENGTH_MED
pin 90 (pioC26): PULL_DOWN|DRIVE_STRENGTH_MED
pin 91 (pioC27): PULL_DOWN|DRIVE_STRENGTH_MED
but /sys/class/gpio/pioA30 still shows a value of 1:
direction -> in
active_low -> 0
value -> 1
Same for the other pins (pioA31, pioC26, pioC27). I don't need this pin to be active low; I just added that to show that the input is high with nothing connected, something I verified with a scope.
Update: I added the following pins and they actually work:
<AT91_PIOD 6 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOD 7 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>;
which confuses me even more. I checked /sys/kernel/debug/pinctrl/ahb:apb:pinctrl@fffff200/pinmux-pins and all the pins show as follows:
pin 102 (pioD6): (MUX UNCLAIMED) (GPIO UNCLAIMED)
pin 103 (pioD7): (MUX UNCLAIMED) (GPIO UNCLAIMED)
Anyone experienced anything similar?
|
Setting up a node in the device tree (dts) requires a compatible node like gpio-keys or gpio-leds. You can't just make up a node like I was trying to do. Since the line I need is part of the SPI BLE interface, I added it to my spi1 node as follows:
spi1: spi@f8008000 {
cs-gpios = <0>, <0>, <0>, <0>;
pinctrl-0 = <&pinctrl_spi1 &pinctrl_ble_irq>;
dmas = <0>, <0>;
status = "okay";
spidev@0 {
compatible = "semtech,sx1301";
spi-max-frequency = <10000000>;
reg = <0>;
};
};
pinctrl@fffff200 {
board {
pinctrl_ble_irq: ble_irq {
atmel,pins =
<AT91_PIOB 14 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOB 20 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOB 22 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOB 26 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOC 17 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOD 6 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOD 15 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOE 16 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOE 31 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
<AT91_PIOD 8 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>;
};
};
};
I still don't know why the other pins won't pull down, but at least now I am not getting an error during boot. I had to turn on earlyprintk in the kernel in order to see the message.
Update: I was finally able to get the pulldown to work. Several pins were pulled up in hardware, hence the pulldown was ineffective; several others were set as LEDs or used by other peripherals, which I disabled. All the pins in the above example then pulled down successfully.
| Want pulldown on gpio pin |
1,520,323,320,000 |
I am trying to compile a kernel for the Allwinner A10 processor (sun4i, ARMv7) for Android 4.1.2. The config file was copied from the device. This is the output:
$ make ARCH=arm CROSS_COMPILE=/home/user/android_kernel/arm-eabi-4.4.3/bin/arm-eabi-
CHK include/linux/version.h
CHK include/generated/utsrelease.h
make[1]: include/generated/mach-types.h is up to date.
CC kernel/bounds.s
kernel/bounds.c:1: sorry, unimplemented: -mfloat-abi=hard and VFP
make[1]: *** [kernel/bounds.s] Error 1
make: *** [prepare0] Error 2
The error is kernel/bounds.c:1: sorry, unimplemented: -mfloat-abi=hard and VFP, for which I could not find a solution.
Any suggestions?
|
It seems you're using a toolchain configured for soft-float targets. You'll need one that supports hard-float (VFP).
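One way to confirm the diagnosis (my suggestion, not something from the original answer): `arm-eabi-gcc -v` prints the configure line the toolchain was built with, which usually records the float ABI. A tiny filter for it:

```shell
#!/bin/sh
# toolchain_float_abi: read `gcc -v` output on stdin and print the
# configured float ABI (soft/softfp/hard), if it is recorded.  Usage:
#   arm-eabi-gcc -v 2>&1 | toolchain_float_abi
toolchain_float_abi() {
    grep -o -- '--with-float=[a-z]*' | head -n 1 | cut -d= -f2
}
```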
| Kernel make error: “sorry, unimplemented: -mfloat-abi=hard and VFP” |
1,520,323,320,000 |
I'm using an NXP embedded linux board and I compiled u-boot, the kernel and am using a linaro rootfs.
On it I installed FreeSWITCH and loaded mod_gsmopen with a Huawei module, and it didn't work. After a bit of reading the conf file I found that it was trying to read ttyUSB3, so I tried to find the correct ttyUSB, but I can't find any /dev/ttyUSB devices, even though the system detects the module.
I've tried using lsusb, lsblk and lsmod, but only lsusb gives me anything about the USB module.
After some reading and trying to find a similar problem, I saw some posts telling me to try:
modprobe usbserial
depmod
Modprobe command gives me the result:
modprobe: ERROR: ../libkmod/libkmod.c:557 kmod_search_moddep() could not open moddep file '/lib/modules/4.1.15/modules.dep.bin'
And depmod gives me:
depmod: ERROR: could not open directory /lib/modules/4.1.15: No such file or directory
depmod: FATAL: could not search modules: No such file or directory
I found on a post to do something likes this:
apt-get install --reinstall linux-image-`uname -r`
But it doesn't find the package. When I do apt-cache search linux-image I get many results, ranging from linux-image-4.4 to 4.9, which leads me to believe that there isn't a linux-image package available for my kernel version, and I don't know if I can install something from a newer kernel.
My solution so far has been to download kernel 4.9.34, which is longterm, and recompile it from scratch again, but there is a chance the problem persists, and this takes a long time on my machine. Does anyone have an easier solution?
|
OK, so I had many problems; let's start from the beginning.
At the time I was trying to compile the drivers built into the kernel and not as modules. For some reason that didn't work, so I decided to compile them as modules separately and install them later.
The main reason I wasn't finding any package with apt-get was that I wasn't using a kernel version that had a linux-image package ready for installation. Basically I needed to compile and install it myself.
One other thing that got everything working was compiling the linux headers; without that I'd probably still be trying to get my board to work.
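For anyone hitting the same modprobe/depmod errors: they simply mean `/lib/modules/<release>` was never populated. After a (cross-)build, that is what `make modules_install` does, e.g. `make ARCH=arm modules_install INSTALL_MOD_PATH=/path/to/rootfs` (a sketch of the usual procedure, not the exact commands used here). A quick check that a module tree is usable:

```shell
#!/bin/sh
# modules_tree_ok ROOT RELEASE: true if ROOT/RELEASE looks like a module
# tree modprobe can use (i.e. depmod has generated modules.dep in it).
# On a live system: modules_tree_ok /lib/modules "$(uname -r)"
modules_tree_ok() {
    [ -d "$1/$2" ] && [ -f "$1/$2/modules.dep" ]
}
```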
| Can't find /lib/modules/ |
1,520,323,320,000 |
I recently purchased a Samsung NP900X3N which came with Windows 10 installed. I used it for a while on Windows 10 and I noticed the battery lasted for a long time (I forgot to time it, but I felt like it lasted for as a long as it should have lasted for a new laptop with SSD and a 7th gen i5).
However, after I installed Linux Mint and upgraded to version 4.10 of the kernel (which was necessary for some cards to work) I feel like the battery doesn't last as long. If I watch a movie unplugged it lasts for less than 2 hours.
I get that from my old computer, which has an HDD and 5 year-old hardware, but I figured it should last longer with this laptop. (Although I do not have benchmarks for that to compare against.) Keep in mind that both my old computer and this new one have almost the same battery power (4400 mAh for the old and 4000 mAh for the new one).
My question is: is this behavior normal? Can it be related to the kernel? Is there something I can do to prolong the battery life? (I already keep the bluetooth turned off, but maybe there are other services that I don't use that are generally a good idea to leave off for battery-saving.)
Cheers
EDIT
I'm trying to first try things without installing TLP. If I can't really make a difference then I'll try TLP.
|
It turns out that powertop is a good alternative, but the fact that I have to redo everything manually every time I turn on the computer kind of kills it for me. This made me install TLP, for which I found no alternative.
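For completeness, the usual workaround for powertop's per-boot problem (a common approach I'm adding for reference, not something tested in this answer) is a oneshot systemd unit that re-applies the tunables at startup:

```ini
# /etc/systemd/system/powertop.service (illustrative)
[Unit]
Description=Apply powertop tunables

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable powertop.service. TLP remains the more complete, install-and-forget option, as concluded above.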
| Battery doesn't last long on Linux for Samsung series 9 |
1,484,913,249,000 |
After updating Arch Linux, xfce4 crashed, and when I reboot the machine I get this message:
This is the output of uname -a
Linux NasserLaptop 4.8.4-1-ARCH #1 SMP PREEMPT Sat Oct 22 18:26:57 CEST 2016 x86_64 GNU/Linux
Output of pacman -Q linux:
linux 4.8.6-1
I have tried pacman -S linux and remounting /boot, but I still have the same problem.
|
I have solved it, thanks to you guys.
I booted from USB, remounted the partitions as during installation, generated the fstab file and edited it to add /boot, then ran arch-chroot and
pacman -S linux
The problem was that the boot partition didn't mount correctly, and I had to edit the fstab file to add it.
My post with more details is here in the Arch forums.
| Failed to start loading kernel modules |
1,484,913,249,000 |
I'm now running Xubuntu 16.04 with kernel 4.4.0-31, and I need to somehow downgrade the kernel to version 4.1.24. Is there any way to do that?
|
Install required packages:
sudo apt-get install git fakeroot build-essential
sudo apt-get install libssl-dev bc ncurses-dev xz-utils
sudo apt-get install kernel-package
Download the Linux kernel
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.1.24.tar.xz
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.1.24.tar.sign
Verify the kernel signature:
unxz linux-4.1.24.tar.xz
gpg --verify linux-4.1.24.tar.sign
If you get "Bad signature", try the following answer.
tar xvf linux-4.1.24.tar
cd linux-4.1.24
cp /boot/config-$(uname -r) .config
make menuconfig
Make sure that the Enable loadable module support is selected.
Save and exit
Compile the kernel:
make-kpkg clean
fakeroot make-kpkg --initrd --revision=1.0.NAS kernel_image kernel_headers -j 16
Type the following command to find the .deb files:
ls ../*.deb
Install the kernel files:
sudo dpkg -i linux-headers-4.1.24_1.0.NAS_amd64.deb
sudo dpkg -i linux-image-4.1.24_1.0.NAS_amd64.deb
Reboot you system.
From the advanced option of GRUB, choose to boot the new kernel.
You can find the old kernel using the following command:
dpkg --list | egrep -i --color 'linux-image|linux-headers'
To remove it :
sudo apt-get --purge remove linux-imagexxxxx
sudo apt-get autoremove
Edit
The error unxz : linux : no such file or directory is caused by passing a stray linux argument to unxz (the command should be unxz linux-4.1.24.tar.xz). If unxz itself is missing (command not found), install xz-utils:
sudo apt-get install xz-utils
| Downgrade XUbuntu kernel |
1,484,913,249,000 |
I just need to check a Red Hat machine, version 6, and I notice that grub.conf doesn't exist.
How does this Linux machine know which kernel to start with, in case grub.conf doesn't exist?
Is it safe to reboot the Linux machine in that case?
08:16:41 root@test:~ # more /etc/grub.conf
/etc/grub.conf: No such file or directory
08:16:47 root@test:~ # rpm -q kernel
kernel-2.6.32-642.el6.x86_64
kernel-2.6.32-573.12.1.el6.x86_64
08:16:55 root@test:~ # ls -ltr /etc/grub.conf
lrwxrwxrwx. 1 root root 22 Jan 4 16:46 /etc/grub.conf -> ../boot/grub/grub.conf
08:17:22 root@test:~ #
|
Does cat /boot/grub/grub.conf work?
In those days the LILO bootloader was also a common choice to boot a UNIX machine.
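Worth noting: the ls output in the question shows /etc/grub.conf is a symlink into ../boot, so "No such file or directory" means the link's target is missing — typically because /boot is a separate partition that isn't mounted. A small helper to tell the cases apart (my own illustration):

```shell
#!/bin/sh
# link_state PATH: print "ok" (target exists), "dangling" (symlink whose
# target is missing) or "absent".  For the question's case:
#   link_state /etc/grub.conf   # "dangling" hints that /boot isn't mounted
link_state() {
    if [ -e "$1" ]; then echo ok
    elif [ -L "$1" ]; then echo dangling
    else echo absent
    fi
}
```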
| linux + grub.conf not exist |
1,484,913,249,000 |
On Linux 5 and 6 I can print /etc/grub.conf.
How do I verify the latest kernel version on Linux 7, like we do on Linux 5 and 6 from grub.conf?
#boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5 com1=115200,8n1 dom0_mem=256MB
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro
root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img
|
For example, ls /boot/config-* will print all the installed kernel versions, but there are many more ways to achieve the same.
yum info 'kernel*' should do the job too, on all RHEL versions.
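A couple of concrete variants (assuming a stock RHEL 7, where grubby is installed): `rpm -q kernel` lists every installed kernel package, and `grubby --default-kernel` prints the one GRUB 2 will boot. Picking the newest out of such a list is just a version sort:

```shell
#!/bin/sh
# newest: version-sort stdin and print the highest entry, e.g.:
#   rpm -q kernel | newest
newest() {
    sort -V | tail -n 1
}
```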
| How to print the latest kernel version from grub.conf on Linux 7 |
1,484,913,249,000 |
I wanted to create a simple Hello world driver as in here: Page2 and compile it with Makefile:
obj-m := hello.o
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
But I got:
make[1]: *** /lib/modules/4.3.5-300.fc23.x86_64/build: No such file or directory. Stop.
Which is logical, since build is a link to /usr/src/kernels/4.3.5-300.fc23.x86_64 and my /usr/src directory is empty. But kernel-devel is installed — rpm -qa | grep kernel:
kernel-headers-4.3.5-300.fc23.x86_64
kernel-modules-4.3.5-300.fc23.x86_64
kernel-core-4.2.3-300.fc23.x86_64
kernel-devel-4.3.5-300.fc23.x86_64
kernel-4.2.3-300.fc23.x86_64
kernel-modules-extra-4.3.5-300.fc23.x86_64
kernel-modules-extra-4.2.3-300.fc23.x86_64
kernel-core-4.3.5-300.fc23.x86_64
libreport-plugin-kerneloops-2.6.4-1.fc23.x86_64
abrt-addon-kerneloops-2.8.0-2.fc23.x86_64
kernel-4.3.5-300.fc23.x86_64
kernel-modules-4.2.3-300.fc23.x86_64
And I read that these packages might be stored in /usr/include, and when I try to install them, it of course keeps saying that they are already installed.
Question: what should I do, in the Makefile or the installation, to properly compile my hello.c to hello.ko?
I have Fedora 23.
|
Okay, so what worked for me was upgrading/reinstalling the kernel and all the kernel-* packages.
After that, a directory for the appropriate kernel version appeared in /usr/src/kernels/.
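The check the module Makefile relies on can be made explicit: `/lib/modules/$(uname -r)/build` must resolve to a real directory, which is exactly what the kernel-devel package matching the *running* kernel provides. A tiny, parameterized version of that check (my own illustration):

```shell
#!/bin/sh
# build_dir_ok ROOT RELEASE: true if ROOT/RELEASE/build resolves to an
# existing directory.  On a live system:
#   build_dir_ok /lib/modules "$(uname -r)"
build_dir_ok() {
    [ -d "$1/$2/build" ]
}
```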
| Invalid Build path for driver creation |
1,484,913,249,000 |
I am trying to compile a linux kernel (2.6.32.70) for an ARM board (versatilepb), it is my first steps in embedded linux.
At the end of the compilation, two compressed kernel images are generated inside the /arch/x86/boot and /arch/i386/boot directories, and not inside /arch/arm/boot. So it looks like it isn't compiling for an ARM target.
First, I call make versatile_defconfig to generate a default configuration file. I also run make menuconfig to enable the option Use the ARM EABI to compile the kernel. Then I use make V=1 with root privileges for compilation (it doesn't work without them). In my environment these two variables are defined: ARCH=arm and CROSS_COMPILE=arm-linux-gnueabi-.
Is it normal to get a lot of questions during the compilation process, even after generating the configuration file? The questions are about the kernel compression mode, processor family, and so on. For the latter, the possible answers seem to be only x86 and similar CPUs!
|
Finally it works; it seems that my kernel directory was not so clean, even after make clean && make mrproper.
After starting over from kernel sources freshly extracted from the archive, I can do make V=1 without root privileges and no questions are asked. The /arch/arm/boot directory now contains a compressed kernel image too (zImage).
| Cannot compile a linux kernel for an ARM board [closed] |
1,484,913,249,000 |
I want to set up a Linux environment, but I want the system to be bootable on two or more computers with different sets of hardware.
Can Linux provide that level of hardware abstraction, given that the computers are based on the same architecture (x86 64-bit)?
I suspect that if I have one compatible kernel for each machine, it could boot successfully.
Does the Debian OS architecture support that feature? How can I do it?
|
The short answer is yes.
As long as the processor architecture is the same (x86_32, x86_64, etc.), an installation will for the most part run anywhere. There are only three difficulties in practice:
You need to have the right drivers available at boot time. The best way to ensure this is to stick with your distribution's kernel: if you compile your own, the risk is pretty high that you'll accidentally miss a driver.
The bootloader needs to work. On PC hardware that's generally not an issue. Just use Grub and make sure that the configuration doesn't hardcode device names.
Proprietary video drivers are unfriendly and tend to install some files that make it impossible not to use them. Last I looked, this was the case for both ATI and NVidia proprietary drivers. Free drivers are fine. So stick to the free video drivers, and don't use fancy 3D effects which the free drivers don't support.
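In practice, "doesn't hardcode device names" means referring to filesystems by UUID in both the bootloader and /etc/fstab, for example (values are illustrative):

```
# /etc/fstab — a UUID stays valid whichever machine the disk is plugged into
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /  ext4  errors=remount-ro  0  1

# and likewise on GRUB's kernel command line:
#   linux /vmlinuz-... root=UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 ro
```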
| Linux setup compatibility |
1,450,281,081,000 |
I am currently getting this error message:-
iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I have tried insmod and updating the kernel via apt-get, but I am really at a loss: they are just not working and I have no idea why. What should I do — recompile the kernel? When I was first installing the kernel, all IPv4-related modules failed to install. Help me, people of the internet!
I am using -> Linaro 14.04 (GNU/Linux 3.15.0+ armv7l)
apt-get install linux-image-$(uname -r)
I have already tried the above and updated everything; I still get the same error, and when I run it again it tells me everything is up to date (yes, I did run apt-get update).
edit -- some information from dmesg
[ 60.551189] init: tty1 main process (1534) killed by TERM signal
[ 65.094650] init: lxdm main process (1463) terminated with status 1
[ 182.391322] ip_tables: Unknown symbol xt_free_table_info (err 0)
[ 182.391378] ip_tables: Unknown symbol xt_alloc_table_info (err 0)
[ 182.391422] ip_tables: Unknown symbol xt_check_match (err 0)
[ 182.391467] ip_tables: Unknown symbol xt_request_find_target (err
0)
[ 182.391497] ip_tables: Unknown symbol xt_unregister_matches (err 0)
[ 182.391534] ip_tables: Unknown symbol xt_request_find_match (err 0)
[ 182.391584] ip_tables: Unknown symbol xt_unregister_targets (err 0)
[ 182.391606] ip_tables: Unknown symbol xt_recseq (err 0)
[ 182.391701] ip_tables: Unknown symbol xt_register_targets (err 0)
[ 182.391798] ip_tables: Unknown symbol xt_register_table (err 0)
[ 182.391819] ip_tables: Unknown symbol xt_proto_init (err 0)
[ 182.391855] ip_tables: Unknown symbol xt_replace_table (err 0)
[ 182.391882] ip_tables: Unknown symbol xt_find_table_lock (err 0)
[ 182.391925] ip_tables: Unknown symbol xt_table_unlock (err 0)
[ 182.391945] ip_tables: Unknown symbol xt_proto_fini (err 0)
[ 182.391964] ip_tables: Unknown symbol xt_register_matches (err 0)
[ 182.391984] ip_tables: Unknown symbol xt_check_target (err 0)
[ 182.392018] ip_tables: Unknown symbol xt_find_revision (err 0)
[ 182.392045] ip_tables: Unknown symbol xt_unregister_table (err 0)
|
I solved this issue in the end: I recompiled the kernel, copied the modules over again, and created a symlink from 3.15.0 to 3.15.0+. The + suffix was not added to a file for some reason, which messed up a lot of things; after adding the symlink I was able to make and install modules.
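The symlink part of that fix, sketched (the release names are the ones from this thread; the helper is parameterized only so it can be exercised anywhere):

```shell
#!/bin/sh
# link_release ROOT FROM TO: make ROOT/TO point at ROOT/FROM, bridging a
# mismatch between the directory modules were installed under (3.15.0)
# and what `uname -r` reports (3.15.0+).  On the device:
#   link_release /lib/modules 3.15.0 3.15.0+   # then run: depmod -a
link_release() {
    ln -s "$1/$2" "$1/$3"
}
```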
| module issues with kernel |
1,450,281,081,000 |
I recently performed a software update on RHEL 6.6 (Santiago) and noticed that the kernel was not updated to the latest version that was installed.
kernel version before doing software update:
[root@server01 ~]# uname -r
2.6.32-431.11.2.el6.x86_64
Below is the summary of kernel packages from yum list command:
[root@server01 ~]# yum list kernel
Loaded plugins: amazon-id, rhui-lb, security
Installed Packages
kernel.x86_64 2.6.32-431.11.2.el6 installed
kernel.x86_64 2.6.32-504.12.2.el6 @rhui-REGION-rhel-server-releases
kernel.x86_64 2.6.32-504.16.2.el6 @rhui-REGION-rhel-server-releases
Available Packages
kernel.x86_64 2.6.32-504.23.4.el6 rhui-REGION-rhel-server-releases
I expected the kernel to be updated to the highest version among the installed ones, but even after a couple of reboots it shows the following:
[root@server01 ~]# uname -r
2.6.32-504.12.2.el6.x86_64
Any ideas? thanks!
|
While I'm unable to explain why the newest kernel wasn't booting automatically, the kernel booted by GRUB is set in /boot/grub/grub.conf using the default=<menu entry number>, where counting starts at 0. In this specific case, default=1 will boot your desired kernel.
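The counting convention, sketched (entry titles are illustrative):

```
# /boot/grub/grub.conf (GRUB Legacy): "title" entries are numbered from 0
default=1            # boots the *second* title entry below
timeout=5
title First kernel entry      # entry 0
        ...
title Second kernel entry     # entry 1
        ...
```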
| Kernel version not changing even though the new version is installed |
1,450,281,081,000 |
Reading the Debian Kernel Handbook I came across the config option CONFIG_DEBUG_INFO.
That option isn't in the official Debian 3.2 kernel config, so what I'd like to know is:
If a certain option is not in the .config file, does that count as "undefined", and is "undefined" the same as "disabled"?
|
In the case of that config option, "undefined" means "disabled"; see the Makefile:
ifdef CONFIG_DEBUG_INFO
KBUILD_CFLAGS += -g
KBUILD_AFLAGS += -gdwarf-2
endif
So, the answer is: it depends. There are config options used in Makefiles with else branches and other statements, and such a constant can also carry a value that selects different behavior.
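Concretely, a disabled option is still visible in a generated .config, just as a comment (illustrative snippet):

```
# Excerpt from a .config:
CONFIG_MODULES=y
# CONFIG_DEBUG_INFO is not set   <- explicitly disabled
# An option missing entirely was either unknown to that kernel version or
# hidden by unmet dependencies; for `ifdef CONFIG_FOO` tests in Makefiles
# and C code, "missing" and "is not set" behave identically (undefined).
```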
| Kernel config: Is undefined the same as disabled? |
1,450,281,081,000 |
I am trying to install Kali on my Lenovo Yoga 13, but after formatting the disk the setup failed to install GRUB because of missing internet access (no Ethernet; I need a driver to get Wi-Fi to work).
So I decided to compile the Wi-Fi driver to complete the setup, only to realize I am missing the kernel headers. I cannot apt-get install them because I have no net access. Is there a way to manually install kernel headers to compile the driver?
|
If it is not on the install media, then you need to get the .deb into /var/cache/apt/archives/. If it is there, then the download part of apt-get will be skipped.
| Missing kernel headers, but need them to install the Wifi driver |
1,450,281,081,000 |
On my Debian laptop I installed kernel 3.14 so I would have the alx driver and my Ethernet would work; I originally had the 3.2 kernel (Debian 7.7). After installing the new kernel, GNOME 3 went back to the "failed to start properly" mode and startx didn't find the fglrx module :(
Is that a kernel compatibility issue? Can I install kernels lower than 3.14 via apt-get?
|
FGLRX has very poor performance (among other issues, which may include compatibility problems with newer kernels). Heed my advice: you need to use the open-source Radeon drivers.
https://wiki.debian.org/AtiHowTo
I'm running Kernel 3.14+ on an ATI Radeon 5770HD with the Open-Source drivers.
The solution is not to downgrade your kernel. Download the Open-Source drivers via apt-get from the provided link. The XServer should pretty much take care of itself when you install the new packages.
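A sketch of the install commands, with package names taken from the Debian AtiHowTo wiki page linked above (firmware-linux-nonfree requires the non-free section enabled in sources.list):

```shell
# Install the open-source Radeon driver stack on Debian:
sudo apt-get update
sudo apt-get install firmware-linux-nonfree libgl1-mesa-dri xserver-xorg-video-ati
```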
| Kernel 3.14 not working with ATIProprietary fglrx? |
1,407,668,336,000 |
Why, on every Linux I've used, does a program that is really I/O bound bog the machine down? It just happened when I (accidentally) tried to make gcc compile a (valid!) 50 MB C file. GCC was so I/O intensive it brought the machine to its knees.
Another moment is when apt-get or similar are downloading packages, sometimes I can't even access the internet in the meantime, all the requests get timed out.
On Windows, the operations may not be that fast, but at least the processes get to share the I/O time between them.
Is this because of a design decision? If so, is there any way I can change this behavior?
(please do not take this as a critique of Linux)
|
Sorry this question is just too broad to address. We can only really deal with specific issues here. There are any number of reasons as to why this is happening. Often poor performance is just misunderstood by the user.
Is the system out of RAM?
does it have a slow HDD?
Is the system not setup optimally?
I'm not just blowing your question off, but I've done a fair amount of benchmarking over my career and when you tear a supposed performance issue down to the brass tacks there's always a reason and it's usually obvious once you break the problem down into its discrete parts.
To debug this "problem" further one will have to roll up their sleeves and start digging in by first:
identifying what the rest of the system is doing during this event.
Is there ample RAM available?
Is swap being used?
Are there other processes competing for the finite set of resources on the system?
Actually time the "slow" running process and compare the perceived differences in run time
what resources is my process trying to use? HDD? RAM? How much?
This list can go on forever, but these are the types of things you'll need to diagnose before a cure to the problem can be identified.
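A few standard starting points for that kind of diagnosis (the gcc invocation is illustrative; iostat comes from the sysstat package):

```shell
# Overall memory and swap state:
free -m

# CPU, memory, swap-in/out (si/so) and I/O wait (wa), sampled every 2 s:
vmstat 2

# Per-device I/O utilisation:
iostat -x 2

# Time the slow command itself to get hard numbers (GNU time; -v reports
# max RSS, page faults and I/O counts):
/usr/bin/time -v gcc -c big.c
```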
| I/O brings a machine to its knees [closed] |
1,407,668,336,000 |
I installed latest DD-WRT firmware to my wifi router WZR-HP-AG300H. I got the firmware from here (05-27-2013-r21676).
I'm currently trying to install XFS file system support because my USB hard disk is formatted with it, but unfortunately DD-WRT firmware doesn't seem to support it.
I found that XFS module can be installed from internet (kmod-fs-xfs_3.8.13-1_ar71xx.ipk).
I managed to install libc and opkg (the installer tools), but the module needs kernel 3.8.13-1:
root@DD-WRT:/# opkg install kmod-fs-xfs
Installing kmod-fs-xfs (3.8.13-1) to root...
Downloading kmod-fs-xfs_3.8.13-1_ar71xx.ipk.
Collected errors:
* satisfy_dependencies_for: Cannot satisfy the following dependencies for kmod-fs-xfs:
* kernel (= 3.8.13-1-c9fbcbc6c04e6f1cd1482e9b879b485b) * kernel (= 3.8.13-1-c9fbcbc6c04e6f1cd1482e9b879b485b) *
* opkg_install_cmd: Cannot install package kmod-fs-xfs.
root@DD-WRT:/# uname -a
Linux DD-WRT 3.9.4 #322 Mon May 27 03:17:08 CEST 2013 mips GNU/Linux
So, I want to know, what firmware has kernel 3.8.13-1? There are so many revisions and I couldn't find any changelog past 2008.
|
It seems almost impossible to determine which versions of the Linux kernel are included with all the DD-WRT firmware versions, so I think I'd take a different tactic here and attempt to compile the xfs module myself. There are numerous tutorials that explain in pretty good details how to accomplish this.
Here are just a few:
Compiling custom dd-wrt kernel modules
Building DD-WRT From Source - Official Guide
You have a bit of an advantage here in the sense that you should only need to re-compile the XFS kernel module and not the entire DD-WRT firmware, so it should be fairly doable. You'll need to focus specifically on this section of the DD-WRT Guide, titled: Compiling (custom) Kernel modules for DD-WRT.
If this is too much for you to deal with you might want to consider switching to OpenWRT, assuming it supports your particular hardware.
If you look through this page, titled: OpenWRT - USB Storage, you'll notice that they provide the XFS kernel module already built, as an option.
| DD-WRT on WZR-HP-AG300H: Want XFS support; what firmware has kernel 3.8.13-1? |
1,407,668,336,000 |
What are the standard and conventional ways of profiling the Linux kernel? I know there is perf tool but is there anything else?
|
I can suggest SystemTap, which lets you add probe points to a running Linux kernel. It is similar to DTrace, an equivalent tool developed for Solaris. You can write simple stap scripts to perform interesting tasks.
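For instance, a small stap one-liner in the style of SystemTap's shipped examples (requires the systemtap package plus debuginfo matching your running kernel, and root):

```shell
# Count system calls per process for 5 seconds, then print the top 10:
sudo stap -e '
global syscalls
probe syscall.* { syscalls[execname()]++ }
probe timer.s(5) { exit() }
probe end {
  foreach (name in syscalls- limit 10)
    printf("%8d %s\n", syscalls[name], name)
}'
```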
| Where to start profiling Linux kernel? |
1,407,668,336,000 |
We want to disable swap on all our RHEL servers (Hadoop servers). We have 2 options:
Set swappiness to 0, and swapoff -a & swapon -a
swapoff -a , and disable swap from fstab
From my understanding, both options disable swap completely
Option 2 certainly does, since we swapoff -a and disable the swap in fstab.
But does option 1 give the same results as option 2?
|
What happened when you tested it?
Did you read the documentation?
A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone.
i.e., a value of zero does not disable swap, it just defers it.
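The commands behind the two options, side by side (standard sysctl/swapoff usage; the sysctl setting is not persistent unless also added to /etc/sysctl.conf):

```shell
# Current swappiness value (the default is usually 60):
cat /proc/sys/vm/swappiness

# Option 1 -- defer swapping, but do not disable it:
sudo sysctl vm.swappiness=0

# Option 2 -- actually turn swap off; /proc/swaps goes empty:
sudo swapoff -a
```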
| what is the difference between setting swappiness to 0 and swapoff |
1,407,668,336,000 |
I have a theoretical question,
What would happen if I cleaned up all the swap space while the system was running?
Would the operating system crash because of page faults happening in the kernel?
|
If you just mean running "swapoff -a" when you say "clean up", then no.
If you corrupt/overwrite the swap device/file, an application that gets swapped back in (with corrupted data) is very likely to crash, yes. The kernel does not get swapped out, so the "system" would not crash.
| Cleaning swap space while running |
1,407,668,336,000 |
/proc is a directory we find in the root in Linux. It contains processes' information. But actually, the process table and all that stuff is stored by the kernel, which is in RAM. Please answer my query. I may sound silly as I am new here.
|
From the manual page:
The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures.
/proc is not secondary storage. /proc, just like /sys, is a filesystem that provides a window into the kernel. /proc/1234/cmdline, as an example, is not a disk file. It doesn't occupy any space with the possible exception of an inode. When you read from that file, you actually access kernel memory.
You can see that /proc is not a normal filesystem when you try to write to some of its files. As root, try echo blabla > /proc/$$/cmdline. You will be greeted with echo: write error: Invalid argument. cmdline can only be read, even by root. Likewise, /sys contains files that can only be written. For example, try cat /sys/block/sda/device/delete (but don't write into it - you would logically remove the sda device from your system. Should you do it accidentally, the easiest remedy is to reboot).
A similar case is /dev/kmem. It's not a file system, but a device file, and it gives you access to kernel memory. It does not refer to a storage device.
Caveat: Writing to, and perhaps even reading from, some of the files in /proc and /sys can be risky and is best done on a test machine.
| Why process information is stored in /proc as process information is in kernel which is in RAM and /proc in secondary storage? |
1,407,668,336,000 |
Assuming that all filesystems are specific, how do tools like ls, touch, cat, etc. interact with them? I suppose that 'ls' doesn't know the specifics of btrfs for example, but it reads directory entries anyway. When some process writes a file, it doesn't need to know about the block allocator's specifics for a particular filesystem, but anyway, the file is successfully written.
Is there some kind of kernel API that hides underlying filesystem implementation from the processes? It's not too complex to design a custom simple filesystem for academic purposes, and write a programs for manipulating it (mkfs, fsck, etc), but how to tell kernel about filesystem implementation, so other processes can use it?
EDIT:
I understand syscalls in user space, but what I am really interested in is what's happening after that, in kernel space.
|
On any POSIX system, the interface between applications and the kernel is a few function calls: open, read, write, close, etc. An application such as cat calls those functions; it doesn't care how the functions are implemented under the hood.
On Unix systems, those functions are actually system calls: the application calls the kernel. Inside the kernel, a typical architecture is to have a VFS layer, which handles tasks that are independent of the filesystem format (such as locating the proper filesystem, permissions, locking, etc.). Once it has determined which filesystem the file is located on, the VFS layer passes the operation on to the proper filesystem-specific driver.
You can observe the interface between applications and the kernel with a tool like strace on Linux or its equivalent on other Unix platforms (trace, truss, …). Example (omitting the part of the trace corresponding to the startup and the final cleanup of cat):
$ strace cat foo
…
open("foo", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=6, ...}) = 0
fadvise64(3, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
mmap(NULL, 139264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8b9ea14000
read(3, "hello\n", 131072) = 6
write(1, "hello\n", 6hello
) = 6
read(3, "", 131072) = 0
munmap(0x7f8b9ea14000, 139264) = 0
close(3) = 0
…
You can see cat calling open, read and close on the file. It also calls fstat and fadvise64, I think only for performance optimizations.
The interface between the VFS and the filesystem driver can't be spied on so easily.
Programs like mkfs and fsck don't go through the kernel's filesystem interface, because they don't work on files, they work on storage areas. They access the block device containing the filesystem directly.
If you want to add support for a new filesystem, you need to write a driver for it. There are two ways to go about it.
You can write a driver that runs in the kernel; this gets you the best performance, and it's the only way to implement some feature (e.g. fine-grained access control). But it's also harder to debug (if your driver has a bug, you'll probably need to reboot; if you're lucky you may be able to view an error trace and perhaps even delay the reboot until you've saved your data — better do that in a virtual machine). Look up the documentation of your Unix variant's kernel to see what interface you need to implement.
Alternatively, you can use FUSE, which is a filesystem driver that forwards all requests back out of the kernel, so each filesystem driver is implemented as a process. If the filesystem is buggy, just kill the filesystem driver process and the rest of the OS survives. To learn how to write a FUSE filesystem, see the examples and read tutorials such as Sumit Singh's.
| Filesystem kernel API |
1,407,668,336,000 |
I'm trying to install Virtualbox 4.1 from the .run file since Pacman had the 4.0 version only. But when I try installing the file using sh filelocation/filename.run, it gives me the following error-
Please install the build and header files for your current Linux
kernel. The current kernel version is 2.6.38-ARCH
Is something broken in the kernel, or do I need to install something?
|
You need the "kernel26-headers" package installed so VirtualBox can compile it's accompanying modules
| Why do I get this error installing virtualbox-4.1 on Arch? |
1,407,668,336,000 |
as all know after kernel upgrade (on RHEL 7), reboot is necessary in order to update the kernel version
So after reboot we can verify the kernel version by uname -r
Since we are using scripts for the kernel upgrade,
we want to determine the right approach for verifying which machines require a reboot as a result of a kernel upgrade.
Just to mention that the reboot isn't done immediately after the kernel upgrade; it could be a couple of months later.
So, we want to find the right verification that indicates that a RHEL 7 machine requires a reboot.
One approach is to check the version with uname -r and compare it with the installed RPMs from rpm -qa | grep kernel,
but maybe we can get advice about a better indication or verification.
|
as all know after kernel upgrade (on RHEL 7), reboot is necessary in order to update the kernel version
no, it's already upgraded; but to run a new kernel, you'll need to load the new kernel if you want to use it. Since Linux is not that cool, the only way to do that is effectively a reboot¹.
yum comes with a tool to tell you whether any of the things you've installed/upgraded require a reboot. Was easy to find, it's called needs-restarting. Call it with -r to get a meaningful return code.
¹ Technically, you could kexec into a new kernel, but none of the running processes, open files, filesystems network … will survive, so you end up with a broken state.
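A sketch of both checks in script form; the needs-restarting usage matches RHEL 7's yum-utils, while the parsing helper is a hypothetical illustration of the uname-vs-rpm comparison from the question:

```shell
# needs-restarting ships in the yum-utils package on RHEL 7:
#   sudo yum install yum-utils
#   needs-restarting -r   # exit status 1 => reboot required, 0 => not

# Manual cross-check: parse the newest installed kernel version out of
# "rpm -q --last kernel" output and compare it with "uname -r".
newest_kernel() {
    # expects lines such as:
    #   kernel-3.10.0-1160.el7.x86_64    Tue 01 Jan 2021 ...
    head -n1 | sed 's/^kernel-//; s/[[:space:]].*//'
}

# On a real machine:
#   running=$(uname -r)
#   newest=$(rpm -q --last kernel | newest_kernel)
#   [ "$running" = "$newest" ] || echo "reboot required"
```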
| how to identify if rhel 7 machine requires reboot after kernel update |
1,593,753,506,000 |
I have a Debian 10 system. It has secure boot enabled. I am trying to sign and load a new kernel module for virtualbox.
I generated a certificate and private key using openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -days 36500 -subj "/CN=My Name/" -nodes. Then I imported this key with mokutil --import MOK.der, entered a password, rebooted, and enrolled the key. After reading dozens of inaccurate tutorials, including Debian.org's OWN tutorial, all of which suggested using a program called sign-file, I found that sign-file was completely missing: a recursive search of every directory of the system returned nothing. After browsing a few obscure forums, I found a tool called sbsign, which seems to be the only available option for signing anything. Any time I attempt to sign a module with it, I use the command sbsign --cert ~/MOK.pem --key ~/MOK.priv /lib/modules/4.19.0-9-amd64/misc/vboxdrv.ko; however, this command returns Invalid DOS Header Magic. There are almost no references to this error anywhere on the internet, and none that relate to my specific problem in any meaningful way.
What does this error mean? What can I do to sign these modules?
|
sbsign is for signing .efi binaries and other PE32(+) formatted executables.
sign-file comes along with the kernel source code (in the scripts directory of the source code tarball) and in the linux-kbuild-4.19 .deb package for Debian 10. It signs ELF-formatted binary files, which is what Linux kernel modules are.
You cannot substitute one for the other, as the file formats are different.
In situations where you know the exact name of the tool you need but not the name of the package it's in, you should go to the distribution's package contents search engine (good distributions have one). Here's it for Debian: https://www.debian.org/distrib/packages
Scroll down to Search the contents of packages, type in "sign-file" to the Keyword field, click on Search and if the file exists in any package of that distribution, you will find it.
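Once linux-kbuild-4.19 is installed, the invocation looks roughly like this; the script path is where Debian 10 places it, and the module path is taken from the question (adjust versions to match your system):

```shell
# sign-file usage: sign-file <hash-algo> <private-key> <x509-cert> <module>
sudo /usr/lib/linux-kbuild-4.19/scripts/sign-file sha256 \
    ~/MOK.priv ~/MOK.der \
    /lib/modules/4.19.0-9-amd64/misc/vboxdrv.ko

# Verify the signature marker was appended to the module; the last bytes
# should contain the string "~Module signature appended~":
tail -c 28 /lib/modules/4.19.0-9-amd64/misc/vboxdrv.ko
```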
| Cryptic Error When Attempting to Sign Kernel Modules |
1,593,753,506,000 |
I understand that Busybox is a single executable file that contains a set of unix commands/utilities.
My question is: do we need an underlying OS on which it will run, or can it be run directly on the machine without a kernel? If it can be run without an explicit OS, who will handle the stuff like CPU scheduling, user & role management, etc. And in case it needs an underlying OS, how can it be platform agnostic?
Please help me understand what I am missing here?
Edit:
So the root of the confusion was, I read that it is installed on small embedded devices. What I conclude is that these devices should come with some lightweight OS installed on top of which we can add basic unix functionalities with busybox.
Also by platform-agnostic above, I mean underlying OS agnostic. For example, can I run BusyBox over windows and if so how can that be possible?
|
If it can be run without an explicit OS, who will handle the stuff like CPU scheduling, user & role management, etc.
That should answer your first question already: Those are things a kernel does, and without kernel, programs that rely on those features just cannot run, which includes busybox.
As for a platform agnostic kernel, that's easier said than done. I assume by platform you mean processor architecture, which means you want an executable that runs without any VM in between on any processor which is just not possible.
The closest thing to what you want might be some minimalist linux distro, like core linux (only 11 MB).
If you don't want any user interaction after booting, you could even throw out some more stuff from the OS, but I assume you'd want at least a terminal so you can interact with the system.
So the root of the confusion was, I read that it is installed on small embedded devices. What I conclude is that these devices should come with some lightweight OS installed on top of which we can add basic unix functionalities with busybox.
There's several aspects to this question:
What even is an embedded device? These days you can easily run a full Linux distribution on a Raspberry Pi, which is technically still a "small embedded device", and you can obviously run busybox on that to make it more lightweight. I suspect that's what that quote refers to.
You could probably modify busybox to run without a kernel; how difficult of a task that would be depends on how much it relies on kernel calls and whether you want all of its features to work or just a few of them.
How much sense does it even make? Busybox implements several programs described in the POSIX standard, which are meant to be used in combination with a unix-like kernel. For example, what's the point of chroot when you don't even have a filesystem, let alone a root directory?
Also by platform-agnostic above, I mean underlying OS agnostic. For example, can I run BusyBox over windows and if so how can that be possible?
Yes but no; Windows has a different API for programs to interact with the kernel. It also uses a different binary format for executable files. There's no way to just run busybox on windows without some sort of compatibility layer.
Normally you'd use something like mingw for that, which essentially implements linux APIs in such a way that they are redirected to the corresponding windows APIs under the hood. This allows you to compile and run simple linux programs without any major modifications to the source code.
Since Windows 10, Microsoft also provides such a compatibility layer themselves, with the Windows Subsystem for Linux, which, as far as I know, now runs an entire virtualized Linux kernel within Windows to run Linux applications "natively".
| Can I run Busybox without having an OS installed on the machine? |
1,593,753,506,000 |
Does anybody know the meaning of the string "People's Front" in the linux kernel source Makefile name?
> uname -a
Linux debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux
Just installed the kernel source from the debian official repository
> sudo apt-get install linux-source
the referred line:
> cat /usr/src/linux-source-4.19/Makefile | grep "NAME"
NAME = "People's Front"
|
Most Linux kernel versions since 1.2 have included a NAME= field in their Makefile.
Wikipedia has a list of them: https://en.wikipedia.org/wiki/List_of_Linux_kernel_names
Note that the string "People's Front" is in double quotes, both the previous and the next names in the series, Merciless Moray and Shy Crocodile respectively, aren't. This is exactly how they are in the actual kernel source: you can verify that for yourself in https://git.kernel.org.
Here's the update that labelled the 4.20-rc4 release candidate and, at the same time, changed the NAME= to Shy Crocodile:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/Makefile?id=2e6e902d185027f8e3cb8b7305238f7e35d6a436
I think it's nothing more than Linus's long-running private joke, or something along those lines.
| linux kernel Makefile name="People's Front" |
1,593,753,506,000 |
I'm running Fedora 26, rpm -qa kernel outputs kernel-4.13.5-200.fc26.x86_64. And that's the one I want.
I had 3 kernels showing in grub, that one, and 2 newer ones, Wifi wasn't working when booting from the other two so I excluded kernel updates from dnf, and I removed the newer kernels sudo dnf remove kernel-4.x.
All went smoothly, however when I restart, I still can see them in grub and I can boot from them and the wifi won't work if I pick them.
Here I am booting from the newest kernel that I removed.
Here's my grub
What went wrong?
|
You can set your default entry, without removing the newest kernel, with the grub2-set-default command. In your case, without updating grub2, use:
grub2-set-default 2
After a grub update you should have two kernels in your GRUB 2 configuration file; the commands should be (the first kernel is 0, the second one is 1):
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-set-default 1
The command :
# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
will print the exact order of the existing kernel on the grub.cfg file.
Fedora project : Setting default entry
| Grub boots from kernels that I removed |
1,593,753,506,000 |
The kernel for Debian 8 uses version 3.10, but under /sys/fs/cgroup/cpu a lot of cpu items are missing.
vagrant@debian-jessie:/sys/fs/cgroup/cpu$ ls -1 .
cgroup.clone_children
cgroup.procs
cgroup.sane_behavior
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.shares
notify_on_release
release_agent
tasks
How do I enable cpu.cfs_quota_us in debian 8?
|
You recompile your kernel with CONFIG_CFS_BANDWIDTH=y.
There is a feature request about this already.
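You can confirm that this is the cause before rebuilding; the config file path below is the usual Debian location for the running kernel's build configuration:

```shell
# Check whether the running kernel was built with CFS bandwidth control:
grep CFS_BANDWIDTH "/boot/config-$(uname -r)"
# "# CONFIG_CFS_BANDWIDTH is not set" (or no output at all) means the
# cpu.cfs_quota_us and cpu.cfs_period_us files will not appear under
# /sys/fs/cgroup/cpu.
```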
| Why cgroup cpu items are missing in Debian 8 |
1,593,753,506,000 |
When a page fault occurs for a virtual address in any process, how does the Linux/Unix operating system determine whether the page for that virtual address was previously present in memory and swapped out to disk (i.e. that page is currently in swap space), or was never loaded into memory before (i.e. that page is not present in swap space)?
|
The low-level page fault handler in the OS (listed in the trap table of the CPU) gets the fault address from the CPU and uses this fault address to check the entries in the process's address space description table.
This table contains a list of segment descriptors that each contain a base address and a size. If the address is not in that list, the OS sends a SIGSEGV (segmentation violation).
If the address could be found, the segment table entry that is responsible for the address range that includes the fault address also holds a pointer to the driver functions of the related segment driver.
A segment driver manages the VMEM to background memory actions. If the address is related to swap space, then the name of the responsible driver is anon.
There are many segment drivers, e.g. a segment driver for every filesystem as basically all filesystem data I/O operations are handled via mmap().
| On page fault, how does Unix determine if the faulting address is in swap space? |
1,593,753,506,000 |
Today I learned that there had been a faulty Debian kernel version which caused ext4 data corruption (bug 1057843) in December 2023.
Searching through the /var/log/aptitude and /var/log/apt logs, I noticed that the faulty kernel version was installed for one full day by /usr/bin/unattended-upgrade .
The chronology:
09 Sep 2023 17:53 Rebooted system by hand
07 Oct 2023 20:02 Upgrade via "aptitude" by hand: linux-image-amd64:amd64 6.1.52-1 -> 6.1.55-1
10 Dec 2023 06:41 Unattended "apt" upgrade: linux-image-amd64:amd64 6.1.55-1 => 6.1.64-1 (installed faulty version)
11 Dec 2023 07:00 Unattended "apt" upgrade: linux-image-amd64:amd64 6.1.64-1 => 6.1.66-1 (installed fixed version)
16 Dec 2023 12:23 Upgrade via "aptitude" by hand: linux-image-amd64:amd64 6.1.66-1 -> 6.1.67-1
18 Dec 2023 18:39 Rebooted system by hand
Although the faulty kernel version was installed at December, 10th, the system was not rebooted.
Can I assume that I am not affected by the data corruption bug, since the faulty kernel did not boot?
I am not 100% sure if the ext4 filesystem code is fully embedded in the kernel, or if changes to the ext4 module can apply on a running system.
|
As far as I understand, you are safe from this bug.
The only way to have the ext4 module changes to apply to the currently running non-buggy kernel would have been to first unmount all ext4 filesystems, then unload the old ext4 module and force-load the module from the buggy kernel version (overriding the kernel's preference to load the older version of the module intended for that particular kernel version), then remount all filesystems. If your root filesystem is ext4, it would be even more complicated.
No distribution I know has ever done anything like this, as it would cause a similar interruption to applications as a reboot would, so there would be no benefit.
While any ext4 filesystems are mounted, the current version of the ext4.ko kernel module is in use and cannot be unloaded.
The Debian 12 kernel seems to include the CONFIG_LIVEPATCH option, which would allow the patching of running kernel/module code, but it would require having specific livepatch modules provided for the specific kernel version that is going to be patched. As far as I know, Debian has not actually used this feature.
Anyway, if you have any livepatches applied, you should see them listed as extra kernel modules (presumably named like livepatch-<something>.ko), and also in /sys/kernel/livepatch/.
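The check the answer describes, in command form (both should come back empty on a system with no livepatches applied):

```shell
# Applied livepatches are listed here (no directory, or an empty one,
# means none were applied):
ls /sys/kernel/livepatch/ 2>/dev/null

# Livepatch modules would also show up in the loaded module list:
lsmod | grep -i livepatch
```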
| Debian file corruption bug possible without reboot after unattended kernel update? |
1,593,753,506,000 |
Does anyone know if uname() makes an ioctl() call directly or indirectly? I reviewed the source; however, I didn't see that it does. I also used strace and did not see the kernel call made.
Thanks
strace uname
execve("/usr/bin/uname", ["uname"], 0x7fffe60bef30 /* 35 vars */) = 0
brk(NULL) = 0x559dc7796000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=161332, ...}) = 0
mmap(NULL, 161332, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fd0c9384000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260A\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1820400, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd0c9382000
mmap(NULL, 1832960, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fd0c91c2000
mprotect(0x7fd0c91e4000, 1654784, PROT_NONE) = 0
mmap(0x7fd0c91e4000, 1339392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7fd0c91e4000
mmap(0x7fd0c932b000, 311296, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x169000) = 0x7fd0c932b000
mmap(0x7fd0c9378000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b5000) = 0x7fd0c9378000
mmap(0x7fd0c937e000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fd0c937e000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7fd0c9383540) = 0
mprotect(0x7fd0c9378000, 16384, PROT_READ) = 0
mprotect(0x559dc6f13000, 4096, PROT_READ) = 0
mprotect(0x7fd0c93d3000, 4096, PROT_READ) = 0
munmap(0x7fd0c9384000, 161332) = 0
brk(NULL) = 0x559dc7796000
brk(0x559dc77b7000) = 0x559dc77b7000
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3031632, ...}) = 0
mmap(NULL, 3031632, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fd0c8edd000
close(3) = 0
uname({sysname="Linux", nodename="debian", ...}) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0), ...}) = 0
write(1, "Linux\n", 6Linux
) = 6
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
|
The system call involved is … uname! You can see it in your trace:
uname({sysname="Linux", nodename="debian", ...}) = 0
It provides the operating system name, release, version etc.
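You can make this easier to see by restricting the trace to single syscall names with strace's filter option:

```shell
# Show only the uname syscall in the trace:
strace -e trace=uname uname

# The same filter for ioctl shows that uname(1) never issues one:
strace -e trace=ioctl uname
```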
| uname: what ioctl does it use? |
1,593,753,506,000 |
What is the technical minimum size for a functional (holds some pages) swap partition or file on Linux.
If it's architecture dependent, or depends on the size of physical memory or something, how would I calculate an estimate?
I'm not asking for recommendations, or viable sizes for a modern full-scale system. Just how low can it go and still function as swap?
|
So a swap file has some overhead because of header information and stuff.
If you try a too small file... in this case 1 byte
# dd if=/dev/zero of=tst1 bs=1c count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.000141358 s, 7.1 kB/s
# ls -l tst1
-rw-r--r-- 1 root root 1 Mar 10 22:19 tst1
# mkswap tst1
mkswap: error: swap area needs to be at least 40 KiB
So we need at least 40k (at least on RedHat 7 on x86_64)
# dd if=/dev/zero of=tst1 bs=1c count=40960
40960+0 records in
40960+0 records out
40960 bytes (41 kB) copied, 0.183741 s, 223 kB/s
# mkswap tst1
Setting up swapspace version 1, size = 36 KiB
no label, UUID=4d559295-45c6-4952-8c14-f8eb55f3c201
# swapon tst1
swapon: /home/sweh/tst1: insecure permissions 0644, 0600 suggested.
# cat /proc/swaps
Filename Type Size Used Priority
/home/sweh/tst1 file 36 0 -2
And that provides 36K of swap.
| On Linux what is the technical minimum size for a functional swap partition |
1,593,753,506,000 |
On our RHEL servers (RHEL version 7.2), we saw many dmesg lines like the following.
An example for the sdb disk (a hard drive):
[Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): error count since last fsck: 1329
[Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): initial error at time 1614482941: ext4_find_entry:1312: inode 67240512
[Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): last error at time 1640670898: ext4_find_entry:1312: inode 67240512
[Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 Sense Key : Medium Error [current]
[Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 Add. Sense: Unrecovered read error
[Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 CDB: Read(10) 28 00 80 41 13 38 00 00 08 00
[Thu Dec 30 13:12:19 2021] blk_update_request: critical medium error, dev sdb, sector 2151748408
[Thu Dec 30 13:14:38 2021] EXT4-fs warning (device sdb): __ext4_read_dirblock:902: error reading directory block (ino 67240512, block 0)
[Thu Dec 30 13:17:05 2021] NOHZ: local_softirq_pending 08
[Thu Dec 30 13:21:26 2021] NOHZ: local_softirq_pending 08
[Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 Sense Key : Medium Error [current]
[Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 Add. Sense: Unrecovered read error
[Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 CDB: Read(10) 28 00 80 41 13 38 00 00 08 00
[Thu Dec 30 13:21:59 2021] blk_update_request: critical medium error, dev sdb, sector 2151748408
[Thu Dec 30 13:21:59 2021] EXT4-fs warning (device sdb): __ext4_read_dirblock:902: error reading directory block (ino 67240512, block 0)
[Thu Dec 30 13:25:32 2021] NOHZ: local_softirq_pending 08
[Thu Dec 30 13:27:19 2021] NOHZ: local_softirq_pending 08
[Thu Dec 30 13:29:14 2021] NOHZ: local_softirq_pending 08
The questions, based on the above messages:
Is the most likely cause that the hard drive is dying of old age?
If yes, what should we do? Replace the disk(s)?
Reference: https://access.redhat.com/solutions/35465
|
“Dying of old age” implies that the drive is old, which we can’t determine from the information in the logs.
However I’m assuming this is in a professional setting; if so, in my opinion, any disk medium error should trigger a disk replacement. The “critical medium error” message indicates that this is a disk error, not related to a failure between the disk and the system (e.g. a cable failure). The logs in your question only show a single failed sector, so it might well be a localised failure, but it’s not worth taking the chance if you rely on your data storage.
If there’s just one (or a few) failed sectors, you can try remapping them to continue using the drive (temporarily); see smartctl retest bad sectors for example.
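If you want to find out which file (if any) sits on the failed sector, the first step is converting the LBA from the log into a filesystem block number. A minimal sketch in Python, assuming 512-byte logical sectors, a 4 KiB ext4 block size, and that the filesystem starts at sector 0 of the device (adjust `part_start_sector` if it lives inside a partition):

```python
# Sketch: map a failed LBA (from dmesg) to an ext4 filesystem block number.
# Assumes 512-byte logical sectors and a 4 KiB ext4 block size; adjust
# part_start_sector if the filesystem sits inside a partition.

SECTOR_SIZE = 512

def lba_to_fs_block(lba, fs_block_size=4096, part_start_sector=0):
    """Return the filesystem block containing the given absolute LBA."""
    byte_offset = (lba - part_start_sector) * SECTOR_SIZE
    return byte_offset // fs_block_size

# The sector reported in the logs above:
print(lba_to_fs_block(2151748408))  # 268968551
```

The resulting block number can then be fed to something like debugfs's `icheck`/`ncheck` commands to map it to an inode and path, so you know what to restore from backup after replacing the drive.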
| HDD IO errors from kernel messages + is this definitely a HDD failure |
1,593,753,506,000 |
I found this in my logs:
kernel: gpio gpiochip0: (gpio_aaeon): tried to insert a GPIO chip with zero lines
kernel: gpiochip_add_data_with_key: GPIOs 0..-1 (gpio_aaeon) failed to register, -22
kernel: gpio-aaeon: probe of gpio-aaeon.0 failed with error -22
What does it mean and how should I solve it?
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 21.04
Release: 21.04
Codename: hirsute
|
Those messages are produced by the kernel General Purpose I/O code, apparently on trying to load the module that is introduced by this patch or its more developed equivalent.
Short version:
The module finds the WMI programming interface it's looking for, but overlooks the fact that the interface reports no controllable GPIO lines in your hardware. It should have stopped the registration attempt and rejected the module installation with a -ENODEV error code. You can get rid of the messages by blacklisting the gpio-aaeon module, i.e. by creating a file named e.g. /etc/modprobe.d/no-aaeon-gpio-here.conf with the following contents:
blacklist gpio-aaeon
You have found a bug in that kernel module, and might want to report it to the Linux GPIO developers. Your hardware seems to present an interesting "corner case" for testing the gpio_aaeon module, which the developers apparently haven't considered. This is understandable as the module seems to be fairly new: the patch I linked above was posted in late May of this year.
Long version:
The GPIO subsystem is complaining that the gpio_aaeon module attempted to register a chip that does not actually have any General Purpose I/O lines to control, so there is no point registering such a chip to the GPIO subsystem.
The registering happens in the probe function of the module:
+static int __init aaeon_gpio_probe(struct platform_device *pdev)
+{
+ int err, i;
+ int dio_number = 0;
+ struct aaeon_gpio_data *data;
+ struct aaeon_gpio_bank *bank;
+
+ /* Prevent other drivers adding this platfom device */
+ if (!wmi_has_guid(AAEON_WMI_MGMT_GUID)) {
+ pr_debug("AAEON Management GUID not found\n");
+ return -ENODEV;
+ }
+
+ dio_number = aaeon_gpio_get_number();
+ if (dio_number < 0)
+ return -ENODEV;
+
+ data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ data->nr_bank = ARRAY_SIZE(aaeon_gpio_bank);
+ data->bank = aaeon_gpio_bank;
+ platform_set_drvdata(pdev, data);
+ bank = &data->bank[0];
+ bank->chip.parent = &pdev->dev;
+ bank->chip.ngpio = dio_number;
+ bank->data = data;
+ err = devm_gpiochip_add_data(&pdev->dev, &bank->chip, bank);
+ if (err)
+ pr_debug("Failed to register gpiochip %d: %d\n", i, err);
+
+ return err;
+}
Since the attempt to load the module did not simply end with the -ENODEV error, your system apparently has the WMI management API this driver is looking for... but when queried, the API actually says it has nothing to control.
In other words, the module can proceed into the dio_number = aaeon_gpio_get_number(); call, which ends up just calling a WMI method to get a single integer number, which is apparently the number of GPIO lines available for control through this API. The WMI method returns no error... but the number of lines it reports is apparently 0.
The function proceeds into allocating some memory and starts building up the structures required by the Linux GPIO subsystem to register the GPIO lines for control. Once it's done that, it calls the devm_gpiochip_add_data() function of the GPIO subsystem to register a new GPIO chip... but the GPIO subsystem performs some sanity checking and notices those structures actually specify there is 0 GPIO lines to control in the chip.
According to Elixir.bootlin.com Linux kernel cross-referencer, the devm_gpiochip_add_data() is a macro that just calls the devm_gpiochip_add_data_with_key() function with the two last parameters NULLed out. That, in turn, calls the gpiochip_add_data_with_key() function, which will produce the first error message you're seeing on line #628.
The other messages after that get produced as the chain of function calls gets unwound as each function returns an error code to its caller.
If the value returned by the aaeon_gpio_get_number() is really the number of GPIO lines the WMI API can control, then the test:
+ if (dio_number < 0)
+ return -ENODEV;
should actually be:
+ if (dio_number < 1)
+ return -ENODEV;
But if the number returned by the WMI API actually means something subtly different, like "the 0-based number of the last controllable GPIO line" (i.e. value 0 would mean "there is just one GPIO line #0 and no others"), then using the dio_number as the value of bank->chip.ngpio results in an off-by-one error and the module would miss the last controllable GPIO line in all systems that have this WMI API for GPIOs.
So either way, there is something that needs to be fixed.
| What gpio gpiochip0 kernel error means and how to solve? |
1,593,753,506,000 |
How can I make registered extensions for binfmt_misc persist across reboots?
Consider the following command, which performs a binfmt_misc registration:
echo ':golang:E::go::/tmp/test:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
It needs to be executed as root, since only root is allowed to write to /proc/sys/fs/binfmt_misc/register. Thus, I cannot put such registrations into e.g. ~/.bashrc as an unprivileged user.
|
Since you’re using Debian, you could install binfmt-support and register your extension using update-binfmts:
sudo update-binfmts --install golang /tmp/test --extension go --credentials yes
If that works, you can store the binfmt_misc specification in a file under /usr/share/binfmts, which will ensure it’s loaded every time the system boots:
cat <<EOF | sudo tee /usr/share/binfmts/golang
package <local>
interpreter /tmp/test
extension go
credentials yes
EOF
To check the above works, run
sudo update-binfmts --import golang
Alternatively, you can use systemd’s support for binfmt_misc:
echo ':golang:E::go::/tmp/test:OC' | sudo tee /etc/binfmt.d/golang.conf
This will be loaded at boot by systemd-binfmt.service .
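For reference, the register string follows the `:name:type:offset:magic:mask:interpreter:flags` layout documented in the kernel's binfmt-misc documentation. A hypothetical Python helper (purely illustrative, not a real tool) showing how the line used above is assembled:

```python
# Hypothetical helper: assemble a binfmt_misc registration line from its
# fields, following the :name:type:offset:magic:mask:interpreter:flags
# layout from Documentation/admin-guide/binfmt-misc.rst.

def binfmt_line(name, interpreter, ext=None, magic=None,
                offset="", mask="", flags=""):
    if ext is not None:
        btype, match = "E", ext          # match by file extension
    else:
        btype, match = "M", magic or ""  # match by magic bytes
    return f":{name}:{btype}:{offset}:{match}:{mask}:{interpreter}:{flags}"

print(binfmt_line("golang", "/tmp/test", ext="go", flags="OC"))
# :golang:E::go::/tmp/test:OC
```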
| How can I make registered extensions for `binfmt_misc` persist across reboots? |
1,593,753,506,000 |
I have a CentOS SFTP server which is critical for my company's operations. Currently, the server is running on a version of the Linux Kernel in which a vulnerability has been found:
Linux 3.10.0-1160.6.1.el7.x86_64 x86_64
I am trying to update CentOS with YUM, but it does not mention any missing update on the system. Upon research, a lot of articles out there point to the idea of adding the ElRepo repository for Kernel updates.
I wonder however why is this necessary? I know this is a well respected repository and all that, but if possible I would really rather stuck with the CentOS repositories only.
Shouldn't CentOS have the Kernel updates within its own repositories?
So to be objective on my question: How can I update my CentOS kernel in a 100% official way to get rid of this vulnerability? https://lists.centos.org/pipermail/centos-cr-announce/2020-October/012745.html
|
The Community Enterprise Linux Repository (ELRepo) focuses on kmod driver packages to enhance hardware support in EL6, EL7, and EL8 (including display, filesystem, hwmon, network, and storage drivers). Newer kernels are also available. Follow the ELRepo Home Page to install the elrepo-release package and import the GPG key.
If you are not specifically making use of a kernel mod/update (coming from ELRepo, for example), then it quickly gets hard to justify in a business/legal sense why it would be used, not to mention it's probably not addressing the actual vulnerability. For https://access.redhat.com/errata/RHSA-2020:4060, if that's not addressed in an update from ELRepo (where they focus on enhanced hardware support) or anyone else, then their update is doing you no good; don't simply grab an update from a 3rd party just because it's an "update" unless it actually addresses the problem at hand. You have to look into it.
So to be objective on my question: How can I update my CentOS kernel in a 100% official way to get rid of this vulnerability? Vanilla CentOS: Do we need the ElRepo repository? I have a CentOS SFTP server which is critical for my company's operations.
Have only the official CentOS repository [repositories] active on your system.
Have GPG enabled in /etc/yum.conf to make use of that protection mechanism.
Only get the kernel update from the official CentOS repository, which is set up upon installation [and automatically activated with the free distribution] of CentOS Linux.
It of course depends on what you really mean by vanilla, but no, you would not need/require ELRepo or even EPEL for just SFTP if that's the server's only function. Since the server is critical for your company's operations, in my opinion you should only be using the official CentOS repo, and not even EPEL or ELRepo unless those address some specific business/security need. What was already said - that it takes some time for CentOS maintainers to take Red Hat sources and build them for CentOS - is true. Kernel updates, especially going from RHEL/CentOS 7.8 to 7.9 for example, take time on the CentOS side. CentOS comes from RHEL; RHEL will always release things first, then the CentOS folks get their hands on it and make it happen, but there is a delay. That's the price of using a free distribution. Unplugging your CentOS server and waiting for the update to come out, if the vulnerability is that severe, would be the official way. If that delay is unacceptable, then your company should be using a paid RHEL subscription and not free CentOS.
Upon research a lot of articles out there point to the idea of adding the ElRepo repository for Kernel updates. I wonder however why is this necessary?
Bottom of the CentOS repo link below: "An example of what NOT to do, please do NOT follow such examples. Use a critical eye and some thought to see what is being proposed before adding to (and possibly breaking) your system." I'm taking that a little out of context (ELRepo specifically is a good 3rd-party repo and has its place), but it's the principle of it.
https://wiki.centos.org/AdditionalResources/Repositories
| Vanilla CentOS: Do we need the ElRepo repository? |
1,593,753,506,000 |
I've been using CentOS 7, and its kernel version is 3.10. To check the kernel version, I typed 'uname -r' and the command showed 3.10.0-957.1.3.el7.x86_64.
As far as I know, the MemAvailable metric was introduced in Linux kernel version 3.14.
But when I checked /proc/meminfo, it showed a MemAvailable metric:
MemTotal: 3880620 kB
MemFree: 3440980 kB
MemAvailable: 3473820 kB
Why did my Linux show the MemAvailable metric? My Linux kernel is below 3.14.
|
Your kernel identifies itself as 3.10 because that’s the baseline ABI which is maintained for RHEL 7 (and CentOS 7). The ABI is preserved so that, among other things, kernel modules built with an earlier release of RHEL 7 will continue working, as-is, in later releases.
However, when this is possible without breaking the ABI, useful kernel features are backported from new kernels to the RHEL kernel. This includes MemAvailable, which has even been backported to the “2.6.32” kernel in RHEL 6! The changes in each release’s kernel are detailed in the release notes; see for example the changes in RHEL 7.6’s kernel.
For an explanation of MemAvailable itself, see How can I get the amount of available memory portably across distributions?
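As a quick illustration of why code shouldn't assume the field is absent just because uname reports 3.10, here is a sketch that prefers MemAvailable when the (possibly backported) kernel exposes it, and otherwise falls back to a crude estimate; the fallback is a simplification, not the kernel's real heuristic:

```python
# Sketch: report available memory from /proc/meminfo text, preferring the
# MemAvailable field when the kernel (possibly via a backport) provides it.
# The MemFree + Cached fallback is a rough approximation only.

def available_kib(meminfo_text):
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])
    if "MemAvailable" in fields:
        return fields["MemAvailable"]
    return fields["MemFree"] + fields.get("Cached", 0)

sample = """MemTotal:        3880620 kB
MemFree:         3440980 kB
MemAvailable:    3473820 kB"""
print(available_kib(sample))  # 3473820
```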
| Why /proc/meminfo shows MemAvailable under Kernel Version 3.10? |
1,593,753,506,000 |
I have been asked to measure the contention of locks a write process is causing. I was looking at the data of the lockstat for that write process.
My questions are below:
Is contention related to the number of times threads wait for the particular lock (as it is taken by another thread), or the time for which threads have to wait for that lock to get freed?
Is it correct to calculate the contention as a measure of both:
nsec (avg amount of time threads have to wait for the event to occur/lock to get freed) and
cnt (number of times event occurred)
from profiling data collected from lockstat for a particular lock? i.e
contention ~ nsec * cnt
|
Looking at the Linux kernel docs, it looks like it's the time spent waiting for the lock to get freed.
- HOW
Lockdep already has hooks in the lock functions and maps lock instances to
lock classes. We build on that (see Documentation/locking/lockdep-design.txt).
The graph below shows the relation between the lock functions and the various
hooks therein.
__acquire
|
lock _____
| \
| __contended
| |
| <wait>
| _______/
|/
|
__acquired
|
.
<hold>
.
|
__release
|
unlock
lock, unlock - the regular lock functions
__* - the hooks
<> - states
Source: https://www.kernel.org/doc/Documentation/locking/lockstat.txt
NOTE: Take a look at that link, it shows usage as well.
Measuring contention
By the way, you can/could use mutrace to calculate contention for a given executable as well. It's discussed here in this article titled: Measuring Lock Contention.
For example
$ LD_PRELOAD=/home/lennart/projects/mutrace/libmutrace.so gedit
mutrace: 0.1 sucessfully initialized.
mutrace: 10 most contended mutexes:
Mutex # Locked Changed Cont. tot.Time[ms] avg.Time[ms] max.Time[ms] Type
35 368268 407 275 120,822 0,000 0,894 normal
5 234645 100 21 86,855 0,000 0,494 normal
26 177324 47 4 98,610 0,001 0,150 normal
19 55758 53 2 23,931 0,000 0,092 normal
53 106 73 1 0,769 0,007 0,160 normal
25 15156 70 1 6,633 0,000 0,019 normal
4 973 10 1 4,376 0,004 0,174 normal
75 68 62 0 0,038 0,001 0,004 normal
9 1663 52 0 1,068 0,001 0,412 normal
3 136553 41 0 61,408 0,000 0,281 normal
... ... ... ... ... ... ... ...
mutrace: Total runtime 9678,142 ms.
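To tie this back to the question's formula: if nsec is the average wait per contention event and cnt is the number of events, then nsec * cnt estimates the total time threads spent blocked on that lock. A trivial sketch:

```python
# Sketch: estimate total blocked time for a lock from lockstat-style fields.
# Assumes `avg_wait_ns` is the average wait per contended acquisition and
# `contention_count` is the number of contention events, so
# total wait ~ avg_wait_ns * contention_count.

def total_wait_ns(avg_wait_ns, contention_count):
    return avg_wait_ns * contention_count

# e.g. a lock contended 275 times with an average wait of 4000 ns:
print(total_wait_ns(4000, 275))  # 1100000 ns blocked in total
```

Comparing that product across locks gives you a ranking of which lock costs the workload the most, which is usually what "contention" means in practice.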
References
https://stackoverflow.com/questions/1963960/how-to-measure-lock-contention
| What is contention? [closed] |
1,593,753,506,000 |
I ask the following question as a followup to this question.
How can a Desktop Environment be one layer under a shell (kernel-DE-shell instead of kernel-shell-DE)?
Why I ask this question
In Ubuntu, for example, both GNOME Shell and Unity (a GUI shell for GNOME) sit two layers above the GNOME Desktop Environment (DE).
My assumption
Maybe the order is different between CLI-only and CLI+GUI systems, that is, maybe in CLI only systems it is, for example:
kernel-shell(sh,Bash)-utilities.
and in CLI+GUI system it is, for example:
kernel-primary shell(sh,Bash)-DE-secondary shell(Gnome shell)-GUI(Unity).
|
There is no primary shell.
If you’re running the default GNOME 3 desktop, then the stack is
Kernel → X.org or Wayland → GNOME session manager (which starts a number of GNOME helper applications) → GNOME Shell (which uses a number of GNOME libraries)
If you’re running Unity, then the stack is
Kernel → X.org or Mir or Wayland → GNOME session manager → Unity (which also uses a number of GNOME libraries)
If you’re running a command-line shell in a virtual console or an old-school terminal, then the stack is
Kernel → login → shell
A desktop environment is a whole set of applications working together to provide a consistent experience to the user. The “shell” is one of those applications (the one which acts as the last layer in the interface to the user, i.e. the one which has first dibs on user-initiated events such as keystrokes).
| How can a Desktop Environment be one layer under a shell (kernel-DE-shell instead of kernel-shell-DE)? [closed] |
1,593,753,506,000 |
I was browsing through kernel.org pages and reading changelogs from several different Linux kernel versions. I noticed the version number pattern is extremely awkward:
From Linux 2.6.x came Linux 3.0;
After it reached 3.19 it became 4.0;
The 4.x version is getting new versions at a surprisingly fast pace: Ubuntu 15.10 used 4.2 and 16.04 will use 4.4! In the meantime, 4.5 is already in the "release candidate" stage. But the 3.x kernel had such a slow version number progression!
What is happening? Did the Linux kernel suddenly get a few thousand new developers? Is there some special reason for the different version numbers among the releases?
|
The reason to move from 3.19 to 4.0 is just to keep things simple. There was a public poll and discussion about that. I believe this is the poll: https://plus.google.com/+LinusTorvalds/posts/jmtzzLiiejc
So yes, the Linux kernel is rapidly developed, and the version switch was just in order to keep things simple.
| Why are Linux versions so confusing? |
1,593,753,506,000 |
I understand that Oracle doesn't provide sources for Solaris any more, as was done via OpenSolaris in the past. However, they do offer live CD images.
How much does a system installed from such images provide? I'm interested in Solaris from an academic perspective, such as studying the Solaris device driver model. Will I be able to write drivers with Oracle Solaris, assuming the toolchain and libraries are installed? I'm familiar with the pkg tool from Oracle Solaris; does it provide all the necessary tools for this, or do I need to pay for a commercial Solaris license?
|
You shouldn't be using the live media for this at all. That creates a new in-memory instance of the OS on every boot, with nothing saved from the previous boot. This means that if you write any code and save it, it is being saved to a RAM disk that will go away when you reboot. You could save your changes to some other system and then copy them back on each boot, such as by using an SCM hosted on another box, but you'd still have to build your program from scratch on each reboot, quite a pain.
What you actually want here is the "Text Installer". This will let you set up a standalone persistent Oracle Solaris installation which you can use for software development and educational tinkering.
If you were looking at the live media because you don't want to overwrite your PC's OS and don't want to set up a separate disk/partition for Solaris, you can install it into a virtual machine, such as Oracle's own VirtualBox. I installed it in a Parallels VM on OS X here to answer this question; it works fine that way.
The text installer results in a fairly minimal classic Unix OS, much like FreeBSD, Ubuntu Server, or Arch Linux. You build up what you want on top of this using the OS's package installer, just as with those other OSes.
After installation, I recommend that you read Setting Up the Application Development Environment in Oracle® Solaris 11. You'll give commands like the following to install the tools, libraries, etc. that you need for your work:
$ sudo pkg install developer/gcc
You may need other packages, but GCC is the only thing that's actually required to build the sample driver in Oracle's Device Driver Tutorial:
$ cat > dummy.c
...paste text from first link above
$ gcc -D_KERNEL -c dummy.c
$ ld -r -o dummy dummy.o
Now you have the actual loadable driver which you can install in the normal way.
As to your question of whether everything you need is present, that's too open-ended a question to be definitively answered. However, I can tell you that this isn't a gimped OS. It's real Solaris. It should be able to do anything a commercial copy could do. The main thing you're missing is simply the right to use the resulting system in a commercial setting. It is possible that the commercial version of Solaris includes some proprietary Oracle tools, but the development version does include all the basics: compilers, OS interface headers, and libraries.
| Writing drivers for Oracle Solaris |
1,593,753,506,000 |
After using ALSA and PulseAudio for a while, I feel they are not yet strong enough for audio capture and playback.
When I test with a loudspeaker and microphone, there is a self feedback loop.
When I test with an internal PCI-Express card, Linux audio creates static white noise.
It seems that a lot of problems exist, without any permanent solution. I followed many suggestions, like using an external sound card, to resolve such issues. But it turns out that there is something wrong with the Linux audio system, with either ALSA or PulseAudio.
The same hardware turns out to be very solid for audio capture and playback when using Windows 7/8 or Mac OS X.
My concern is that ALSA and PulseAudio are not the equal of CoreAudio on the Mac. And Microsoft Windows also has its own audio platform.
What else can I use for Linux? Is it possible to get CoreAudio or another audio platform without using Alsa or PulseAudio?
My setup (I have tried several):
The main goal is to send PC1's audio to PC2, but for the moment all the audio testing is done locally on PC1.
Case 1) My PC1 is capturing the audio from its own motherboard sound card, which creates static white noise. I cannot kill this noise; it's always there when using the motherboard sound card's speaker-out and mic-in, and even without a mic there is static noise.
Case 2) My PC1 is capturing the audio from an external USB microphone, and my PC1 is also using an external USB Creative Sound Blaster card.
In this case, I have less noise now. But there is a problem: I can hear myself loudly when the loudspeaker volume is high and the microphone volume is at a normal level.
This case is also resolved by using earphones instead of the loudspeaker.
Case 3) My PC1 is capturing from its motherboard sound card using a generic microphone, and speaker-out is using an external speaker box. I have static white noise, not removable.
None of these cases happen when I use the same PC1 with Windows XP/7/8 or Mac OS X. This only happens when using ALSA or PulseAudio.
For the moment I am using an external USB microphone and an external USB sound card to avoid the noise, still without any solution to remove the self feedback loop.
|
I'm not exactly sure what you meant by "ALSA or PulseAudio"; I assume you meant PulseAudio over ALSA. I'm also in the dark regarding your distribution, so I'm prevented from being very specific. If you provide your distro + version, I can let you know if this problem has known workarounds. GNU/Linux audio has improved, but it's not on the level of CoreAudio. Windows Audio is closer, but still sounds much clearer ... and embarrassingly also performs better. Regardless, you have a few options to test out.
Disable PulseAudio: I know some will cry murder, but it has helped even in 2012.
Route With JACK2: You can remove white noise in realtime, if necessary.
Consider OSSv4 to replace ALSA: architectural decisions aside, it plainly works better
Some will argue against my (any) audio recommendations, but as an audiophile who records, these have helped me at times. Audio can often be one of those 'controversial' FOSS subjects.
BTW: You should also consider filing a bug report, with your respective distribution.
| How to have CoreAudio from Mac to Linux/Unix? |
1,593,753,506,000 |
Can a process execute a new program without the kernel knowing? Usually, when a program is executed, the kernel gives it its own process after receiving a syscall (such as exec() or fork()). In this case, everything goes through the kernel, which finally starts the program with, for example, the ELF handler. In practice, of course, when running a new program you want a separate process for it, but what if that's not necessary? Can a program/process xhathat transfer (if it doesn't already have it) an executable binary from the file system (yes, with syscalls) into its own virtual memory area and start executing it inside its own process? In that case, the kernel would still only know that the program/process xhathat is doing something, but it wouldn't separately know about this program executed by xhathat?
As to "without the kernel knowing"....what does that mean??
What I mean by that is that the kernel doesn't actually "start/execute the program" (whereas normally the kernel _always_ does that, whether it's a binary or an interpreted script containing a shebang), only indirectly. Yes, it loads the new program from mass storage into the memory area of this xhathat process, but it does not start/execute it and is not aware of its being started/executed. It doesn't perform exec(), fork(), etc. kernel system calls for the various starts/executions. When the kernel doesn't consciously launch a program, the program also doesn't show up among the processes, for example (because it's just a "process"/execution inside the xhathat process). Do you follow? As far as I can see, this is also possible with compiled binary programs (and of course also interpreted programs). This only came to mind when I realized that it isn't bash or systemd or whatever that "starts/executes" programs/processes; normally the kernel actually does the start/execute (even for shebang scripts, as I stated earlier). However, after learning this, I had to wonder if it always has to be this way. I take from your answer that it need not be so; although of course it is usually better that the kernel starts all programs/processes, and that is what is done.
By the way, what (simply put) is the kernel needed for in what I'm describing? The only thing I came up with was loading the new program from mass storage, but what if the program were already in main memory and didn't need to be loaded from mass storage separately? Could the process then just directly start executing a new program/"process" without any interaction with the kernel?
|
Yes, this is possible. The already-running process needs to load (or map) the new program at the appropriate locations in the process’ virtual address space, load the dynamic loader if necessary, set up the required data structures on the stack, and jump to the new program’s entry point in memory. (Many of these operations involve the kernel, but nothing specific to loading a new program.)
Processes can’t create entirely new address spaces without fork-style help from the kernel, but that’s typically not much of a problem because the initiating program shouldn’t expect to regain control after the new program runs, and therefore it doesn’t matter that the two programs share their address space.
See the grugq’s Design and Implementation of Userland Exec for a more detailed explanation.
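As a taste of one early step such a userland exec performs, here is a sketch that extracts the entry-point address from an ELF64 header; a real implementation must of course also walk the program headers and map each PT_LOAD segment before jumping there. The synthetic header bytes below are an illustration, and a little-endian ELF64 layout is assumed:

```python
import struct

# Sketch: extract the entry-point address (e_entry) from an ELF64 header,
# the address a userland exec would ultimately jump to. Assumes a
# little-endian ELF64 file.

def elf64_entry(header_bytes):
    if header_bytes[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_entry is a 64-bit little-endian field at offset 0x18
    (entry,) = struct.unpack_from("<Q", header_bytes, 0x18)
    return entry

# Synthetic header with entry point 0x401000:
hdr = bytearray(64)
hdr[:4] = b"\x7fELF"
struct.pack_into("<Q", hdr, 0x18, 0x401000)
print(hex(elf64_entry(bytes(hdr))))  # 0x401000
```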
| Can process execute new program without the kernel knowing? |
1,593,753,506,000 |
When I login to a TTY/text terminal, I get a prompt that looks something like:
Debian GNU/Linux bookworm/sid ...
login:
I'm running testing, and purely for aesthetic purposes, I don't want it to say sid here. (Who would? I don't want to be reminded of this Pixar character every time I use my computer.) Is there a way to change this without kernel hacking? Is there a way to change it at all?
|
That message is stored in /etc/issue and /etc/issue.net; you can edit them to display whatever you want instead.
These files are intended to be edited by administrators, and your changes will be preserved on upgrade (but you may be asked what to do with them).
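For example, agetty expands escape sequences in /etc/issue (\n is the hostname, \l the tty line; see agetty(8) for the full list), so you could replace the contents with something like:

```
Debian GNU/Linux \n on \l
```

which drops the codename entirely while still showing which host and console you are logging in to.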
| Change Debian TTY/text terminal login OS version name |
1,593,753,506,000 |
The latest official release of the Linux kernel released by CentOS is kernel-3.10.0-1160.45.1.el7.x86_64.rpm, which was updated on 15th October 2021.
Furthermore, the kernel version recommended for CVE-2021-43267 is provided by a third-party repository named ELRepo, which means that the recommended kernel update is not yet supported/released officially by CentOS repositories? Or how secure is it to update the kernel from any source other than the official CentOS repo?
I did try to update the kernel of one of our dev environment servers to the latest recommended kernel version, i.e. 5.15.2. This resulted in a broken operating system: after reboot, the system landed in kernel emergency mode, as it was unable to boot from the updated kernel and couldn't configure the boot partitions automatically.
Currently, our production servers are running on Linux Kernel 3.10.0-1160.21.1.el7.x86_64 which can be updated to the latest stable release 3.10.0-1160.45.1.el7.x86_64.
Under all these observations, should we stick with official CentOS updates only, as updating the kernel from third-party sources may break operating system functionality in a production environment?
|
The CentOS 7 and RHEL 7 kernels aren’t affected by CVE-2021-43267, so there’s no need to do anything.
| How to fix CVE-2021-43267 in centos 7 |
1,593,753,506,000 |
Despite having a data bus size of 64 bits, the address bus size of modern AMD64-compatible CPUs was 48 bits for some time, which allows 48-bit virtual memory addresses with a maximum of 2^48 bytes => 256 TiB of addressable virtual memory.
Intel says [1] that since the Ice Lake CPU architecture, their CPUs support 5-Level Paging with 57-bit long virtual memory addresses. Linux supports this since Kernel 4.14 [2].
Does this mean that CPUs that support 5-Level Paging with 57-bit long virtual memory addresses implement a 57-bit long address bus?
The background of my question is that around 10-15 years ago it was not a problem to learn the address bus and data bus sizes of modern CPUs, but for roughly the last ten years it has not been simple to find information about the address bus size.
[1] https://software.intel.com/content/www/us/en/develop/download/5-level-paging-and-5-level-ept-white-paper.html
[2] https://www.kernel.org/doc/html/latest/x86/x86_64/5level-paging.html
|
No, they implement (at most) a 52-bit address bus. 4- and 5-level paging is described in section 4.5 of the Intel® 64 and IA-32 Architectures Software Developer Manuals, Volume 3A:
5-level paging translates 57-bit linear addresses to 52-bit physical addresses.
As far as I’m aware, current Intel CPUs support at most 6TiB of RAM per socket (see for example the 8362), which is less than 243; so I suspect that, even though the address bus covers more than physical memory, there are fewer than 52 address pads on the CPU (in socket 4189).
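To put those widths in perspective, a quick computation of the implied address-space sizes (1 TiB = 2^40 bytes):

```python
# Address-space sizes implied by each address width, in TiB.
TiB = 1 << 40

for bits in (48, 52, 57):
    size = 1 << bits
    print(f"{bits}-bit: {size // TiB} TiB")
# 48-bit: 256 TiB    (4-level paging, virtual)
# 52-bit: 4096 TiB   (physical address limit)
# 57-bit: 131072 TiB (5-level paging, virtual)
```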
| Do CPUs that support 5-Level Paging implement a 57-bit long address bus? [closed] |